| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### Feature request
I'm writing to propose the integration of Redis Cache support for ChatModels within Langchain. Chat functionality is an essential and perhaps the most common scenario in many applications. Enhancing Langchain with Redis Cache support for ChatModels can bring about the following benefits.
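To make the proposal concrete, here is a sketch of the usage I have in mind. It assumes the existing `RedisCache` in `langchain.cache` were simply extended to also cover chat models (today it targets LLMs):
```python
import langchain
from redis import Redis
from langchain.cache import RedisCache
from langchain.chat_models import ChatOpenAI

# Proposed: the global cache would apply to chat models too.
langchain.llm_cache = RedisCache(redis_=Redis(host="localhost", port=6379))

chat = ChatOpenAI(temperature=0)
chat.predict("Tell me a joke")  # first call goes to the API and is cached
chat.predict("Tell me a joke")  # second call would be served from Redis
```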
### Motivation
1. Performance Improvement:
   - Utilizing Redis Cache can significantly reduce the latency, providing users with a faster and more responsive chat experience.
   - Caching chat messages and related data can lower the database load and enhance overall system performance.
2. Scalability:
   - Redis Cache can enable Langchain to efficiently manage large-scale real-time chat systems.
   - The support for various data structures in Redis enables sophisticated caching strategies, accommodating different chat scenarios and requirements.
3. Reliability:
   - Redis's replication and persistence features can offer a reliable caching layer that ensures data consistency and availability.
   - It supports various eviction policies that allow for a flexible and efficient utilization of memory.
4. Integration Ease:
   - Redis is widely used, well-documented, and has client libraries in most programming languages, making integration with Langchain straightforward.
   - The community around Redis is vibrant and active, which can be beneficial for ongoing support and enhancement.
### Your contribution
I am considering... | Redis Cache to support ChatModel | https://api.github.com/repos/langchain-ai/langchain/issues/8666/comments | 3 | 2023-08-03T05:50:39Z | 2024-04-26T15:14:01Z | https://github.com/langchain-ai/langchain/issues/8666 | 1,834,321,052 | 8,666 |
[
"langchain-ai",
"langchain"
] | ### System Info
* python3.9
* langchain 0.0242
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
m3e: HuggingFaceEmbeddings = SentenceTransformerEmbeddings(model_name='moka-ai/m3e-base')
vectorStore: VectorStore = Clickhouse(embedding=m3e, config=settings)
retriever = vectorStore.as_retriever(search_type='similarity_score_threshold', search_kwargs={'score_threshold': 20.0})
documents = retriever.get_relevant_documents('音乐')
```
The retriever invokes `_get_relevant_documents` in `vectorstores/base.py`, and `docs_and_similarities` holds both the doc and the score.
I found that `score_threshold` is not used in `similarity_search_with_relevance_scores`.
Take pgvector as an example: it implements `similarity_search_with_score`, so the retriever's invocation
chain is `similarity_search_with_relevance_scores (base.py)` -> `_similarity_search_with_relevance_scores (base.py)` -> `similarity_search_with_score (pgvector)`, and the score filter is applied in `similarity_search_with_relevance_scores (base.py)`.
But the Clickhouse vectorstore returns from `similarity_search_with_relevance_scores` without any score filtering, because Clickhouse provides its own `similarity_search_with_relevance_scores`, overriding the one from `base.py`.
---
I have two suggestions for this issue.
1. use `score_threshold` in `similarity_search_with_relevance_scores`
```sql
SELECT document,
       metadata,
       L2Distance(embedding, [....]) AS dist
FROM default.langchain
WHERE dist > {score_threshold}
ORDER BY dist ASC
LIMIT 4
```
2. rename the function `similarity_search_with_relevance_scores` to `_similarity_search_with_relevance_scores`,
making the invocation chain `similarity_search_with_relevance_scores (base.py)` -> `_similarity_search_with_relevance_scores (clickhouse)`,
so that the `score_threshold` filter in `base.py` is applied to the returned scores (see the sketch below).
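A rough sketch of what suggestion 2 could look like. The method names follow the pattern pgvector uses, and it assumes the ClickHouse store exposes (or gains) a `similarity_search_with_score` that returns `(Document, score)` pairs:
```python
from typing import Any, List, Tuple

from langchain.schema import Document
from langchain.vectorstores.base import VectorStore

class Clickhouse(VectorStore):
    def _similarity_search_with_relevance_scores(
        self, query: str, k: int = 4, **kwargs: Any
    ) -> List[Tuple[Document, float]]:
        # Return raw (Document, score) pairs so that the public
        # similarity_search_with_relevance_scores in base.py can apply
        # the score_threshold passed via search_kwargs.
        return self.similarity_search_with_score(query, k=k, **kwargs)
```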
### Expected behavior
Score filtering should work normally in the ClickHouse vectorstore.
BTW, I can submit pr if necessary | ClickHouse VectorStore score_threshold not working. | https://api.github.com/repos/langchain-ai/langchain/issues/8664/comments | 2 | 2023-08-03T04:44:00Z | 2023-11-03T02:35:28Z | https://github.com/langchain-ai/langchain/issues/8664 | 1,834,261,445 | 8,664 |
[
"langchain-ai",
"langchain"
] | ### System Info
python 3.10
langchain 0.0.249
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
loader = DirectoryLoader("content/zh/knowledge/", show_progress=True, loader_cls=TextLoader)
raw_documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=500,
    chunk_overlap=0,
    separators=['\n\n', '\n', '。', '?', '!', '|']
)
# embeddings = OpenAIEmbeddings(model="text-embedding-ada-002")
embeddings = HuggingFaceEmbeddings(model_name="moka-ai/m3e-base")
count = int(len(raw_documents) / 100) + 1
for index in range(count):
    print('当前:%s' % index)
    documents = text_splitter.split_documents(
        raw_documents[index * 100:min((index + 1) * 100, len(raw_documents)) - 1]
    )
    vectorstore = FAISS.from_documents(
        documents=documents, embedding=embeddings,
        kwargs={"distance_strategy": "MAX_INNER_PRODUCT"}
    )

# Save vectorstore
with open("vectorstore.pkl", "wb") as f:
    pickle.dump(vectorstore, f)
```
### Expected behavior
Passing `kwargs={"distance_strategy": "MAX_INNER_PRODUCT"}` to FAISS raises an error; the distance strategy is never applied.
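For reference, this is what I expected to work instead. A sketch, assuming a langchain version where `FAISS` accepts a `distance_strategy` keyword via `langchain.vectorstores.utils.DistanceStrategy`:
```python
from langchain.vectorstores import FAISS
from langchain.vectorstores.utils import DistanceStrategy

# Pass distance_strategy as a direct keyword argument instead of
# wrapping it in kwargs={...}, which from_documents never unpacks.
vectorstore = FAISS.from_documents(
    documents=documents,
    embedding=embeddings,
    distance_strategy=DistanceStrategy.MAX_INNER_PRODUCT,
)
```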
<img width="1512" alt="image" src="https://github.com/langchain-ai/langchain/assets/4583537/fee0f9b9-49bc-4713-adb9-de8c9be76606">
| Use FAISS {"distance_strategy":"MAX_INNER_PRODUCT"} Not work! | https://api.github.com/repos/langchain-ai/langchain/issues/8662/comments | 2 | 2023-08-03T03:57:32Z | 2023-08-03T06:53:06Z | https://github.com/langchain-ai/langchain/issues/8662 | 1,834,227,144 | 8,662 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
llama_print_timings: load time = 127.16 ms
llama_print_timings: sample time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_print_timings: prompt eval time = 1753.93 ms / 237 tokens ( 7.40 ms per token, 135.13 tokens per second)
llama_print_timings: eval time = 0.00 ms / 1 runs ( 0.00 ms per token, inf tokens per second)
llama_print_timings: total time = 1779.79 ms
GGML_ASSERT: /tmp/pip-install-7c_dsjad/llama-cpp-python_04ef08a182034167b5fee4a62dd2cdc2/vendor/llama.cpp/ggml-cuda.cu:3572: src0->type == GGML_TYPE_F16
Aborted (core dumped)
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [x] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
vectorstore = Chroma.from_documents(
    documents=all_splits,
    embedding=LlamaCppEmbeddings(
        model_path="/home/mefae1/llm/chinese-alpaca-2-7b/ggml-model-q4_0.bin",
        n_gpu_layers=40,
        n_batch=16,
        n_ctx=2048
    )
)
```
### Expected behavior
The call should return a vectorstore instead of crashing. | LlamaCppEmbeddings Error `type == GGML_TYPE_F16` | https://api.github.com/repos/langchain-ai/langchain/issues/8660/comments | 3 | 2023-08-03T03:15:23Z | 2023-11-09T16:13:07Z | https://github.com/langchain-ai/langchain/issues/8660 | 1,834,199,574 | 8,660 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
```
# need to use GPT-4 here as GPT-3.5 does not understand, however hard you insist, that
# it should use the calculator to perform the final calculation
llm = ChatOpenAI(temperature=0, model="gpt-4")
```
The docs say that this needs GPT-4. Are there other models that can be used, for example open-source LLMs?
### Idea or request for content:
_No response_ | DOC: Running Agent as an Iterator | https://api.github.com/repos/langchain-ai/langchain/issues/8656/comments | 1 | 2023-08-03T01:19:51Z | 2023-11-09T16:11:04Z | https://github.com/langchain-ai/langchain/issues/8656 | 1,834,120,844 | 8,656 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
The documentation links seem to be broken: the Agents use-case links are no longer working and give "Page Not Found".
Example link - https://python.langchain.com/docs/use_cases/autonomous_agents/baby_agi_with_agent.html
### Idea or request for content:
_No response_ | DOC: Agent Usecase Links are broken | https://api.github.com/repos/langchain-ai/langchain/issues/8649/comments | 1 | 2023-08-02T22:11:00Z | 2023-11-08T16:06:39Z | https://github.com/langchain-ai/langchain/issues/8649 | 1,833,968,332 | 8,649 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Following is the link to the LangChain docs; I need to integrate OPENAI_FUNCTIONS into this custom LLM chat agent:
https://python.langchain.com/docs/modules/agents/how_to/custom_llm_chat_agent
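For context, this is the kind of setup I am starting from (a minimal sketch; `tools` is a placeholder for my own tool list):
```python
from langchain.agents import initialize_agent, AgentType
from langchain.chat_models import ChatOpenAI

llm = ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613")
agent = initialize_agent(
    tools,  # placeholder: my custom tools
    llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    verbose=True,
)
```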
### Suggestion:
_No response_ | Integrate OPENAI_FUNCTIONS in Agent | https://api.github.com/repos/langchain-ai/langchain/issues/8648/comments | 1 | 2023-08-02T22:02:54Z | 2023-11-08T16:06:44Z | https://github.com/langchain-ai/langchain/issues/8648 | 1,833,961,567 | 8,648 |
[
"langchain-ai",
"langchain"
] | ### Feature request
An implementation of a `FakeEmbeddingModel` that generates identical vectors given identical input texts.
### Motivation
Currently langchain has a `FakeEmbeddings` model that generates a vector of random numbers that is irrelevant to the content being embedded.
This model is pretty useful in e.g., unit tests, because it doesn't need to load any actual models, or connect to the internet.
However, it misses one realistic feature that makes it "fake" compared to a real embedding model: given the same input texts, the generated embedding vectors should be the same as well.
This would make unit tests that use a fake embedding model more realistic. E.g., you can test whether similarity search makes sense: searching by the exact text stored in your vector store should give an exact match, because the embeddings are identical, which gives a distance of 0.
### Your contribution
I can submit a PR for it.
We can use the hash of the text as a seed when generating random numbers.
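A minimal sketch of the idea (the class name and interface here are illustrative, not an existing LangChain API):
```python
import hashlib
from typing import List

import numpy as np

class DeterministicFakeEmbedding:
    """Fake embedding model whose vectors are a pure function of the text."""

    def __init__(self, size: int = 10):
        self.size = size

    def _vector(self, text: str) -> List[float]:
        # Hash the text into a stable seed so identical inputs always
        # yield identical pseudo-random vectors, across runs and processes.
        seed = int(hashlib.sha256(text.encode("utf-8")).hexdigest(), 16) % (2 ** 32)
        rng = np.random.default_rng(seed)
        return rng.random(self.size).tolist()

    def embed_query(self, text: str) -> List[float]:
        return self._vector(text)

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        return [self._vector(t) for t in texts]
```
With this, two calls to `embed_query("hello")` always return the same vector, so searching a vector store by the exact stored text yields distance 0.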
In this case, if input texts are identical, we should get exactly the same embedding vectors. | Deterministic fake embedding model | https://api.github.com/repos/langchain-ai/langchain/issues/8644/comments | 2 | 2023-08-02T18:36:33Z | 2023-08-04T17:51:01Z | https://github.com/langchain-ai/langchain/issues/8644 | 1,833,717,208 | 8,644 |
[
"langchain-ai",
"langchain"
] | ### Feature request
The current GraphQL tool is very static (https://python.langchain.com/docs/modules/agents/tools/integrations/graphql): one has to set up the GraphQL URL, headers, etc. at the time of configuring the tool.
This tool should follow [Request*Tool](https://github.com/hwchase17/langchain/blob/2b3da522eb866cb79dfdc9116a7c466b866cb3d0/langchain/tools/requests/tool.py#L53), where individual request information is sent as input to the tool. This way, the GraphQL tool becomes generic and can be used to query different endpoints or with different headers.
This issue was wrongly created in the [langchainjs repo](https://github.com/hwchase17/langchainjs/issues/1713) instead of the Python langchain repo (this one).
### Motivation
We have a use case where the URL and headers change between invocations of this tool, and they can be very dynamic: every user might have a different "Authorization" header. So we want to register the tool with the agent once, but override these values at invocation time.
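A sketch of the runtime input we have in mind (the field names are illustrative, not an existing schema):
```python
user_token = "..."  # placeholder: the current user's token

# Instead of baking the endpoint and headers into the tool at setup time,
# each invocation would carry them in the tool input:
tool_input = {
    "query": "{ viewer { login } }",
    "graphql_endpoint": "https://api.github.com/graphql",
    "headers": {"Authorization": f"Bearer {user_token}"},
}
```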
### Your contribution
Yes, my team is willing to contribute.
Here is the pull request https://github.com/langchain-ai/langchain/pull/8616 | GraphQL Execution Tool should support URL/headers as input at runtime | https://api.github.com/repos/langchain-ai/langchain/issues/8638/comments | 2 | 2023-08-02T17:02:44Z | 2023-11-08T16:06:50Z | https://github.com/langchain-ai/langchain/issues/8638 | 1,833,584,670 | 8,638 |
[
"langchain-ai",
"langchain"
] | ### Feature request
When connecting to the ChatGPT API, it is possible to get responses from the chatbot like "As an AI model...", which ruins the chatbot experience in your application.
A way to filter these out and generate a default response instead would be better.
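For illustration, a minimal post-processing sketch (the regex and the default reply are placeholders, not a LangChain API):
```python
import re

DEFAULT_REPLY = "Sorry, I can't help with that. Please try rephrasing your question."

AI_DISCLAIMER = re.compile(r"\bAs an AI (language )?model\b", re.IGNORECASE)

def filter_response(text: str) -> str:
    # Swap boilerplate "As an AI model..." answers for an app-specific default.
    if AI_DISCLAIMER.search(text):
        return DEFAULT_REPLY
    return text
```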
### Motivation
For applications using the OpenAI API, this would enable filtering such responses.
### Your contribution
Not really | Catch and Filter OpenAI "As an AI model" responses | https://api.github.com/repos/langchain-ai/langchain/issues/8637/comments | 6 | 2023-08-02T16:50:53Z | 2024-02-12T16:16:29Z | https://github.com/langchain-ai/langchain/issues/8637 | 1,833,568,563 | 8,637 |
[
"langchain-ai",
"langchain"
] | ### Feature request
It would be great if there was an explicit way to support parallel upserts in the `Pinecone` vectorstore wrapper. This could either be done by altering the existing `add_texts` method or adding a new "async" equivalent.
### Motivation
The native Pinecone client supports [parallel upsert](https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel) operations which can improve throughput. This is technically supported with the current `langchain` vectorstore wrapper using the following combination of arguments `index.add_texts(texts, async_req=True, batch_size=None)`.
However, because `add_texts` [returns](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/pinecone.py#L102-L105) the list of `ids` instead of the response to `self._index.upsert`, we cannot call `get` to suspend the program until all of the responses are available (see code snippet from the Pinecone docs below).
```python
# Upsert data with 100 vectors per upsert request asynchronously
# - Create pinecone.Index with pool_threads=30 (limits to 30 simultaneous requests)
# - Pass async_req=True to index.upsert()
with pinecone.Index('example-index', pool_threads=30) as index:
    # Send requests in parallel
    async_results = [
        index.upsert(vectors=ids_vectors_chunk, async_req=True)
        for ids_vectors_chunk in chunks(example_data_generator, batch_size=100)
    ]
    # Wait for and retrieve responses (this raises in case of error)
    [async_result.get() for async_result in async_results]
```
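One possible shape for this (entirely hypothetical; the `return_async_results` flag below does not exist in the current wrapper):
```python
# Hypothetical: add_texts hands back the async results so callers can
# block on them, mirroring the native client's behavior.
async_results = index.add_texts(
    texts, async_req=True, batch_size=None, return_async_results=True
)
[r.get() for r in async_results]  # raises here if any upsert failed
```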
### Your contribution
I can try to put a PR up for this. Although I wonder if there was a deliberate decision to not return the raw response for [`upsert`](https://github.com/pinecone-io/pinecone-python-client/blob/main/pinecone/index.py#L73-L78) in order to have a cleaner abstraction. | Support parallel upserts in Pinecone | https://api.github.com/repos/langchain-ai/langchain/issues/8635/comments | 1 | 2023-08-02T15:54:23Z | 2023-09-05T14:59:11Z | https://github.com/langchain-ai/langchain/issues/8635 | 1,833,455,388 | 8,635 |
[
"langchain-ai",
"langchain"
] | ### System Info
OS: Windows
Name: langchain
Version: 0.0.249
Python 3.11.2
### Who can help?
@hw
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When loading the PubMed tool, it has to be loaded under the name "pupmed". This is possibly a typo, since "pupmed" isn't mentioned anywhere in the docs.
### Expected behavior
Expect to call the pubmed tool with "pubmed" and not "pupmed", or the documentation should be clearer. | Pubmed needs to be called with "pupmed" | https://api.github.com/repos/langchain-ai/langchain/issues/8631/comments | 2 | 2023-08-02T14:22:29Z | 2023-11-08T16:06:55Z | https://github.com/langchain-ai/langchain/issues/8631 | 1,833,291,208 | 8,631 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi!
How can I make a chatbot that uses its own data but also has access to the internet to get more info (like recent updates)? I've tried and searched everywhere but can't make it work.
Here is the code:
```python
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.document_loaders import (
    UnstructuredWordDocumentLoader,
    TextLoader,
    UnstructuredPowerPointLoader,
)
from langchain.tools import Tool
from langchain.utilities import GoogleSearchAPIWrapper
from langchain.chat_models import ChatOpenAI
import os
import openai
import sys
from dotenv import load_dotenv, find_dotenv

sys.path.append('../..')
_ = load_dotenv(find_dotenv())  # read local .env file

google_api_key = os.environ.get("GOOGLE_API_KEY")
google_cse_id = os.environ.get("GOOGLE_CSE_ID")
openai.api_key = os.environ['OPENAI_API_KEY']
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.langchain.plus"
os.environ["LANGCHAIN_API_KEY"] = os.environ['LANGCHAIN_API_KEY']
os.environ["GOOGLE_API_KEY"] = google_api_key
os.environ["GOOGLE_CSE_ID"] = google_cse_id

folder_path_docx = "DB\\DB VARIADO\\DOCS"
folder_path_txt = " DB\\BLOG-POSTS"
folder_path_pptx_1 = "DB\\PPT JUNIO"
folder_path_pptx_2 = "DB\\DB VARIADO\\PPTX"

loaded_content = []
for file in os.listdir(folder_path_docx):
    if file.endswith(".docx"):
        file_path = os.path.join(folder_path_docx, file)
        loader = UnstructuredWordDocumentLoader(file_path)
        docx = loader.load()
        loaded_content.extend(docx)
for file in os.listdir(folder_path_txt):
    if file.endswith(".txt"):
        file_path = os.path.join(folder_path_txt, file)
        loader = TextLoader(file_path, encoding='utf-8')
        text = loader.load()
        loaded_content.extend(text)
for file in os.listdir(folder_path_pptx_1):
    if file.endswith(".pptx"):
        file_path = os.path.join(folder_path_pptx_1, file)
        loader = UnstructuredPowerPointLoader(file_path)
        slides_1 = loader.load()
        loaded_content.extend(slides_1)
for file in os.listdir(folder_path_pptx_2):
    if file.endswith(".pptx"):
        file_path = os.path.join(folder_path_pptx_2, file)
        loader = UnstructuredPowerPointLoader(file_path)
        slides_2 = loader.load()
        loaded_content.extend(slides_2)

embedding = OpenAIEmbeddings()
embeddings_content = []
for one_loaded_content in loaded_content:
    embedding_content = embedding.embed_query(one_loaded_content.page_content)
    embeddings_content.append(embedding_content)

db = DocArrayInMemorySearch.from_documents(loaded_content, embedding)
retriever = db.as_retriever(search_type="similarity", search_kwargs={"k": 3})
search = GoogleSearchAPIWrapper()

def custom_search(query):
    max_results = 3
    internet_results = search.run(query)[:max_results]
    return internet_results

chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(model_name="gpt-4", temperature=0),
    chain_type="map_reduce",
    retriever=retriever,
    return_source_documents=True,
    return_generated_question=True,
)

history = []
while True:
    query = input("Hola, soy Chatbot. ¿Qué te gustaría saber? ")
    internet_results = custom_search(query)
    combined_results = loaded_content + [internet_results]
    response = chain(
        {"question": query, "chat_history": history, "documents": combined_results})
    print(response["answer"])
    history.append(("system", query))
    history.append(("assistant", response["answer"]))
```
This is the error message I get: "The document does not provide information on...". So it seems the chain doesn't actually use the internet results, or something else is wrong (?)
Really appreciate your suggestion or your help!
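For reference, one pattern that is often suggested is to expose both sources as agent tools instead of concatenating the search results manually. A rough, untested sketch reusing the `retriever`, `search`, and `query` variables from the script above:
```python
from langchain.agents import initialize_agent, AgentType, Tool

tools = [
    Tool(
        name="knowledge-base",
        func=lambda q: "\n\n".join(
            d.page_content for d in retriever.get_relevant_documents(q)
        ),
        description="Answers questions from the internal document collection.",
    ),
    Tool(
        name="google-search",
        func=search.run,
        description="Searches the web for recent or external information.",
    ),
]
agent = initialize_agent(
    tools,
    ChatOpenAI(model_name="gpt-4", temperature=0),
    agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
)
print(agent.run(query))
```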
### Suggestion:
_No response_ | How to connect a Chatbot that has its own data but has also access to internet for search? | https://api.github.com/repos/langchain-ai/langchain/issues/8625/comments | 9 | 2023-08-02T11:32:47Z | 2024-07-25T13:33:09Z | https://github.com/langchain-ai/langchain/issues/8625 | 1,833,002,315 | 8,625 |
[
"langchain-ai",
"langchain"
] | ### Feature request
When I use RetrievalQA, I need to add to and reorder the content retrieved by the retriever:
```
qa = RetrievalQA.from_chain_type(llm=chat,
chain_type="stuff",
retriever=docsearch.as_retriever(search_kwargs={"k": 2}),
chain_type_kwargs={"prompt": PROMPT,"memory":memory })
```
I want the content retrieved by the retriever to be reordered by its "source" metadata.
The following code is what I currently use to manipulate the retrieved content to achieve this:
```
retriever = docsearch.as_retriever(search_kwargs={"k": 6})
search_result = retriever.get_relevant_documents(query)

def sort_paragraphs(paragraphs):
    sorted_paragraphs = sorted(paragraphs, key=lambda x: x.metadata["source"])
    return sorted_paragraphs

paragraphs = sort_paragraphs(search_result)
sorted_paragraphs = ""
for i in paragraphs:
    sorted_paragraphs = sorted_paragraphs + i.page_content + "\n"
```
How can I define a retriever that I can use in my chain to organize the retrieved content as I wish? For example, see the sketch below.
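A sketch of what I am after; the exact `BaseRetriever` hook and its signature depend on the LangChain version:
```python
from typing import List

from langchain.callbacks.manager import CallbackManagerForRetrieverRun
from langchain.schema import BaseRetriever, Document

class SortedBySourceRetriever(BaseRetriever):
    """Wraps another retriever and reorders its results by the 'source' metadata."""

    base_retriever: BaseRetriever

    def _get_relevant_documents(
        self, query: str, *, run_manager: CallbackManagerForRetrieverRun
    ) -> List[Document]:
        docs = self.base_retriever.get_relevant_documents(query)
        return sorted(docs, key=lambda d: d.metadata["source"])
```
It could then be passed into the chain as `retriever=SortedBySourceRetriever(base_retriever=docsearch.as_retriever(search_kwargs={"k": 6}))`.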
### Motivation
Some custom retrieval logic is required to seek optimal performance.
### Your contribution
I would like documentation or a tutorial on writing a custom retriever.
@hwchase17
https://twitter.com/hwchase17/status/1646272240202432512
I think LangChain needs a custom retriever abstraction.
| How to create a custom retriever | https://api.github.com/repos/langchain-ai/langchain/issues/8623/comments | 12 | 2023-08-02T10:30:14Z | 2024-04-02T09:29:33Z | https://github.com/langchain-ai/langchain/issues/8623 | 1,832,908,335 | 8,623 |
[
"langchain-ai",
"langchain"
] | ### System Info
The class `MarkdownHeaderTextSplitter` is not a `TextSplitter` and should implement all the corresponding methods.
```
class MarkdownHeaderTextSplitter:
...
```
@hwchase17 @eyurtsev
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.text_splitter import MarkdownHeaderTextSplitter

headers_to_split_on = [
    ("#", "Header 1"),
    ("##", "Header 2"),
    ("###", "Header 3"),
]
splitter = MarkdownHeaderTextSplitter(headers_to_split_on=headers_to_split_on)
splitter.split_documents(docs)  # fails: MarkdownHeaderTextSplitter has no split_documents
```
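For comparison, the call that does work today is `split_text` on a raw markdown string (a sketch; in recent versions it returns `Document` objects rather than plain text):
```python
markdown_text = "# Header 1\n\nSome intro text.\n\n## Header 2\n\nMore text."
md_header_splits = splitter.split_text(markdown_text)
```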
### Expected behavior
Accept the call | MarkdownHeaderTextSplitter is not a TextSplitter | https://api.github.com/repos/langchain-ai/langchain/issues/8620/comments | 6 | 2023-08-02T08:06:38Z | 2023-11-10T07:39:18Z | https://github.com/langchain-ai/langchain/issues/8620 | 1,832,660,433 | 8,620 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Recently Microsoft announced its [first iteration of running Llama using the Onnx format](https://github.com/microsoft/Llama-2-Onnx/tree/main). Hence, it would be awesome if LangChain added early support for Onnx Runtime models.
### Motivation
There are two reasons for this.
1. Onnx has long been a standard for running inference on CPU / GPU (Onnx GPU), so support for LLMs in the Onnx Runtime format is likely to move forward fast.
2. Current implementations of running this involve significant overhead; LangChain can provide the abstraction easily.
### Your contribution
I can start experimenting with whether this can be implemented using the existing LLM interface. However, the bottleneck is the .onnx-format weights, which have to be requested from Microsoft. I filled out the application and am waiting for approval. Let me know if we can work on this issue.
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.249
Python 3.11.2
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I follow the egs of openai_functions_agent at
https://python.langchain.com/docs/modules/agents/
https://python.langchain.com/docs/modules/agents/agent_types/openai_functions_agent
https://python.langchain.com/docs/modules/agents/how_to/custom-functions-with-openai-functions-agent
```
from langchain.chat_models import ChatOpenAI
from langchain.schema import SystemMessage
from langchain.agents import OpenAIFunctionsAgent
from langchain.agents import tool
from langchain.agents import AgentExecutor

llm = ChatOpenAI(temperature=0)

@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    import pdb  # execution never gets in here
    pdb.set_trace()
    return len(word)

tools = [get_word_length]
system_message = SystemMessage(content="You are very powerful assistant, but bad at calculating lengths of words")
prompt = OpenAIFunctionsAgent.create_prompt(system_message=system_message)
agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.run("how many letters in the word educa?")
```
But the openai_functions_agent does not invoke the tool as expected.
```
> Entering new AgentExecutor chain...
[{'role': 'system', 'content': 'You are very powerful assistant, but bad at calculating lengths of words'}, {'role': 'user', 'content': 'how many letters in the word educa?'}]
There are 5 letters in the word "educa".
> Finished chain.
```
I tried using other tools, but they were not used either. It seems that OPENAI_FUNCTIONS has bugs.
### Expected behavior
AgentType.OPENAI_FUNCTIONS should work as the docs show. | AgentType.OPENAI_FUNCTIONS did not work | https://api.github.com/repos/langchain-ai/langchain/issues/8618/comments | 2 | 2023-08-02T07:02:13Z | 2023-11-08T16:07:05Z | https://github.com/langchain-ai/langchain/issues/8618 | 1,832,564,675 | 8,618 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain: 0.0.246
python: 3.11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I want to get a summary of the chat history between the AI bot and the user.
- ChatOpenAI model gpt-3.5-turbo
- LLMChain
But I got the error "openai.error.InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?"
My Code
```python
memory = get_memory_from_chat_prompt(messages)
prompt = PromptTemplate(
    input_variables=SUMMARIZE_INPUT_VARIABLES,
    template=SUMMARIZE_CHAT_TEMPLATE
)
llm = ChatOpenAI(temperature=.1, model_name='gpt-3.5-turbo', max_tokens=1000)
human_input = """Please provide a summary. You can include the key points or main findings in a concise manner."""
answer_chain = LLMChain(llm=llm, prompt=prompt, memory=memory)
result = answer_chain.predict(human_input=human_input)
```
Error:
```
File "/Users/congle/working/bamboo/bot/app/package/aianswer.py", line 78, in get_summarize_from_messages
result = answer_chain.predict(human_input=human_input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/chains/llm.py", line 252, in predict
return self(kwargs, callbacks=callbacks)[self.output_key]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/chains/base.py", line 260, in __call__
final_outputs: Dict[str, Any] = self.prep_outputs(
^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/chains/base.py", line 354, in prep_outputs
self.memory.save_context(inputs, outputs)
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/memory/summary_buffer.py", line 60, in save_context
self.prune()
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/memory/summary_buffer.py", line 71, in prune
self.moving_summary_buffer = self.predict_new_summary(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/memory/summary.py", line 37, in predict_new_summary
return chain.predict(summary=existing_summary, new_lines=new_lines)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/chains/llm.py", line 252, in predict
return self(kwargs, callbacks=callbacks)[self.output_key]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/chains/base.py", line 258, in __call__
raise e
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/chains/base.py", line 252, in __call__
self._call(inputs, run_manager=run_manager)
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/chains/llm.py", line 92, in _call
response = self.generate([inputs], run_manager=run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/chains/llm.py", line 102, in generate
return self.llm.generate_prompt(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/llms/base.py", line 451, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/llms/base.py", line 582, in generate
output = self._generate_helper(
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/llms/base.py", line 488, in _generate_helper
raise e
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/llms/base.py", line 475, in _generate_helper
self._generate(
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/llms/openai.py", line 400, in _generate
response = completion_with_retry(
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/llms/openai.py", line 116, in completion_with_retry
return _completion_with_retry(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/tenacity/__init__.py", line 314, in iter
return fut.result()
^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.11/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.11/3.11.4_1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/langchain/llms/openai.py", line 114, in _completion_with_retry
return llm.client.create(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/openai/api_resources/completion.py", line 25, in create
return super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/openai/api_requestor.py", line 298, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/openai/api_requestor.py", line 700, in _interpret_response
self._interpret_response_line(
File "/Users/congle/.local/share/virtualenvs/bot-f4x5-mrU/lib/python3.11/site-packages/openai/api_requestor.py", line 763, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: This is a chat model and not supported in the v1/completions endpoint. Did you mean to use v1/chat/completions?
```
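From the traceback, the failing call is the summarization step inside the memory (`summary_buffer.py` -> `predict_new_summary`), which builds an `LLMChain` that hits `v1/completions`. A sketch of the workaround I would expect to help, assuming `get_memory_from_chat_prompt` builds a `ConversationSummaryBufferMemory` (the exact memory class is an assumption on my part):
```python
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationSummaryBufferMemory

# Give the memory's internal summarizer a chat model too, so its LLMChain
# calls v1/chat/completions instead of v1/completions.
memory = ConversationSummaryBufferMemory(
    llm=ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo"),
    max_token_limit=1000,
)
```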
### Expected behavior
I expect to get a summary of the chat history using the gpt-3.5-turbo model. Thanks for your help! | Invalid model when using chain | https://api.github.com/repos/langchain-ai/langchain/issues/8613/comments | 0 | 2023-08-02T05:17:12Z | 2023-08-02T07:13:12Z | https://github.com/langchain-ai/langchain/issues/8613 | 1,832,449,850 | 8,613 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I am following the instructions from this doc:
DOC: Structure answers with OpenAI functions
- https://python.langchain.com/docs/use_cases/question_answering/integrations/openai_functions_retrieval_qa
But this gives an error.
# Code
```python
from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.chat_models import AzureChatOpenAI
from langchain.chat_models import ChatOpenAI
from langchain.chains.combine_documents.stuff import StuffDocumentsChain
from langchain.prompts import PromptTemplate
from langchain.chains import create_qa_with_sources_chain

path_txt = r"C:\Users\a126291\OneDrive - AmerisourceBergen(ABC)\data\langchain\state_of_the_union.txt"

def get_config_dict():
    import os
    import yaml
    with open(os.path.expanduser('~/.config/config.yaml')) as fh:
        config = yaml.safe_load(fh)
    # openai
    keys = ["OPENAI_API_KEY", "OPENAI_API_TYPE", "OPENAI_API_BASE", "OPENAI_API_VERSION"]
    for key in keys:
        os.environ[key] = config.get(key)
    return config

config = get_config_dict()

#========= qa chain
loader = TextLoader(path_txt, encoding="utf-8")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
for i, text in enumerate(texts):
    text.metadata["source"] = f"{i}-pl"

embeddings = OpenAIEmbeddings(model="text-embedding-ada-002", chunk_size=1)
docsearch = Chroma.from_documents(texts, embeddings)
# vectorstore = Chroma.from_documents(texts, embeddings)
# retriever = vectorstore.as_retriever()

llm = AzureChatOpenAI(**config['kw_azure_llm'], temperature=0.4)

#------- query
qa_chain = create_qa_with_sources_chain(llm)
doc_prompt = PromptTemplate(
    template="Content: {page_content}\nSource: {source}",
    input_variables=["page_content", "source"],
)
final_qa_chain = StuffDocumentsChain(
    llm_chain=qa_chain,
    document_variable_name="context",
    document_prompt=doc_prompt,
)
retrieval_qa = RetrievalQA(
    retriever=docsearch.as_retriever(), combine_documents_chain=final_qa_chain
)

query = "What did the president say about russia"
retrieval_qa.run(query)
```
# Error: InvalidRequestError: Unrecognized request arguments supplied: function_call, functions
```bash
---------------------------------------------------------------------------
InvalidRequestError Traceback (most recent call last)
Cell In[36], line 69
64 retrieval_qa = RetrievalQA(
65 retriever=docsearch.as_retriever(), combine_documents_chain=final_qa_chain
66 )
68 query = "What did the president say about russia"
---> 69 retrieval_qa.run(query)
File ~\venv\py311openai\Lib\site-packages\langchain\chains\base.py:290, in Chain.run(self, callbacks, tags, *args, **kwargs)
288 if len(args) != 1:
289 raise ValueError("`run` supports only one positional argument.")
--> 290 return self(args[0], callbacks=callbacks, tags=tags)[_output_key]
292 if kwargs and not args:
293 return self(kwargs, callbacks=callbacks, tags=tags)[_output_key]
File ~\venv\py311openai\Lib\site-packages\langchain\chains\base.py:166, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
164 except (KeyboardInterrupt, Exception) as e:
165 run_manager.on_chain_error(e)
--> 166 raise e
167 run_manager.on_chain_end(outputs)
168 final_outputs: Dict[str, Any] = self.prep_outputs(
169 inputs, outputs, return_only_outputs
170 )
File ~\venv\py311openai\Lib\site-packages\langchain\chains\base.py:160, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
154 run_manager = callback_manager.on_chain_start(
155 dumpd(self),
156 inputs,
157 )
158 try:
159 outputs = (
--> 160 self._call(inputs, run_manager=run_manager)
161 if new_arg_supported
162 else self._call(inputs)
163 )
164 except (KeyboardInterrupt, Exception) as e:
165 run_manager.on_chain_error(e)
File ~\venv\py311openai\Lib\site-packages\langchain\chains\retrieval_qa\base.py:120, in BaseRetrievalQA._call(self, inputs, run_manager)
117 question = inputs[self.input_key]
119 docs = self._get_docs(question)
--> 120 answer = self.combine_documents_chain.run(
121 input_documents=docs, question=question, callbacks=_run_manager.get_child()
122 )
124 if self.return_source_documents:
125 return {self.output_key: answer, "source_documents": docs}
File ~\venv\py311openai\Lib\site-packages\langchain\chains\base.py:293, in Chain.run(self, callbacks, tags, *args, **kwargs)
290 return self(args[0], callbacks=callbacks, tags=tags)[_output_key]
292 if kwargs and not args:
--> 293 return self(kwargs, callbacks=callbacks, tags=tags)[_output_key]
295 if not kwargs and not args:
296 raise ValueError(
297 "`run` supported with either positional arguments or keyword arguments,"
298 " but none were provided."
299 )
File ~\venv\py311openai\Lib\site-packages\langchain\chains\base.py:166, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
164 except (KeyboardInterrupt, Exception) as e:
165 run_manager.on_chain_error(e)
--> 166 raise e
167 run_manager.on_chain_end(outputs)
168 final_outputs: Dict[str, Any] = self.prep_outputs(
169 inputs, outputs, return_only_outputs
170 )
File ~\venv\py311openai\Lib\site-packages\langchain\chains\base.py:160, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
154 run_manager = callback_manager.on_chain_start(
155 dumpd(self),
156 inputs,
157 )
158 try:
159 outputs = (
--> 160 self._call(inputs, run_manager=run_manager)
161 if new_arg_supported
162 else self._call(inputs)
163 )
164 except (KeyboardInterrupt, Exception) as e:
165 run_manager.on_chain_error(e)
File ~\venv\py311openai\Lib\site-packages\langchain\chains\combine_documents\base.py:84, in BaseCombineDocumentsChain._call(self, inputs, run_manager)
82 # Other keys are assumed to be needed for LLM prediction
83 other_keys = {k: v for k, v in inputs.items() if k != self.input_key}
---> 84 output, extra_return_dict = self.combine_docs(
85 docs, callbacks=_run_manager.get_child(), **other_keys
86 )
87 extra_return_dict[self.output_key] = output
88 return extra_return_dict
File ~\venv\py311openai\Lib\site-packages\langchain\chains\combine_documents\stuff.py:87, in StuffDocumentsChain.combine_docs(self, docs, callbacks, **kwargs)
85 inputs = self._get_inputs(docs, **kwargs)
86 # Call predict on the LLM.
---> 87 return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
File ~\venv\py311openai\Lib\site-packages\langchain\chains\llm.py:252, in LLMChain.predict(self, callbacks, **kwargs)
237 def predict(self, callbacks: Callbacks = None, **kwargs: Any) -> str:
238 """Format prompt with kwargs and pass to LLM.
239
240 Args:
(...)
250 completion = llm.predict(adjective="funny")
251 """
--> 252 return self(kwargs, callbacks=callbacks)[self.output_key]
File ~\venv\py311openai\Lib\site-packages\langchain\chains\base.py:166, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
164 except (KeyboardInterrupt, Exception) as e:
165 run_manager.on_chain_error(e)
--> 166 raise e
167 run_manager.on_chain_end(outputs)
168 final_outputs: Dict[str, Any] = self.prep_outputs(
169 inputs, outputs, return_only_outputs
170 )
File ~\venv\py311openai\Lib\site-packages\langchain\chains\base.py:160, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, include_run_info)
154 run_manager = callback_manager.on_chain_start(
155 dumpd(self),
156 inputs,
157 )
158 try:
159 outputs = (
--> 160 self._call(inputs, run_manager=run_manager)
161 if new_arg_supported
162 else self._call(inputs)
163 )
164 except (KeyboardInterrupt, Exception) as e:
165 run_manager.on_chain_error(e)
File ~\venv\py311openai\Lib\site-packages\langchain\chains\llm.py:92, in LLMChain._call(self, inputs, run_manager)
87 def _call(
88 self,
89 inputs: Dict[str, Any],
90 run_manager: Optional[CallbackManagerForChainRun] = None,
91 ) -> Dict[str, str]:
---> 92 response = self.generate([inputs], run_manager=run_manager)
93 return self.create_outputs(response)[0]
File ~\venv\py311openai\Lib\site-packages\langchain\chains\llm.py:102, in LLMChain.generate(self, input_list, run_manager)
100 """Generate LLM result from inputs."""
101 prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)
--> 102 return self.llm.generate_prompt(
103 prompts,
104 stop,
105 callbacks=run_manager.get_child() if run_manager else None,
106 **self.llm_kwargs,
107 )
File ~\venv\py311openai\Lib\site-packages\langchain\chat_models\base.py:167, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
159 def generate_prompt(
160 self,
161 prompts: List[PromptValue],
(...)
164 **kwargs: Any,
165 ) -> LLMResult:
166 prompt_messages = [p.to_messages() for p in prompts]
--> 167 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File ~\venv\py311openai\Lib\site-packages\langchain\chat_models\base.py:102, in BaseChatModel.generate(self, messages, stop, callbacks, tags, **kwargs)
100 except (KeyboardInterrupt, Exception) as e:
101 run_manager.on_llm_error(e)
--> 102 raise e
103 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
104 generations = [res.generations for res in results]
File ~\venv\py311openai\Lib\site-packages\langchain\chat_models\base.py:94, in BaseChatModel.generate(self, messages, stop, callbacks, tags, **kwargs)
90 new_arg_supported = inspect.signature(self._generate).parameters.get(
91 "run_manager"
92 )
93 try:
---> 94 results = [
95 self._generate(m, stop=stop, run_manager=run_manager, **kwargs)
96 if new_arg_supported
97 else self._generate(m, stop=stop)
98 for m in messages
99 ]
100 except (KeyboardInterrupt, Exception) as e:
101 run_manager.on_llm_error(e)
File ~\venv\py311openai\Lib\site-packages\langchain\chat_models\base.py:95, in <listcomp>(.0)
90 new_arg_supported = inspect.signature(self._generate).parameters.get(
91 "run_manager"
92 )
93 try:
94 results = [
---> 95 self._generate(m, stop=stop, run_manager=run_manager, **kwargs)
96 if new_arg_supported
97 else self._generate(m, stop=stop)
98 for m in messages
99 ]
100 except (KeyboardInterrupt, Exception) as e:
101 run_manager.on_llm_error(e)
File ~\venv\py311openai\Lib\site-packages\langchain\chat_models\openai.py:359, in ChatOpenAI._generate(self, messages, stop, run_manager, **kwargs)
351 message = _convert_dict_to_message(
352 {
353 "content": inner_completion,
(...)
356 }
357 )
358 return ChatResult(generations=[ChatGeneration(message=message)])
--> 359 response = self.completion_with_retry(messages=message_dicts, **params)
360 return self._create_chat_result(response)
File ~\venv\py311openai\Lib\site-packages\langchain\chat_models\openai.py:307, in ChatOpenAI.completion_with_retry(self, **kwargs)
303 @retry_decorator
304 def _completion_with_retry(**kwargs: Any) -> Any:
305 return self.client.create(**kwargs)
--> 307 return _completion_with_retry(**kwargs)
File ~\venv\py311openai\Lib\site-packages\tenacity\__init__.py:289, in BaseRetrying.wraps.<locals>.wrapped_f(*args, **kw)
287 @functools.wraps(f)
288 def wrapped_f(*args: t.Any, **kw: t.Any) -> t.Any:
--> 289 return self(f, *args, **kw)
File ~\venv\py311openai\Lib\site-packages\tenacity\__init__.py:379, in Retrying.__call__(self, fn, *args, **kwargs)
377 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
378 while True:
--> 379 do = self.iter(retry_state=retry_state)
380 if isinstance(do, DoAttempt):
381 try:
File ~\venv\py311openai\Lib\site-packages\tenacity\__init__.py:314, in BaseRetrying.iter(self, retry_state)
312 is_explicit_retry = fut.failed and isinstance(fut.exception(), TryAgain)
313 if not (is_explicit_retry or self.retry(retry_state)):
--> 314 return fut.result()
316 if self.after is not None:
317 self.after(retry_state)
File ~\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py:449, in Future.result(self, timeout)
447 raise CancelledError()
448 elif self._state == FINISHED:
--> 449 return self.__get_result()
451 self._condition.wait(timeout)
453 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
File ~\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py:401, in Future.__get_result(self)
399 if self._exception:
400 try:
--> 401 raise self._exception
402 finally:
403 # Break a reference cycle with the exception in self._exception
404 self = None
File ~\venv\py311openai\Lib\site-packages\tenacity\__init__.py:382, in Retrying.__call__(self, fn, *args, **kwargs)
380 if isinstance(do, DoAttempt):
381 try:
--> 382 result = fn(*args, **kwargs)
383 except BaseException: # noqa: B902
384 retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type]
File ~\venv\py311openai\Lib\site-packages\langchain\chat_models\openai.py:305, in ChatOpenAI.completion_with_retry.<locals>._completion_with_retry(**kwargs)
303 @retry_decorator
304 def _completion_with_retry(**kwargs: Any) -> Any:
--> 305 return self.client.create(**kwargs)
File ~\venv\py311openai\Lib\site-packages\openai\api_resources\chat_completion.py:25, in ChatCompletion.create(cls, *args, **kwargs)
23 while True:
24 try:
---> 25 return super().create(*args, **kwargs)
26 except TryAgain as e:
27 if timeout is not None and time.time() > start + timeout:
File ~\venv\py311openai\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py:153, in EngineAPIResource.create(cls, api_key, api_base, api_type, request_id, api_version, organization, **params)
127 @classmethod
128 def create(
129 cls,
(...)
136 **params,
137 ):
138 (
139 deployment_id,
140 engine,
(...)
150 api_key, api_base, api_type, api_version, organization, **params
151 )
--> 153 response, _, api_key = requestor.request(
154 "post",
155 url,
156 params=params,
157 headers=headers,
158 stream=stream,
159 request_id=request_id,
160 request_timeout=request_timeout,
161 )
163 if stream:
164 # must be an iterator
165 assert not isinstance(response, OpenAIResponse)
File ~\venv\py311openai\Lib\site-packages\openai\api_requestor.py:298, in APIRequestor.request(self, method, url, params, headers, files, stream, request_id, request_timeout)
277 def request(
278 self,
279 method,
(...)
286 request_timeout: Optional[Union[float, Tuple[float, float]]] = None,
287 ) -> Tuple[Union[OpenAIResponse, Iterator[OpenAIResponse]], bool, str]:
288 result = self.request_raw(
289 method.lower(),
290 url,
(...)
296 request_timeout=request_timeout,
297 )
--> 298 resp, got_stream = self._interpret_response(result, stream)
299 return resp, got_stream, self.api_key
File ~\venv\py311openai\Lib\site-packages\openai\api_requestor.py:700, in APIRequestor._interpret_response(self, result, stream)
692 return (
693 self._interpret_response_line(
694 line, result.status_code, result.headers, stream=True
695 )
696 for line in parse_stream(result.iter_lines())
697 ), True
698 else:
699 return (
--> 700 self._interpret_response_line(
701 result.content.decode("utf-8"),
702 result.status_code,
703 result.headers,
704 stream=False,
705 ),
706 False,
707 )
File ~\venv\py311openai\Lib\site-packages\openai\api_requestor.py:763, in APIRequestor._interpret_response_line(self, rbody, rcode, rheaders, stream)
761 stream_error = stream and "error" in resp.data
762 if stream_error or not 200 <= rcode < 300:
--> 763 raise self.handle_error_response(
764 rbody, rcode, resp.data, rheaders, stream_error=stream_error
765 )
766 return resp
InvalidRequestError: Unrecognized request arguments supplied: function_call, functions
```
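For reference, the `Unrecognized request arguments supplied: function_call, functions` message suggests the Azure OpenAI API version in use predates function-calling support. A sketch of the configuration change that may help (the exact version string is an assumption; check the Azure docs):
```python
import os

# Function calling on Azure OpenAI reportedly requires a newer API version.
os.environ["OPENAI_API_VERSION"] = "2023-07-01-preview"  # assumed minimum
```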
### Idea or request for content:
_No response_ | InvalidRequestError: Unrecognized request arguments supplied: function_call, functions | https://api.github.com/repos/langchain-ai/langchain/issues/8593/comments | 20 | 2023-08-01T19:06:48Z | 2024-04-30T16:31:10Z | https://github.com/langchain-ai/langchain/issues/8593 | 1,831,864,866 | 8,593 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello there,
I am currently attempting to add CombinedMemory to ConversationalRetrievalChain (CRC). However, I am unsure whether CombinedMemory is compatible with CRC. I have made a few modifications, but unfortunately I have run into errors that prevent it from running properly. I hope someone can help me resolve this issue. Thank you.
```python
from langchain.prompts import PromptTemplate
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferWindowMemory, CombinedMemory, ConversationSummaryMemory

chat_history = []
conv_memory = ConversationBufferWindowMemory(
    memory_key="chat_history_lines",
    input_key="input",
    output_key='answer',
    k=1
)
summary_memory = ConversationSummaryMemory(llm=turbo_llm, input_key="input", output_key='answer')
# Combined
memory = CombinedMemory(memories=[conv_memory, summary_memory])

_DEFAULT_TEMPLATE = """The following is a friendly conversation between a human and an AI. The AI is talkative and provides lots of specific details from its context. If the AI does not know the answer to a question, it truthfully says it does not know.
{context}
Summary of conversation:
{history}
Current conversation:
{chat_history_lines}
Human: {input}
AI:"""
PROMPT_ = PromptTemplate(
    input_variables=["history", "input", "context", "chat_history_lines"],
    template=_DEFAULT_TEMPLATE
)

qa = ConversationalRetrievalChain.from_llm(
    llm=turbo_llm,
    chain_type="stuff",
    memory=memory,
    retriever=retriever,
    return_source_documents=True,
    return_generated_question=True,
    combine_docs_chain_kwargs={'prompt': PROMPT_})

chat_history = []
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query, "chat_history": chat_history})
```
Error:
`ValueError: Missing some input keys: {'input'}`
When I replace "input" with "question" in `_DEFAULT_TEMPLATE` and `PROMPT_`,
the error becomes:
`ValueError: One output key expected, got dict_keys(['answer', 'source_documents', 'generated_question'])`
### Suggestion:
_No response_ | Missing some input keys: {'input'} when using Combined memory and ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/8590/comments | 4 | 2023-08-01T18:08:07Z | 2023-11-08T16:07:09Z | https://github.com/langchain-ai/langchain/issues/8590 | 1,831,784,534 | 8,590 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain = "^0.0.248"
### Who can help?
@hwchase17 @agola11
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os
import hashlib

import langchain
from gptcache import Cache  # type: ignore
from gptcache.manager.factory import manager_factory  # type: ignore
from gptcache.processor.pre import get_prompt  # type: ignore
from langchain.cache import GPTCache
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI

os.environ["OPENAI_API_KEY"] = "sk-..."
llm = ChatOpenAI(temperature=0)  # type: ignore

def get_hashed_name(name):
    return hashlib.sha256(name.encode()).hexdigest()

def init_gptcache(cache_obj: Cache, llm: str):
    hashed_llm = get_hashed_name(llm)
    cache_obj.init(
        pre_embedding_func=get_prompt,
        data_manager=manager_factory(manager="map", data_dir=f"map_cache_{hashed_llm}"),
    )

langchain.llm_cache = GPTCache(init_gptcache)

# The first time, it is not yet in cache, so it should take longer
response = llm("Tell me a joke")
print(f"Response 1: {response}")

# The second time, it is in the cache, so it should be faster
response = llm("Tell me a joke")
print(f"Response 2: {response}")
```
### Expected behavior
Response 1:
Why did the chicken cross the road?
To get to the other side.
Response 2:
Why did the chicken cross the road?
To get to the other side. | GPTCache implementation isnt working with ChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/8584/comments | 3 | 2023-08-01T14:51:37Z | 2023-12-04T16:06:03Z | https://github.com/langchain-ai/langchain/issues/8584 | 1,831,458,479 | 8,584 |
[
"langchain-ai",
"langchain"
] | ### System Info
Verified in Docker image python:3.9
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
How to reproduce:
1. docker run -it python:3.9 bash
2. pip install langchain
3. pip install openai
4. run script
```
from langchain.chat_models import ChatOpenAI
from langchain.prompts.chat import SystemMessage
chat_model = ChatOpenAI(n=3)
prompt = "What is the capital of France?"
message = SystemMessage(content=prompt)
responses = chat_model.predict_messages([message], n=3)
print(responses)
```
output:
content='The capital of France is Paris.' additional_kwargs={} example=False
### Expected behavior
Hello,
When running the script I expect to get 3 responses, because I set the parameter `n=3` both when initializing ChatOpenAI and in the call to predict_messages.
Still, the response is a single answer!
Please let me know how to use the parameter `n` correctly, or fix the current behavior!
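For reference, the only way I found to inspect multiple candidates is to call `generate` directly and read the generations list (a sketch, based on this version's `BaseChatModel.generate` signature):
```python
result = chat_model.generate([[message]])  # n=3 was set on the model above
for gen in result.generations[0]:
    print(gen.message.content)
```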
best regards,
LW | Cannot create "n" responses | https://api.github.com/repos/langchain-ai/langchain/issues/8581/comments | 4 | 2023-08-01T13:55:11Z | 2023-08-02T16:36:44Z | https://github.com/langchain-ai/langchain/issues/8581 | 1,831,347,798 | 8,581 |
[
"langchain-ai",
"langchain"
] | ### Feature request
`ConversationSummaryMemory` and `ConversationSummaryBufferMemory` can be used as memory within async conversational chains, but they themselves are fully blocking. In particular, `Chain.acall()` is async, but it calls `Chain.prep_outputs()`, which calls `self.memory.save_context()`. In the case of `ConversationSummaryBufferMemory`, `save_context()` calls `prune()`, which calls `SummarizerMixin.predict_new_summary()`, which creates an `LLMChain` and then calls `LLMChain.predict()`. We really need to be able to specify an async handler when we create the memory object, pass that through to the `LLMChain`, and ultimately `await LLMChain.apredict()`.
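A sketch of the direction we have in mind, using the pieces named above (the async method name is a proposal; it does not exist today):
```python
from langchain.chains import LLMChain
from langchain.schema import get_buffer_string

# Hypothetical async counterpart of SummarizerMixin.predict_new_summary,
# so ConversationSummaryBufferMemory.prune() could await instead of block.
async def apredict_new_summary(self, messages, existing_summary):
    chain = LLMChain(llm=self.llm, prompt=self.prompt)
    return await chain.apredict(
        summary=existing_summary, new_lines=get_buffer_string(messages)
    )
```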
### Motivation
The blocking nature of the `LLMChain.predict()` calls means that otherwise-async chains end up blocking regularly. If you use an ASGI server like `uvicorn`, this means that lots of requests can end up waiting. You can wrap `uvicorn` in a WSGI server like `gunicorn` so that there are multiple processes available, but blocking one whole process every time the memory object needs to summarize can still block any websocket users connected to the blocked process. It's just not good.
### Your contribution
We're going to have to solve this problem for ourselves, and we'd be happy to try to contribute a solution back to the community. However, we'd be first-time contributors and we're not sure whether there are others already making similar improvements. | Need support for async memory, especially for ConversationSummaryMemory and ConversationSummaryBufferMemory | https://api.github.com/repos/langchain-ai/langchain/issues/8580/comments | 7 | 2023-08-01T13:32:50Z | 2023-11-07T22:50:05Z | https://github.com/langchain-ai/langchain/issues/8580 | 1,831,301,181 | 8,580 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi everyone,
I'm having trouble working with the agent structure in Langchain.
I would like to have an agent that has memory and can return intermediate steps.
However, it doesn't work: when I make an agent executor, `return_intermediate_steps=True` makes it crash because the memory cannot read the output, and when I use the agent directly, `return_intermediate_steps=True` does nothing.
Do you know if there is a way to have an agent with memory that can return intermediate steps at the same time?
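One workaround that reportedly helps in similar setups is to tell the memory explicitly which output key to persist (a sketch, assuming `ConversationBufferMemory` and the `agent`/`tools` defined elsewhere):
```python
from langchain.memory import ConversationBufferMemory

# output_key tells the memory which of the two outputs to store,
# so the extra "intermediate_steps" key no longer breaks save_context
memory = ConversationBufferMemory(memory_key="chat_history", output_key="output")
agent_executor = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    memory=memory,
    return_intermediate_steps=True,
)
response = agent_executor({"input": "..."})
steps = response["intermediate_steps"]
```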
Thanks for your help
### Suggestion:
_No response_ | Issue: Agent with memory and intermediate steps | https://api.github.com/repos/langchain-ai/langchain/issues/8579/comments | 7 | 2023-08-01T13:22:18Z | 2024-02-13T05:59:52Z | https://github.com/langchain-ai/langchain/issues/8579 | 1,831,277,961 | 8,579 |
[
"langchain-ai",
"langchain"
] | ### System Info
### langchain Version **0.0.249**
### Python Version **3.8**
### Other notes
Using this in a notebook in Azure Synapse Studio
### Error
```
When the import happens, I get the below error message:
TypeError Traceback (most recent call last)
/tmp/ipykernel_17180/722847199.py in <module>
----> 1 from langchain.vectorstores import Pinecone
2 from langchain.embeddings.openai import OpenAIEmbeddings
3
4 import datetime
5 from datetime import date, timedelta
~/cluster-env/clonedenv/lib/python3.8/site-packages/langchain/__init__.py in <module>
4 from typing import Optional
5
----> 6 from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
7 from langchain.cache import BaseCache
8 from langchain.chains import (
~/cluster-env/clonedenv/lib/python3.8/site-packages/langchain/agents/__init__.py in <module>
1 """Interface for agents."""
----> 2 from langchain.agents.agent import (
3 Agent,
4 AgentExecutor,
5 AgentOutputParser,
~/cluster-env/clonedenv/lib/python3.8/site-packages/langchain/agents/agent.py in <module>
13 from pydantic import BaseModel, root_validator
14
---> 15 from langchain.agents.agent_iterator import AgentExecutorIterator
16 from langchain.agents.agent_types import AgentType
17 from langchain.agents.tools import InvalidTool
~/cluster-env/clonedenv/lib/python3.8/site-packages/langchain/agents/agent_iterator.py in <module>
19 )
20
---> 21 from langchain.callbacks.manager import (
22 AsyncCallbackManager,
23 AsyncCallbackManagerForChainRun,
~/cluster-env/clonedenv/lib/python3.8/site-packages/langchain/callbacks/__init__.py in <module>
1 """Callback handlers that allow listening to events in LangChain."""
2
----> 3 from langchain.callbacks.aim_callback import AimCallbackHandler
4 from langchain.callbacks.argilla_callback import ArgillaCallbackHandler
5 from langchain.callbacks.arize_callback import ArizeCallbackHandler
~/cluster-env/clonedenv/lib/python3.8/site-packages/langchain/callbacks/aim_callback.py in <module>
3
4 from langchain.callbacks.base import BaseCallbackHandler
----> 5 from langchain.schema import AgentAction, AgentFinish, LLMResult
6
7
~/cluster-env/clonedenv/lib/python3.8/site-packages/langchain/schema/__init__.py in <module>
1 from langchain.schema.agent import AgentAction, AgentFinish
2 from langchain.schema.document import BaseDocumentTransformer, Document
----> 3 from langchain.schema.memory import BaseChatMessageHistory, BaseMemory
4 from langchain.schema.messages import (
5 AIMessage,
~/cluster-env/clonedenv/lib/python3.8/site-packages/langchain/schema/memory.py in <module>
5
6 from langchain.load.serializable import Serializable
----> 7 from langchain.schema.messages import AIMessage, BaseMessage, HumanMessage
8
9
~/cluster-env/clonedenv/lib/python3.8/site-packages/langchain/schema/messages.py in <module>
136
137
--> 138 class HumanMessageChunk(HumanMessage, BaseMessageChunk):
139 pass
140
~/cluster-env/clonedenv/lib/python3.8/site-packages/pydantic/main.cpython-38-x86_64-linux-gnu.so in pydantic.main.ModelMetaclass.__new__()
~/cluster-env/clonedenv/lib/python3.8/abc.py in __new__(mcls, name, bases, namespace, **kwargs)
83 """
84 def __new__(mcls, name, bases, namespace, **kwargs):
---> 85 cls = super().__new__(mcls, name, bases, namespace, **kwargs)
86 _abc_init(cls)
87 return cls
TypeError: multiple bases have instance lay-out conflict
```
### Other installed packages in the system
```
Package                       Version
----------------------------- -------------------
absl-py 0.13.0
adal 1.2.7
adlfs 0.7.7
aiohttp 3.8.5
aiosignal 1.3.1
annotated-types 0.5.0
appdirs 1.4.4
applicationinsights 0.11.10
argon2-cffi 21.3.0
argon2-cffi-bindings 21.2.0
astor 0.8.1
astunparse 1.6.3
async-timeout 4.0.2
attrs 21.2.0
azure-common 1.1.27
azure-core 1.16.0
azure-datalake-store 0.0.51
azure-graphrbac 0.61.1
azure-identity 1.4.1
azure-mgmt-authorization 0.61.0
azure-mgmt-containerregistry 8.0.0
azure-mgmt-core 1.3.0
azure-mgmt-keyvault 2.2.0
azure-mgmt-resource 13.0.0
azure-mgmt-storage 11.2.0
azure-storage-blob 12.8.1
azure-synapse-ml-predict 1.0.0
azureml-core 1.34.0
azureml-dataprep 2.22.2
azureml-dataprep-native 38.0.0
azureml-dataprep-rslex 1.20.2
azureml-dataset-runtime 1.34.0
azureml-mlflow 1.34.0
azureml-opendatasets 1.34.0
azureml-synapse 0.0.1
azureml-telemetry 1.34.0
backcall 0.2.0
backports.functools-lru-cache 1.6.4
backports.tempfile 1.0
backports.weakref 1.0.post1
beautifulsoup4 4.9.3
bleach 5.0.1
blinker 1.4
bokeh 2.3.2
Brotli 1.0.9
brotlipy 0.7.0
cachetools 4.2.2
certifi 2021.5.30
cffi 1.14.5
chardet 4.0.0
charset-normalizer 3.2.0
click 8.0.1
cloudpickle 1.6.0
conda-package-handling 1.7.3
configparser 5.0.2
contextlib2 0.6.0.post1
cryptography 3.4.7
cycler 0.10.0
Cython 0.29.23
cytoolz 0.11.0
dash 1.20.0
dash-core-components 1.16.0
dash-cytoscape 0.2.0
dash-html-components 1.1.3
dash-renderer 1.9.1
dash-table 4.11.3
dask 2021.6.2
databricks-cli 0.12.1
dataclasses-json 0.5.14
debugpy 1.3.0
decorator 4.4.2
defusedxml 0.7.1
dill 0.3.4
distlib 0.3.6
distro 1.7.0
dnspython 2.4.1
docker 4.4.4
dotnetcore2 2.1.23
entrypoints 0.3
et-xmlfile 1.1.0
fastjsonschema 2.16.1
filelock 3.8.0
fire 0.4.0
Flask 2.0.1
Flask-Compress 0.0.0
flatbuffers 1.12
frozenlist 1.4.0
fsspec 2021.10.0
fsspec-wrapper 0.1.6
fusepy 3.0.1
future 0.18.2
gast 0.3.3
gensim 3.8.3
geographiclib 1.52
geopy 2.1.0
gevent 21.1.2
gitdb 4.0.7
GitPython 3.1.18
google-auth 1.32.1
google-auth-oauthlib 0.4.1
google-pasta 0.2.0
greenlet 1.1.0
grpcio 1.37.1
h5py 2.10.0
html5lib 1.1
hummingbird-ml 0.4.0
idna 2.10
imagecodecs 2021.3.31
imageio 2.9.0
importlib-metadata 4.6.1
importlib-resources 5.9.0
ipykernel 6.0.1
ipython 7.23.1
ipython-genutils 0.2.0
ipywidgets 7.6.3
isodate 0.6.0
itsdangerous 2.0.1
jdcal 1.4.1
jedi 0.18.0
jeepney 0.6.0
Jinja2 3.0.1
jmespath 0.10.0
joblib 1.0.1
jsonpickle 2.0.0
jsonschema 4.15.0
jupyter-client 6.1.12
jupyter-core 4.7.1
jupyterlab-pygments 0.2.2
jupyterlab-widgets 3.0.3
Keras-Applications 1.0.8
Keras-Preprocessing 1.1.2
keras2onnx 1.6.5
kiwisolver 1.3.1
koalas 1.8.0
KqlmagicCustom 0.1.114.post8
langchain 0.0.249
langsmith 0.0.16
liac-arff 2.5.0
library-metadata-cooker 0.0.7
lightgbm 3.2.1
lime 0.2.0.1
llvmlite 0.36.0
locket 0.2.1
loguru 0.7.0
lxml 4.6.5
Markdown 3.3.4
MarkupSafe 2.0.1
marshmallow 3.20.1
matplotlib 3.4.2
matplotlib-inline 0.1.2
mistune 2.0.4
mleap 0.17.0
mlflow-skinny 1.18.0
msal 1.12.0
msal-extensions 0.2.2
msrest 0.6.21
msrestazure 0.6.4
multidict 5.1.0
mypy 0.780
mypy-extensions 0.4.3
nbclient 0.6.7
nbconvert 7.0.0
nbformat 5.4.0
ndg-httpsclient 0.5.1
nest-asyncio 1.5.5
networkx 2.5.1
nltk 3.6.2
notebook 6.4.12
notebookutils 3.1.2-20230518.1
numba 0.53.1
numexpr 2.8.4
numpy 1.24.4
oauthlib 3.1.1
olefile 0.46
onnx 1.9.0
onnxconverter-common 1.7.0
onnxmltools 1.7.0
onnxruntime 1.7.2
openai 0.27.8
openapi-schema-pydantic 1.2.4
openpyxl 3.0.7
opt-einsum 3.3.0
packaging 21.0
pandas 1.2.3
pandasql 0.7.3
pandocfilters 1.5.0
parso 0.8.2
partd 1.2.0
pathspec 0.8.1
patsy 0.5.1
pexpect 4.8.0
pickleshare 0.7.5
Pillow 8.2.0
pinecone-client 2.2.2
pip 23.2.1
pkgutil_resolve_name 1.3.10
platformdirs 2.5.2
plotly 4.14.3
pmdarima 1.8.2
pooch 1.4.0
portalocker 1.7.1
prettytable 2.4.0
prometheus-client 0.14.1
prompt-toolkit 3.0.19
protobuf 3.15.8
psutil 5.8.0
ptyprocess 0.7.0
py4j 0.10.9
pyarrow 3.0.0
pyasn1 0.4.8
pyasn1-modules 0.2.8
pycairo 1.20.1
pycosat 0.6.3
pycparser 2.20
pydantic 1.8.2
pydantic_core 2.4.0
Pygments 2.9.0
PyGObject 3.40.1
PyJWT 2.1.0
pyodbc 4.0.30
pyOpenSSL 20.0.1
pyparsing 2.4.7
pyperclip 1.8.2
PyQt5 5.12.3
PyQt5_sip 4.19.18
PyQtChart 5.12
PyQtWebEngine 5.12.1
pyrsistent 0.18.1
PySocks 1.7.1
pyspark 3.1.2
python-dateutil 2.8.1
pytz 2021.1
pyu2f 0.1.5
PyWavelets 1.1.1
PyYAML 5.4.1
pyzmq 22.1.0
regex 2023.6.3
requests 2.31.0
requests-oauthlib 1.3.0
retrying 1.3.3
rsa 4.7.2
ruamel.yaml 0.17.4
ruamel.yaml.clib 0.2.6
ruamel-yaml-conda 0.15.100
SALib 1.3.11
scikit-image 0.18.1
scikit-learn 0.23.2
scipy 1.5.3
seaborn 0.11.1
SecretStorage 3.3.1
Send2Trash 1.8.0
setuptools 49.6.0.post20210108
shap 0.39.0
six 1.16.0
skl2onnx 1.8.0
sklearn-pandas 2.2.0
slicer 0.0.7
smart-open 5.1.0
smmap 3.0.5
soupsieve 2.2.1
SQLAlchemy 1.4.20
sqlanalyticsconnectorpy 1.0.1
statsmodels 0.12.2
synapseml-cognitive 0.10.2.dev1
synapseml-core 0.10.2.dev1
synapseml-deep-learning 0.10.2.dev1
synapseml-internal 0.0.0.dev1
synapseml-lightgbm 0.10.2.dev1
synapseml-opencv 0.10.2.dev1
synapseml-vw 0.10.2.dev1
tabulate 0.8.9
tenacity 8.2.2
tensorboard 2.4.1
tensorboard-plugin-wit 1.8.0
tensorflow 2.4.1
tensorflow-estimator 2.4.0
termcolor 1.1.0
terminado 0.15.0
textblob 0.15.3
threadpoolctl 2.1.0
tifffile 2021.4.8
tiktoken 0.4.0
tinycss2 1.1.1
toolz 0.11.1
torch 1.8.1
torchvision 0.9.1
tornado 6.1
tqdm 4.65.0
traitlets 5.0.5
typed-ast 1.4.3
typing_extensions 4.5.0
typing-inspect 0.8.0
urllib3 1.26.4
virtualenv 20.14.0
wcwidth 0.2.5
webencodings 0.5.1
websocket-client 1.1.0
Werkzeug 2.0.1
wheel 0.36.2
widgetsnbextension 3.5.2
wrapt 1.12.1
xgboost 1.4.0
XlsxWriter 3.0.3
yarl 1.6.3
zipp 3.5.0
zope.event 4.5.0
zope.interface 5.4.0
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
Go to Azure Synapse Studio
Open a Notebook
Select Pyspark(Python) as Language
Please note your node should run python 3.8
Then put the following below.
### Installed the below in the session.
```
!pip install --upgrade pip
!pip install tqdm
!pip install pinecone-client
!pip install typing-extensions==4.5.0
!pip install langchain
!pip install openai
!pip install tiktoken
```
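One thing that stands out in the environment above is `pydantic 1.8.2` (installed as a compiled extension, per the `pydantic/main.cpython-38-x86_64-linux-gnu.so` frame in the traceback) sitting next to `pydantic_core 2.4.0`. This "instance lay-out conflict" error is commonly associated with an old, C-compiled pydantic; a hedged thing to try before digging deeper:
```
!pip install "pydantic>=1.10,<2"
```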
### Here are my imports
```
from langchain.vectorstores import Pinecone
from langchain.embeddings.openai import OpenAIEmbeddings
import datetime
from datetime import date, timedelta
import time
import csv
import openai
import requests
import pandas as pd
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, lit, current_timestamp
from pyspark.sql import functions as F
from pyspark.sql import Row
from pyspark.sql.utils import AnalysisException
from pyspark.sql.types import StructType, StructField, StringType, TimestampType, LongType
from pyspark.sql.window import Window
from pyspark.sql import types as T
import concurrent.futures
import pinecone
import tiktoken
```
Run the notebook.
### Expected behavior
Imports should happen without any issues. | multiple bases have instance lay-out conflict on HumanMessageChunk class on langchain 0.0.249 | https://api.github.com/repos/langchain-ai/langchain/issues/8577/comments | 24 | 2023-08-01T12:08:08Z | 2024-08-08T16:06:49Z | https://github.com/langchain-ai/langchain/issues/8577 | 1,831,132,425 | 8,577
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
This repo was split up between the core package and some experimental package. While this is a good idea, it's kinda odd to have both packages in the same repo. Even further you're releasing versions for both of these packages within the same repo. So 5 days ago there was a 0.0.5 release while later on you had 0.0.249.
I've actually never seen such a structure before and it's messing up scripted automation to follow releases.
### Suggestion:
Create a new repo for `LangChain Experimental` and have those releases there. | Issue: Structure of this repo is confusing | https://api.github.com/repos/langchain-ai/langchain/issues/8572/comments | 1 | 2023-08-01T08:48:57Z | 2023-11-07T16:05:58Z | https://github.com/langchain-ai/langchain/issues/8572 | 1,830,771,603 | 8,572 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Enhance the retrieval callback hooks to include:
- document IDs from the underlying vector store
- document embeddings used during retrieval
### Motivation
I want to build a callback handler that enables LangChain users to visualize their data in [Phoenix](https://github.com/Arize-ai/phoenix), an open-source tool that provides debugging workflows for retrieval-augmented generation. At the moment, I am only able to get retrieved document text from the callback system, not the IDs or embeddings of the retrieved documents.
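Concretely, a sketch of what the enhanced hook could look like on the handler side (the `ids` and `embeddings` keyword arguments are this proposal, not an existing API):
```python
from typing import Any, List, Optional

from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import Document

class PhoenixCallbackHandler(BaseCallbackHandler):
    def on_retriever_end(
        self,
        documents: List[Document],
        *,
        ids: Optional[List[str]] = None,                 # proposed addition
        embeddings: Optional[List[List[float]]] = None,  # proposed addition
        **kwargs: Any,
    ) -> Any:
        ...  # log documents together with their vector store IDs and embeddings
```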
### Your contribution
I am willing to implement, test, and document this feature with guidance from the LangChain team. I am also happy to provide feedback on an implementation by the LangChain team by building an example callback handler using the enhancement retrieval hook functionality. | Enhance retrieval callback hooks to capture retrieved document IDs and embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/8569/comments | 4 | 2023-08-01T07:44:33Z | 2024-02-13T16:14:38Z | https://github.com/langchain-ai/langchain/issues/8569 | 1,830,662,153 | 8,569 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Make cosine similarity (or other retrieval metric, e.g., Euclidean distance) between query and retrieved documents available at the `retriever` callback hook.
Currently, these scores are computed during retrieval and are discarded before they become available to the retrieval callback hook:
https://github.com/langchain-ai/langchain/blob/125ae6d9dec440e04a60f63f0f4fc8411f482df8/libs/langchain/langchain/vectorstores/base.py#L500
https://github.com/langchain-ai/langchain/blob/125ae6d9dec440e04a60f63f0f4fc8411f482df8/libs/langchain/langchain/schema/retriever.py#L174
### Motivation
I want to build a callback handler that enables LangChain users to visualize their data in [Phoenix](https://github.com/Arize-ai/phoenix), an open-source tool that provides debugging workflows for retrieval-augmented generation. At the moment, it is not possible to get the similarity scores between queries and retrieved documents out of LangChain's callback system, for example, when using the `RetrievalQA` chain. Here is an [example notebook](https://github.com/Arize-ai/phoenix/blob/main/tutorials/langchain_pinecone_search_and_retrieval_tutorial.ipynb) where I sub-class `Pinecone` to get out the similarity scores:
```
from typing import Dict, List, Optional, Tuple

import pandas as pd
from langchain.schema import Document
from langchain.vectorstores import Pinecone

class PineconeWrapper(Pinecone):
query_text_to_document_score_tuples: Dict[str, List[Tuple[Document, float]]] = {}
def similarity_search_with_score(
self,
query: str,
k: int = 4,
filter: Optional[dict] = None,
namespace: Optional[str] = None,
) -> List[Tuple[Document, float]]:
document_score_tuples = super().similarity_search_with_score(
query=query,
k=k,
filter=filter,
namespace=namespace,
)
self.query_text_to_document_score_tuples[query] = document_score_tuples
return document_score_tuples
@property
def retrieval_dataframe(self) -> pd.DataFrame:
query_texts = []
document_texts = []
retrieval_ranks = []
scores = []
for query_text, document_score_tuples in self.query_text_to_document_score_tuples.items():
for retrieval_rank, (document, score) in enumerate(document_score_tuples):
query_texts.append(query_text)
document_texts.append(document.page_content)
retrieval_ranks.append(retrieval_rank)
scores.append(score)
return pd.DataFrame.from_dict(
{
"query_text": query_texts,
"document_text": document_texts,
"retrieval_rank": retrieval_ranks,
"score": scores,
}
)
```
I would like the LangChain callback system to support this use-case.
### Your contribution
I am willing to implement, test, and document this feature with guidance from the LangChain team. I am also happy to provide feedback on an implementation by the LangChain team by building an example callback handler using the enhancement retrieval hook functionality. | Enhance retrieval callback hooks to include information on cosine similarity scores or other retrieval metrics | https://api.github.com/repos/langchain-ai/langchain/issues/8567/comments | 6 | 2023-08-01T07:18:29Z | 2024-06-01T00:07:32Z | https://github.com/langchain-ai/langchain/issues/8567 | 1,830,623,474 | 8,567 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am currently working on the task of **article rewriting**, and both the QA chain and Summarize chain are not quite suitable for my task.
Since I need to rewrite **multiple "lengthy"** articles, I would like to know how to use the `chain_type` argument with `LLMChain`. Alternatively, are there any other methods to achieve segment-level rewriting? Thank you.
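(For context, `chain_type` belongs to helpers like `load_summarize_chain`/`load_qa_chain` rather than `LLMChain` itself.) A sketch of one way to do segment-level rewriting, assuming a map-reduce pass with a custom rewrite prompt; `llm` and `long_article` are placeholders:
```python
from langchain.chains.summarize import load_summarize_chain
from langchain.prompts import PromptTemplate
from langchain.text_splitter import RecursiveCharacterTextSplitter

rewrite_prompt = PromptTemplate.from_template(
    "Rewrite the following text, preserving its meaning:\n\n{text}\n\nREWRITTEN TEXT:"
)
splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=200)
docs = splitter.create_documents([long_article])  # long_article: the input text
chain = load_summarize_chain(
    llm,  # any LLM instance
    chain_type="map_reduce",
    map_prompt=rewrite_prompt,
    combine_prompt=rewrite_prompt,
)
rewritten = chain.run(docs)
```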
### Suggestion:
_No response_ | Issue: How to use chain_type args in LLMchain? | https://api.github.com/repos/langchain-ai/langchain/issues/8565/comments | 2 | 2023-08-01T06:55:55Z | 2023-11-08T16:07:15Z | https://github.com/langchain-ai/langchain/issues/8565 | 1,830,592,050 | 8,565 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Add embedding support to the callback system. Here is one approach I have in mind.
- [ ] Add `on_embedding_start` method on `CallbackManagerMixin` in `libs/langchain/langchain/callbacks/base.py`.
- [ ] Implement `EmbeddingManagerMixin` with `on_embedding_end` and `on_embedding_error` methods in `libs/langchain/langchain/callbacks/base.py` (sketched after this list).
- [ ] Add embedding callback hook to `Embeddings` abstract base class in `libs/langchain/langchain/embeddings/base.py`.
- [ ] Tweak concrete embeddings implementations in `libs/langchain/langchain/embeddings` as necessary.
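A sketch of the first two hooks from the checklist above (these names are the proposal, not an existing API):
```python
from typing import Any, List

class EmbeddingManagerMixin:
    """Proposed mixin, mirroring the existing RetrieverManagerMixin."""

    def on_embedding_end(self, embeddings: List[List[float]], **kwargs: Any) -> Any:
        """Run when embedding ends."""

    def on_embedding_error(self, error: BaseException, **kwargs: Any) -> Any:
        """Run when embedding errors."""
```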
One minimally invasive approach would be:
- Implement concrete `embed_documents`, `embed_query`, `aembed_documents`, and `aembed_query` methods on the abstract `Embeddings` base class that contain the embeddings callback hook. Add abstract methods `_embed_documents` and `_embed_query` methods and unimplemented `_aembed_documents` and `_aembed_query` methods to the base class.
- Rename existing concrete implementations of `embed_documents`, `embed_query`, `aembed_documents`, and `aembed_query` to `_embed_documents`, `_embed_query`, `_aembed_documents`, and `_aembed_query`.
### Motivation
Embeddings are useful for LLM application monitoring and debugging. I want to build a callback handler that enables LangChain users to visualize their data in [Phoenix](https://github.com/Arize-ai/phoenix), an open-source tool that provides debugging workflows for retrieval-augmented generation. At the moment, it is not possible to get the query embeddings out of LangChain's callback system, for example, when using the `RetrievalQA` chain. Here is an [example notebook](https://github.com/Arize-ai/phoenix/blob/main/tutorials/langchain_pinecone_search_and_retrieval_tutorial.ipynb) where I sub-class `OpenAIEmbeddings` to get out the embedding data:
```
from typing import Dict, List, Optional

import numpy as np
import pandas as pd
from langchain.embeddings import OpenAIEmbeddings

class OpenAIEmbeddingsWrapper(OpenAIEmbeddings):
"""
A wrapper around OpenAIEmbeddings that stores the query and document
embeddings.
"""
query_text_to_embedding: Dict[str, List[float]] = {}
document_text_to_embedding: Dict[str, List[float]] = {}
def embed_query(self, text: str) -> List[float]:
embedding = super().embed_query(text)
self.query_text_to_embedding[text] = embedding
return embedding
def embed_documents(self, texts: List[str], chunk_size: Optional[int] = 0) -> List[List[float]]:
embeddings = super().embed_documents(texts, chunk_size)
for text, embedding in zip(texts, embeddings):
self.document_text_to_embedding[text] = embedding
return embeddings
@property
def query_embedding_dataframe(self) -> pd.DataFrame:
return self._convert_text_to_embedding_map_to_dataframe(self.query_text_to_embedding)
@property
def document_embedding_dataframe(self) -> pd.DataFrame:
return self._convert_text_to_embedding_map_to_dataframe(self.document_text_to_embedding)
@staticmethod
def _convert_text_to_embedding_map_to_dataframe(
text_to_embedding: Dict[str, List[float]]
) -> pd.DataFrame:
texts, embeddings = map(list, zip(*text_to_embedding.items()))
embedding_arrays = [np.array(embedding) for embedding in embeddings]
return pd.DataFrame.from_dict(
{
"text": texts,
"text_vector": embedding_arrays,
}
)
```
I would like the LangChain callback system to support this use-case.
This feature has been [requested for TypeScript](https://github.com/hwchase17/langchainjs/issues/586) and has an [open PR](https://github.com/hwchase17/langchainjs/pull/1859). An additional motivation is to maintain parity with the TypeScript library.
### Your contribution
I am willing to implement, test, and document this feature with guidance from the LangChain team. I am also happy to provide feedback on an implementation by the LangChain team by building an example callback handler using the embeddings hook. | Add callback support for embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/8564/comments | 2 | 2023-08-01T06:13:53Z | 2023-11-07T16:06:08Z | https://github.com/langchain-ai/langchain/issues/8564 | 1,830,536,926 | 8,564 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Currently, langchain supports petals, but I think we should also support using a petals API endpoint. https://github.com/petals-infra/chat.petals.dev
### Motivation
The idea here is that users don't need to run the base application on their system, and can just use the API directly.
I think this is useful as a developer for speed, and just testing things quickly. I also think it would be the easiest and most reliable way for any langchain user to get access to a high quality LLM on low-spec hardware.
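To make the shape concrete, a sketch of a custom LLM wrapper over such an endpoint; the URL path, payload fields, and response field below are assumptions about chat.petals.dev, not a confirmed API:
```python
from typing import Any, List, Optional

import requests
from langchain.llms.base import LLM

class PetalsAPILLM(LLM):
    endpoint: str = "https://chat.petals.dev/api/v1/generate"  # assumed endpoint
    model: str = "petals-team/StableBeluga2"                   # assumed model id

    @property
    def _llm_type(self) -> str:
        return "petals_api"

    def _call(self, prompt: str, stop: Optional[List[str]] = None, **kwargs: Any) -> str:
        resp = requests.post(
            self.endpoint,
            data={"model": self.model, "inputs": prompt, "max_new_tokens": 256},
        )
        resp.raise_for_status()
        return resp.json()["outputs"]  # assumed response field
```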
### Your contribution
I am happy to add this myself if I can be guided on what parts of the code need changed. | Petals API Support | https://api.github.com/repos/langchain-ai/langchain/issues/8563/comments | 2 | 2023-08-01T05:38:35Z | 2023-11-28T11:01:45Z | https://github.com/langchain-ai/langchain/issues/8563 | 1,830,501,646 | 8,563 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Most databases include comment metadata on both tables and columns in a table. It would be nice to be able to pass this additional context to the LLM to get a better response.
The implementation could include adding two parameters to the [SQLDatabase](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/utilities/sql_database.py) object. For example:
- `table_metadata_sql` (perhaps `table_comment_sql`) runs a query and expects a table with two columns `table_name` and `comment`
- `column_metadata_sql` (perhaps `column_comment_sql`) runs a query and expects a table with three columns `table_name`, `col_name`, and `comment`
Perhaps these two params could be combined into a single `metadata_sql` which returns a four-column table with `table_name`, `table_comment`, `column_name`, and `column_comment` (an example follows below).
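To make it concrete, a sketch of how the two proposed parameters might be supplied on Postgres; the parameter names are this proposal's (they do not exist yet), and the queries assume `pg_catalog`'s comment functions:
```python
from langchain.utilities import SQLDatabase

db = SQLDatabase.from_uri(
    "postgresql://user:pass@host/db",
    # hypothetical parameters proposed above
    table_metadata_sql="""
        SELECT c.relname AS table_name, obj_description(c.oid) AS comment
        FROM pg_class c
        JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE c.relkind = 'r' AND n.nspname = 'public'
    """,
    column_metadata_sql="""
        SELECT c.relname AS table_name, a.attname AS col_name,
               col_description(c.oid, a.attnum) AS comment
        FROM pg_class c
        JOIN pg_attribute a ON a.attrelid = c.oid AND a.attnum > 0
        WHERE c.relkind = 'r'
    """,
)
```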
### Motivation
The main motivation behind this is to provide the LLM additional context beyond the CREATE TABLE and sample rows. Although this will be a more costly request (more tokens), I believe in many instances will lead to better SQL being generated. I also want to encourage better documentation of database objects in the data warehouse or data lake.
### Your contribution
Happy to submit a PR for this. Please weigh in on any design decisions:
- how many parameters to use 1 or 2?
- where to include the comments in the prompt, part of the CREATE TABLE as SQL comments? | Add metadata parameter(s) to SQLDatabase class | https://api.github.com/repos/langchain-ai/langchain/issues/8558/comments | 2 | 2023-08-01T01:08:42Z | 2023-11-07T16:06:18Z | https://github.com/langchain-ai/langchain/issues/8558 | 1,830,288,016 | 8,558 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain: 0.0.248
Python: 3.9.17
OS version: Linux 6.1.27-43.48.amzn2023.x86_64
### Who can help?
I will submit a PR for a solution to this problem
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code:
```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.document_loaders import UnstructuredMarkdownLoader
from langchain.text_splitter import CharacterTextSplitter
def testElement():
loader = UnstructuredMarkdownLoader(
"filepath", mode="elements")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=500, chunk_overlap=0)
split_docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_documents(split_docs, embeddings)
```
Also need to have a link format in the markdown file to be load, for example:
```
- [Google Developer Documentation Style Guide](https://developers.google.com/style)
```
Error Message:
```
138 # isinstance(True, int) evaluates to True, so we need to check for bools separately
139 if not isinstance(value, (str, int, float)) or isinstance(value, bool):
--> 140 raise ValueError(
141 f"Expected metadata value to be a str, int, or float, got {value} which is a {type(value)}"
142 )
ValueError: Expected metadata value to be a str, int, or float, got [{'text': 'Git', 'url': '#git'}] which is a <class 'list'>
```
### Expected behavior
I expect to see the split documents loaded into Chroma, however, this raise error for not passing type check for metadata. | ValueError: Expected metadata value to be a str, int, or float, got [{'text': 'Git', 'url': '#git'}] which is a <class 'list'> when storing into Chroma vector stores using using element mode of UnstructuredMarkdownLoader | https://api.github.com/repos/langchain-ai/langchain/issues/8556/comments | 17 | 2023-08-01T00:31:42Z | 2024-04-14T15:47:48Z | https://github.com/langchain-ai/langchain/issues/8556 | 1,830,261,522 | 8,556 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python: v3.11
Langchain: v0.0.248
### Who can help?
I'll submit a PR for it tonight, just wanted to get the Issue in before.
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [x] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Background
I'm using Azure Cognitive Search as my vector store. In `azuresearch`, if you call `add_texts` with no metadata, you get an exception
Steps to reproduce behavior:
1. create AzureSearch object
2. call `add_texts` without specifying the metadata parameter
3. You'll get an error
> UnboundLocalError: cannot access local variable 'additional_fields' where it is not associated with a value
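A sketch of the likely shape of the fix, assuming the traceback means `additional_fields` is only assigned inside the metadata branch of `add_texts` (the filtering expression here is illustrative, not the actual source):
```python
# hypothetical excerpt from AzureSearch.add_texts
additional_fields = {}  # initialize unconditionally
if metadata:
    additional_fields = {
        k: v for k, v in metadata.items()
        if k in {field.name for field in self.fields}
    }
```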
### Expected behavior
No error occurs; the texts are added to the vector store. | Vector Store: Azure Cognitive Search :: add_texts throws an error if called with no metadata | https://api.github.com/repos/langchain-ai/langchain/issues/8544/comments | 2 | 2023-07-31T21:13:43Z | 2023-11-06T16:05:33Z | https://github.com/langchain-ai/langchain/issues/8544 | 1,830,035,282 | 8,544
[
"langchain-ai",
"langchain"
] | ### System Info
**LangChain:** 0.0.248
**Python:** 3.10.10
**OS version:** Linux 5.10.178-162.673.amzn2.x86_64
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
**Code:**
```
try:
with get_openai_callback() as cb:
llm_chain = LLMChain(llm=llm, prompt=prompt_main)
all_text = str(template) + str(prompt) + str(usescases) + str(transcript)
threshold = (llm.get_num_tokens(text=all_text) + 800)
dataframe_copy.loc[index, "Total Tokens"] = threshold
if int(threshold) <= 4000:
chatgpt_output = llm_chain.run({"prompt":prompt, "use_cases_dictionary":usescases, "transcript":transcript})
chatgpt_output = text_post_processing(chatgpt_output)
dataframe_copy.loc[index, "ChatGPT Output"] = chatgpt_output
dataframe_copy.loc[index, "Cost (USD)"] = cb.total_cost
else:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
except Exception as e:
dataframe_copy.loc[index, "ChatGPT Output"] = " "
dataframe_copy.loc[index, "Cost (USD)"] = " "
continue
```
**Error Message:**
`Retrying langchain.chat_models.openai.ChatOpenAI.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} {'Date': 'Mon, 31 Jul 2023 20:24:53 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ef889a50eaca7f3-SYD', 'alt-svc': 'h3=":443"; ma=86400'}.
Error in OpenAICallbackHandler.on_retry callback: 'OpenAICallbackHandler' object has no attribute 'on_retry'`

### Expected behavior
I went through the callback [documentation ](https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html) and yes the "on_retry" method wasn't included over there. So I guess the team needs to modify the core code for OpenAICallbackHandler because it's calling "on_retry" for some reason. | Error: 'OpenAICallbackHandler' object has no attribute 'on_retry'` | https://api.github.com/repos/langchain-ai/langchain/issues/8542/comments | 3 | 2023-07-31T21:01:43Z | 2023-08-14T23:45:18Z | https://github.com/langchain-ai/langchain/issues/8542 | 1,830,021,223 | 8,542 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.sql_database.SQLDatabase.html
### Idea or request for content:
Documentation is missing, although I can import this class in the latest version. | DOC: Missing documentation langchain.utilities.SQLDatabase | https://api.github.com/repos/langchain-ai/langchain/issues/8535/comments | 1 | 2023-07-31T19:01:18Z | 2023-11-06T16:05:38Z | https://github.com/langchain-ai/langchain/issues/8535 | 1,829,840,969 | 8,535
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain ==0.0.248
Platform Windows 10
Python == 3.10.9
Whenever I use `from langchain.schema import HumanMessage`, I get the error:
**ImportError: cannot import name 'HumanMessage' from 'langchain.schema'**
I have tried updating llama-index, but I am still getting the same error.
@agola11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.llms import OpenAI
llm = OpenAI(openai_api_key="key")
from langchain.schema import HumanMessage
### Expected behavior
It should work | Cannot import name 'HumanMessage' from 'langchain.schema' | https://api.github.com/repos/langchain-ai/langchain/issues/8527/comments | 8 | 2023-07-31T17:47:47Z | 2023-08-01T16:35:32Z | https://github.com/langchain-ai/langchain/issues/8527 | 1,829,736,076 | 8,527 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
# assumes CustomPromptTemplate, CustomOutputParser, MyCustomHandler,
# template, memory and new_payload are defined elsewhere
search = SerpAPIWrapper()
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events",
        # return_direct=True
    ),
]
prompt = CustomPromptTemplate(
    template=template,
    tools=tools,
    input_variables=["input", "intermediate_steps", "chat_history"],
)
output_parser = CustomOutputParser()
llm = ChatOpenAI(temperature=0, model_name="gpt-4-0613", streaming=True,
                 callbacks=[MyCustomHandler(new_payload=new_payload)])
llm_chain = LLMChain(llm=llm, prompt=prompt)
agent = LLMSingleActionAgent(llm_chain=llm_chain, output_parser=output_parser,
                             stop=["\nObservation:"],
                             allowed_tools=[tool.name for tool in tools])
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent,
                                                    tools=tools,
                                                    verbose=True, memory=memory,
                                                    early_stopping_method="generate")
```
### Suggestion:
_No response_ | how to Stream Search tool responses | https://api.github.com/repos/langchain-ai/langchain/issues/8526/comments | 2 | 2023-07-31T17:26:05Z | 2023-11-16T16:06:41Z | https://github.com/langchain-ai/langchain/issues/8526 | 1,829,703,530 | 8,526 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am just trying to from langchain import LLMMathChain, SerpAPIWrapper, SQLDatabase, SQLDatabaseChain
Traceback (most recent call last):
File "/home/huangj/01_LangChain/LangChainCHSample/05_05_SQL_Chain.py", line 1, in <module>
from langchain import OpenAI, SQLDatabase, SQLDatabaseChain
ImportError: cannot import name 'SQLDatabaseChain' from 'langchain' (/home/huangj/01_LangChain/langchain_env/lib/python3.8/site-packages/langchain/__init__.py)
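If this is fallout from the recent split of `SQLDatabaseChain` into the experimental package, the import presumably becomes (assuming `langchain-experimental` is installed):
```python
# pip install langchain-experimental
from langchain.llms import OpenAI
from langchain.utilities import SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain
```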
### Suggestion:
_No response_ | Issue: ImportError: cannot import name 'SQLDatabaseChain' from 'langchain' | https://api.github.com/repos/langchain-ai/langchain/issues/8524/comments | 9 | 2023-07-31T16:53:56Z | 2024-01-26T19:04:33Z | https://github.com/langchain-ai/langchain/issues/8524 | 1,829,657,483 | 8,524 |
[
"langchain-ai",
"langchain"
] | ### Feature request
MMR Support for Vertex AI Matching Engine
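For reference, a sketch of the MMR retriever surface other LangChain vector stores already expose, which Matching Engine would ideally match:
```python
retriever = vector_store.as_retriever(  # vector_store: a MatchingEngine instance
    search_type="mmr",
    search_kwargs={"k": 5, "fetch_k": 20, "lambda_mult": 0.5},
)
docs = retriever.get_relevant_documents("query")
```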
### Motivation
The results of Matching Engine are not optimal
### Your contribution
MMR Support for Vertex AI Matching Engine
| MMR Support for Matching Engine | https://api.github.com/repos/langchain-ai/langchain/issues/8514/comments | 1 | 2023-07-31T13:08:29Z | 2023-11-06T16:05:43Z | https://github.com/langchain-ai/langchain/issues/8514 | 1,829,156,986 | 8,514 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
When I want to use OpenAI, I install it with the command `pip3 install openai`, but I really want to use ChatGLM. When I run `pip3 install chartglm`, it does not work. Please help answer this question.
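If the goal is the ChatGLM model: note the command above also contains a typo (`chartglm`), but in any case there is no such pip package; LangChain instead talks to a locally served ChatGLM API endpoint. A sketch, assuming the `ChatGLM` LLM wrapper and a server on port 8000:
```python
from langchain.llms import ChatGLM

llm = ChatGLM(endpoint_url="http://127.0.0.1:8000", max_token=2048)
print(llm("Hello"))
```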
### Suggestion:
_No response_ | Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/8512/comments | 4 | 2023-07-31T11:47:12Z | 2023-11-07T16:06:23Z | https://github.com/langchain-ai/langchain/issues/8512 | 1,829,012,261 | 8,512 |
[
"langchain-ai",
"langchain"
] | ### System Info
I am trying to use `RedisChatMessageHistory` within an agent, but I'm encountering this error:

My redis url I am using looks like that:
`redis_url = "redis+sentinel://:password@host:port/service_name/db"`
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.memory import ConversationBufferWindowMemory, RedisChatMessageHistory

message_history = RedisChatMessageHistory(url=redis_url, ttl=600, session_id="test_id")
message_history.add_user_message("hi!")
message_history.add_ai_message("whats up?")
memory = ConversationBufferWindowMemory(memory_key="memory", chat_memory=message_history, return_messages=True, k=15)
```
### Expected behavior
See on Redis the messages I added in my code. | Add memory to an Agent using RedisChatMessageHistory with sentinels throwing an error | https://api.github.com/repos/langchain-ai/langchain/issues/8511/comments | 3 | 2023-07-31T09:34:40Z | 2023-11-08T16:07:19Z | https://github.com/langchain-ai/langchain/issues/8511 | 1,828,789,616 | 8,511 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
So it's possible to parse "complex" objects with
```python
from langchain.output_parsers import ResponseSchema, StructuredOutputParser

response_schemas = [
ResponseSchema(name="date", description="The date of the event"),
ResponseSchema(name="place", description="The place where the event will happen"),
]
output_parser = StructuredOutputParser.from_response_schemas(response_schemas)
```
and
```python
from pydantic import BaseModel, Field, validator
from langchain.output_parsers import PydanticOutputParser

class Pair_of_numbers(BaseModel):
L: int = Field(description="A number")
R: int = Field(description="A number that ends in 2, 3 or 4")
@validator("R")
def question_ends_with_question_mark(cls, field):
if int(str(field)[-1]) not in [2, 3, 4]:
raise ValueError("No bro :(")
return field
parser = PydanticOutputParser(pydantic_object=Pair_of_numbers)
```
But I can't seem to find information on how to parse a list of custom objects/dicts? There is the [list parser](https://python.langchain.com/docs/modules/model_io/output_parsers/comma_separated), but this is just for simple strings. It's possible there is a way to achieve this wrapping a pydantic `BaseModel` in a python `List`, but so far I've had no luck 😫
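One pattern that should work here is wrapping the item model in a container model whose field is a `List` (a sketch):
```python
from typing import List

from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field

class Event(BaseModel):
    date: str = Field(description="The date of the event")
    place: str = Field(description="The place where the event will happen")

class EventList(BaseModel):
    events: List[Event] = Field(description="All events mentioned in the text")

parser = PydanticOutputParser(pydantic_object=EventList)
# get_format_instructions() then describes a JSON object whose "events"
# key holds a list of {date, place} objects
```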
### Suggestion:
_No response_ | Issue: output_parsers to work with lists of custom objects? | https://api.github.com/repos/langchain-ai/langchain/issues/8510/comments | 3 | 2023-07-31T09:34:19Z | 2024-07-14T04:12:39Z | https://github.com/langchain-ai/langchain/issues/8510 | 1,828,789,032 | 8,510 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I did not find this page on https://python.langchain.com/docs/get_started/introduction.html
I'm just wondering, what's the motive?
### Idea or request for content:
_No response_ | DOC: why remove concepts.md from the latest document page | https://api.github.com/repos/langchain-ai/langchain/issues/8506/comments | 1 | 2023-07-31T07:18:44Z | 2023-11-06T16:05:58Z | https://github.com/langchain-ai/langchain/issues/8506 | 1,828,562,645 | 8,506 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain/document_loaders/web_base.py works for me only when I change:
```
return await response.text()
```
with:
```
body = await response.read()
return body.decode('utf-8', errors='ignore')
```
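An equivalent one-liner may also work, assuming the installed aiohttp exposes the `errors` argument on `text()`:
```
return await response.text(errors="ignore")
```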
otherwise:
Unfortunately, the code produces an error:
/home/codespace/.python/current/bin/python3 /workspaces/b3rn_zero_ai/notebooks/ignite_vectorstore.py
Fetching pages: 13%|###8 | 33/256 [00:03<00:19, 11.18it/s]Traceback (most recent call last):
File "/workspaces/b3rn_zero_ai/notebooks/ignite_vectorstore.py", line 68, in <module>
documents = loader.load()
File "/home/codespace/.python/current/lib/python3.10/site-packages/langchain/document_loaders/sitemap.py", line 142, in load
results = self.scrape_all([el["loc"].strip() for el in els if "loc" in el])
File "/home/codespace/.python/current/lib/python3.10/site-packages/langchain/document_loaders/web_base.py", line 168, in scrape_all
results = asyncio.run(self.fetch_all(urls))
File "/home/codespace/.local/lib/python3.10/site-packages/nest_asyncio.py", line 35, in run
return loop.run_until_complete(task)
File "/home/codespace/.local/lib/python3.10/site-packages/nest_asyncio.py", line 90, in run_until_complete
return f.result()
File "/home/codespace/.python/current/lib/python3.10/asyncio/futures.py", line 201, in result
raise self._exception.with_traceback(self._exception_tb)
File "/home/codespace/.python/current/lib/python3.10/asyncio/tasks.py", line 232, in __step
result = coro.send(None)
File "/home/codespace/.python/current/lib/python3.10/site-packages/langchain/document_loaders/web_base.py", line 148, in fetch_all
return await tqdm_asyncio.gather(
File "/home/codespace/.python/current/lib/python3.10/site-packages/tqdm/asyncio.py", line 79, in gather
res = [await f for f in cls.as_completed(ifs, loop=loop, timeout=timeout,
File "/home/codespace/.python/current/lib/python3.10/site-packages/tqdm/asyncio.py", line 79, in <listcomp>
res = [await f for f in cls.as_completed(ifs, loop=loop, timeout=timeout,
File "/home/codespace/.python/current/lib/python3.10/asyncio/tasks.py", line 571, in _wait_for_one
return f.result() # May raise f.exception().
File "/home/codespace/.python/current/lib/python3.10/asyncio/futures.py", line 201, in result
raise self._exception.with_traceback(self._exception_tb)
File "/home/codespace/.python/current/lib/python3.10/asyncio/tasks.py", line 234, in __step
result = coro.throw(exc)
File "/home/codespace/.python/current/lib/python3.10/site-packages/tqdm/asyncio.py", line 76, in wrap_awaitable
return i, await f
File "/home/codespace/.python/current/lib/python3.10/asyncio/futures.py", line 285, in __await__
yield self # This tells Task to wait for completion.
File "/home/codespace/.python/current/lib/python3.10/asyncio/tasks.py", line 304, in __wakeup
future.result()
File "/home/codespace/.python/current/lib/python3.10/asyncio/futures.py", line 201, in result
raise self._exception.with_traceback(self._exception_tb)
File "/home/codespace/.python/current/lib/python3.10/asyncio/tasks.py", line 232, in __step
result = coro.send(None)
File "/home/codespace/.python/current/lib/python3.10/site-packages/langchain/document_loaders/web_base.py", line 136, in _fetch_with_rate_limit
return await self._fetch(url)
File "/home/codespace/.python/current/lib/python3.10/site-packages/langchain/document_loaders/web_base.py", line 120, in _fetch
return await response.text()
File "/home/codespace/.python/current/lib/python3.10/site-packages/aiohttp/client_reqrep.py", line 1086, in text
return self._body.decode( # type: ignore[no-any-return,union-attr]
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb5 in position 11: invalid start byte
Fetching pages: 15%|####4 | 38/256 [00:04<00:23, 9.25it/s]
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I tried to create embeddings from a website in the French language.
### Expected behavior
We need a solution for: UnicodeDecodeError: 'utf-8' codec can't decode byte 0xb5 in position 11: invalid start byte | langchain/document_loaders/web_base.py | https://api.github.com/repos/langchain-ai/langchain/issues/8505/comments | 2 | 2023-07-31T07:02:32Z | 2023-11-06T16:06:03Z | https://github.com/langchain-ai/langchain/issues/8505 | 1,828,539,383 | 8,505
[
"langchain-ai",
"langchain"
] | I have an operation manual PDF file about a website, and I want to use LangChain so that Dolly can respond to questions about the website.
Below is my code:
```
from langchain.embeddings import HuggingFaceEmbeddings
from PyPDF2 import PdfReader
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import ElasticVectorSearch, Pinecone, Weaviate, FAISS
from langchain.chains.question_answering import load_qa_chain
from langchain.llms import HuggingFacePipeline
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
from langchain.prompts import PromptTemplate
import torch
hf_embed = HuggingFaceEmbeddings(model_name="sentence-transformers/all-mpnet-base-v2")
from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
root_dir = "/content/gdrive/My Drive/"
reader = PdfReader('/content/gdrive/My Drive/data/operation Manual.pdf')
raw_text = ''
for i, page in enumerate(reader.pages):
text = page.extract_text()
if text:
raw_text += text
text_splitter = CharacterTextSplitter(
separator = "\n",
chunk_size = 1000,
chunk_overlap = 200,
length_function = len,
)
texts = text_splitter.split_text(raw_text)
docsearch = FAISS.from_texts(texts, hf_embed)
model_name = "databricks/dolly-v2-3b"
instruct_pipeline = pipeline(model=model_name, torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto",
return_full_text=True, max_new_tokens=256, top_p=0.95, top_k=50)
hf_pipe = HuggingFacePipeline(pipeline=instruct_pipeline)
prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
{context}
Question: {question}
"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
query = "I forgot my login password."
docs = docsearch.similarity_search(query)
chain = load_qa_chain(llm = hf_pipe, chain_type="stuff", prompt=PROMPT)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)
```
I am referring to the article on the Databricks website: https://www.databricks.com/resources/demos/tutorials/data-science-and-ai/build-your-chat-bot-with-dolly?itm_data=demo_center
The output is not the answer that is in the operation manual, and it takes a long time (about 2.5 hours) to produce an answer. Where did I go wrong? | How to use langchain to create my own databricks dolly chat robot | https://api.github.com/repos/langchain-ai/langchain/issues/8503/comments | 0 | 2023-07-31T06:08:19Z | 2023-07-31T06:14:31Z | https://github.com/langchain-ai/langchain/issues/8503 | 1,828,472,740 | 8,503
[
"langchain-ai",
"langchain"
] | ### System Info
**Langchain**: 0.0.247
**Python**: 3.10.5
**System**: macOS 13.4 arm64
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Here is a minimal reproduction demo:
```python
from langchain.chat_models import PromptLayerChatOpenAI
from dotenv import load_dotenv
load_dotenv()
import promptlayer
from langchain.schema import HumanMessage
promptlayer.api_key = "xxxxxxxxxxx"
llm = PromptLayerChatOpenAI(model="gpt-3.5-turbo", verbose=True, pl_tags=["test"])
print(llm([HumanMessage(content="I am a cat and I want")]))
```
I found this on [**PromptLayer**](https://promptlayer.com/):
<img width="978" alt="image" src="https://github.com/langchain-ai/langchain/assets/3949397/770f6c3f-ec8a-42df-967d-8e207b3208d4">
### Expected behavior
Why is the API key carried in the PromptLayer payload? This is insecure; it should be removed. Then I discovered the root of the problem:
The params are generated by these functions (**_generate** and **_agenerate**).
They are inherited from the **ChatOpenAI** model's `_create_message_dicts` and `_client_params`, which is where I found the api_key:
https://github.com/langchain-ai/langchain/blob/08f5e6b8012f5eda2609103f33676199a3781a15/libs/langchain/langchain/chat_models/openai.py#L487
but the api_key is necessary for the ChatOpenAI model to get the result.
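A sketch of a possible fix: strip the secret from the params before they are forwarded to PromptLayer (assuming `params` is what ends up in the logged request):
```python
params = {**self._client_params, **kwargs}
params.pop("api_key", None)  # drop the secret before logging to PromptLayer
```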
So it should be changed in these two lines:
https://github.com/langchain-ai/langchain/blob/08f5e6b8012f5eda2609103f33676199a3781a15/libs/langchain/langchain/chat_models/promptlayer_openai.py#L59
https://github.com/langchain-ai/langchain/blob/08f5e6b8012f5eda2609103f33676199a3781a15/libs/langchain/langchain/chat_models/promptlayer_openai.py#L98 | API Key Leakage in the PromptLayerChatOpenAI Model | https://api.github.com/repos/langchain-ai/langchain/issues/8499/comments | 1 | 2023-07-31T02:14:44Z | 2023-11-02T08:48:45Z | https://github.com/langchain-ai/langchain/issues/8499 | 1,828,243,007 | 8,499 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain 0.0.247, Python 3.11, Linux.
### Who can help?
@rlancemartin
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Described [here](https://python.langchain.com/docs/integrations/document_loaders/youtube_audio).
```python
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers import OpenAIWhisperParser
from langchain.document_loaders.blob_loaders.youtube_audio import YoutubeAudioLoader
# Two Karpathy lecture videos
urls = ["https://youtu.be/kCc8FmEb1nY", "https://youtu.be/VMj-3S1tku0"]
# Directory to save audio files
save_dir = "~/Downloads/YouTube"
# Transcribe the videos to text
loader = GenericLoader(YoutubeAudioLoader(urls, save_dir), OpenAIWhisperParser())
docs = loader.load()
print(docs)
```
I get an empty list.
```log
[youtube] Extracting URL: https://youtu.be/kCc8FmEb1nY
[youtube] kCc8FmEb1nY: Downloading webpage
[youtube] kCc8FmEb1nY: Downloading ios player API JSON
[youtube] kCc8FmEb1nY: Downloading android player API JSON
[youtube] kCc8FmEb1nY: Downloading m3u8 information
[info] kCc8FmEb1nY: Downloading 1 format(s): 140
[download] Destination: /home/dm/Downloads/YouTube/Let's build GPT: from scratch, in code, spelled out..m4a
[download] 100% of 107.73MiB in 00:00:11 at 9.19MiB/s
[FixupM4a] Correcting container of "/home/dm/Downloads/YouTube/Let's build GPT: from scratch, in code, spelled out..m4a"
[ExtractAudio] Not converting audio /home/dm/Downloads/YouTube/Let's build GPT: from scratch, in code, spelled out..m4a; file is already in target format m4a
[youtube] Extracting URL: https://youtu.be/VMj-3S1tku0
[youtube] VMj-3S1tku0: Downloading webpage
[youtube] VMj-3S1tku0: Downloading ios player API JSON
[youtube] VMj-3S1tku0: Downloading android player API JSON
[youtube] VMj-3S1tku0: Downloading m3u8 information
[info] VMj-3S1tku0: Downloading 1 format(s): 140
[download] Destination: /home/dm/Downloads/YouTube/The spelled-out intro to neural networks and backpropagation: building micrograd.m4a
[download] 100% of 135.08MiB in 00:00:13 at 9.65MiB/s
[FixupM4a] Correcting container of "/home/dm/Downloads/YouTube/The spelled-out intro to neural networks and backpropagation: building micrograd.m4a"
[ExtractAudio] Not converting audio /home/dm/Downloads/YouTube/The spelled-out intro to neural networks and backpropagation: building micrograd.m4a; file is already in target format m4a
[]
```
### Expected behavior
A non-empty list of documents is expected. | Loading documents from a YouTube url doesn't work. | https://api.github.com/repos/langchain-ai/langchain/issues/8498/comments | 2 | 2023-07-30T20:47:14Z | 2023-07-31T06:48:09Z | https://github.com/langchain-ai/langchain/issues/8498 | 1,828,082,841 | 8,498 |
[
"langchain-ai",
"langchain"
] | ### System Info
python 3.11
### Who can help?
@rlancemartin
File langchain/document_loaders/async_html.py:136--> results = asyncio.run(self.fetch_all(self.web_paths))
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.retrievers.web_research import WebResearchRetriever
import pinecone
import os
from langchain.vectorstores import Pinecone
from langchain.utilities import GoogleSearchAPIWrapper
from common_util.llms import LLM_FACT, EMBEDDINGS
from dotenv import load_dotenv
from common_util.namespaceEnum import PineconeNamespaceEnum
load_dotenv()
index_name = "langchain-demo"
pinecone.init(api_key=os.getenv("PINECONE_API_KEY"), environment=os.getenv("PINECONE_ENV"))
# Vectorstore
vectorstore = Pinecone.from_existing_index(index_name, EMBEDDINGS, namespace=PineconeNamespaceEnum.WEB_SEARCH.value)
# LLM
llm = LLM_FACT
# Search
os.environ["GOOGLE_CSE_ID"] = os.getenv("GOOGLE_CSE_ID")
os.environ["GOOGLE_API_KEY"] = os.getenv("GOOGLE_API_KEY")
search = GoogleSearchAPIWrapper()
web_research_retriever = WebResearchRetriever.from_llm(
vectorstore=vectorstore,
llm=llm,
search=search,
)
from langchain.chains import RetrievalQAWithSourcesChain
user_input = "How do LLM Powered Autonomous Agents work?"
qa_chain = RetrievalQAWithSourcesChain.from_chain_type(llm,retriever=web_research_retriever)
result = qa_chain({"question": user_input})
result
```
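A common workaround for this error in notebooks, where an event loop is already running, is nest_asyncio (a hedged suggestion; it may not address the loader's design itself):
```python
import nest_asyncio

nest_asyncio.apply()  # allows asyncio.run() to nest inside the running loop
```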
### Expected behavior
It should search the web asynchronously instead of raising this error. | RuntimeError: asyncio.run() cannot be called from a running event loop | https://api.github.com/repos/langchain-ai/langchain/issues/8494/comments | 6 | 2023-07-30T19:10:58Z | 2023-12-13T16:07:53Z | https://github.com/langchain-ai/langchain/issues/8494 | 1,828,038,702 | 8,494
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am getting "Agent stopped due to iteration limit or time limit" as the error, even though my max_iterations is already set to 15.
I need a particular output from the model.
Following is my code:
```python
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent,
                                                    tools=tools,
                                                    verbose=True, memory=memory, max_iterations=15)
```
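Two knobs that may help (a sketch; `agent`, `tools`, and `memory` as defined elsewhere): raising `max_iterations` further, and `early_stopping_method="generate"` so the agent emits a best-effort final answer instead of the stock message:
```python
agent_executor = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=tools,
    verbose=True,
    memory=memory,
    max_iterations=30,                 # raise the iteration budget
    early_stopping_method="generate",  # produce a final answer when the limit is hit
)
```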
### Suggestion:
_No response_ | Agent stopped due to iteration limit or time limit | https://api.github.com/repos/langchain-ai/langchain/issues/8493/comments | 12 | 2023-07-30T17:37:50Z | 2024-05-28T10:38:44Z | https://github.com/langchain-ai/langchain/issues/8493 | 1,828,012,369 | 8,493 |
[
"langchain-ai",
"langchain"
] | ### System Info
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
google-colab 1.0.0 requires requests==2.27.1, but you have requests 2.31.0 which is incompatible.
Successfully installed backoff-2.2.1 chroma-hnswlib-0.7.1 chromadb-0.4.3 coloredlogs-15.0.1 dataclasses-json-0.5.13 fastapi-0.99.1 h11-0.14.0 httptools-0.6.0 humanfriendly-10.0 langchain-0.0.247 langsmith-0.0.15 marshmallow-3.20.1 monotonic-1.6 mypy-extensions-1.0.0 onnxruntime-1.15.1 openai-0.27.8 openapi-schema-pydantic-1.2.4 overrides-7.3.1 posthog-3.0.1 pulsar-client-3.2.0 pypika-0.48.9 python-dotenv-1.0.0 requests-2.31.0 starlette-0.27.0 tokenizers-0.13.3 typing-inspect-0.9.0 uvicorn-0.23.1 uvloop-0.17.0 watchfiles-0.19.0 websockets-11.0.3
### Who can help?
@agola11
Trying to run Web Research Retriever code available in [Langchain docs](https://python.langchain.com/docs/modules/data_connection/retrievers/web_research)
in free Google Colab,
after running code block:
```
user_input = "How do LLM Powered Autonomous Agents work?"
qa_chain = RetrievalQAWithSourcesChain.from_chain_type(llm,retriever=web_research_retriever)
result = qa_chain({"question": user_input})
result
```
I got the following info log and error:
```
INFO:langchain.retrievers.web_research:Generating questions for Google Search ...
INFO:langchain.retrievers.web_research:Questions for Google Search (raw): {'question': 'What is Task Decomposition in LLM Powered Autonomous Agents?', 'text': LineList(lines=['1. How do LLM powered autonomous agents utilize task decomposition?\n', '2. Can you explain the concept of task decomposition in LLM powered autonomous agents?\n'])}
INFO:langchain.retrievers.web_research:Questions for Google Search: ['1. How do LLM powered autonomous agents utilize task decomposition?\n', '2. Can you explain the concept of task decomposition in LLM powered autonomous agents?\n']
INFO:langchain.retrievers.web_research:Searching for relevat urls ...
INFO:langchain.retrievers.web_research:Searching for relevat urls ...
INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Agent System Overview In a LLM-powered autonomous agent system, ... Task decomposition can be done (1) by LLM with simple prompting like\xa0...'}]
INFO:langchain.retrievers.web_research:Searching for relevat urls ...
INFO:langchain.retrievers.web_research:Search results: [{'title': "LLM Powered Autonomous Agents | Lil'Log", 'link': 'https://lilianweng.github.io/posts/2023-06-23-agent/', 'snippet': 'Jun 23, 2023 ... Task decomposition can be done (1) by LLM with simple prompting like "Steps for XYZ.\\n1." , "What are the subgoals for achieving XYZ?" , (2)\xa0...'}]
INFO:langchain.retrievers.web_research:New URLs to load: ['https://lilianweng.github.io/posts/2023-06-23-agent/']
INFO:langchain.retrievers.web_research:Indexing new urls...
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
[<ipython-input-13-efc9ad25b93a>](https://localhost:8080/#) in <cell line: 5>()
3 logging.getLogger("langchain.retrievers.web_research").setLevel(logging.INFO)
4 user_input = "What is Task Decomposition in LLM Powered Autonomous Agents?"
----> 5 docs = web_research_retriever.get_relevant_documents(user_input)
4 frames
[/usr/local/lib/python3.10/dist-packages/langchain/schema/retriever.py](https://localhost:8080/#) in get_relevant_documents(self, query, callbacks, tags, metadata, **kwargs)
191 except Exception as e:
192 run_manager.on_retriever_error(e)
--> 193 raise e
194 else:
195 run_manager.on_retriever_end(
[/usr/local/lib/python3.10/dist-packages/langchain/schema/retriever.py](https://localhost:8080/#) in get_relevant_documents(self, query, callbacks, tags, metadata, **kwargs)
184 _kwargs = kwargs if self._expects_other_args else {}
185 if self._new_arg_supported:
--> 186 result = self._get_relevant_documents(
187 query, run_manager=run_manager, **_kwargs
188 )
[/usr/local/lib/python3.10/dist-packages/langchain/retrievers/web_research.py](https://localhost:8080/#) in _get_relevant_documents(self, query, run_manager)
202 html2text = Html2TextTransformer()
203 logger.info("Indexing new urls...")
--> 204 docs = loader.load()
205 docs = list(html2text.transform_documents(docs))
206 docs = self.text_splitter.split_documents(docs)
[/usr/local/lib/python3.10/dist-packages/langchain/document_loaders/async_html.py](https://localhost:8080/#) in load(self)
134 """Load text from the url(s) in web_path."""
135
--> 136 results = asyncio.run(self.fetch_all(self.web_paths))
137 docs = []
138 for i, text in enumerate(results):
[/usr/lib/python3.10/asyncio/runners.py](https://localhost:8080/#) in run(main, debug)
31 """
32 if events._get_running_loop() is not None:
---> 33 raise RuntimeError(
34 "asyncio.run() cannot be called from a running event loop")
35
RuntimeError: asyncio.run() cannot be called from a running event loop
```
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
```
! pip install langchain openai chromadb google-api-python-client
import os
import logging
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings
from langchain.chat_models.openai import ChatOpenAI
from langchain.retrievers.web_research import WebResearchRetriever
from langchain.utilities import GoogleSearchAPIWrapper
os.environ["OPENAI_API_KEY"] = "****"
os.environ["GOOGLE_CSE_ID"] = "*****"
os.environ["GOOGLE_API_KEY"] = "******"
# Vectorstore
vectorstore = Chroma(embedding_function=OpenAIEmbeddings(),persist_directory="./chroma_db_oai")
# LLM
llm = ChatOpenAI(temperature=0)
# Search
search = GoogleSearchAPIWrapper()

# Web research retriever (needed by the chain below)
web_research_retriever = WebResearchRetriever.from_llm(
    vectorstore=vectorstore,
    llm=llm,
    search=search,
)
from langchain.chains import RetrievalQAWithSourcesChain
user_input = "How do LLM Powered Autonomous Agents work?"
qa_chain = RetrievalQAWithSourcesChain.from_chain_type(llm,retriever=web_research_retriever)
result = qa_chain({"question": user_input})
result
```
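In notebook environments (Colab/Jupyter) an event loop is already running, which is exactly what `asyncio.run()` inside `AsyncHtmlLoader` trips over. A common workaround sketch (assumes the `nest_asyncio` package is installed):
```python
import nest_asyncio

nest_asyncio.apply()  # permit asyncio.run() inside Colab's running event loop
```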
### Expected behavior
It works as shown in docs' page. | Web Research Retriever error, code run in free Colab | https://api.github.com/repos/langchain-ai/langchain/issues/8487/comments | 2 | 2023-07-30T09:28:34Z | 2023-07-30T14:30:15Z | https://github.com/langchain-ai/langchain/issues/8487 | 1,827,868,223 | 8,487 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain 0.0.242
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I tried to load the https://huggingface.co/TheBloke/StableBeluga2-70B-GGML model with LangChain's LlamaCpp:
```python
llm = LlamaCpp(model_path="./stablebeluga2-70b.ggmlv3.q4_0.bin", n_gpu_layers=n_gpu_layers,
               n_batch=n_batch, n_ctx=8192, input={"temperature": 0.01}, n_threads=8)
llm_chain = LLMChain(llm=llm, prompt=prompt)
```
I see that there is no support for passing the `n_gqa=8` parameter, which according to [https://github.com/abetlen/llama-cpp-python](https://github.com/abetlen/llama-cpp-python) should be used for 70B models.
The error I get is:
```
error loading model: llama.cpp: tensor 'layers.0.attention.wk.weight' has wrong shape; expected 8192 x 8192, got 8192 x 1024
```
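For reference, llama-cpp-python added a (temporary) `n_gqa` constructor argument exactly for LLaMA-2 70B GGML models; until the LangChain wrapper forwards it, a direct sketch (parameter availability depends on the installed llama-cpp-python version):
```python
# Workaround sketch: call llama-cpp-python directly so n_gqa can be passed.
from llama_cpp import Llama

llm = Llama(
    model_path="./stablebeluga2-70b.ggmlv3.q4_0.bin",
    n_gqa=8,  # grouped-query attention setting required by LLaMA-2 70B
    n_ctx=4096,
    n_threads=8,
)
out = llm("Q: What is grouped-query attention? A:", max_tokens=64)
print(out["choices"][0]["text"])
```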
### Expected behavior
The model should load successfully. | no support for loading 70B models via llamacpp | https://api.github.com/repos/langchain-ai/langchain/issues/8486/comments | 3 | 2023-07-30T09:12:42Z | 2023-11-07T16:06:33Z | https://github.com/langchain-ai/langchain/issues/8486 | 1,827,863,684 | 8,486
[
"langchain-ai",
"langchain"
] | ### Feature request
I propose the addition of a foundational chain feature, namely, the IteratorChain to LangChain. Unlike a convenience function, this feature is intended to enhance the flexibility and usability of LangChain by allowing it to handle collections of inputs for any existing chain, propagating them sequentially or asynchronously.
```python
from langchain.chains import IteratorChain, LLMChain
llm_chain = LLMChain(...)
iterator_chain = IteratorChain(llm_chain)
inputs = [{"text": "Hello"}, {"text": "World"}]
outputs = iterator_chain.run(inputs)
```
In the current LangChain framework, when dealing with lists or collections of inputs, developers are required to manually loop through the input list and call the run method for each item. This approach not only results in more code but also creates challenges with LangChain's features like LangSmith logging, especially when lists are dynamically generated from previous chains.
The IteratorChain would encapsulate this looping, resulting in cleaner code and better integration with LangSmith logging and other LangChain features.
### Motivation
The proposed IteratorChain could address a current issue I'm having when dealing with nested lists of inputs. Let's consider the current SequentialChain setup below:
```python
SequentialChain(
chains=[
ChainA(input_variables=["inputA"], output_key="outputA"),
TransformChain(input_variables=["outputA", "auxiliary_input"], output_variables=["outputB"], transform=process_A_to_B),
TransformChain(input_variables=["outputB", "auxiliary_input"], output_variables=["outputC"], transform=refine_B_to_C),
TransformChain(input_variables=["outputC", "auxiliary_input"], output_variables=["final_output"], transform=process_C_to_final)
],
input_variables=["inputA", "auxiliary_input"],
output_variables=["final_output"]
)
```
In this setup, process_A_to_B and refine_B_to_C are both creating lists of inputs for further chains. However, these lists of inputs are currently not being processed elegantly. We have to loop manually over the list and call the underlying chain for each item. This not only leads to cumbersome code, but also hinders proper interaction with the LangSmith logging feature.
```python
def process_A_to_B(params) -> List[Dict[str, Any]]:
...
for item in items:
chainB = SomeChain(...)
output = chainB.run({"input": item, "auxiliary_input": auxiliary_input})
...
return {"outputB": outputs}
```
```python
def refine_B_to_C(params) -> List[Dict[str, Any]]:
...
for item in items:
chainC = AnotherChain(...)
output = chainC.run({"input": item, "auxiliary_input": auxiliary_input})
...
return {"outputC": outputs}
```
The addition of an IteratorChain feature would address these issues. It will encapsulate the manual loop and make the list handling process more intuitive and efficient, ensuring proper integration with LangSmith logging and other LangChain features.
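To make the proposal concrete, a minimal sketch of what such a chain could look like (hypothetical class, not in LangChain today):
```python
from typing import Any, Dict, List, Optional

from langchain.callbacks.manager import CallbackManagerForChainRun
from langchain.chains.base import Chain


class IteratorChain(Chain):
    """Run an inner chain once per item of a list of inputs."""

    inner_chain: Chain

    @property
    def input_keys(self) -> List[str]:
        return ["inputs"]  # a list of input dicts for the inner chain

    @property
    def output_keys(self) -> List[str]:
        return ["outputs"]

    def _call(
        self,
        inputs: Dict[str, Any],
        run_manager: Optional[CallbackManagerForChainRun] = None,
    ) -> Dict[str, Any]:
        callbacks = run_manager.get_child() if run_manager else None
        # One child run per item, so LangSmith records each invocation.
        return {
            "outputs": [
                self.inner_chain(item, callbacks=callbacks)
                for item in inputs["inputs"]
            ]
        }
```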
### Your contribution
I'm prepared to contribute to the development of this feature, if decided that it is a good addition and is not already possible using existing capabilities. | IteratorChain | https://api.github.com/repos/langchain-ai/langchain/issues/8484/comments | 2 | 2023-07-30T07:43:44Z | 2023-11-05T16:04:59Z | https://github.com/langchain-ai/langchain/issues/8484 | 1,827,840,103 | 8,484 |
[
"langchain-ai",
"langchain"
] | ### System Info
I use this code:
```
search = GoogleSearchAPIWrapper()
tool = Tool(
name="Google Search",
description="Search Google for recent results.",
func=search.run,
)
tool.run("Obama's first name?")
```
The result looks fine. But when I use the code below:
```
search = GoogleSearchAPIWrapper()
tools = [
Tool(
name="google-search",
func=search.run,
description="useful when you need to search google to answer questions about current events"
)
]
agent = initialize_agent(
tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
max_iterations=6)
response = agent("What is the latest news about the Mars rover?")
print(response)
```
I get this error:
```
File /opt/homebrew/anaconda3/envs/common_3.8/lib/python3.8/site-packages/googleapiclient/http.py:191, in _retry_request(http, num_retries, req_type, sleep, rand, uri, method, *args, **kwargs)
189 try:
190 exception = None
--> 191 resp, content = http.request(uri, method, *args, **kwargs)
192 # Retry on SSL errors and socket timeout errors.
193 except _ssl_SSLError as ssl_error:
File /opt/homebrew/anaconda3/envs/common_3.8/lib/python3.8/site-packages/httplib2/__init__.py:1724, in Http.request(self, uri, method, body, headers, redirections, connection_type)
1722 content = b""
1723 else:
-> 1724 (response, content) = self._request(
1725 conn, authority, uri, request_uri, method, body, headers, redirections, cachekey,
1726 )
1727 except Exception as e:
1728 is_timeout = isinstance(e, socket.timeout)
File /opt/homebrew/anaconda3/envs/common_3.8/lib/python3.8/site-packages/httplib2/__init__.py:1444, in Http._request(self, conn, host, absolute_uri, request_uri, method, body, headers, redirections, cachekey)
1441 if auth:
1442 auth.request(method, request_uri, headers, body)
-> 1444 (response, content) = self._conn_request(conn, request_uri, method, body, headers)
1446 if auth:
1447 if auth.response(response, body):
File /opt/homebrew/anaconda3/envs/common_3.8/lib/python3.8/site-packages/httplib2/__init__.py:1396, in Http._conn_request(self, conn, request_uri, method, body, headers)
1394 pass
1395 try:
-> 1396 response = conn.getresponse()
1397 except (http.client.BadStatusLine, http.client.ResponseNotReady):
1398 # If we get a BadStatusLine on the first try then that means
1399 # the connection just went stale, so retry regardless of the
1400 # number of RETRIES set.
1401 if not seen_bad_status_line and i == 1:
File /opt/homebrew/anaconda3/envs/common_3.8/lib/python3.8/http/client.py:1348, in HTTPConnection.getresponse(self)
1346 try:
1347 try:
-> 1348 response.begin()
1349 except ConnectionError:
1350 self.close()
File /opt/homebrew/anaconda3/envs/common_3.8/lib/python3.8/http/client.py:316, in HTTPResponse.begin(self)
314 # read until we get a non-100 response
315 while True:
--> 316 version, status, reason = self._read_status()
317 if status != CONTINUE:
318 break
File /opt/homebrew/anaconda3/envs/common_3.8/lib/python3.8/http/client.py:277, in HTTPResponse._read_status(self)
276 def _read_status(self):
--> 277 line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
278 if len(line) > _MAXLINE:
279 raise LineTooLong("status line")
File /opt/homebrew/anaconda3/envs/common_3.8/lib/python3.8/socket.py:669, in SocketIO.readinto(self, b)
667 while True:
668 try:
--> 669 return self._sock.recv_into(b)
670 except timeout:
671 self._timeout_occurred = True
File /opt/homebrew/anaconda3/envs/common_3.8/lib/python3.8/ssl.py:1241, in SSLSocket.recv_into(self, buffer, nbytes, flags)
1237 if flags != 0:
1238 raise ValueError(
1239 "non-zero flags not allowed in calls to recv_into() on %s" %
1240 self.__class__)
-> 1241 return self.read(nbytes, buffer)
1242 else:
1243 return super().recv_into(buffer, nbytes, flags)
File /opt/homebrew/anaconda3/envs/common_3.8/lib/python3.8/ssl.py:1099, in SSLSocket.read(self, len, buffer)
1097 try:
1098 if buffer is not None:
-> 1099 return self._sslobj.read(len, buffer)
1100 else:
1101 return self._sslobj.read(len)
ConnectionResetError: [Errno 54] Connection reset by peer
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
search = GoogleSearchAPIWrapper()
tools = [
Tool(
name="google-search",
func=search.run,
description="useful when you need to search google to answer questions about current events"
)
]
agent = initialize_agent(
tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
max_iterations=6)
response = agent("What is the latest news about the Mars rover?")
print(response)
```
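Since the reset surfaces from the underlying Google API HTTP call, it is most likely transient. A retry sketch around the agent call (uses `tenacity`, which LangChain already depends on):
```python
from tenacity import retry, retry_if_exception_type, stop_after_attempt, wait_exponential

@retry(
    retry=retry_if_exception_type(ConnectionResetError),
    wait=wait_exponential(multiplier=1, min=2, max=30),
    stop=stop_after_attempt(5),
)
def run_agent(question: str):
    return agent(question)

response = run_agent("What is the latest news about the Mars rover?")
```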
### Expected behavior
I'd like to understand the cause and how to prevent this issue. | ConnectionResetError: [Errno 54] Connection reset by peer | https://api.github.com/repos/langchain-ai/langchain/issues/8483/comments | 4 | 2023-07-30T07:07:22Z | 2023-11-09T16:08:15Z | https://github.com/langchain-ai/langchain/issues/8483 | 1,827,831,371 | 8,483
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.239
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Below are the results I got from 3 different Google Search API wrappers. The same thing happens when an Agent uses them as tools.
```python
from langchain.utilities import GoogleSerperAPIWrapper
search = GoogleSerperAPIWrapper()
search.run("What's the Tesla stock today?")
```
'266.44 +10.73 (4.20%)'
```python
from langchain.utilities import SerpAPIWrapper
search = SerpAPIWrapper()
print(search.run("What's the Tesla stock today?"))
```
Tesla, Inc. is an American multinational automotive and clean energy company headquartered in Austin, Texas. Tesla designs and manufactures electric vehicles, stationary battery energy storage devices from home to grid-scale, solar panels and solar roof tiles, and related products and services.
```python
from langchain.utilities import GoogleSearchAPIWrapper
search = GoogleSearchAPIWrapper()
search.run("What's the Tesla stock today?")
```
"In this week's video, I cover need-to-know news items related to Tesla (NASDAQ: TSLA) during the week of July 24. Today's video will focus on what Tesla\xa0... TSLA | Complete Tesla Inc. stock news by MarketWatch. View real-time stock prices ... Here's What One Survey Revealed About Their Perception Of Elon Musk. Tesla is accelerating the world's transition to sustainable energy with electric cars, solar and integrated renewable energy solutions for homes and\xa0... Tesla's mission is to accelerate the world's transition to sustainable energy. Today, Tesla builds not only all-electric vehicles but also infinitely\xa0... Get Tesla Inc (TSLA:NASDAQ) real-time stock quotes, news, price and financial ... Please contact cnbc support to provide details about what went wrong. Feb 2, 2023 ... Tesla stock soared 41% in January, its best month since October 2021, leaving investors breathless and wondering what to do next. TSLA: Get the latest Tesla stock price and detailed information including TSLA news, historical charts and ... What are analysts forecasts for Tesla stock? Aug 25, 2022 ... The question now is what do Tesla investors expect the stock to do after the split. Tesla (ticker: TSLA) stock on Thursday was trading at\xa0... Aug 25, 2022 ... The electric car company completed a 3-for-1 stock split after the closing bell Wednesday. So one share now costs a third of what it did a day\xa0... Jun 30, 2023 ... Doubling your money isn't easy, and doubling it in just six months is even more difficult, so investors now have to decide: Is It time to take\xa0..."
### Expected behavior
I expect the results from these 3 APIs to be similar. | 3 different google search API varies a lot | https://api.github.com/repos/langchain-ai/langchain/issues/8480/comments | 2 | 2023-07-30T06:00:36Z | 2023-11-13T16:07:20Z | https://github.com/langchain-ai/langchain/issues/8480 | 1,827,816,758 | 8,480
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
The INFO: Python Guide links in both https://docs.langchain.com/docs/components/prompts/prompt-template and https://docs.langchain.com/docs/components/prompts/example-selectors are broken (similar to #8105).
### Idea or request for content:
The pages have simply been moved from https://python.langchain.com/docs/modules/prompts/ to https://python.langchain.com/docs/modules/model_io/prompts/, so setting up corresponding redirects should fix it
I can open up a PR with the corresponding redirects myself | DOC: Broken Links in Prompts Sub Categories Pages | https://api.github.com/repos/langchain-ai/langchain/issues/8477/comments | 0 | 2023-07-30T04:41:57Z | 2023-07-31T02:38:53Z | https://github.com/langchain-ai/langchain/issues/8477 | 1,827,802,229 | 8,477 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: 0.0.247
python version: 3.11.0
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
You can reproduce this issue by following this link:
https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/prompts_pipelining
```
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.schema import HumanMessage, AIMessage, SystemMessage
prompt = SystemMessage(content="You are a nice pirate")
new_prompt = (
prompt
+ HumanMessage(content="hi")
+ AIMessage(content="what?")
+ "{input}"
)
```
The expression `prompt + HumanMessage(content="hi")` is what triggers this issue.
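A workaround sketch that builds the same prompt without relying on message `+` (assumes `from_messages` accepts literal messages, which recent 0.0.24x releases do):
```python
from langchain.prompts import ChatPromptTemplate, HumanMessagePromptTemplate
from langchain.schema import AIMessage, HumanMessage, SystemMessage

new_prompt = ChatPromptTemplate.from_messages([
    SystemMessage(content="You are a nice pirate"),
    HumanMessage(content="hi"),
    AIMessage(content="what?"),
    HumanMessagePromptTemplate.from_template("{input}"),
])
```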
### Expected behavior
operand + for 'SystemMessage' and 'HumanMessage' should be support | unsupported operand type(s) for +: 'SystemMessage' and 'HumanMessage' | https://api.github.com/repos/langchain-ai/langchain/issues/8472/comments | 5 | 2023-07-30T02:14:01Z | 2023-11-05T16:05:09Z | https://github.com/langchain-ai/langchain/issues/8472 | 1,827,763,902 | 8,472 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain-version: 0.0.246
AzureCognitiveSearchRetriever always returns the top 10 results, as opposed to what was specified for `top_k`.
<img width="1158" alt="image" src="https://github.com/langchain-ai/langchain/assets/19245478/cb0e7317-8eee-4d9b-b71c-95c217451b42">
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
retriever = AzureCognitiveSearchRetriever(api_version=api_version, top_k=3)
results = retriever.get_relevant_documents(chat_input.to_string())
print(retriever.top_k)
print(len(results))
```
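Until the retriever forwards `top_k` to the service (the REST API's `$top` parameter), a trivial client-side sketch:
```python
results = retriever.get_relevant_documents(chat_input.to_string())[: retriever.top_k]
print(len(results))  # 3
```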
### Expected behavior
The two print results are not the same | AzureCognitiveSearchRetriever Issue | https://api.github.com/repos/langchain-ai/langchain/issues/8469/comments | 1 | 2023-07-29T23:12:59Z | 2023-11-04T16:04:30Z | https://github.com/langchain-ai/langchain/issues/8469 | 1,827,714,420 | 8,469 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I just saw a novel error, which appears to be triggered by a failed OpenAI API call (inside an asynchronous block) which is causing an asyncio.run() inside an asyncio.run(). Error pasted below. Is this my (user) error? Or possibly a problem with the acompletion_with_retry() implementation?
```
2023-07-29 05:53:14,838 INFO message='OpenAI API response' path=https://api.openai.com/v1/chat/completions processing_ms=None request_id=None response_code=502
2023-07-29 05:53:14,838 INFO error_code=502 error_message='Bad gateway.' error_param=None error_type=cf_bad_gateway message='OpenAI API error received' stream_error=False
2023-07-29 05:53:14,839 WARNING Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIError: Bad gateway. {"error":{"code":502,"message":"Bad gateway.","param":null,"type":"cf_bad_gateway"}} 502 {'error': {'code': 502, 'message': 'Bad gateway.', 'param': None, 'type': 'cf_bad_gateway'}} <CIMultiDictProxy('Date': 'Sat, 29 Jul 2023 05:53:14 GMT', 'Content-Type': 'application/json', 'Content-Length': '84', 'Connection': 'keep-alive', 'X-Frame-Options': 'SAMEORIGIN', 'Referrer-Policy': 'same-origin', 'Cache-Control': 'private, max-age=0, no-store, no-cache, must-revalidate, post-check=0, pre-check=0', 'Expires': 'Thu, 01 Jan 1970 00:00:01 GMT', 'Server': 'cloudflare', 'CF-RAY': '7ee3120dab9f1084-ORD', 'alt-svc': 'h3=":443"; ma=86400')>.
2023-07-29 05:53:14,839 ERROR Error in on_retry: asyncio.run() cannot be called from a running event loop
/usr/local/python-modules/tenacity/__init__.py:338: RuntimeWarning: coroutine 'AsyncRunManager.on_retry' was never awaited
self.before_sleep(retry_state)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
```
### Suggestion:
_No response_ | Issue: OpenAI Bad Gateway results in Error in on_retry: asyncio.run() cannot be called from a running event loop (coroutine 'AsyncRunManager.on_retry' was never awaited) inside openai.acompletion_with_retry | https://api.github.com/repos/langchain-ai/langchain/issues/8462/comments | 14 | 2023-07-29T17:11:33Z | 2023-09-25T09:44:18Z | https://github.com/langchain-ai/langchain/issues/8462 | 1,827,559,495 | 8,462 |
[
"langchain-ai",
"langchain"
] | ### System Info
I'm trying to use the tutorial on langchain but I get this error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[11], line 1
----> 1 from langchain.document_loaders import TextLoader
2 print(langchain.__version__)
3 loader = TextLoader("Scribbles.txt")
File [~/Dropbox/Personal/islington_news/myenv/lib/python3.9/site-packages/langchain/__init__.py:6](https://file+.vscode-resource.vscode-cdn.net/Users/davidelks/Dropbox/Personal/~/Dropbox/Personal/islington_news/myenv/lib/python3.9/site-packages/langchain/__init__.py:6)
3 from importlib import metadata
4 from typing import Optional
----> 6 from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
7 from langchain.cache import BaseCache
8 from langchain.chains import (
9 ConversationChain,
10 LLMBashChain,
(...)
16 VectorDBQAWithSourcesChain,
17 )
File [~/Dropbox/Personal/islington_news/myenv/lib/python3.9/site-packages/langchain/agents/__init__.py:2](https://file+.vscode-resource.vscode-cdn.net/Users/davidelks/Dropbox/Personal/~/Dropbox/Personal/islington_news/myenv/lib/python3.9/site-packages/langchain/agents/__init__.py:2)
1 """Interface for agents."""
----> 2 from langchain.agents.agent import (
3 Agent,
4 AgentExecutor,
5 AgentOutputParser,
...
851 if not isinstance(cls, _GenericAlias):
--> 852 return issubclass(cls, self.__origin__)
853 return super().__subclasscheck__(cls)
TypeError: issubclass() arg 1 must be a class
```
I can't provide the version of langchain because I get this error. (I've got 0.0.247 from pip install.)
Running on MacOs Ventura. Python: 3.9.15
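For context, this traceback pattern (`issubclass() arg 1 must be a class` inside pydantic's `_GenericAlias` subclass check) is commonly reported with pydantic v1 when the installed `typing_extensions`/`typing-inspect` versions are newer than it expects; pinning them, e.g. `pip install typing-inspect==0.8.0 typing_extensions==4.5.0`, has been reported to resolve it. Treat the exact pins as an assumption to verify rather than a confirmed fix.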
### Who can help?
@elksie5000
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import langchain
from langchain.document_loaders import TextLoader

print(langchain.__version__)
loader = TextLoader("Scribbles.txt")

from langchain.indexes import VectorstoreIndexCreator

index = VectorstoreIndexCreator().from_loaders([loader])
```
### Expected behavior
The index to be created. | TypeError: issubclass() arg 1 must be a class | https://api.github.com/repos/langchain-ai/langchain/issues/8458/comments | 2 | 2023-07-29T11:19:47Z | 2023-11-04T16:04:36Z | https://github.com/langchain-ai/langchain/issues/8458 | 1,827,446,765 | 8,458 |
[
"langchain-ai",
"langchain"
] | ### System Info
... % python --version
Python 3.11.4
... % pip show langchain | grep Version
Version: 0.0.247
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When following the langchain docs [here](https://python.langchain.com/docs/integrations/vectorstores/qdrant#qdrant-cloud), there will be an error thrown:
```py
qdrant = Qdrant.from_documents(
docs,
embeddings,
url,
prefer_grpc=True,
api_key=api_key,
collection_name="test",
)
```
error:
```
Traceback (most recent call last):
File "...myscript.py", line 29, in <module>
qdrant = Qdrant.from_documents(
^^^^^^^^^^^^^^^^^^^^^^
TypeError: VectorStore.from_documents() takes 3 positional arguments but 4 were given
```
Is it related to https://github.com/langchain-ai/langchain/pull/7910 ?
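A likely workaround, assuming the docs snippet simply predates the signature change referenced above: pass `url` by keyword, since `from_documents` itself only takes `documents` and `embedding` positionally and forwards the rest as kwargs:
```py
qdrant = Qdrant.from_documents(
    docs,
    embeddings,
    url=url,
    prefer_grpc=True,
    api_key=api_key,
    collection_name="test",
)
```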
### Expected behavior
QDrant being initialized properly. | VectorStore.from_documents() takes 3 positional arguments but 4 were given | https://api.github.com/repos/langchain-ai/langchain/issues/8457/comments | 2 | 2023-07-29T10:53:33Z | 2023-07-30T06:26:23Z | https://github.com/langchain-ai/langchain/issues/8457 | 1,827,440,722 | 8,457 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I am behind a company firewall and need to set a proxy for `SitemapLoader`.
### Motivation
I am behind a company firewall.
### Your contribution
Eg.:
```python
sitemap_loader = SitemapLoader(
web_path="https://langchain.readthedocs.io/sitemap.xml",
https_proxy="https://my.proxy.io/"
)
docs = sitemap_loader.load()
``` | SitemapLoader: set proxy | https://api.github.com/repos/langchain-ai/langchain/issues/8451/comments | 2 | 2023-07-29T06:02:55Z | 2024-04-15T16:41:40Z | https://github.com/langchain-ai/langchain/issues/8451 | 1,827,335,864 | 8,451 |
[
"langchain-ai",
"langchain"
] | ### System Info
```python
from langchain.llms.base import LLM
from langchain.llms import GooglePalm
```
This throws an error saying it requires `google.generativeai`; previously it used to work. Something changed; is it documented?
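For reference, the PaLM integration imports the optional `google.generativeai` package when the model is validated, so `pip install google-generativeai` should clear the error (assuming nothing else about the integration changed).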
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
pip install langchain
```
```python
from langchain.llms.base import LLM
from langchain.llms import GooglePalm
```
### Expected behavior
Should load without errors? | GooglePalm requires google.generativeai? | https://api.github.com/repos/langchain-ai/langchain/issues/8449/comments | 3 | 2023-07-29T04:55:22Z | 2023-11-15T16:06:58Z | https://github.com/langchain-ai/langchain/issues/8449 | 1,827,318,961 | 8,449 |
[
"langchain-ai",
"langchain"
] | ### System Info
```shell
$ langchain.__version__
'0.0.234'
$ uname -a
Linux codespaces-92388d 5.15.0-1042-azure #49-Ubuntu SMP Tue Jul 11 17:28:46 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
$ python
Python 3.10.8 (main, Jun 15 2023, 01:39:58) [GCC 9.4.0] on linux
```
### Who can help?
@hwchase17 @vowe
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The project address is: https://github.com/eunomia-bpf/trace-agent. When uncommenting the line [# args_schema=AnalyseInput,](https://github.com/eunomia-bpf/trace-agent/blob/a0c1ff74017e1e47a900c9371833eb2cca1705ef/iminder/tools.py#L38) in the file `trace-agent/iminder/tools.py` and then running the project with `python -m iminder pid`, the following error occurs:
```shell
$ python -m iminder 480
Traceback (most recent call last):
File "/usr/local/python/3.10.8/lib/python3.10/runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/usr/local/python/3.10.8/lib/python3.10/runpy.py", line 86, in _run_code
exec(code, run_globals)
File "/workspaces/trace-agent/iminder/__main__.py", line 11, in <module>
bot.run([f"Obtain the resource usage of the process whose pid is {pid} over a period of time, "
File "/workspaces/trace-agent/iminder/autogpt.py", line 61, in run
return self.agent.run(tasks)
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/experimental/autonomous_agents/autogpt/agent.py", line 93, in run
assistant_reply = self.chain.run(
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/base.py", line 445, in run
return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/base.py", line 243, in __call__
raise e
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/base.py", line 237, in __call__
self._call(inputs, run_manager=run_manager)
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/llm.py", line 92, in _call
response = self.generate([inputs], run_manager=run_manager)
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/llm.py", line 101, in generate
prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/chains/llm.py", line 135, in prep_prompts
prompt = self.prompt.format_prompt(**selected_inputs)
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/prompts/chat.py", line 155, in format_prompt
messages = self.format_messages(**kwargs)
File "/workspaces/trace-agent/iminder/prompt.py", line 130, in format_messages
misc_messages = self._format_misc_messages(**kwargs)
File "/workspaces/trace-agent/iminder/prompt.py", line 67, in _format_misc_messages
base_prompt = SystemMessage(content=self.construct_full_prompt(**kwargs))
File "/workspaces/trace-agent/iminder/prompt.py", line 58, in construct_full_prompt
full_prompt += f"\n\n{get_prompt(self.tools)}"
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/experimental/autonomous_agents/autogpt/prompt_generator.py", line 184, in get_prompt
prompt_string = prompt_generator.generate_prompt_string()
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/experimental/autonomous_agents/autogpt/prompt_generator.py", line 113, in generate_prompt_string
f"{self._generate_numbered_list(self.commands, item_type='command')}\n\n"
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/experimental/autonomous_agents/autogpt/prompt_generator.py", line 84, in _generate_numbered_list
command_strings = [
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/experimental/autonomous_agents/autogpt/prompt_generator.py", line 85, in <listcomp>
f"{i + 1}. {self._generate_command_string(item)}"
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/experimental/autonomous_agents/autogpt/prompt_generator.py", line 50, in _generate_command_string
output += f", args json schema: {json.dumps(tool.args)}"
File "/usr/local/python/3.10.8/lib/python3.10/site-packages/langchain/tools/base.py", line 418, in args
return self.args_schema.schema()["properties"]
File "pydantic/main.py", line 664, in pydantic.main.BaseModel.schema
File "pydantic/schema.py", line 188, in pydantic.schema.model_schema
File "pydantic/schema.py", line 582, in pydantic.schema.model_process_schema
File "pydantic/schema.py", line 623, in pydantic.schema.model_type_schema
File "pydantic/schema.py", line 249, in pydantic.schema.field_schema
File "pydantic/schema.py", line 217, in pydantic.schema.get_field_info_schema
File "pydantic/schema.py", line 992, in pydantic.schema.encode_default
File "pydantic/schema.py", line 991, in genexpr
File "pydantic/schema.py", line 996, in pydantic.schema.encode_default
File "pydantic/json.py", line 90, in pydantic.json.pydantic_encoder
TypeError: Object of type 'FieldInfo' is not JSON serializable
```
### Expected behavior
In [trace-agent/iminder/tools.py](https://github.com/eunomia-bpf/trace-agent/blob/main/iminder/tools.py), I have defined two custom tools: one is called `sample`, and the other is called `analyse_process`. Both tools have only one input parameter, but of different types. `sample` takes an integer as input, while `analyse_process` takes a string. Strangely, `sample` works as expected, but `analyse_process` does not. My expectation was that both of them would function correctly. | Object of type 'FieldInfo' is not JSON serializable | https://api.github.com/repos/langchain-ai/langchain/issues/8448/comments | 2 | 2023-07-29T04:47:30Z | 2023-07-29T08:50:24Z | https://github.com/langchain-ai/langchain/issues/8448 | 1,827,317,440 | 8,448 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python = 3.9
Langchain = 0.0.245
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I checked the recent update to the summarization pipeline with memory as follows:
```
from langchain.llms import HuggingFacePipeline
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline
from langchain.memory import ConversationSummaryBufferMemory
import torch
summarize_model = AutoModelForSeq2SeqLM.from_pretrained("facebook/bart-large-cnn", device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
summarize_tokenizer = AutoTokenizer.from_pretrained("facebook/bart-large-cnn", padding_side="left")
pipe_summary = pipeline("summarization", model=summarize_model, tokenizer=summarize_tokenizer) #, max_new_tokens=500, min_new_tokens=300
hf_summary = HuggingFacePipeline(pipeline=pipe_summary)
memory=ConversationSummaryBufferMemory(llm=hf_summary, max_token_limit=10)
```
Then, added chat history to the memory and observed the memory afterwards as:
```
memory.save_context({"input": "hi"}, {"output": "whats up"})
memory.save_context({"input": "not much you"}, {"output": "not much"})
memory.save_context({"input": "what's my name"}, {"output": "AJ"})
memory.load_memory_variables({})
```
It returned:
```
{'history': "System: The human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential. Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary. The human asks the AI: Why do you think artificial Intelligence is a Force for good? The AI: Because artificial intelligence will help human reach their potential.\nHuman: what's my name\nAI: AJ"}
```
This doesn't summarize the actual chat history, but returns a generalized text: `The human asks what the AI thinks of artificial intelligence. The AI thinks artificial intelligence is a force for good because it will help humans reach their full potential. Progressively summarize the lines of conversation provided, adding onto the previous summary returning a new summary. The human asks the AI: Why do you think artificial Intelligence is a Force for good? The AI: Because artificial intelligence will help human reach their potential.`
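A plausible factor: the memory's default `SUMMARY_PROMPT` is written for instruction-following LLMs, while `bart-large-cnn` is a plain summarization model that condenses whatever text it receives, prompt instructions included. A sketch that swaps in a bare prompt (the `prompt` field comes from `SummarizerMixin`; whether BART then behaves well is an assumption to test):
```python
from langchain.memory import ConversationSummaryBufferMemory
from langchain.prompts import PromptTemplate

# Feed BART only the text to condense, with no meta-instructions.
bare_summary_prompt = PromptTemplate(
    input_variables=["summary", "new_lines"],
    template="{summary}\n{new_lines}",
)
memory = ConversationSummaryBufferMemory(
    llm=hf_summary, prompt=bare_summary_prompt, max_token_limit=10
)
```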
### Expected behavior
This passes the wrong prompt to the LLM for question-answering. The expectation is a summary of the actual chat history. | Huggingface_Pipeline for summarization returning generalized response | https://api.github.com/repos/langchain-ai/langchain/issues/8444/comments | 6 | 2023-07-29T01:15:26Z | 2023-08-02T12:38:17Z | https://github.com/langchain-ai/langchain/issues/8444 | 1,827,254,867 | 8,444
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
## Description
The installation instructions in the Contributor docs are not working and should be updated. For example:
```bash
> poetry install -E all
Installing dependencies from lock file
Extra [all] is not specified.
```
```bash
> poetry install --with dev
Group(s) not found: dev (via --with)
```
### Idea or request for content:
_No response_ | DOC: Contributor docs are inaccurate | https://api.github.com/repos/langchain-ai/langchain/issues/8440/comments | 2 | 2023-07-28T23:26:09Z | 2023-07-31T18:01:18Z | https://github.com/langchain-ai/langchain/issues/8440 | 1,827,201,525 | 8,440 |
[
"langchain-ai",
"langchain"
] | null | Issue: Indexing and querying an XML file | https://api.github.com/repos/langchain-ai/langchain/issues/8436/comments | 0 | 2023-07-28T21:37:19Z | 2023-07-28T21:40:29Z | https://github.com/langchain-ai/langchain/issues/8436 | 1,827,113,761 | 8,436 |
[
"langchain-ai",
"langchain"
] | ### System Info
- MacOS 13.4.1 (c)
- Intel Core i9
#### Version
- Python 3.8.17
- Langchain 0.0.245
#### Context
I am trying to build a prompt that converts a LaTeX string generated by an OCR algorithm into text describing that LaTeX. When using the `FewShotPromptTemplate`, the curly brackets in the LaTeX string are somehow interpreted as keys to a dict.
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
#### Code
```
from langchain.prompts.few_shot import FewShotPromptTemplate
from langchain.prompts.prompt import PromptTemplate
examples = [
{
"latex": """\sum_{i=1}^{n}""",
"doc": """taking sum from 1 to n"""
}
]
example_template = """
latex: {latex}
doc: {doc}
"""
prefix = """ Convert the latex
"""
suffix = """
User: {latex}
AI: """
example_prompt = PromptTemplate(input_variables=["latex", "doc"], template="Question: {latex}\n{doc}")
few_shot_prompt_template = FewShotPromptTemplate(
examples=examples,
example_prompt=example_prompt,
prefix=prefix,
suffix=suffix,
input_variables=["latex"],
example_separator="\n\n"
)
print(example_prompt.format(**examples[0]))
print(few_shot_prompt_template.format(latex="\frac{a}{b}"))
```
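For reference, a workaround sketch: the few-shot template joins the prefix, the formatted examples, and the suffix, then formats the joined string once more, so literal braces contributed by example values are read as placeholders. Doubling them escapes them:
```python
# Escaped braces in example values survive the final .format() pass intact,
# rendering as \sum_{i=1}^{n} in the finished prompt.
examples = [
    {
        "latex": "\\sum_{{i=1}}^{{n}}",
        "doc": "taking sum from 1 to n",
    }
]
```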
### Expected behavior
#### Error
The PromptTemplate.format works fine, but the FewShotPromptTemplate fails.
```
---> 34 print(few_shot_prompt_template.format(latex="\frac{a}{b}"))
File [~/Library/Caches/pypoetry/virtualenvs/expression-engine-OXFJOYa8-py3.8/lib/python3.8/site-packages/langchain/prompts/few_shot.py:123](https://file+.vscode-resource.vscode-cdn.net/Users/LLM/~/Library/Caches/pypoetry/virtualenvs/expression-engine-OXFJOYa8-py3.8/lib/python3.8/site-packages/langchain/prompts/few_shot.py:123), in FewShotPromptTemplate.format(self, **kwargs)
120 template = self.example_separator.join([piece for piece in pieces if piece])
122 # Format the template with the input variables.
--> 123 return DEFAULT_FORMATTER_MAPPING[self.template_format](template, **kwargs)
File [/usr/local/opt/python](https://file+.vscode-resource.vscode-cdn.net/usr/local/opt/python)@3.8/Frameworks/Python.framework/Versions/3.8/lib/python3.8/string.py:163, in Formatter.format(self, format_string, *args, **kwargs)
162 def format(self, format_string, [/](https://file+.vscode-resource.vscode-cdn.net/), *args, **kwargs):
--> 163 return self.vformat(format_string, args, kwargs)
File [~/Library/Caches/pypoetry/virtualenvs/expression-engine-OXFJOYa8-py3.8/lib/python3.8/site-packages/langchain/utils/formatting.py:29](https://file+.vscode-resource.vscode-cdn.net/Users/LLM/~/Library/Caches/pypoetry/virtualenvs/expression-engine-OXFJOYa8-py3.8/lib/python3.8/site-packages/langchain/utils/formatting.py:29), in StrictFormatter.vformat(self, format_string, args, kwargs)
24 if len(args) > 0:
25 raise ValueError(
26 "No arguments should be provided, "
...
227 return args[key]
228 else:
--> 229 return kwargs[key]
KeyError: 'i=1'
``` | FewShotPromptTemplate example formating bug | https://api.github.com/repos/langchain-ai/langchain/issues/8433/comments | 2 | 2023-07-28T20:48:54Z | 2023-11-03T16:05:41Z | https://github.com/langchain-ai/langchain/issues/8433 | 1,827,071,328 | 8,433 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
OpenAI Cookbook
- https://github.com/openai/openai-cookbook/blob/main/examples/How_to_format_inputs_to_ChatGPT_models.ipynb
- `gpt-3.5-turbo-0301` set `tokens_per_message = 4` and `tokens_per_name = -1`
- `gpt-3.5-turbo-*` set `tokens_per_message = 3` and `tokens_per_name = 1`
- `gpt-3.5-turbo` redirect to `gpt-3.5-turbo-0613`
- `gpt-4` redirect to `gpt-4-0613`
```
if model in {
"gpt-3.5-turbo-0613",
"gpt-3.5-turbo-16k-0613",
"gpt-4-0314",
"gpt-4-32k-0314",
"gpt-4-0613",
"gpt-4-32k-0613",
}:
tokens_per_message = 3
tokens_per_name = 1
elif model == "gpt-3.5-turbo-0301":
tokens_per_message = 4 # every message follows <|start|>{role/name}\n{content}<|end|>\n
tokens_per_name = -1 # if there's a name, the role is omitted
elif "gpt-3.5-turbo" in model:
print("Warning: gpt-3.5-turbo may update over time. Returning num tokens assuming gpt-3.5-turbo-0613.")
return num_tokens_from_messages(messages, model="gpt-3.5-turbo-0613")
elif "gpt-4" in model:
print("Warning: gpt-4 may update over time. Returning num tokens assuming gpt-4-0613.")
return num_tokens_from_messages(messages, model="gpt-4-0613")
```
LangChain
- https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chat_models/openai.py#L541-L548
- `gpt-3.5-turbo` and `gpt-3.5-turbo-*` set `tokens_per_message = 4` and `tokens_per_name = -1`
```
if model.startswith("gpt-3.5-turbo"):
# every message follows <im_start>{role/name}\n{content}<im_end>\n
tokens_per_message = 4
# if there's a name, the role is omitted
tokens_per_name = -1
elif model.startswith("gpt-4"):
tokens_per_message = 3
tokens_per_name = 1
```
- https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chat_models/openai.py#L508-L515
- `gpt-3.5-turbo` redirect to `gpt-3.5-turbo-0301`
- `gpt-4` redirect to `gpt-4-0314`
```
if model == "gpt-3.5-turbo":
# gpt-3.5-turbo may change over time.
# Returning num tokens assuming gpt-3.5-turbo-0301.
model = "gpt-3.5-turbo-0301"
elif model == "gpt-4":
# gpt-4 may change over time.
# Returning num tokens assuming gpt-4-0314.
model = "gpt-4-0314"
```
### Suggestion:
Follow the OpenAI Cookbook
- `gpt-3.5-turbo-0301` set `tokens_per_message = 4` and `tokens_per_name = -1`
- `gpt-3.5-turbo-*` set `tokens_per_message = 3` and `tokens_per_name = 1`
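Concretely, the branch could look like this (a sketch of the suggestion above, not the merged fix):
```
if model == "gpt-3.5-turbo-0301":
    # every message follows <|start|>{role/name}\n{content}<|end|>\n
    tokens_per_message = 4
    # if there's a name, the role is omitted
    tokens_per_name = -1
elif model.startswith(("gpt-3.5-turbo", "gpt-4")):
    tokens_per_message = 3
    tokens_per_name = 1
```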
| Issue: Azure/OpenAI get_num_tokens_from_messages returns wrong prompt tokens | https://api.github.com/repos/langchain-ai/langchain/issues/8430/comments | 1 | 2023-07-28T19:22:44Z | 2023-07-29T01:13:36Z | https://github.com/langchain-ai/langchain/issues/8430 | 1,826,981,943 | 8,430 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Expose text-bison as chat model since it might be useful for some applications.
### Motivation
Sometimes it might be interesting to compare text-bison vs chat-bison for chat scenarios.
### Your contribution
yes, I'm happy to do it. | Expose Vertex text model as chat model | https://api.github.com/repos/langchain-ai/langchain/issues/8427/comments | 2 | 2023-07-28T18:49:53Z | 2023-11-03T16:05:29Z | https://github.com/langchain-ai/langchain/issues/8427 | 1,826,935,451 | 8,427 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Is there any way to use a different model to generate the AGENT THOUGHTS and the AGENT FINAL ANSWER?
For example: I want to use GPT-3.5 to generate the thoughts and GPT-4 to generate the final answer.
### Suggestion:
_No response_ | Different Model to generate thought and answer from Agent. | https://api.github.com/repos/langchain-ai/langchain/issues/8421/comments | 1 | 2023-07-28T15:28:07Z | 2023-11-03T16:05:57Z | https://github.com/langchain-ai/langchain/issues/8421 | 1,826,632,580 | 8,421 |
[
"langchain-ai",
"langchain"
] | ### System Info
Firstly, sorry if I am posting this in the wrong place, but I felt it belongs here.
I am trying to use LlamaCpp for QA over txt documents, but on Chroma I am getting the following error
and couldn't find a way to solve it:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-13-9bd0ee703b23>](https://localhost:8080/#) in <cell line: 1>()
----> 1 db = Chroma.from_documents(texts, embeddings, persist_directory='db')
7 frames
[/usr/local/lib/python3.10/dist-packages/langchain/vectorstores/chroma.py](https://localhost:8080/#) in from_documents(cls, documents, embedding, ids, collection_name, persist_directory, client_settings, client, collection_metadata, **kwargs)
576 texts = [doc.page_content for doc in documents]
577 metadatas = [doc.metadata for doc in documents]
--> 578 return cls.from_texts(
579 texts=texts,
580 embedding=embedding,
[/usr/local/lib/python3.10/dist-packages/langchain/vectorstores/chroma.py](https://localhost:8080/#) in from_texts(cls, texts, embedding, metadatas, ids, collection_name, persist_directory, client_settings, client, collection_metadata, **kwargs)
540 **kwargs,
541 )
--> 542 chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
543 return chroma_collection
544
[/usr/local/lib/python3.10/dist-packages/langchain/vectorstores/chroma.py](https://localhost:8080/#) in add_texts(self, texts, metadatas, ids, **kwargs)
173 embeddings = None
174 if self._embedding_function is not None:
--> 175 embeddings = self._embedding_function.embed_documents(list(texts))
176
177 if metadatas:
[/usr/local/lib/python3.10/dist-packages/langchain/embeddings/llamacpp.py](https://localhost:8080/#) in embed_documents(self, texts)
108 List of embeddings, one for each text.
109 """
--> 110 embeddings = [self.client.embed(text) for text in texts]
111 return [list(map(float, e)) for e in embeddings]
112
[/usr/local/lib/python3.10/dist-packages/langchain/embeddings/llamacpp.py](https://localhost:8080/#) in <listcomp>(.0)
108 List of embeddings, one for each text.
109 """
--> 110 embeddings = [self.client.embed(text) for text in texts]
111 return [list(map(float, e)) for e in embeddings]
112
[/usr/local/lib/python3.10/dist-packages/llama_cpp/llama.py](https://localhost:8080/#) in embed(self, input)
810 A list of embeddings
811 """
--> 812 return list(map(float, self.create_embedding(input)["data"][0]["embedding"]))
813
814 def _create_completion(
[/usr/local/lib/python3.10/dist-packages/llama_cpp/llama.py](https://localhost:8080/#) in create_embedding(self, input, model)
774 tokens = self.tokenize(input.encode("utf-8"))
775 self.reset()
--> 776 self.eval(tokens)
777 n_tokens = len(tokens)
778 total_tokens += n_tokens
[/usr/local/lib/python3.10/dist-packages/llama_cpp/llama.py](https://localhost:8080/#) in eval(self, tokens)
469 raise RuntimeError(f"llama_eval returned {return_code}")
470 # Save tokens
--> 471 self.input_ids[self.n_tokens : self.n_tokens + n_tokens] = batch
472 # Save logits
473 rows = n_tokens if self.params.logits_all else 1
ValueError: could not broadcast input array from shape (8,) into shape (0,)
```
Code:
```
#installation
!pip install langchain PyPDF2 huggingface_hub chromadb llama-cpp-python
#download the model
!git clone https://github.com/ggerganov/llama.cpp.git
%cd llama.cpp
!curl -L https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML/resolve/main/llama-2-7b-chat.ggmlv3.q4_K_M.bin -o ./models/llama-2-7b-chat.ggmlv3.q4_K_M.bin
!LLAMA_METAL=1 make
```
```
from langchain.llms import LlamaCpp
from langchain.embeddings import LlamaCppEmbeddings
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know or you can't help, don't try to make up an answer.
{context}
Question: {question}
Answer:"""
prompt = PromptTemplate(template=prompt_template, input_variables=["context", "question"])
model_path = '/content/llama.cpp/models/llama-2-7b-chat.ggmlv3.q4_K_M.bin'
llm = LlamaCpp(model_path=model_path)
embeddings = LlamaCppEmbeddings(model_path=model_path)
llm_chain = LLMChain(llm=llm, prompt=prompt)
loader = TextLoader(txt_file_path)
docs = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=100, chunk_overlap=0)
texts = text_splitter.split_documents(docs)
db = Chroma.from_documents(texts, embeddings, persist_directory='db')
question = 'summerize this document'
similar_doc = db.similarity_search(question, k=1)
context = similar_doc[0].page_content
query_llm = LLMChain(llm=llm, prompt=prompt)
response = query_llm.run({"context": context, "question": question})
```
versions:
```
langchain==0.0.246
chromadb==0.4.3
```
Is there any alternative way to achieve what I want?
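As an alternative that sidesteps `LlamaCppEmbeddings` (where the traceback originates), sentence-transformers embeddings can be swapped in for the vector store while LlamaCpp still answers the question. A sketch (assumes the `sentence-transformers` package is installed):
```
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import Chroma

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")
db = Chroma.from_documents(texts, embeddings, persist_directory='db')
```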
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
- Upload a text file
- run all the code to download the model
- replace model_path with downloaded model
- run the rest
### Expected behavior
The pipeline should run end to end without errors. | langchain with LlamaCpp for QA from txt files fails on Chroma part | https://api.github.com/repos/langchain-ai/langchain/issues/8420/comments | 4 | 2023-07-28T14:45:53Z | 2023-10-13T12:13:43Z | https://github.com/langchain-ai/langchain/issues/8420 | 1,826,557,242 | 8,420
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'm trying to load some documents, PowerPoints, and text files to train my custom LLM using Langchain.
When I run it, I get a weird error message saying I don't have the "tokenizers" and "taggers" packages (folders).
I've read the docs, asked the Langchain chatbot, tried `pip install nltk`, uninstalled and reinstalled nltk without dependencies, and added resources with `nltk.download()`, `nltk.download("punkt")`, `nltk.download("all")`, ... I also manually set the path with `nltk.data.path = ['C:\\Users\\zaesa\\AppData\\Roaming\\nltk_data']` and added all the folders, including the tokenizers and taggers folders from the GitHub repo: https://github.com/nltk/nltk_data/tree/gh-pages/packages. Everything. I also asked on the nltk GitHub repo. Nothing, no success.
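For what it's worth, the Unstructured-based loaders look up two specific nltk resources; a targeted check/download sketch (the resource names are what `unstructured` expects, whether the custom path is then picked up is the open question):
```python
import nltk

# unstructured needs these two resources resolvable via nltk.data.find
for resource, package in [
    ("tokenizers/punkt", "punkt"),
    ("taggers/averaged_perceptron_tagger", "averaged_perceptron_tagger"),
]:
    try:
        nltk.data.find(resource)
    except LookupError:
        nltk.download(package)
```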
Here the code of the file I try to run:
```python
from nltk.tokenize import sent_tokenize
from langchain.document_loaders import UnstructuredPowerPointLoader, TextLoader, UnstructuredWordDocumentLoader
from dotenv import load_dotenv, find_dotenv
import os
import openai
import sys
import nltk
nltk.data.path = ['C:\\Users\\zaesa\\AppData\\Roaming\\nltk_data']
nltk.download(
'punkt', download_dir='C:\\Users\\zaesa\\AppData\\Roaming\\nltk_data')
sys.path.append('../..')
_ = load_dotenv(find_dotenv()) # read local .env file
openai.api_key = os.environ['OPENAI_API_KEY']
folder_path_docx = "DB\\ DB VARIADO\\DOCS"
folder_path_txt = "DB\\BLOG-POSTS"
folder_path_pptx_1 = "DB\\PPT DAY JUNIO"
folder_path_pptx_2 = "DB\\DB VARIADO\\PPTX"
loaded_content = []
for file in os.listdir(folder_path_docx):
if file.endswith(".docx"):
file_path = os.path.join(folder_path_docx, file)
loader = UnstructuredWordDocumentLoader(file_path)
docx = loader.load()
loaded_content.extend(docx)
for file in os.listdir(folder_path_txt):
if file.endswith(".txt"):
file_path = os.path.join(folder_path_txt, file)
loader = TextLoader(file_path, encoding='utf-8')
text = loader.load()
loaded_content.extend(text)
for file in os.listdir(folder_path_pptx_1):
if file.endswith(".pptx"):
file_path = os.path.join(folder_path_pptx_1, file)
loader = UnstructuredPowerPointLoader(file_path)
slides_1 = loader.load()
loaded_content.extend(slides_1)
for file in os.listdir(folder_path_pptx_2):
if file.endswith(".pptx"):
file_path = os.path.join(folder_path_pptx_2, file)
loader = UnstructuredPowerPointLoader(file_path)
slides_2 = loader.load()
loaded_content.extend(slides_2)
print(loaded_content[0].page_content)
print(nltk.data.path)
installed_packages = nltk.downloader.Downloader(
download_dir='C:\\Users\\zaesa\\AppData\\Roaming\\nltk_data').packages()
print(installed_packages)
sent_tokenize("Hello. How are you? I'm well.")
```
When running the file I get:
```
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading taggers: Package 'taggers' not found in
[nltk_data] index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading taggers: Package 'taggers' not found in
[nltk_data] index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading taggers: Package 'taggers' not found in
[nltk_data] index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading taggers: Package 'taggers' not found in
[nltk_data] index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading taggers: Package 'taggers' not found in
[nltk_data] index
- HERE SOME TEXT -
['C:\\Users\\zaesa\\AppData\\Roaming\\nltk_data']
dict_values([<Package perluniprops>, <Package mwa_ppdb>, <Package punkt>, <Package rslp>, <Package porter_test>, <Package snowball_data>, <Package maxent_ne_chunker>, <Package moses_sample>, <Package bllip_wsj_no_aux>, <Package word2vec_sample>, <Package wmt15_eval>, <Package spanish_grammars>, <Package sample_grammars>, <Package large_grammars>, <Package book_grammars>, <Package basque_grammars>, <Package maxent_treebank_pos_tagger>, <Package averaged_perceptron_tagger>, <Package averaged_perceptron_tagger_ru>, <Package universal_tagset>, <Package vader_lexicon>, <Package lin_thesaurus>, <Package movie_reviews>, <Package problem_reports>, <Package pros_cons>, <Package masc_tagged>, <Package sentence_polarity>, <Package webtext>, <Package nps_chat>, <Package city_database>, <Package europarl_raw>, <Package biocreative_ppi>, <Package verbnet3>, <Package pe08>, <Package pil>, <Package crubadan>, <Package gutenberg>, <Package propbank>, <Package machado>, <Package state_union>, <Package twitter_samples>, <Package semcor>, <Package wordnet31>, <Package extended_omw>, <Package names>, <Package ptb>, <Package nombank.1.0>, <Package floresta>, <Package comtrans>, <Package knbc>, <Package mac_morpho>, <Package swadesh>, <Package rte>, <Package toolbox>, <Package jeita>, <Package product_reviews_1>, <Package omw>, <Package wordnet2022>, <Package sentiwordnet>, <Package product_reviews_2>, <Package abc>, <Package wordnet2021>, <Package udhr2>, <Package senseval>, <Package words>, <Package framenet_v15>, <Package unicode_samples>, <Package kimmo>, <Package framenet_v17>, <Package chat80>, <Package qc>, <Package inaugural>, <Package wordnet>, <Package stopwords>, <Package verbnet>, <Package shakespeare>, <Package ycoe>, <Package ieer>, <Package cess_cat>, <Package switchboard>, <Package comparative_sentences>, <Package subjectivity>, <Package udhr>, <Package pl196x>, <Package paradigms>, <Package gazetteers>, <Package timit>, <Package treebank>, <Package sinica_treebank>, <Package opinion_lexicon>, <Package ppattach>, <Package dependency_treebank>, <Package reuters>, <Package genesis>, <Package cess_esp>, <Package conll2007>, <Package nonbreaking_prefixes>, <Package dolch>, <Package smultron>, <Package alpino>, <Package wordnet_ic>, <Package brown>, <Package bcp47>, <Package panlex_swadesh>, <Package conll2000>, <Package universal_treebanks_v20>, <Package brown_tei>, <Package cmudict>, <Package omw-1.4>, <Package mte_teip5>, <Package indian>, <Package conll2002>, <Package tagsets>])
```
And here is what my nltk_data folder structure looks like:
<img width="841" alt="nltk-screenshot" src="https://github.com/langchain-ai/langchain/assets/29057173/c094ec97-6e9b-4a3c-83ae-afbe08af3380">
<img width="842" alt="taggers-screenshot" src="https://github.com/langchain-ai/langchain/assets/29057173/e943a0cc-6897-4a59-9d23-0c7e5c080f37">
<img width="835" alt="tokeenizers-screenshot" src="https://github.com/langchain-ai/langchain/assets/29057173/184d69f3-d8a9-42e1-a792-a78044f54076">
<img width="838" alt="punkt-screenshot" src="https://github.com/langchain-ai/langchain/assets/29057173/e1158619-1fdc-4fd5-b014-53ddc802e9c4">
### Suggestion:
I have fresh installed nltk with no dependencies. The version is the latest. The support team from NLTK doesn't know what is wrong. It seems everything is fine. So, it has to be a bug or something coming from Langchain that I'm not able to see. Really appreciate any help. Need to make this work! Thank you. | Working with Langchain I get nlkt errors telling me: Package "tokenizers" not found in index and Packaage "taggers" not found in index | https://api.github.com/repos/langchain-ai/langchain/issues/8419/comments | 8 | 2023-07-28T12:23:04Z | 2023-11-03T16:06:43Z | https://github.com/langchain-ai/langchain/issues/8419 | 1,826,332,861 | 8,419 |
[
"langchain-ai",
"langchain"
] | I'd like to incorporate this 'system_message' every time I call 'qa.run(prompt)'.
How is this possible? Can someone help?
Here is the code I wrote to initialize the LLM and the RetrievalQA:
```python
from langchain.chat_models import ChatOpenAI
from langchain.chains.conversation.memory import ConversationBufferWindowMemory
from langchain.prompts import ChatPromptTemplate
from langchain.chains import RetrievalQA, LLMChain
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage
)
llm = ChatOpenAI(
openai_api_key=OPENAI_API_KEY,
model_name = 'gpt-3.5-turbo',
temperature=0.0
)
system_message = [SystemMessage(
content='You are a Virtual Vet. '
'You should help clients with their concerns about their pets and provide helpful solutions.'
'You can ask questions to help you understand and diagnose the problem.'
'You should only talk within the context of problem.'
    'If you are unsure of how to help, you can suggest that the client go to the nearest clinic in their area.'
    'You should talk in German, unless the client talks in English.')]
conversational_memory = ConversationBufferWindowMemory(
memory_key='chat_history',
k=2,
return_messages=True
)
qa = RetrievalQA.from_chain_type(
llm=llm,
chain_type='stuff',
retriever=vectorstore.as_retriever(),
)
```
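For context, the closest thing I've found is passing a custom prompt through `chain_type_kwargs`; a sketch of what I mean (not sure this is the intended way, and I'm assuming `{context}`/`{question}` are the variables the stuff chain expects):

```python
from langchain.prompts import (
    ChatPromptTemplate,
    SystemMessagePromptTemplate,
    HumanMessagePromptTemplate,
)

prompt = ChatPromptTemplate.from_messages([
    SystemMessagePromptTemplate.from_template(
        "You are a Virtual Vet. Use the following context to help the client.\n\n{context}"
    ),
    HumanMessagePromptTemplate.from_template("{question}"),
])

qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=vectorstore.as_retriever(),
    chain_type_kwargs={"prompt": prompt},
)
```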
| Question: How can I include SystemMessage with RetrievalQA | https://api.github.com/repos/langchain-ai/langchain/issues/8418/comments | 5 | 2023-07-28T12:05:32Z | 2023-08-01T06:22:50Z | https://github.com/langchain-ai/langchain/issues/8418 | 1,826,310,026 | 8,418 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.245
gptcache==0.1.37
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from gptcache import Cache
from gptcache.adapter.api import init_similar_cache

import langchain
from langchain.cache import GPTCache
from langchain.llms import OpenAI
# In the first llm predict call the cache is not initialized and always returns None
def init_gptcache(cache_obj: Cache, llm_str: str):
init_similar_cache(cache_obj=cache_obj, data_dir=f"similar_cache_{llm_str}")
langchain.llm_cache = GPTCache(init_gptcache)
llm = OpenAI(model_name="text-davinci-002", temperature=0.2)
llm.predict("tell me a joke")
```
### Expected behavior
On the first predict call, the GPTCache object should be created before the lookup so the cache is actually checked; as it stands, the first lookup always returns None. This becomes a problem if I use external DBs for gptcache in my langchain app.
Please assign this issue to me. I am willing to contribute on this one. | GPTCache object should be created during or before the first lookup | https://api.github.com/repos/langchain-ai/langchain/issues/8415/comments | 1 | 2023-07-28T10:24:52Z | 2023-08-02T15:17:50Z | https://github.com/langchain-ai/langchain/issues/8415 | 1,826,173,560 | 8,415 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
My Chatbot is fairly straight forward. It uses MosaicMLInstructorEmbeddings() as the embedding and MosaicML() as the model. I have a file that I want to retrieve from. When outputting an answer, for instance:
What is AliBi?
It will give a response like:
Answer: LiBi (Attention with Linear Biases) dispenses with position embeddings for tokens in transformer-based NLP models, instead encoding position information by biasing the query-key attention scores proportionally to each token pair’s distance. ALiBi yields excellent extrapolation to unseen sequence lengths compared to other position embedding schemes. We leverage this extrapolation capability by training with shorter sequence lengths, which reduces the memory and computation load.
Or when I give it:
Answer: arded checkpointing is a feature in distributed systems that allows for the checkpointing of the state of the system to be divided into multiple shards. This is useful for systems that have a large amount of data or compute to perform.
I have noticed that the chatbot fairly consistently drops the first or first two characters and I'm wondering if this is a bug with LangChain, MosaicMLInstructorEmbeddings, or MosaicML.
Here is my chatbot file:
https://github.com/KuuCi/examples/blob/support-bot/examples/end-to-end-examples/support_chatbot/chatbot.py
### Suggestion:
_No response_ | Issue: LangChain is fairly consistently dropping the first one or two characters of the chain answer. | https://api.github.com/repos/langchain-ai/langchain/issues/8413/comments | 7 | 2023-07-28T08:53:41Z | 2023-07-31T22:00:04Z | https://github.com/langchain-ai/langchain/issues/8413 | 1,826,004,866 | 8,413 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have added the tools named A_tool and default_tool to a `ZeroShotAgent`.
The output from one of the executions looks like:
```
Thought: We need to use the A_tool.
Action: Use A_tool
Observation: Use A_tool is not a valid tool, try another one.
Thought:We need to use the default_tool since A_tool is not a valid tool.
Action: Use default_tool
Action Input: None
Observation: Use default_tool is not a valid tool, try another one.
```
I am not sure why the prefix `Use ` is getting added to the Action; ideally the action should be just the name of the tool (`A_tool` or `default_tool`).
This is causing InvalidTool to be invoked again and again.
What can I do to fix this issue?
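The only workaround I can think of is normalizing the action name before the tool lookup with a custom output parser; a rough sketch (assuming wrapping the stock MRKL parser like this is legal):

```python
from typing import Union

from langchain.agents.agent import AgentOutputParser
from langchain.agents.mrkl.output_parser import MRKLOutputParser
from langchain.schema import AgentAction, AgentFinish

class StripUsePrefixParser(AgentOutputParser):
    def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
        result = MRKLOutputParser().parse(text)
        if isinstance(result, AgentAction) and result.tool.startswith("Use "):
            # drop the spurious prefix so "Use A_tool" resolves to "A_tool"
            result = AgentAction(tool=result.tool[4:], tool_input=result.tool_input, log=result.log)
        return result

    def get_format_instructions(self) -> str:
        return MRKLOutputParser().get_format_instructions()
```

This could then be passed as `output_parser=StripUsePrefixParser()` when building the ZeroShotAgent, but I'd rather understand why the prefix shows up at all.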
### Suggestion:
_No response_ | Issue: prefix `Use ` being added to agent action causing InvalidTool to be invoked again and again | https://api.github.com/repos/langchain-ai/langchain/issues/8407/comments | 7 | 2023-07-28T06:15:22Z | 2024-03-30T16:04:56Z | https://github.com/langchain-ai/langchain/issues/8407 | 1,825,780,100 | 8,407 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Currently, I'm using SequentialChain class to combine two steps in my workflow.
Step1: I'm using the LLM through prompt to identify the intent of the question posed by the user.
Step2: I'm using the csv based agent to answer the question posed by the user based on the csv file, but my aim is to answer the question only if the intent of the question is a textual response.
Below are the code snippets I used to create the SequentialChain:
```python
model = AzureOpenAI(
    temperature=0,
    deployment_name="",
    openai_api_key="",
    openai_api_version="",
    openai_api_base="",
)

template = """
You will help me identify the intent with the following examples and instructions.
Give your response in this format {{"Intent":"<identified intent>",
"Question":"<Input question>"}}
### Instructions
# Different possible intents are textResponse, BarChart, LineChart.
# If the question doesn't come under any of the intents, identify it as a None intent.
####
### Examples
Question: What is the total count of stores in 2022?
Intent: textResponse
Question: What is the split of sale amount for each sale type?
Intent: BarChart
Question: What is the monthly trend of sales amount in 2022?
Intent: LineChart
Question: {input}
"""

prompt = PromptTemplate(
    input_variables=["input"],
    template=template,
)

chain_one = LLMChain(llm=model, prompt=prompt, output_key="intent")

agent = create_csv_agent(
    AzureOpenAI(
        temperature=0.5,
        top_p=0.5,
        deployment_name="",
        openai_api_key="",
        openai_api_version="",
        openai_api_base="",
    ),
    <csv file path>,
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    handle_parsing_errors=True,
)

agent.agent.llm_chain.prompt.template = """
You are a friendly assistant who is warm and thoughtful and never rude or curt.
Possible Intents are: textResponse, LineChart, BarChart, None
You will act further only if the {intent} is textResponse, else your Final Answer will be I cannot respond to your query.
If {intent} is textResponse use the python_repl_ast to answer the question.
You should use the tools below to answer the question posed of you:
python_repl_ast: A Python shell. Use this to execute python commands.
You should use the python_repl_ast to answer the question posed of you. You are working with a pandas dataframe in Python.
The name of the dataframe is `df`.
Input to python_repl_ast should be a valid python command.
Give your Final Answer in this format {{"output":"Final Answer"}}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do.
Action: the action to take, should be one of [python_repl_ast]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question.
This is the result of `print(df.head())`:{df_head}
Begin!
Question: {input}
{agent_scratchpad}
"""

# append my input variables to the already pre-defined ones for the csv agent
input_var = agent.agent.llm_chain.prompt.input_variables
input_var.append("intent")

agent.agent.llm_chain.output_key = "FinalAnswer"
chain_two = agent

overall_chain = SequentialChain(
    chains=[chain_one, chain_two],
    input_variables=["input"],
    output_variables=["intent", "FinalAnswer"],
    verbose=True,
)
overall_chain.run(input="count of total stores in 2022")
```
Now, when I run the above code I get the following error:
**validation error for SequentialChain __root__ Expected output variables that were not found: {'FinalAnswer'}. (type=value_error)**
As far as I understood the langchain documentation (https://python.langchain.com/docs/modules/chains/foundational/sequential_chains), the output_key must be defined for each LLM call so that the model tags its response to that key; hence I set the output key on the agent through the llm_chain.output_key property. But the code still throws an error saying the output variables were not found.
Is this a bug in langchain when binding csv agents to the SequentialChain class, or am I missing something? Can someone please help?
### Suggestion:
_No response_ | Issue: Unable to define "output_key" param of the LLM chain class for a csv agent while binding to SequentialChain class | https://api.github.com/repos/langchain-ai/langchain/issues/8406/comments | 2 | 2023-07-28T06:13:52Z | 2023-08-02T04:54:40Z | https://github.com/langchain-ai/langchain/issues/8406 | 1,825,778,483 | 8,406 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I'm interested in the code segmenters that were created. So far we have Python and JavaScript. I'm looking for a TypeScript one in particular; the JS one marks my TypeScript as invalid. But in general, full parser-backed support for every language in the text splitter's Language enum would be ideal.
### Motivation
This call is ok to start:
`RecursiveCharacterTextSplitter.get_separators_for_language(Language.JS)`
but when we split on the word `function` we cut off `export async` or `default`, and especially the JSDoc, which is probably the best thing to start a chunk of a function with.
Splitting on keywords like this is clunky as a long-term solution because of the above cases and others; something that splits files more carefully would be better (see the sketch below).
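To illustrate, here is roughly how I split today (this part is the real API as far as I can tell), which is where the `function`-keyword cuts come from:

```python
from langchain.text_splitter import Language, RecursiveCharacterTextSplitter

js_splitter = RecursiveCharacterTextSplitter.from_language(
    language=Language.JS, chunk_size=200, chunk_overlap=0
)

code = """
/** Adds two numbers. */
export async function add(a, b) {
  return a + b;
}
"""
# chunks are cut on the "function" separator, so "export async" and the
# JSDoc comment can end up detached from the function body
print(js_splitter.split_text(code))
```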
### Your contribution
Maybe instead of a parser like esprima, there's just a generic one that's powered by an LLM. | Add more CodeSegmenters | https://api.github.com/repos/langchain-ai/langchain/issues/8405/comments | 3 | 2023-07-28T06:06:12Z | 2024-02-13T16:45:52Z | https://github.com/langchain-ai/langchain/issues/8405 | 1,825,765,474 | 8,405
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello, I'm currently using the load_summarize_chain function to summarize extensive documents, specifically those about 250 pages or more. In this process, I'm employing the map_reduce chain type along with a custom prompt.
The process works as follows:
In the initial step, the chain sends parallel requests to LLM to summarize chunks of data.
Upon receiving all responses, the summary chain collects them, combines them into a single prompt, and generates the final output.
I'm curious about two aspects of this process:
In the second step, the chain employs the StuffDocumentsChain method to create the final summary, despite `map_reduce` being set as the chain_type. Is it possible to use an alternative chain type for this step? If yes, could you recommend one?
Would it be feasible to utilize different large language models (LLMs) in the two steps? For instance, could I use GPT-3.5 for the first step and GPT-4 for generating the final summary?
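For question 2, this is roughly what I'm imagining: assembling the map-reduce chain by hand with a different LLM per step. A sketch, assuming the combine-documents classes can be wired up this way:

```python
from langchain.chains import LLMChain
from langchain.chains.combine_documents.map_reduce import MapReduceDocumentsChain
from langchain.chains.combine_documents.reduce import ReduceDocumentsChain
from langchain.chains.combine_documents.stuff import StuffDocumentsChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate

map_prompt = PromptTemplate.from_template("Summarize this chunk:\n\n{text}")
combine_prompt = PromptTemplate.from_template("Combine these summaries into one summary:\n\n{text}")

# cheaper model for the per-chunk map step
map_chain = LLMChain(llm=ChatOpenAI(model_name="gpt-3.5-turbo"), prompt=map_prompt)
# stronger model for the final combine step
final_chain = LLMChain(llm=ChatOpenAI(model_name="gpt-4"), prompt=combine_prompt)

chain = MapReduceDocumentsChain(
    llm_chain=map_chain,
    reduce_documents_chain=ReduceDocumentsChain(
        combine_documents_chain=StuffDocumentsChain(
            llm_chain=final_chain, document_variable_name="text"
        )
    ),
    document_variable_name="text",
)
```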
I appreciate your help and look forward to your response.
Thanks in advance!
### Suggestion:
_No response_ | Questions Regarding load_summarize_chain Implementation and LLM Models Usage | https://api.github.com/repos/langchain-ai/langchain/issues/8399/comments | 2 | 2023-07-28T04:43:43Z | 2023-12-07T16:07:05Z | https://github.com/langchain-ai/langchain/issues/8399 | 1,825,692,044 | 8,399 |
[
"langchain-ai",
"langchain"
] | For example, we have two tools: one is a search tool used to search transactions, and the other is a date tool.
For the search tool, we have three parameters. If the user provides fewer parameters, the search tool should ask the user for the missing information; the user then provides the supplementary information, and the agent needs to combine the dialogue history to keep routing to the search tool until the information is complete, at which point the search tool returns the query results. | can you provide agent examples for multiple rounds of dialogue? | https://api.github.com/repos/langchain-ai/langchain/issues/8396/comments | 1 | 2023-07-28T03:45:01Z | 2023-11-03T16:05:37Z | https://github.com/langchain-ai/langchain/issues/8396 | 1,825,651,360 | 8,396
[
"langchain-ai",
"langchain"
] | I just wrote a custom agent to classify intent and choose between different tools, but I don't know how to get the response from the agent; when I print the answer, it shows "Agent stopped due to iteration limit or time limit". Code is shown below, thanks.
```python
tools = [SearchTool(), DateTool()]
agent = IntentAgent(tools=tools, llm=llm)
agent_exec = AgentExecutor.from_agent_and_tools(
    agent=agent, tools=tools, verbose=True, max_iterations=1
)
# note: max_iterations=1 makes the executor give up after a single step,
# which is what produces the "stopped due to iteration limit" answer
answer = agent_exec.run(prompt)
```
 | how can i get response of agent | https://api.github.com/repos/langchain-ai/langchain/issues/8393/comments | 4 | 2023-07-28T02:39:03Z | 2023-11-03T16:06:06Z | https://github.com/langchain-ai/langchain/issues/8393 | 1,825,595,680 | 8,393
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I have tried to make a wrapper around my LLMs, but the class can't be instantiated.
Can't even get the example here to work:
https://python.langchain.com/docs/modules/model_io/models/llms/custom_llm
Can someone help out with this?
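For reference, this is essentially the example from that page as I ran it (trimmed to the minimum; as far as I can tell it matches the doc):

```python
from typing import Any, List, Optional

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM

class CustomLLM(LLM):
    n: int

    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        return prompt[: self.n]

llm = CustomLLM(n=10)  # this instantiation is what fails for me
```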
### Idea or request for content:
_No response_ | DOC: Custom LLM Wrappers not functional | https://api.github.com/repos/langchain-ai/langchain/issues/8392/comments | 1 | 2023-07-28T02:06:13Z | 2023-08-06T18:32:16Z | https://github.com/langchain-ai/langchain/issues/8392 | 1,825,568,030 | 8,392 |
[
"langchain-ai",
"langchain"
] | ### System Info
(h2ogpt) jon@pseudotensor:~/h2ogpt$ pip freeze | grep langchain
langchain==0.0.235
langchainplus-sdk==0.0.20
Python 3.10
(h2ogpt) jon@pseudotensor:~/h2ogpt$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 20.04.6 LTS
Release: 20.04
Codename: focal
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [x] Async
### Reproduction
https://github.com/h2oai/h2ogpt/pull/551/commits/cc3331d897f4f7bab13c7f9644e7a7d7cd35031e
The commit above shows my introduction of async, compared to the previous synchronous version.
The text generation inference server is configured with a large concurrency limit, but it shows requests coming in back-to-back.
### Expected behavior
I expect the summarization part to be parallel, as stated here:
https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/combine_documents/map_reduce.py#L210-L213
But perhaps I misunderstand something. Or perhaps it's not really parallel:
https://github.com/langchain-ai/langchain/issues/1145#issuecomment-1586234397
There's lots of discussion w.r.t. hitting rate limit with OpenAI:
https://github.com/langchain-ai/langchain/issues/2465
https://github.com/langchain-ai/langchain/issues/1643
So I presume this works, but I'm not seeing it. In the OpenAI case it seems to be done via batching, which is possible in the HF TGI server but not implemented. But I would have thought that all the reduction tasks could have run in parallel with asyncio.
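As a cross-check, a minimal timing harness I'd expect to show overlap if the async path really fans out (a sketch; assumes the chain accepts a list of input documents per call):

```python
import asyncio
import time

async def time_parallel(chain, chunks):
    t0 = time.time()
    # one arun per chunk; with true concurrency this should take roughly one chunk's latency
    results = await asyncio.gather(*(chain.arun([c]) for c in chunks))
    print(f"{len(results)} chunks in {time.time() - t0:.1f}s")
    return results
```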
https://github.com/langchain-ai/langchain/pull/1463#issuecomment-1566391189 | chain.arun() for summarization no faster than chain() | https://api.github.com/repos/langchain-ai/langchain/issues/8391/comments | 8 | 2023-07-28T01:09:54Z | 2023-07-28T05:12:04Z | https://github.com/langchain-ai/langchain/issues/8391 | 1,825,511,510 | 8,391 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I would like to deploy my langchain project in a shared host server , so I will need tobe able APIs to receive request and return response to the end devices. I think this feature is already done , however, I do not how to implement it in my project
### Idea or request for content:
_No response_ | DOC: how to setup APIs in my project to receive requests and returns responses from and to end devices | https://api.github.com/repos/langchain-ai/langchain/issues/8390/comments | 4 | 2023-07-28T00:17:12Z | 2024-02-10T16:20:48Z | https://github.com/langchain-ai/langchain/issues/8390 | 1,825,424,441 | 8,390 |
[
"langchain-ai",
"langchain"
] | ### System Info
I wonder whether the input_documents are combined with the relevant documents from the retriever. I have been comparing results, and although wiki_doc provides different context, that is not reflected in the response. I have tried both versions, and the information from "input_documents" is not reflected in result1. I looked through the examples and didn't find one where both a "retriever" and "input_documents" are used together; it is either one or the other.
```python
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
qa_chain = RetrievalQA.from_chain_type(llm, retriever=ret_para, chain_type="stuff")
result1 = qa_chain({"input_documents": wiki_doc, "query": query})
result2 = qa_chain({"query": query})
```
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
1. Load any db
2. `ret = db.as_retriever()`
3. `llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)`
4. `qa_chain = RetrievalQA.from_chain_type(llm, retriever=ret, chain_type="stuff")`
5. Create an `additional_doc` list from some other source
6. `result1 = qa_chain({"input_documents": additional_doc, "query": query})`
7. `result2 = qa_chain({"query": query})`
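For comparison, the only way I've found to make caller-supplied documents count is to bypass the retriever and use load_qa_chain directly, merging the two document lists by hand (a sketch):

```python
from langchain.chains.question_answering import load_qa_chain

docs = additional_doc + ret.get_relevant_documents(query)
qa = load_qa_chain(llm, chain_type="stuff")
result = qa({"input_documents": docs, "question": query})
```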
### Expected behavior
result1 and result2 are expected to produce different output, since they have different contexts, if input_documents is properly combined with the relevant documents from the retriever. | qa_chain with retriever and input_documents. | https://api.github.com/repos/langchain-ai/langchain/issues/8386/comments | 1 | 2023-07-27T22:35:13Z | 2023-11-02T16:04:35Z | https://github.com/langchain-ai/langchain/issues/8386 | 1,825,313,643 | 8,386
[
"langchain-ai",
"langchain"
] | ### Feature request
An integration of exllama into LangChain, to be able to use 4-bit GPTQ weights; exllama is designed to be fast and memory-efficient on modern GPUs.
### Motivation
The benchmarks on the official repo speak for themselves:
https://github.com/turboderp/exllama#results-so-far
### Your contribution
There is a fork that uses exllama with langchain here:
https://github.com/CoffeeVampir3/exllama-langchain-example/tree/master | Exllama integration to run GPTQ models | https://api.github.com/repos/langchain-ai/langchain/issues/8385/comments | 12 | 2023-07-27T22:26:51Z | 2024-05-31T23:49:27Z | https://github.com/langchain-ai/langchain/issues/8385 | 1,825,307,485 | 8,385 |
[
"langchain-ai",
"langchain"
] | ### Feature request
This is related to [HyDE](https://python.langchain.com/docs/modules/chains/additional/hyde). But I feel my feature request puts the HyDE example in a more realistic context, and naturally extends the RetrievalQA chain.
My suggestion would be to allow the text passed to the retriever to be different from the query passed to the LLM. This would be useful as a fallback when the original query, as provided by the user, yields a response of "I don't know" from the LLM, or returns no or poor-scoring documents during the retrieval step. (A code sketch of the flow follows the step list below.)
Step 1, Embed original query
Step 2, retrieve candidate documents based on embedded original query
Step 3, pass original query & retrieved documents to LLM for answer
If Step 2 yields 0/low score documents OR Step 3 yields no/low confidence answer...
Step 4, generate an alternate _hypothetical_ answer (i.e., as described in [HyDE](https://arxiv.org/abs/2212.10496)
Step 5, embed _hypothetical_ answer
Step 6, retrieve candidate documents based on embedded _hypothetical_ answer
Step 7, pass original query & _newly_ retrieved documents to LLM for answer
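In code, the flow would look roughly like this (a sketch; here qa_chain is a plain load_qa_chain "stuff" chain, and the low-confidence check is just a placeholder):

```python
from langchain.chains.question_answering import load_qa_chain

qa_chain = load_qa_chain(llm, chain_type="stuff")

docs = retriever.get_relevant_documents(query)
answer = qa_chain({"input_documents": docs, "question": query})["output_text"]

if not docs or "don't know" in answer.lower():
    # HyDE-style fallback: embed a hypothetical answer instead of the raw query
    hypothetical = llm.predict(f"Write a short passage that answers: {query}")
    docs = retriever.get_relevant_documents(hypothetical)
    answer = qa_chain({"input_documents": docs, "question": query})["output_text"]
```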
### Motivation
Perhaps I have a query that does not yield good retrieval documents from my main knowledge-base corpus: it is poorly formed, it has spelling errors, it is vague, or it is brief, even though the corpus does contain the factual answer to the intent of the query. A different form of the original query would yield better retrieved documents and therefore produce the correct, complete answer desired.
For example, a very simple question like "Who's Character X?" may yield a wide range of sub-optimal documents from a QA retriever. As a result, the LLM's response may be muddled or impossible to generate.
### Your contribution
Happy to refine the suggestion and provide concrete examples. | RetrievalQA: Submit different queries to Retriever and LLM | https://api.github.com/repos/langchain-ai/langchain/issues/8380/comments | 3 | 2023-07-27T20:32:27Z | 2023-10-27T16:18:45Z | https://github.com/langchain-ai/langchain/issues/8380 | 1,825,171,219 | 8,380 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.244
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create a new Matching Engine Index Endpoint that is public.
Follow the tutorial to make a similarity search:
```
vector_store = MatchingEngine.from_components(
project_id="",
region="us-central1",
gcs_bucket_name="",
index_id="",
endpoint_id="",
embedding=embeddings,
)
vector_store.similarity_search("what is a cat?", k=5)
```
Error:
```
File ~/code/gcp-langchain-retrieval-augmentation/embeddings/.venv/lib/python3.9/site-packages/grpc/_channel.py:1030, in _UnaryUnaryMultiCallable.__call__(self, request, timeout, metadata, credentials, wait_for_ready, compression)
1021 def __call__(self,
1022 request: Any,
1023 timeout: Optional[float] = None,
(...)
1026 wait_for_ready: Optional[bool] = None,
1027 compression: Optional[grpc.Compression] = None) -> Any:
1028 state, call, = self._blocking(request, timeout, metadata, credentials,
1029 wait_for_ready, compression)
-> 1030 return _end_unary_response_blocking(state, call, False, None)
File ~/code/gcp-langchain-retrieval-augmentation/embeddings/.venv/lib/python3.9/site-packages/grpc/_channel.py:910, in _end_unary_response_blocking(state, call, with_call, deadline)
908 return state.response
909 else:
--> 910 raise _InactiveRpcError(state)
_InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.UNAVAILABLE
details = "DNS resolution failed for :10000: unparseable host:port"
debug_error_string = "UNKNOWN:DNS resolution failed for :10000: unparseable host:port {created_time:"2023-07-27T20:12:23.727315699+00:00", grpc_status:14}"
>
```
### Expected behavior
It should be possible to do this. The VertexAI Python SDK supports it with the `endpoint.find_neighbors` function.
I think just changing [the wrapper](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/vectorstores/matching_engine.py#L178) from `.match` to `.find_neighbors` for when the endpoint is public should do it.
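For reference, this is the SDK call that does work against a public endpoint (a sketch; deployed_index_id and the query embedding come from my setup):

```python
# google-cloud-aiplatform, against a public index endpoint
neighbors = endpoint.find_neighbors(
    deployed_index_id=deployed_index_id,
    queries=[query_embedding],
    num_neighbors=5,
)
```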
| GCP Matching Engine support for public index endpoints | https://api.github.com/repos/langchain-ai/langchain/issues/8378/comments | 7 | 2023-07-27T20:14:21Z | 2023-11-21T02:11:30Z | https://github.com/langchain-ai/langchain/issues/8378 | 1,825,144,169 | 8,378 |
[
"langchain-ai",
"langchain"
] | ### System Info
**RecursiveUrlLoader** is not working. Please refer to the code below; the "docs" size is always 0.
https://python.langchain.com/docs/integrations/document_loaders/recursive_url_loader

```python
from langchain.document_loaders.recursive_url_loader import RecursiveUrlLoader

url = "https://js.langchain.com/docs/modules/memory/examples/"
loader = RecursiveUrlLoader(url=url)
docs = loader.load()
print(len(docs))
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.document_loaders.recursive_url_loader import RecursiveUrlLoader

url = "https://js.langchain.com/docs/modules/memory/examples/"
loader = RecursiveUrlLoader(url=url)
docs = loader.load()
print(len(docs))
```
Here the length prints as zero.
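The same doc page also shows a variant with an explicit extractor (using BeautifulSoup); I'd expect it to behave the same way:

```python
from bs4 import BeautifulSoup as Soup

loader = RecursiveUrlLoader(
    url=url, max_depth=2, extractor=lambda x: Soup(x, "html.parser").text
)
docs = loader.load()
print(len(docs))
```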
### Expected behavior
All the URLs under the given path should be scraped | RecursiveUrlLoader is not working | https://api.github.com/repos/langchain-ai/langchain/issues/8367/comments | 2 | 2023-07-27T17:30:39Z | 2023-11-02T16:04:39Z | https://github.com/langchain-ai/langchain/issues/8367 | 1,824,862,056 | 8,367
[
"langchain-ai",
"langchain"
] | ### Feature request
The `_id` field should be populated with these in order:
1. `ids: Optional[List[str]] = None,` -- already in place
2. Document metadata (filename + start_index) -- new feature
3. Leave `_id` empty (instead of uuid) -- new feature
https://github.com/langchain-ai/langchain/blob/cf608f876b0ada9ac965fe5b25b5ca6e5e47feeb/libs/langchain/langchain/vectorstores/opensearch_vector_search.py#L125
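For option 2, something like this is what I have in mind (a sketch; the metadata keys are assumptions about what the loaders populate):

```python
import hashlib

def deterministic_id(doc) -> str:
    # stable across re-ingestion of the same file/offset
    key = f"{doc.metadata.get('filename', '')}:{doc.metadata.get('start_index', '')}"
    return hashlib.sha1(key.encode("utf-8")).hexdigest()
```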
### Motivation
A deterministic _id is very important for a robust data pipeline: loading a duplicate then results in a version increment instead of a new document with the same content.
### Your contribution
I can do a PR if the idea is approved. | OpenSearch bulk ingest with deterministic _id if available | https://api.github.com/repos/langchain-ai/langchain/issues/8366/comments | 2 | 2023-07-27T17:23:26Z | 2023-10-26T16:40:38Z | https://github.com/langchain-ai/langchain/issues/8366 | 1,824,852,567 | 8,366 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: 0.0.244
Numexpr version: 2.8.4
Python version: 3.10.11
### Who can help?
@hwchase17 @vowelparrot
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Numexpr's evaluate function that Langchain uses [here](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/chains/llm_math/base.py#L80) in the LLMMathChain is susceptible to arbitrary code execution with eval in the latest released version. See this [issue](https://github.com/pydata/numexpr/issues/442) where a PoC for numexpr's evaluate is also provided.
This vulnerability allows arbitrary code execution, that is, running code and commands on the target machine, via LLMMathChain's run method with the right prompt. I'd like to ask Langchain's maintainers to confirm whether they want a full PoC with Langchain posted here publicly.
### Expected behavior
Numerical expressions should be evaluated securely so as to not allow code execution. | Arbitrary code execution in LLMMathChain | https://api.github.com/repos/langchain-ai/langchain/issues/8363/comments | 33 | 2023-07-27T16:00:56Z | 2024-03-13T16:12:31Z | https://github.com/langchain-ai/langchain/issues/8363 | 1,824,692,692 | 8,363 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Following this: https://github.com/levalencia/langchainLuisValencia/blob/master/.github/CONTRIBUTING.md#-common-tasks
On Windows.
Forked the repo, cloned it locally, created a conda environment (Poetry was already installed), then installed all dependencies.
Error received:
```
• Installing xmltodict (0.13.0)
• Installing zstandard (0.21.0)
ChefBuildError
Backend subprocess exited when trying to invoke get_requires_for_build_wheel
Traceback (most recent call last):
File "C:\Users\xx\AppData\Roaming\pypoetry\venv\Lib\site-packages\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "C:\Users\xx\AppData\Roaming\pypoetry\venv\Lib\site-packages\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "C:\Users\xx\AppData\Roaming\pypoetry\venv\Lib\site-packages\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File "C:\Users\LUISVA~1\AppData\Local\Temp\tmpn9w63h_u\.venv\lib\site-packages\setuptools\build_meta.py", line 341, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
File "C:\Users\LUISVA~1\AppData\Local\Temp\tmpn9w63h_u\.venv\lib\site-packages\setuptools\build_meta.py", line 323, in _get_build_requires
self.run_setup()
File "C:\Users\LUISVA~1\AppData\Local\Temp\tmpn9w63h_u\.venv\lib\site-packages\setuptools\build_meta.py", line 487, in run_setup
super(_BuildMetaLegacyBackend,
File "C:\Users\LUISVA~1\AppData\Local\Temp\tmpn9w63h_u\.venv\lib\site-packages\setuptools\build_meta.py", line 338, in run_setup
exec(code, locals())
File "<string>", line 8, in <module>
RuntimeError: uvloop does not support Windows at the moment
at ~\AppData\Roaming\pypoetry\venv\Lib\site-packages\poetry\installation\chef.py:147 in _prepare
143│
144│ error = ChefBuildError("\n\n".join(message_parts))
145│
146│ if error is not None:
→ 147│ raise error from None
148│
149│ return path
150│
151│ def _prepare_sdist(self, archive: Path, destination: Path | None = None) -> Path:
Note: This error originates from the build backend, and is likely not a problem with poetry but with uvloop (0.17.0) not supporting PEP 517 builds. You can verify this by running 'pip wheel --use-pep517 "uvloop (==0.17.0)"'.
```
How can I fix this?
### Idea or request for content:
Poetry Version 1.5.1
What do I need to do to be able to test locally before I start contributing? | DOC: Contribution guidelines not working | https://api.github.com/repos/langchain-ai/langchain/issues/8362/comments | 1 | 2023-07-27T14:46:12Z | 2023-10-26T19:39:35Z | https://github.com/langchain-ai/langchain/issues/8362 | 1,824,539,164 | 8,362
[
"langchain-ai",
"langchain"
] | ### System Info
Mac, command line and jupyter notebook inside VSCode
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create a new conda environment
Then
`conda install -c conda-forge 'langchain'`
`pip install openai`
Run sample code (can be in jupyter or terminal via a python file):
```
import os
openai_api_key = os.environ.get("OPENAI_API_KEY")
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
llm = OpenAI()
chat_model = ChatOpenAI()
llm.predict("hi!")
```
Get the following error:
`TypeError: Argument 'bases' has incorrect type (expected list, got tuple)`
Full error output:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[2], line 5
1 import os
3 openai_api_key = os.environ.get("OPENAI_API_KEY")
----> 5 from langchain.llms import OpenAI
6 from langchain.chat_models import ChatOpenAI
8 llm = OpenAI()
File ~/anaconda3/envs/core_ml/lib/python3.11/site-packages/langchain/__init__.py:6
3 from importlib import metadata
4 from typing import Optional
----> 6 from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
7 from langchain.cache import BaseCache
8 from langchain.chains import (
9 ConversationChain,
10 LLMBashChain,
(...)
18 VectorDBQAWithSourcesChain,
19 )
File ~/anaconda3/envs/core_ml/lib/python3.11/site-packages/langchain/agents/__init__.py:2
1 """Interface for agents."""
----> 2 from langchain.agents.agent import (
3 Agent,
4 AgentExecutor,
5 AgentOutputParser,
6 BaseMultiActionAgent,
7 BaseSingleActionAgent,
8 LLMSingleActionAgent,
9 )
10 from langchain.agents.agent_toolkits import (
11 create_csv_agent,
12 create_json_agent,
(...)
22 create_xorbits_agent,
23 )
24 from langchain.agents.agent_types import AgentType
File ~/anaconda3/envs/core_ml/lib/python3.11/site-packages/langchain/agents/agent.py:16
13 from pydantic import BaseModel, root_validator
15 from langchain.agents.agent_types import AgentType
---> 16 from langchain.agents.tools import InvalidTool
17 from langchain.callbacks.base import BaseCallbackManager
18 from langchain.callbacks.manager import (
19 AsyncCallbackManagerForChainRun,
20 AsyncCallbackManagerForToolRun,
(...)
23 Callbacks,
24 )
File ~/anaconda3/envs/core_ml/lib/python3.11/site-packages/langchain/agents/tools.py:4
1 """Interface for tools."""
2 from typing import Optional
----> 4 from langchain.callbacks.manager import (
5 AsyncCallbackManagerForToolRun,
6 CallbackManagerForToolRun,
7 )
8 from langchain.tools.base import BaseTool, Tool, tool
11 class InvalidTool(BaseTool):
File ~/anaconda3/envs/core_ml/lib/python3.11/site-packages/langchain/callbacks/__init__.py:3
1 """Callback handlers that allow listening to events in LangChain."""
----> 3 from langchain.callbacks.aim_callback import AimCallbackHandler
4 from langchain.callbacks.argilla_callback import ArgillaCallbackHandler
5 from langchain.callbacks.arize_callback import ArizeCallbackHandler
File ~/anaconda3/envs/core_ml/lib/python3.11/site-packages/langchain/callbacks/aim_callback.py:4
1 from copy import deepcopy
2 from typing import Any, Dict, List, Optional, Union
----> 4 from langchain.callbacks.base import BaseCallbackHandler
5 from langchain.schema import AgentAction, AgentFinish, LLMResult
8 def import_aim() -> Any:
File ~/anaconda3/envs/core_ml/lib/python3.11/site-packages/langchain/callbacks/base.py:7
4 from typing import Any, Dict, List, Optional, Sequence, Union
5 from uuid import UUID
----> 7 from langchain.schema.agent import AgentAction, AgentFinish
8 from langchain.schema.document import Document
9 from langchain.schema.messages import BaseMessage
File ~/anaconda3/envs/core_ml/lib/python3.11/site-packages/langchain/schema/__init__.py:2
1 from langchain.schema.agent import AgentAction, AgentFinish
----> 2 from langchain.schema.document import BaseDocumentTransformer, Document
3 from langchain.schema.language_model import BaseLanguageModel
4 from langchain.schema.memory import BaseChatMessageHistory, BaseMemory
File ~/anaconda3/envs/core_ml/lib/python3.11/site-packages/langchain/schema/document.py:11
6 from pydantic import Field
8 from langchain.load.serializable import Serializable
---> 11 class Document(Serializable):
12 """Class for storing a piece of text and associated metadata."""
14 page_content: str
File ~/anaconda3/envs/core_ml/lib/python3.11/site-packages/pydantic/main.py:186, in pydantic.main.ModelMetaclass.__new__()
TypeError: Argument 'bases' has incorrect type (expected list, got tuple)
```
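(In case it's relevant: the trace dies inside pydantic.main.ModelMetaclass.__new__, so this smells like a pydantic version/build mismatch in the conda environment rather than langchain itself; reinstalling pydantic via pip is what I'd try next, though that's a guess.)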
### Expected behavior
Expected sample code to run without errors. | Argument 'bases' has incorrect type | https://api.github.com/repos/langchain-ai/langchain/issues/8361/comments | 2 | 2023-07-27T14:44:05Z | 2023-11-03T16:07:03Z | https://github.com/langchain-ai/langchain/issues/8361 | 1,824,534,625 | 8,361 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version v0.0.5, Python 3.9.13
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Issue was reported before, see [issue 5428](https://github.com/langchain-ai/langchain/issues/5428)
It is still not solved.
The problem is the regular expression used in libs/langchain/langchain/output_parsers/json.py
```python
match = re.search(r"```(json)?(.*?)```", json_string, re.DOTALL)
```
`.*?` does non-greedy matching, so the parsing does not stop at the final triple backticks but at the first occurrence of triple backticks, which can be the start of a code section such as ```python.
This leads to errors like:
`Error while running agent: Could not parse LLM output: json { "action": "Final Answer", "action_input": "Here is a simple Python script that includes a main function and prints 'Hello':\n\npython\ndef say_hello():\n print('Hello')\n\ndef main():\n say_hello()\n\nif name == 'main':\n main()\n```\nThis script defines a function say_hello that prints the string 'Hello'. The main function calls say_hello. The final lines check if this script is being run directly (as opposed to being imported as a module), and if so, calls the main function." }`
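A possible fix would be to make the inner group greedy so the match runs to the last closing fence (a sketch, not tested against all cases):

```python
match = re.search(r"```(json)?(.*)```", json_string, re.DOTALL)
```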
### Expected behavior
Correct LLM output parsing for answers including code sections.
See [issue 5428](https://github.com/langchain-ai/langchain/issues/5428) | LLM output parsing error for answers including code sections. | https://api.github.com/repos/langchain-ai/langchain/issues/8357/comments | 4 | 2023-07-27T12:40:32Z | 2023-08-03T09:05:28Z | https://github.com/langchain-ai/langchain/issues/8357 | 1,824,290,695 | 8,357 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
If we are passing chat history as an input to the agent, with templates like these:
template = """You are a chatbot having a conversation with a human.
{chat_history}
{human_input}
Chatbot:"""
prefix = """Have a conversation with a human, answering the following questions as best you can. You have access to the following tools:"""
suffix = """Begin!"
{chat_history}
Question: {input}
{agent_scratchpad}
"""
So, will it exceed the max token limit of the agent? If yes, what alternatives can be used for this?
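(The alternatives I'm aware of are windowed or summarized memory, so only part of the history is injected into {chat_history}; a sketch, assuming the standard memory classes:)

```python
from langchain.memory import ConversationBufferWindowMemory

# keeps only the last k exchanges in {chat_history} instead of the full transcript
memory = ConversationBufferWindowMemory(memory_key="chat_history", k=3)
```

ConversationSummaryMemory would be the other option, trading tokens for an LLM-generated summary.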
### Suggestion:
_No response_ | Chat History Issue | https://api.github.com/repos/langchain-ai/langchain/issues/8355/comments | 3 | 2023-07-27T11:56:41Z | 2024-02-11T16:17:46Z | https://github.com/langchain-ai/langchain/issues/8355 | 1,824,215,752 | 8,355 |
[
"langchain-ai",
"langchain"
] | ### System Info
I'm trying to make as few calls to OpenAI's API as possible, so I've tried to save my vector store to disk and then reload it (as in the "if" branch below). But if I use OpenAIEmbeddings() as the embedding_function it still calls OpenAI's embeddings API. (Maybe this should be a feature request.) Is this a bug? Am I doing something wrong?
```python
import os

from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.indexes import VectorstoreIndexCreator
from langchain.indexes.vectorstore import VectorStoreIndexWrapper
from langchain.vectorstores import Chroma

loaders = [TextLoader("text1.txt"), TextLoader("text2.txt")]
indexes = []
for loader in loaders:
    path = os.path.splitext(loader.file_path)[0]
    if os.path.exists(path):
        # reload a previously persisted index from disk
        indexes.append(VectorStoreIndexWrapper(
            vectorstore=Chroma(
                persist_directory=f"./{path}", embedding_function=OpenAIEmbeddings())))
    else:
        # first run: build and persist the index (this is expected to call the API)
        indexes.append(VectorstoreIndexCreator(
            vectorstore_kwargs={"persist_directory": f"./{path}"}).from_loaders([loader]))
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The code I provided above creates folders when text1.txt and text2.txt are present in the same folder (the files need some content, like a poem or an article).
### Expected behavior
After the script has run, the folders ("text1" and "text2") should be created, each containing a {guid} folder and a chroma.sqlite3 file.
I checked my request usage on https://platform.openai.com/account/usage, and there were 2 requests to text-embedding-ada-002-v2 since there are 2 text files | Loading vectorstore from disk still calls to OpenAI's embeddings API | https://api.github.com/repos/langchain-ai/langchain/issues/8352/comments | 1 | 2023-07-27T11:41:07Z | 2023-11-02T16:17:17Z | https://github.com/langchain-ai/langchain/issues/8352 | 1,824,192,942 | 8,352 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain 0.0.235
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.schema import FunctionMessage
from langchain.memory import PostgresChatMessageHistory
history = PostgresChatMessageHistory(
connection_string="postgresql://postgres:mypassword@localhost/chat_history", # needs configuring
session_id="test_session_id",
)
function_message = FunctionMessage(
content='A Message for passing the result of executing a function back to a model',
name='name of function'
)
history.add_message(function_message)
history.messages
```
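If it helps localize the problem, I believe the message (de)serialization helpers are where it breaks, since they don't know the "function" type (a sketch of what I mean):

```python
from langchain.schema import messages_from_dict, messages_to_dict

as_dicts = messages_to_dict([function_message])
messages_from_dict(as_dicts)  # I'd expect this to raise for the "function" message type
```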
### Expected behavior
Output:
FunctionMessage(content='A Message for passing the result of executing a function back to a model', additional_kwargs={}, name='name of function') | No function message compatability in PostgresChatMessageHistory | https://api.github.com/repos/langchain-ai/langchain/issues/8349/comments | 4 | 2023-07-27T10:53:51Z | 2023-07-28T08:24:55Z | https://github.com/langchain-ai/langchain/issues/8349 | 1,824,121,928 | 8,349 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Some way to make a custom tool with return_direct not return directly when it hits an error such as a ToolException.
### Motivation
Currently, when a tool with return_direct faces a handled error, it returns the error directly. I would want a tool to return its output directly, but only if there are no errors, since the errors aren't useful when returned directly.
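For concreteness, this is the setup where it bites (a sketch; assuming handle_tool_error works the way I think it does):

```python
from langchain.tools import Tool
from langchain.tools.base import ToolException

def lookup(query: str) -> str:
    raise ToolException("backend unavailable")  # simulate a handled failure

tool = Tool(
    name="lookup",
    func=lookup,
    description="Look something up.",
    return_direct=True,
    handle_tool_error=True,  # the error string becomes the tool output...
)
# ...and because return_direct=True, that error string goes straight to the user
```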
### Your contribution
I am not sure how this could be implemented, but would gladly help working on any ideas. Found this issue which would also enable this https://github.com/langchain-ai/langchain/issues/8306 | Custom tool return_direct except when a ToolException occurs. | https://api.github.com/repos/langchain-ai/langchain/issues/8348/comments | 2 | 2023-07-27T10:03:00Z | 2024-02-06T16:32:01Z | https://github.com/langchain-ai/langchain/issues/8348 | 1,824,041,459 | 8,348 |
[
"langchain-ai",
"langchain"
] | ### Feature request
The Playwright toolkit is awesome, but it would be great to be able to see what the agent is doing. We could add a feature that takes screenshots of the current state of the web page.
### Motivation
It feels strange to use the toolkit while being blind to what the agent is doing. The original idea is that by taking screenshots we could use agents to perform UI testing.
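Roughly what I picture the tool doing under the hood (a sketch; the helper name is made up, page.screenshot is the real Playwright call):

```python
# hypothetical tool body built on the async Playwright API
async def take_screenshot(browser, path: str = "step.png") -> str:
    page = browser.contexts[0].pages[-1]  # current page, like the other tools use
    await page.screenshot(path=path)
    return f"Screenshot saved to {path}"
```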
### Your contribution
I could help writing a PR, but I’ll need help from the Playwright community. I also want to know what the community thinks of this feature. | Playwright Browser Tools: add screenshot feature | https://api.github.com/repos/langchain-ai/langchain/issues/8347/comments | 1 | 2023-07-27T09:51:26Z | 2023-11-02T16:14:07Z | https://github.com/langchain-ai/langchain/issues/8347 | 1,824,020,139 | 8,347 |