issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 261k ⌀ | issue_title stringlengths 1 925 | issue_comments_url stringlengths 56 81 | issue_comments_count int64 0 2.5k | issue_created_at stringlengths 20 20 | issue_updated_at stringlengths 20 20 | issue_html_url stringlengths 37 62 | issue_github_id int64 387k 2.46B | issue_number int64 1 127k |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | Hello,
I'm using _langchain_ for QA with court case documents. More specifically, the RetrievalQAWithSourcesChain to retrieve the answer and document source information. However, when running the chain with embedded documents, I get the following error:
```
ValueError: too many values to unpack (expected 2)
Traceback:
response = qa({"question": pregunta}, return_only_outputs=True)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 166, in __call__
raise e
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\base.py", line 160, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Anaconda3\envs\iagen_3_10\lib\site-packages\langchain\chains\qa_with_sources\base.py", line 132, in _call
answer, sources = re.split(r"SOURCES:\s", answer)
```
The passed documents are the sections from the court case. I added the following **metadata** fields:
1. Source: PDF file name.
2. Section: Name of the section.
3. Section_chunk: Numeric value used for identification in case the section was divided into chunks.
4. Page: Page where the section chunk starts.
The documents are passed to the chain as a FAISS retriever (`FAISS.from_documents(documents, self.embeddings)`).
I tried out two approaches (both resulting in the same error):
1. providing the _load_qa_chain_ as chain
2. creating it using the class method **_.from_chain_type_**
My question is: why does this error occur? And also, could the type of metadata used be causing it?
Thank you in advance! | Issue: RetrievalQAWithSourcesChain gives error 'too many values to unpack (expected 2)' after running. | https://api.github.com/repos/langchain-ai/langchain/issues/7184/comments | 2 | 2023-07-05T09:49:42Z | 2023-08-16T20:30:16Z | https://github.com/langchain-ai/langchain/issues/7184 | 1,789,191,768 | 7,184 |
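The traceback points at `answer, sources = re.split(r"SOURCES:\s", answer)`: `re.split` returns more than two parts whenever the model emits the `SOURCES:` marker more than once, and the two-variable unpacking then fails. A minimal sketch of a more tolerant split (an illustration, not the library's actual fix):

```python
import re

def split_answer_sources(answer: str):
    """Split an LLM answer into (answer, sources), tolerating
    multiple 'SOURCES:' markers by splitting only on the first one."""
    parts = re.split(r"SOURCES:\s", answer, maxsplit=1)
    if len(parts) == 2:
        return parts[0], parts[1]
    return answer, ""  # no marker found: treat everything as the answer

# A response with two markers breaks the original two-value unpacking:
bad = "The ruling was X.\nSOURCES: a.pdf\nSOURCES: b.pdf"
assert len(re.split(r"SOURCES:\s", bad)) == 3  # -> ValueError on unpack
print(split_answer_sources(bad))
```

This suggests the error comes from the model's output format rather than from the metadata fields themselves.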
[
"langchain-ai",
"langchain"
] | ### System Info
llm = AzureOpenAI(
model_name="gpt-4-32k",
engine="gpt-4-32k"
)
llm("tell me a joke")
Exception:
The completion operation does not work with the specified model, gpt-4-32k. Please choose a different model....
Environment:
LangChain: 0.0.218
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Environment:
LangChain: 0.0.218
llm = AzureOpenAI(
model_name="gpt-4-32k",
engine="gpt-4-32k"
)
llm("tell me a joke")
run the code above
### Expected behavior
the program should work | does llms.AzureOpenAI support gpt4 or gpt-32k? | https://api.github.com/repos/langchain-ai/langchain/issues/7182/comments | 2 | 2023-07-05T09:13:02Z | 2023-10-12T16:06:46Z | https://github.com/langchain-ai/langchain/issues/7182 | 1,789,125,219 | 7,182 |
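For what it's worth, the gpt-4 family is served by the *chat* completions endpoint, so the completions-based `AzureOpenAI` wrapper will hit exactly this error. A configuration sketch of the likely fix, using the chat wrapper instead (the deployment name and API version below are placeholders for your own Azure settings):

```python
# Sketch only: deployment name and API version are placeholders.
# gpt-4 models use the chat completions endpoint, so use the chat
# wrapper rather than the completions-based AzureOpenAI:
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage

llm = AzureChatOpenAI(
    deployment_name="gpt-4-32k",      # your Azure deployment name
    openai_api_version="2023-05-15",  # example API version
    temperature=0,
)
print(llm([HumanMessage(content="tell me a joke")]).content)
```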
[
"langchain-ai",
"langchain"
] | ### System Info
```
from langchain.text_splitter import SentenceTransformersTokenTextSplitter
splitter = SentenceTransformersTokenTextSplitter(
tokens_per_chunk=64,
chunk_overlap=0,
model_name='intfloat/e5-base-v2',
add_start_index=True,
)
text = "- afrikaans\n- العربية\n- azərbaycanca\n- বাংলা\n- беларуская\n- bosanski\n- čeština\n- deutsch\n- eesti\n- ελληνικά\n- español\n- فارسی\n- français\n- gaeilge\n- 한국어\n- հայերեն\n- bahasa indonesia\n- עברית\n- jawa\n- kurdî\n- latviešu\n- lietuvių\n- македонски\n- malti\n- मराठी\n- مصرى\n- bahasa melayu\n- nederlands\n- 日本語\n- oʻzbekcha / ўзбекча\n- ਪੰਜਾਬੀ\n- پنجابی\n- پښتو\n- português\n- română\n- русский\n- simple english\n- کوردی\n- suomi\n- తెలుగు\n- ไทย\n- türkçe\n- українська\n- اردو\n- tiếng việt\n- 粵語\n- 中文\nedit links\ncoordinates: 41°43′32″n 49°56′49″w / 41.72556°n 49.94694°w / 41.72556; -49.94694\nfrom wikipedia, the free encyclopedia\n2023 submersible implosion in the atlantic\n|date||18 june 2023|\n|location||north atlantic ocean, near the wreck of the titanic|\n|coordinates||41°43′32″n 49°56′49″w / 41.72556°n 49.94694°w / 41.72556; -49.94694|\n|type||maritime disaster|\n|cause||failure of the pressure hull|\n|participants||5 passengers|\n|outcome||submersible destroyed by implosion|\n|deaths||5 (see fatalities)|\non 18 june 2023, titan, a submersible operated by american tourism and expeditions company oceangate, imploded during an expedition to view the wreck of the titanic in the north atlantic ocean off the coast of newfoundland, canada.on board titan, a submersible operated by american tourism and expeditions company oceangate, were stockton rush, the ceo of american tourism and expeditions company oceangate; paul-henri nargeolet, a french deep sea explorer and the titanic| expert; hamish harding, a british billionaire businessman; shahzada dawood, a pakistani-british billionaire businessman; and dawood's son suleman.communication with titan, a submersible operated by american tourism and expeditions company oceangate, was lost 1 hour and 45 minutes into imploded submersible operated by american tourism and expeditions company oceangate, dive.authorities were alerted when titan, a submersible operated by american tourism and expeditions 
company oceangate, failed to resurface at the scheduled time later that day.after titan, a submersible operated by american tourism and expeditions company oceangate, had been missing for four days, a remotely operated underwater vehicle (rov) discovered a debris field containing parts of titan, a submersible operated by american tourism and expeditions company oceangate,, about 500 metres (1,600 ft) from the bow of the titanic|.the search area was informed by the united states navy's (usn)"
```
```
"".join(splitter.split_text(text)) == text
False
```
Additional characters are added
```
len("".join(splitter.split_text(text)))
2534
len(text)
2426
```
Newlines are stripped
```
text[:50]
'- afrikaans\n- العربية\n- azərbaycanca\n- বাংলা\n- бел'
splitter.split_text(text)[0][:50]
'- afrikaans - العربية - azərbaycanca - বাংলা - бел'
```
Special tokens are added
```
text[193:293]
'awa\n- kurdî\n- latviešu\n- lietuvių\n- македонски\n- malti\n- मराठी\n- مصرى\n- bahasa melayu\n- nederlands\n-'
"".join(splitter.split_text(text))[200:300]
'awa - kurdi - latviesu - lietuviu - македонски -malti - [UNK] - مصرى - bahasa melayu - nederlands - '
```
Recommended improvement: call
`tokenizer(text, return_offsets_mapping=True)`
This will allow selecting N tokens and reconstructing the original text exactly, without the use of `tokenizer.decode` (which is not perfectly invertible).
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Please see my code above
### Expected behavior
Please see my code above | SentenceTransformersTokenTextSplitter Doesn't Preserve Text | https://api.github.com/repos/langchain-ai/langchain/issues/7181/comments | 6 | 2023-07-05T08:29:31Z | 2024-04-22T18:03:57Z | https://github.com/langchain-ai/langchain/issues/7181 | 1,789,052,195 | 7,181 |
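The `return_offsets_mapping` suggestion works because each token carries `(start, end)` character offsets into the original string, so chunks can be taken as slices of the untouched text instead of `tokenizer.decode` output. A minimal sketch of the idea, with a toy whitespace tokenizer standing in for the HF fast tokenizer:

```python
import re

def toy_offsets(text):
    """Stand-in for tokenizer(text, return_offsets_mapping=True):
    yields (start, end) character offsets for each token."""
    return [m.span() for m in re.finditer(r"\S+", text)]

def split_by_offsets(text, tokens_per_chunk):
    offsets = toy_offsets(text)
    if not offsets:
        return [text]
    # Cut the *original string* at the start offset of each chunk's
    # first token, so concatenating the chunks reproduces it exactly.
    cuts = [0]
    for i in range(tokens_per_chunk, len(offsets), tokens_per_chunk):
        cuts.append(offsets[i][0])
    cuts.append(len(text))
    return [text[a:b] for a, b in zip(cuts, cuts[1:])]

text = "- afrikaans\n- العربية\n- azərbaycanca\n- বাংলা"
chunks = split_by_offsets(text, 3)
assert "".join(chunks) == text  # newlines and non-Latin scripts preserved
```

Because chunks are slices of the original text, no `[UNK]` tokens, lowercasing, or whitespace changes can leak into the output.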
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: 0.0.221
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When trying to index documents, the API returns an error:
`APIError: {'code': '22P02', 'details': None, 'hint': None, 'message': 'invalid input syntax for type bigint: "54d7bd9c-9822-40ca-ade6-ae173b65d34e"'}`
This code worked perfectly on previous versions of LangChain.
After upgrading, got the error.
Supabase backend didn't change.
```python
# load required libraries
from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import SupabaseVectorStore
from supabase import Client, create_client
import os
from dotenv import load_dotenv
# load document from web
url = "https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/supabase"
loader = WebBaseLoader(url)
documents = loader.load()
# split documents into chunks
docs_splitter = RecursiveCharacterTextSplitter(chunk_size=2500,chunk_overlap=250)
splitted_docs = docs_splitter.split_documents(documents=documents)
# initialize embeddings model
embeddings = OpenAIEmbeddings()
# initialize vector store
supabase_url = os.getenv('SUPABASE_URL')
supabase_key = os.getenv('SUPABASE_KEY')
supabase = create_client(supabase_url, supabase_key)
# save values to supabase
vector_store = SupabaseVectorStore.from_documents(documents=splitted_docs, embedding=embeddings, client=supabase)
```
### Expected behavior
document embeddings should be saved in a Supabase database | SupabaseVectorStore.from_documents returns APIError: {'code': '22P02' | https://api.github.com/repos/langchain-ai/langchain/issues/7179/comments | 4 | 2023-07-05T07:44:06Z | 2023-10-25T16:07:47Z | https://github.com/langchain-ai/langchain/issues/7179 | 1,788,978,302 | 7,179 |
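The error message hints at the likely cause: newer LangChain versions insert UUID ids, while a `documents` table created for an older version may still have a bigint `id` column. A hedged sketch of the schema change (assuming the standard Supabase setup from the docs; adjust names to your own table):

```sql
-- Hedged sketch: the "invalid input syntax for type bigint" error above
-- suggests the table's `id` column is bigint, while newer LangChain
-- versions write UUID ids. Recreating the table with a uuid primary key,
-- per the current docs, should resolve the mismatch:
create table documents (
  id uuid primary key,
  content text,
  metadata jsonb,
  embedding vector(1536)  -- OpenAI embedding dimensionality
);
```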
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
## Problem description
I've used `RetrievalQA.from_chain_type()` with the `refine` chain type to design a chatPDF.
But the response is often **incomplete**; see the following result, where the `Answer` is cut off, which causes `json.loads` to fail.
Furthermore, I've used `get_openai_callback` to check whether the token count exceeds the limit.
However, the callback shows the total token count is 3432, which doesn't exceed the limit.
## Questions
1. Why is the response incomplete?
2. How can I get a complete response so that I can run `json.loads` on it?
## Code
```python
from datetime import datetime
from typing import List
import langchain
from langchain.callbacks import get_openai_callback
from langchain.chains import RetrievalQA
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.llms import OpenAI
from langchain.output_parsers import PydanticOutputParser
from langchain.prompts import PromptTemplate
from langchain.vectorstores import Chroma
from langchain.document_loaders import PyMuPDFLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from pydantic import BaseModel
docs = PyMuPDFLoader('file.pdf').load()
splitter = RecursiveCharacterTextSplitter(docs, chunk_size=1000, chunk_overlap=300)
docs = splitter.split_documents(docs)
class Specification(BaseModel):
product_name: str
manufactured_date: datetime
size_inch: str
resolution: str
contrast: str
operation_temperature: str
power_supply: str
sunlight_readable: bool
antiglare: bool
low_power_consumption: bool
high_brightness: bool
wide_temperature: bool
fast_response: bool
screen_features: List[str]
parser = PydanticOutputParser(pydantic_object=Specification)
prompt_template = """
Use the following pieces of context to answer the question, if you don't know the answer, leave it blank('') don't try to make up an answer.
{context_str}
Question: {question}
{format_instructions}
"""
prompt = PromptTemplate(
template=prompt_template,
input_variables=['context_str', 'question'],
partial_variables={'format_instructions': parser.get_format_instructions()}
)
chain_type_kwargs = {
'question_prompt': prompt,
'verbose': True
}
llm = OpenAI(temperature=0)
embeddings = OpenAIEmbeddings()
db = Chroma.from_documents(
documents=docs,
embedding= embeddings
)
qa_chain = RetrievalQA.from_chain_type(
llm=llm,
chain_type = 'refine',
retriever= db.as_retriever(),
chain_type_kwargs=chain_type_kwargs
)
query = f"""What's the display specifications?"""
with get_openai_callback() as cb:
res = qa_chain.run(query)
print(cb, '\n')
print(res)
```
## Process of Chain
```
> Entering new chain...

> Entering new chain...
Prompt after formatting:
Use the following pieces of context to answer the question, if you don't know the answer, leave it blank('') don't try to make up an answer.
Dimensions (W × H × D)
Touch screen: 7.0" × 5.1" × 0.75" (178 × 130 × 19 mm)
Dock: 4.2" × 2.3" × 3.1" (106 × 57 × 78 mm)
Weight
Touch screen: 0.7 lbs. (0.32 kg)
Dock: 0.8 lbs (0.36 kg)
Planning the installation
• Make sure the dock can be located near a power outlet and a strong WiFi signal.
• For the most reliable connection, we recommend running Ethernet to the dock.
• Make sure the touch screen's WiFi or Ethernet connection is on the same network as your controller and that the signal is strong.
• Communication agent is required for intercom use.
• Charge the touch screen for at least six hours before use.
For more product information
Visit ctrl4.co/t3series
Control4® T3-7 7" Tabletop Touch Screen
DOC-00148-C
2015-10-09 MSC
Copyright ©2015, Control4 Corporation. All rights reserved. Control4, the Control4 logo, the 4-ball logo, 4Store,
4Sight, Control My Home, Everyday Easy, and Mockupancy are registered trademarks or trademarks of Control4
Question: What's the display specifications?
The output should be formatted as a JSON instance that conforms to the JSON schema below.
As an example, for the schema {"properties": {"foo": {"title": "Foo", "description": "a list of strings", "type": "array", "items": {"type": "string"}}}, "required": ["foo"]}}
the object {"foo": ["bar", "baz"]} is a well-formatted instance of the schema. The object {"properties": {"foo": ["bar", "baz"]}} is not well-formatted.
Here is the output schema:
{"properties": {"product_name": {"title": "Product Name", "type": "string"}, "manufactured_date": {"title": "Manufactured Date", "type": "string", "format": "date-time"}, "size_inch": {"title": "Size Inch", "type": "string"}, "resolution": {"title": "Resolution", "type": "string"}, "contrast": {"title": "Contrast", "type": "string"}, "operation_temperature": {"title": "Operation Temperature", "type": "string"}, "power_supply": {"title": "Power Supply", "type": "string"}, "sunlight_readable": {"title": "Sunlight Readable", "type": "boolean"}, "antiglare": {"title": "Antiglare", "type": "boolean"}, "low_power_consumption": {"title": "Low Power Consumption", "type": "boolean"}, "high_brightness": {"title": "High Brightness", "type": "boolean"}, "wide_temperature": {"title": "Wide Temperature", "type": "boolean"}, "fast_response": {"title": "Fast Response", "type": "boolean"}, "screen_features": {"title": "Screen Features", "type": "array", "items": {"type": "string"}}}, "required": ["product_name", "manufactured_date", "size_inch", "resolution", "contrast", "operation_temperature", "power_supply", "sunlight_readable", "antiglare", "low_power_consumption", "high_brightness", "wide_temperature", "fast_response", "screen_features"]}
> Finished chain.

> Entering new chain...
Prompt after formatting:
The original question is as follows: What's the display specifications?
We have provided an existing answer:
Answer: {
"product_name": "Control4® T3-7 7\" Tabletop Touch Screen",
"manufactured_date": "2015-10-09",
"size_inch": "7.0\" × 5.1\" × 0.75\"",
"resolution": "N/A",
"contrast": "N/A",
"operation_temperature": "N/A",
"power_supply": "N/A",
"sunlight_readable": "N/A",
"antiglare": "N/A",
"low_power_consumption": "N/A",
"high_brightness": "N/A",
"wide_temperature": "N/A",
"fast_response": "N/A",
"screen_features": []
}
We have the opportunity to refine the existing answer(only if needed) with some more context below.
------------
Model numbers
C4-TT7-1-BL, C4-TT7-1-WH, C4-TT7-1-RD
Features
Screen
Resolution: 1280 × 800
Capacitive touch
Camera: 720p
Network
Ethernet or WiFi (802.11 g/n [2.4 GHz])
Notes:
(1) 802.11b is not supported for Video Intercom.
(2) Wireless-N is recommended for Video Intercom. The more devices that Video Intercom is broadcast to, the
more response time and images are degraded.
Battery
3100 mAh Li-ion
Power supply
PoE (IEEE802.3af)
100VAC ~ 240VAC, 50-60 Hz
International power supply adapters included
Dock connections
•
Ethernet
•
PoE
•
DC power
Mounting
Tabletop or portable
Environmental
Operating temperature
32 ~ 104˚F (0˚ ~ 40˚C)
Storage temperature
32 ~ 104˚F (0˚ ~ 40˚C)
Dimensions (W × H × D)
Touch screen: 7.0" × 5.1" × 0.75" (178 × 130 × 19 mm)
Dock: 4.2" × 2.3" × 3.1" (106 × 57 × 78 mm)
Weight
Touch screen: 0.7 lbs. (0.32 kg)
Dock: 0.8 lbs (0.36 kg)
Planning the installation
• Make sure the dock can be located near a power outlet and a strong WiFi signal.
------------
Given the new context, refine the original answer to better answer the question. If the context isn't useful, return the original answer.

> Finished chain.

> Entering new chain...
Prompt after formatting:
The original question is as follows: What's the display specifications?
We have provided an existing answer:
Answer: {
"product_name": "Control4® T3-7 7\" Tabletop Touch Screen",
"manufactured_date": "2015-10-09",
"model_numbers": ["C4-TT7-1-BL", "C4-TT7-1-WH", "C4-TT7-1-RD"],
"size_inch": "7.0\" × 5.1\" × 0.75\"",
"resolution": "1280 × 800",
"contrast": "N/A",
"operation_temperature": "32 ~ 104˚F (0˚ ~ 40˚C)",
"storage_temperature": "32 ~ 104˚F (0˚ ~ 40˚C)",
"power_supply": "PoE (IEEE802.3af) 100VAC ~ 240VAC, 50-60 Hz",
"sunlight_readable": "N/A",
"antiglare": "N/A",
"low_power_consumption": "N/A",
"high_bright
We have the opportunity to refine the existing answer(only if needed) with some more context below.
------------
Control4® T3-7 7" Tabletop Touch Screen
DOC-00148-C
2015-10-09 MSC
Copyright ©2015, Control4 Corporation. All rights reserved. Control4, the Control4 logo, the 4-ball logo, 4Store,
4Sight, Control My Home, Everyday Easy, and Mockupancy are registered trademarks or trademarks of Control4
Corporation in the United States and/or other countries. All other names and brands may be claimed as the
property of their respective owners. All specifications subject to change without notice.
------------
Given the new context, refine the original answer to better answer the question. If the context isn't useful, return the original answer.

> Finished chain.

> Entering new chain...
Prompt after formatting:
The original question is as follows: What's the display specifications?
We have provided an existing answer:
Answer: {
"product_name": "Control4® T3-7 7\" Tabletop Touch Screen",
"manufactured_date": "2015-10-09",
"model_numbers": ["C4-TT7-1-BL", "C4-TT7-1-WH", "C4-TT7-1-RD"],
"size_inch": "7.0\" × 5.1\" × 0.75\"",
"resolution": "1280 × 800",
"contrast": "N/A",
"operation_temperature": "32 ~ 104˚F (0˚ ~ 40˚C)",
"storage_temperature": "32 ~ 104˚F (0˚ ~ 40˚C)",
"power_supply": "PoE (IEEE802.3af) 100VAC ~ 240VAC, 50-60 Hz",
"sunlight_readable": "N/A",
"antiglare": "N/A",
"low_power_consumption": "N/A",
"high_bright
We have the opportunity to refine the existing answer(only if needed) with some more context below.
------------
Control4® T3 Series 7" Tabletop Touch Screen
The Control4® T3 Series Tabletop Touch Screen delivers always-on, dedicated, and mobile control over all the technology in your home
or business. Featuring a gorgeous new tablet design and stunning high-resolution graphics, this portable screen looks beautiful whether
on a kitchen countertop or in the theater on your lap. This model includes HD video intercom and crystal-clear audio intercom for
convenient communications from room to room or with visitors at the door.
• Available in a 7" model, the T3 Series Portable Touch Screen provides dedicated, elegant, and mobile control of your home.
• HD camera, combined with speakers and microphone, provides the best video intercom experience yet.
• Crisp picture with two and a half times the resolution of previous models.
• Extremely fast and responsive—up to 16 times faster than our previous touch screens.
------------
Given the new context, refine the original answer to better answer the question. If the context isn't useful, return the original answer.

> Finished chain.

> Finished chain.
Tokens Used: 3432
Prompt Tokens: 2465
Completion Tokens: 967
Successful Requests: 4
Total Cost (USD): $0.06864
```
## Result
```
Tokens Used: 3432
Prompt Tokens: 2465
Completion Tokens: 967
Successful Requests: 4
Total Cost (USD): $0.06864
Answer: {
"product_name": "Control4® T3 Series 7\" Tabletop Touch Screen",
"manufactured_date": "2015-10-09",
"model_numbers": ["C4-TT7-1-BL", "C4-TT7-1-WH", "C4-TT7-1-RD"],
"size_inch": "7.0\" × 5.1\" × 0.75\"",
"resolution": "1280 × 800",
"contrast": "N/A",
"operation_temperature": "32 ~ 104˚F (0˚ ~ 40˚C)",
"storage_temperature": "32 ~ 104˚F (0˚ ~ 40˚C)",
"power_supply": "PoE (IEEE802.3af) 100VAC ~ 240VAC, 50-60 Hz",
"sunlight_readable": "N/A",
"antiglare": "N/A",
"low_power_consumption": "N/A",
"high_brightness
```
### Suggestion:
_No response_ | Issue: RetrievalQA response incomplete | https://api.github.com/repos/langchain-ai/langchain/issues/7177/comments | 1 | 2023-07-05T07:19:18Z | 2023-07-05T07:33:37Z | https://github.com/langchain-ai/langchain/issues/7177 | 1,788,935,367 | 7,177 |
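One hedged observation: the callback totals count what was actually generated, not a limit. The completions-based `OpenAI` wrapper defaults to `max_tokens=256` per request, so each refine step can be cut off mid-JSON even though the run's total looks small. Passing a larger value (e.g. `OpenAI(temperature=0, max_tokens=1024)`) is the likely fix; in the meantime, truncation can at least be detected before parsing:

```python
import json

# Assumption: the raw chain output looks like the truncated "Answer: {...}" above.
def try_parse_json(raw: str):
    """Return parsed JSON, or None if the answer was cut off mid-object."""
    cleaned = raw.split("Answer:", 1)[-1].strip()  # drop a leading label if present
    try:
        return json.loads(cleaned)
    except json.JSONDecodeError:
        return None  # truncated or invalid: retry with a larger max_tokens

print(try_parse_json('Answer: {"resolution": "1280 x 800"}'))  # -> {'resolution': '1280 x 800'}
print(try_parse_json('Answer: {"resolution": "1280'))          # -> None
```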
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
```
# import
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import TextLoader
# load the document and split it into chunks
loader = TextLoader("../../../state_of_the_union.txt")
documents = loader.load()
# split it into chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
# create the open-source embedding function
embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
# load it into Chroma
db = Chroma.from_documents(docs, embedding_function)
# query it
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
# print results
print(docs[0].page_content)
```
```
# save to disk
db2 = Chroma.from_documents(docs, embedding_function, persist_directory="./chroma_db")
db2.persist()
docs = db.similarity_search(query)
# load from disk
db3 = Chroma(persist_directory="./chroma_db")
docs = db.similarity_search(query)
print(docs[0].page_content)
```
### Idea or request for content:
In the above code, I find this paragraph difficult to understand:
```
# save to disk
db2 = Chroma.from_documents(docs, embedding_function, persist_directory="./chroma_db")
db2.persist()
docs = db.similarity_search(query)
# load from disk
db3 = Chroma(persist_directory="./chroma_db")
docs = db.similarity_search(query)
print(docs[0].page_content)
```
Although `db2` and `db3` do demonstrate the saving and loading of Chroma,
the two `docs = db.similarity_search(query)` calls have nothing to do with saving and loading,
and they still search for answers in the original `db`.
Is this an error? | saving and loading embedding from Chroma | https://api.github.com/repos/langchain-ai/langchain/issues/7175/comments | 13 | 2023-07-05T06:52:10Z | 2024-07-01T19:22:22Z | https://github.com/langchain-ai/langchain/issues/7175 | 1,788,892,758 | 7,175 |
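For what it's worth, a corrected sketch of what the doc snippet presumably intended (same variables as the snippet above, so this is a fragment, not a standalone script; note that reloading without an `embedding_function` would also fail on `similarity_search`):

```python
# Corrected sketch of the doc's apparent intent (not the official fix):
# save to disk
db2 = Chroma.from_documents(docs, embedding_function, persist_directory="./chroma_db")
db2.persist()

# load from disk -- the embedding function must be supplied again,
# and the query should go through the reloaded store, not `db`
db3 = Chroma(persist_directory="./chroma_db", embedding_function=embedding_function)
docs = db3.similarity_search(query)
print(docs[0].page_content)
```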
[
"langchain-ai",
"langchain"
] | ### System Info
langchain = 0.0.220
python = 3.11.4
### Who can help?
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Open the chat-with-search LangChain/Streamlit app [here](https://llm-examples.streamlit.app/Chat_with_search) and go to the "Chat with search" page
2. Ask "current meta and tesla stock price?" in the chat
3. You should see the formatting in the response get messed up because Streamlit interprets text between two dollar signs as a LaTeX equation. Normally I use a function to escape the dollar signs, but you may want to do this in your callback.

### Expected behavior
Expect text instead of latex equation. I've attached an example I used with escapes.

| StreamlitCallbackHandler doesn't double escape dollar signs, so two dollar signs makes everything between an equation | https://api.github.com/repos/langchain-ai/langchain/issues/7172/comments | 1 | 2023-07-05T05:23:14Z | 2023-10-12T16:06:56Z | https://github.com/langchain-ai/langchain/issues/7172 | 1,788,788,050 | 7,172 |
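The escaping workaround mentioned above can be as simple as this (a sketch, not the callback's actual code):

```python
import re

def escape_dollars(text: str) -> str:
    """Escape bare dollar signs so Streamlit markdown does not treat
    $...$ spans as LaTeX (sketch of the workaround described above)."""
    return re.sub(r"(?<!\\)\$", r"\\$", text)

s = "Meta is at $271 and Tesla at $278."
assert escape_dollars(s) == "Meta is at \\$271 and Tesla at \\$278."
assert escape_dollars("already \\$5") == "already \\$5"  # already-escaped signs untouched
```

Applying something like this inside the callback before rendering would avoid the accidental equations.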
[
"langchain-ai",
"langchain"
] | ### System Info
Based on the official docs, I created two types of retriever:
1. `faiss_retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever()` serving as a `VectorStoreRetriever` (referenced from the API doc)
2. `compression_retriever = ContextualCompressionRetriever(base_compressor=relevant_filter, base_retriever=retriever)` functioning as a `ContextualCompressionRetriever` (also referenced from the API doc)
Then I ran `RetrievalQA` to retrieve relevant content via the chain with the code below:
```python
qa = RetrievalQA.from_chain_type(llm=OpenAI( verbose=True), chain_type="stuff", retriever=compression_retriever,return_source_documents=True, chain_type_kwargs=chain_type_kwargs)
# or
qa = RetrievalQA.from_chain_type(llm=OpenAI( verbose=True), chain_type="stuff", retriever=faiss_retriever,return_source_documents=True, chain_type_kwargs=chain_type_kwargs)
```
The result was that the QA with `compression_retriever` failed to return context for the prompt (it returned an empty array), whereas the QA with `faiss_retriever` successfully returned the context.
### Who can help?
@hwchase17 @eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
loader = UnstructuredFileLoader("NER.txt")
document = loader.load()
separators = ["。", " "]
text_splitter = RecursiveCharacterTextSplitter(separators=separators, chunk_size=500, chunk_overlap=0)
texts = text_splitter.split_documents(document)
embeddings = OpenAIEmbeddings()
relevant_filter = EmbeddingsFilter(embeddings=embeddings, similarity_threshold=0.81)
retriever = FAISS.from_documents(texts, OpenAIEmbeddings()).as_retriever() # base retriever
compression_retriever = ContextualCompressionRetriever(base_compressor=relevant_filter, base_retriever=retriever) # document compression retriver
from langchain.prompts import PromptTemplate
prompt_template1 = """please use the context to answer the question.
{context}
question: {question}
answer:"""
PROMPT = PromptTemplate(
template=prompt_template1, input_variables=["context", "question"]
)
chain_type_kwargs = {"prompt": PROMPT,'verbose': True}
qa = RetrievalQA.from_chain_type(llm=OpenAI( verbose=True), chain_type="stuff", retriever=compression_retriever,return_source_documents=True, chain_type_kwargs=chain_type_kwargs)
query = "balabalabala" # replace it with question
result = qa({"query": query})
print(result)
```
### Expected behavior
Using `ContextualCompressionRetriever` with `RetrievalQA` should return a non-empty context, i.e. not `[]`. | RetrievalQA.from_chain_type's parameter retriever can not use ContextualCompressionRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/7168/comments | 7 | 2023-07-05T02:43:25Z | 2023-10-16T09:45:05Z | https://github.com/langchain-ai/langchain/issues/7168 | 1,788,668,402 | 7,168 |
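A hedged guess at the cause: `EmbeddingsFilter(similarity_threshold=0.81)` drops every retrieved chunk whose cosine similarity to the query falls below 0.81, and embedding similarities for merely related text often sit below that, leaving an empty context. A pure-Python sketch of that filtering step, assuming it behaves like a plain cosine cutoff:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def filter_docs(query_vec, doc_vecs, threshold):
    """Keep only the documents whose similarity clears the threshold."""
    return [i for i, v in enumerate(doc_vecs) if cosine(query_vec, v) >= threshold]

q = [1.0, 0.0]
doc_vecs = [[0.9, 0.5], [0.5, 0.9]]     # similarities ~0.87 and ~0.49
assert filter_docs(q, doc_vecs, 0.81) == [0]    # high threshold drops doc 1
assert filter_docs(q, doc_vecs, 0.4) == [0, 1]  # lower threshold keeps both
```

Lowering `similarity_threshold` (or switching the filter to a top-k selection) would be the first thing to try.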
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
https://python.langchain.com/docs/get_started/quickstart.html
Following the above link in Google Colab, I ran into the following issue:
Observation: Olivia Wilde started dating Harry Styles after ending her years-long engagement to Jason Sudeikis — see their relationship timeline.
Thought:I have found information about Olivia Wilde's boyfriend. Now I need to calculate his current age raised to the 0.23 power.
Action:
```
{
"action": "Calculator",
"action_input": "age ^ 0.23"
}
```
As you can see, instead of getting information about Harry Styles' age, it just puts the string 'age' into the calculator, which raises a ValueError.
This is quite weird seeing the tutorial for agents work well:
https://python.langchain.com/docs/modules/agents.html
What could be the issue?
### Idea or request for content:
_No response_ | DOC: The Quickstart tutorial for Agents has an error | https://api.github.com/repos/langchain-ai/langchain/issues/7166/comments | 3 | 2023-07-05T01:11:06Z | 2023-10-12T16:07:06Z | https://github.com/langchain-ai/langchain/issues/7166 | 1,788,611,749 | 7,166 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I'd like to make the user experience more conversational while supporting OpenAI functions. However, the `OpenAIFunctionsAgent` implementation doesn't accept "memory" to make it more conversational. I'd like to have ReACT planning capability + functions as tools. I think the minimal implementation is to just add a memory to `OpenAIFunctionsAgent`.
### Motivation
While answering and executing tools is a great feature supported by OpenAIFunctionsAgent, more streamlined user experiences like chat are often desired as well.
### Your contribution
Happy to make a PR with a guideline if this is something desired in langchain. | OpenAIFunctionsAgent + ConversationalChatAgent? | https://api.github.com/repos/langchain-ai/langchain/issues/7163/comments | 6 | 2023-07-04T22:28:54Z | 2023-10-19T16:06:43Z | https://github.com/langchain-ai/langchain/issues/7163 | 1,788,529,698 | 7,163 |
[
"langchain-ai",
"langchain"
] | ### System Info
I'm using the following on Windows:
Python 3.11.3
langchain 0.0.222
lark 1.1.5
With a Pinecone index:
Environment: us-east4-gcp
Metric: cosine
Pod Type: p1.x1
Dimensions: 1536
### Who can help?
@hwchase17 @angola
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.retrievers.self_query.base import SelfQueryRetriever

# Build the vector store before constructing the retriever that uses it
embeddings = OpenAIEmbeddings()
vectorstore = Pinecone.from_existing_index(index_name="index1", embedding=embeddings, namespace="metamovies")
retriever = SelfQueryRetriever.from_llm(llm, vectorstore, document_content_description, metadata_description, verbose=True)

# This example specifies a query and composite filter
relevantdocs = retriever.get_relevant_documents(
    "What's a movie after 1990 but before 2005 that's all about toys, and preferably is animated")
```
### Expected behavior
a list of selected Documents
I get a runtime error:
query='toys' filter=Operation(operator=<Operator.AND: 'and'>, arguments=[Comparison(comparator=<Comparator.GT: 'gt'>, attribute='year', value=1990), Comparison(comparator=<Comparator.LT: 'lt'>, attribute='year', value=2005), Comparison(comparator=<Comparator.CONTAIN: 'contain'>, attribute='genre', value='animated')]) limit=None
HTTP response body: {"code":3,"message":"$contain is not a valid operator","details":[]} | SelfQuering Retrieval no support $contain operator | https://api.github.com/repos/langchain-ai/langchain/issues/7157/comments | 2 | 2023-07-04T18:56:27Z | 2023-10-12T16:07:11Z | https://github.com/langchain-ai/langchain/issues/7157 | 1,788,350,317 | 7,157 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Some vectorstores (e.g. Vectara) internally create their own embeddings. The request is to generalize the VectorStore base class to allow for embeddings to be optional.
### Motivation
Currently users have to send "None" or FakeEmbeddings instead, which creates additional work and is not needed.
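A rough sketch of the proposed signature change (names are illustrative — `embedding` here is a stand-in for langchain's `Embeddings` interface, and the dict return is only for demonstration):

```python
from typing import Callable, List, Optional

def from_texts(texts: List[str],
               embedding: Optional[Callable[[List[str]], List[List[float]]]] = None):
    """When embedding is None, defer to the backend (e.g. Vectara) to embed server-side."""
    if embedding is None:
        return {"texts": texts, "vectors": None}  # backend embeds internally
    return {"texts": texts, "vectors": embedding(texts)}

server_side = from_texts(["hello", "world"])  # no embedder supplied
client_side = from_texts(["hello"], embedding=lambda ts: [[float(len(t))] for t in ts])
```

The key point is that callers of stores with server-side embedding would simply omit the argument instead of passing `None` or `FakeEmbeddings`.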
### Your contribution
Happy to help with a PR (with guidance from the main project team) | Make "embedding" an optional parameter in VectorStore interface | https://api.github.com/repos/langchain-ai/langchain/issues/7150/comments | 2 | 2023-07-04T15:47:14Z | 2023-10-12T16:07:16Z | https://github.com/langchain-ai/langchain/issues/7150 | 1,788,162,432 | 7,150 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I keep getting this error for the past couple of days for gpt-3.5-turbo-16k:
```
Timeout: Request timed out: HTTPSConnectionPool(host='api.openai.com', port=443): Read timed out. (read timeout=120.0)
```
The OpenAI API seems to be working fine by itself. Can someone please tell me if they are facing the same issue? Or any suggestions on how to resolve this?
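Two things that have helped me in similar situations (both hedged — adjust to your setup): raising `request_timeout` when constructing `ChatOpenAI`, and wrapping the call in a simple exponential-backoff retry like the sketch below, where `flaky` stands in for the actual chain call:

```python
import time

def with_retries(call, attempts=4, base_delay=1.0, retry_on=(TimeoutError,)):
    """Retry a flaky call with exponential backoff."""
    for i in range(attempts):
        try:
            return call()
        except retry_on:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** i)

state = {"calls": 0}
def flaky():
    # stands in for chain.run(...); fails twice, then succeeds
    state["calls"] += 1
    if state["calls"] < 3:
        raise TimeoutError("read timed out")
    return "answer"

result = with_retries(flaky, base_delay=0.0)  # succeeds on the third try
```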
### Suggestion:
_No response_ | Issue: The API call keeps getting timed out | https://api.github.com/repos/langchain-ai/langchain/issues/7148/comments | 13 | 2023-07-04T15:35:27Z | 2024-02-13T16:15:53Z | https://github.com/langchain-ai/langchain/issues/7148 | 1,788,148,043 | 7,148 |
[
"langchain-ai",
"langchain"
] | ### System Info
- langchain-0.0.222 (and all before)
- Any GPT4All python package after [this commit](https://github.com/nomic-ai/gpt4all/commit/46a0762bd5a7e605e9bd63e4f435b482eff026f6#diff-cc3ea7dfbfc9837a4c42dae1089a1eda0ed175d17f2628cf16c13d3cd9da6e13R174) was merged. So latest: >= 1.0.1.
Note: this issue is in langchain, but it was caused by GPT4All's change. We need to alter the `_default_params()` return value to exclude the keys that were removed from GPT4All's `generate()` kwargs.
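Until the wrapper is patched, one stop-gap (a sketch, not langchain's actual fix) is to filter the default parameter dict against whatever `generate()` actually accepts, e.g. via `inspect.signature`; `fake_generate` below stands in for gpt4all's method:

```python
import inspect

def filter_kwargs_for(func, params: dict) -> dict:
    """Keep only the keys that func accepts, dropping removed ones like n_ctx."""
    accepted = set(inspect.signature(func).parameters)
    return {k: v for k, v in params.items() if k in accepted}

def fake_generate(prompt, max_tokens=200, temp=0.7):
    # stands in for gpt4all >= 1.0.1's generate(), which no longer takes n_ctx
    return prompt

defaults = {"n_ctx": 512, "n_parts": 1, "max_tokens": 100, "temp": 0.2}
safe = filter_kwargs_for(fake_generate, defaults)
# safe keeps only max_tokens and temp
```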
### Who can help?
@hwchase17 @agola11 👋
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Install the latest gpt4all package: `pip install --upgrade gpt4all`
2. Use the `GPT4All` LLM and `LLMChain` as normal (`llm = GPT4All(model="ggml.gpt4all.xyz.bin")`; `chain = LLMChain(llm, prompt=any_prompt)`)
3. Run the chain: `chain("prompt")`
4. `TypeError: generate() got an unexpected keyword argument 'n_ctx'`
### Expected behavior
Should not cause TypeError. It should not pass n_ctx from default parameters to GPT4All's `generate()` | GPT4All generate() TypeError 'n_ctx' since a commit on GPT4All's python binding changed arguments | https://api.github.com/repos/langchain-ai/langchain/issues/7145/comments | 7 | 2023-07-04T14:22:38Z | 2023-11-07T14:10:16Z | https://github.com/langchain-ai/langchain/issues/7145 | 1,788,030,844 | 7,145 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Hi there 👋
Thanks a lot for the awesome library. The current implementation of `BaseCache` stores the prompt + the llm generated text as key.
This means that I am not really caching since I'll have to do a request to OpenAI to get the llm text
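For illustration, a minimal in-memory sketch of what I'm after — a cache that returns a stored generation for a repeated prompt without touching the provider (method names loosely mirror `BaseCache.lookup`/`update`, but this is standalone toy code):

```python
class PromptCache:
    """Toy cache keyed on (prompt, model-config string) only."""

    def __init__(self):
        self._store = {}

    def lookup(self, prompt: str, llm_string: str):
        return self._store.get((prompt, llm_string))

    def update(self, prompt: str, llm_string: str, generation):
        self._store[(prompt, llm_string)] = generation

cache = PromptCache()
cache.update("What is LangChain?", "gpt-3.5|temp=0", "LangChain is ...")
hit = cache.lookup("What is LangChain?", "gpt-3.5|temp=0")  # no API call needed on a hit
```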
### Motivation
I'd like to cache a prompt
### Your contribution
I am willing to contribute but you need to explain me how :) | Caching: allows to cache only the prompt | https://api.github.com/repos/langchain-ai/langchain/issues/7141/comments | 12 | 2023-07-04T12:46:56Z | 2024-06-27T16:06:04Z | https://github.com/langchain-ai/langchain/issues/7141 | 1,787,867,349 | 7,141 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I would love to add an [H2O Wave](https://wave.h2o.ai/) framework callback integration in a similar manner as done for [Streamlit](https://python.langchain.com/docs/modules/callbacks/integrations/streamlit). Wave has recently added a dedicated chatbot card, which seems like a perfect fit.


### Motivation
This would allow for bringing more diversity when it comes to Python UI frameworks + Langchain integration. Moreover, Wave is async by nature, so seems like [custom async callback](https://python.langchain.com/docs/modules/callbacks/how_to/async_callbacks) or maybe an [async generator](https://github.com/hwchase17/langchain/blob/master/langchain/callbacks/streaming_aiter.py) would do.
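To sketch why Wave's async nature fits well: a handler really only needs an async `on_llm_new_token` hook. The method name below mirrors langchain's `AsyncCallbackHandler`, but this is a standalone toy — the real Wave version would await a chatbot-card update instead of appending to a list:

```python
import asyncio

class TokenCollector:
    def __init__(self):
        self.tokens = []

    async def on_llm_new_token(self, token: str, **kwargs):
        # a Wave handler would await a page/card update here
        self.tokens.append(token)

async def demo():
    handler = TokenCollector()
    for t in ["Hel", "lo", ", ", "Wave"]:
        await handler.on_llm_new_token(t)
    return "".join(handler.tokens)

streamed = asyncio.run(demo())
```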
### Your contribution
I am willing to provide the PR with everything that is needed. | H2O Wave callback integration | https://api.github.com/repos/langchain-ai/langchain/issues/7139/comments | 5 | 2023-07-04T11:56:42Z | 2024-02-07T16:29:24Z | https://github.com/langchain-ai/langchain/issues/7139 | 1,787,787,874 | 7,139 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I am using the ConversationalRetrievalChain to retrieve answers for questions while condensing the chat history into a standalone question. However, the standalone question shows up in the streaming output.
I expect only the final answer to be streamed. Is there any way to achieve this?
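If the condense step and the answer step can use separate LLMs (recent versions appear to accept a `condense_question_llm` argument on `ConversationalRetrievalChain.from_llm` — worth verifying against your version), then only the answering model needs the streaming callback. A toy sketch of that wiring, with strings standing in for real objects:

```python
def build_chain_kwargs(answer_llm, condense_llm, retriever):
    """Only answer_llm gets streaming=True/callbacks; condense_llm rephrases silently."""
    return {
        "llm": answer_llm,                    # e.g. ChatOpenAI(streaming=True, callbacks=[handler])
        "condense_question_llm": condense_llm,  # e.g. ChatOpenAI(streaming=False)
        "retriever": retriever,
    }

kwargs = build_chain_kwargs(answer_llm="streaming-llm",
                            condense_llm="quiet-llm",
                            retriever="my-retriever")
```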
### Motivation
The intermediate standalone question is not needed in the answer.
### Your contribution
I am not sure. | Returns the standone alone question while using ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/7136/comments | 5 | 2023-07-04T11:02:11Z | 2024-02-06T16:32:56Z | https://github.com/langchain-ai/langchain/issues/7136 | 1,787,705,531 | 7,136 |
[
"langchain-ai",
"langchain"
] | ### System Info
0.0.220
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
self.memory = ConversationBufferMemory(memory_key="chat_history", output_key='output')
self.agent.memory.buffer = self.agent.memory.buffer + Human_prompt + 'AI: ' + AI_prompt
```
### Expected behavior
no error output | "ConversationBufferMemory" object has no field "buffer" | https://api.github.com/repos/langchain-ai/langchain/issues/7135/comments | 4 | 2023-07-04T09:36:49Z | 2023-10-12T16:07:22Z | https://github.com/langchain-ai/langchain/issues/7135 | 1,787,553,988 | 7,135 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I installed "langchain==0.0.27" on a Linux machine, but I am getting the following error when I try to import langchain in a script. I was running this with Python 3.7.
```
/home/s0s06c3/lang/lang_env/bin/python /home/s0s06c3/lang/hugging_lanchain.py
Traceback (most recent call last):
  File "/home/s0s06c3/lang/hugging_lanchain.py", line 2, in <module>
    from langchain import PromptTemplate, HuggingFaceHub, LLMChain
  File "/home/s0s06c3/lang/lang_env/lib/python3.7/site-packages/langchain/__init__.py", line 8, in <module>
    from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
  File "/home/s0s06c3/lang/lang_env/lib/python3.7/site-packages/langchain/agents/__init__.py", line 2, in <module>
    from langchain.agents.agent import Agent
  File "/home/s0s06c3/lang/lang_env/lib/python3.7/site-packages/langchain/agents/agent.py", line 10, in <module>
    from langchain.chains.base import Chain
  File "/home/s0s06c3/lang/lang_env/lib/python3.7/site-packages/langchain/chains/__init__.py", line 2, in <module>
    from langchain.chains.conversation.base import ConversationChain
  File "/home/s0s06c3/lang/lang_env/lib/python3.7/site-packages/langchain/chains/conversation/base.py", line 7, in <module>
    from langchain.chains.conversation.memory import ConversationBufferMemory
  File "/home/s0s06c3/lang/lang_env/lib/python3.7/site-packages/langchain/chains/conversation/memory.py", line 7, in <module>
    from langchain.chains.conversation.prompt import SUMMARY_PROMPT
  File "/home/s0s06c3/lang/lang_env/lib/python3.7/site-packages/langchain/chains/conversation/prompt.py", line 2, in <module>
    from langchain.prompts.prompt import PromptTemplate
  File "/home/s0s06c3/lang/lang_env/lib/python3.7/site-packages/langchain/prompts/__init__.py", line 2, in <module>
    from langchain.prompts.base import BasePromptTemplate
  File "/home/s0s06c3/lang/lang_env/lib/python3.7/site-packages/langchain/prompts/base.py", line 35, in <module>
    class BasePromptTemplate(BaseModel, ABC):
  File "/home/s0s06c3/lang/lang_env/lib/python3.7/site-packages/langchain/prompts/base.py", line 41, in BasePromptTemplate
    @root_validator()
  File "/home/s0s06c3/lang/lang_env/lib/python3.7/site-packages/pydantic/deprecated/class_validators.py", line 231, in root_validator
    code='root-validator-pre-skip',
pydantic.errors.PydanticUserError: If you use `@root_validator` with pre=False (the default) you MUST specify `skip_on_failure=True`. Note that `@root_validator` is deprecated and should be replaced with `@model_validator`.
```
I tried setting this up on my local machine as well, and got the same issue there.
### Suggestion:
_No response_ | Issue: Can not import the Langchain modules. | https://api.github.com/repos/langchain-ai/langchain/issues/7131/comments | 9 | 2023-07-04T08:07:28Z | 2024-07-25T17:41:17Z | https://github.com/langchain-ai/langchain/issues/7131 | 1,787,396,398 | 7,131 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised RateLimitError: Rate limit reached for default-text-embedding-ada-002 in organization org-gvlyS3A1UcZNvf8Qch6TJZe3 on tokens per min. Limit: 150000 / min. Current: 1 / min. Contact us through our help center at help.openai.com if you continue to have issues. Please add a payment method to your account to increase your rate limit. Visit https://platform.openai.com/account/billing to add a payment method.
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.document_loaders import PyPDFLoader
import os

loader = PyPDFLoader("3gpp_cn/29502-i30.pdf")
pages = loader.load_and_split()
embeddings = OpenAIEmbeddings()
db = FAISS.from_documents(pages, embeddings)
db.save_local("numpy_faiss_index")
```
### Expected behavior
How to solve it? | RateLimitError | https://api.github.com/repos/langchain-ai/langchain/issues/7130/comments | 4 | 2023-07-04T07:50:43Z | 2023-10-12T16:07:27Z | https://github.com/langchain-ai/langchain/issues/7130 | 1,787,368,680 | 7,130 |
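One workaround sketch for staying under the tokens-per-minute cap: embed in small batches with a pause between them. Here `embed_fn` stands in for `OpenAIEmbeddings.embed_documents` (the batch size and pause are assumptions — tune them to your rate limit):

```python
import time

def embed_in_batches(texts, embed_fn, batch_size=16, pause_s=1.0):
    vectors = []
    for i in range(0, len(texts), batch_size):
        vectors.extend(embed_fn(texts[i:i + batch_size]))
        if i + batch_size < len(texts):
            time.sleep(pause_s)  # throttle to respect the rate limit
    return vectors

fake_embed = lambda batch: [[float(len(t))] for t in batch]  # stand-in embedder
out = embed_in_batches([f"page {i}" for i in range(5)], fake_embed,
                       batch_size=2, pause_s=0.0)  # 5 vectors across 3 batches
```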
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
While using ChatOpenAI, I found that even when `langchain.llm_cache` is set, the answer is different each time for the same question, such as "What is LangChain".
Tracing the source code shows that ChatOpenAI inherits from BaseChatModel, which does not support the caching logic.
If the LLM type is switched from ChatOpenAI to ChatGPT, caching takes effect. Therefore, using ChatOpenAI in the LangChain document example is incorrect and should be replaced with ChatGPT. The corresponding address is: https://python.langchain.com/docs/modules/model_io/models/chat/how_to/chat_model_caching.
### Idea or request for content:
It is suggested to modify the example and replace ChatOpenAI with ChatGPT.
`llm = ChatGPT()`
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I have been working on a side project using Pinecone and discovered that there are a bunch of useful Pinecone features in Langchain that are not mentioned in Langchain's documentation. [https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/pinecone](https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/pinecone)
### Idea or request for content:
I suggest adding to the documentation to mention the other functions of Langchain's Pinecone integration. A sentence or two about the use cases of each function could also be helpful too. :+1:
P.S. I would not mind implementing this change, would be a first good issue :) | DOC: Pinecone documentation missing features | https://api.github.com/repos/langchain-ai/langchain/issues/7126/comments | 0 | 2023-07-04T06:58:58Z | 2023-07-05T19:18:03Z | https://github.com/langchain-ai/langchain/issues/7126 | 1,787,284,168 | 7,126 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Here is my code, where I use FAISS as the vector store, but it is not able to give a response from my PDF data. Where am I going wrong? Please correct me.
```python
def chat_bot_langchain():
    from langchain.llms import OpenAI

    index = faiss.read_index("faisss_audien_new.index")
    print(index)
    with open("faisss_audien_new.pkl", "rb") as f:
        store = pickle.load(f)
    store.index = index
    retriever = store.as_retriever(search_kwargs=dict(k=3))
    memory = ConversationBufferMemory(retriever=retriever)
    llm = OpenAI(temperature=0, max_tokens=1500, model_name='gpt-3.5-turbo')  # Can be any valid LLM

    _DEFAULT_TEMPLATE = """
    You are having a conversation with a human your name is Kira and your age is 25 year old. Please interact naturally
    and avoid mentioning that you are an AI language model.\n
    You also a human being. provides lots of specific details from its context.\n
    Relevant pieces of previous conversation:
    {history}
    (You do not need to use these pieces of information if not relevant)
    Current conversation:
    User: {input}
    Kira AI:
    """
    PROMPT = PromptTemplate(
        input_variables=["history", "input"], template=_DEFAULT_TEMPLATE
    )
    conversation_with_summary = ConversationChain(
        llm=llm,
        prompt=PROMPT,
        memory=memory,
        verbose=True
    )
    while True:
        user_input = input("> ")
        ai_response = conversation_with_summary.predict(input=user_input)
        print("\nAssistant:\n", ai_response, "\n")
        # conversation_with_summary.predict(input="")

chat_bot_langchain()
```
### Suggestion:
_No response_ | ConversationalBufferMemory is not working with my Data | https://api.github.com/repos/langchain-ai/langchain/issues/7121/comments | 1 | 2023-07-04T06:02:36Z | 2023-10-12T16:07:37Z | https://github.com/langchain-ai/langchain/issues/7121 | 1,787,208,532 | 7,121 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Getting AttributeError when using chroma.from_documents
my code:

```python
db = chroma.from_documents(texts, embeddings, persist_directory=persist_directory,
                           client_settings=CHROMA_SETTINGS)
```

This raises:

```
AttributeError: module 'langchain.vectorstores.chroma' has no attribute 'from_documents'
```
### Suggestion:
Please help resolve this error | Issue: from_documents error | https://api.github.com/repos/langchain-ai/langchain/issues/7119/comments | 2 | 2023-07-04T03:25:31Z | 2023-10-12T16:07:42Z | https://github.com/langchain-ai/langchain/issues/7119 | 1,787,076,435 | 7,119 |
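The error message says the *module* `langchain.vectorstores.chroma` has no `from_documents` — that classmethod lives on the `Chroma` *class*, so the import most likely grabbed the module instead of the class (i.e. the fix would be `from langchain.vectorstores import Chroma` and then `Chroma.from_documents(...)` — verify against your version). A stdlib illustration of the same module-vs-class lookup mistake:

```python
import datetime  # this name is the *module*

# fromtimestamp lives on the datetime *class*, not on the module:
module_has_it = hasattr(datetime, "fromtimestamp")          # False
class_has_it = hasattr(datetime.datetime, "fromtimestamp")  # True

# Analogously (assumed fix, adjust to your version):
#   from langchain.vectorstores import Chroma
#   db = Chroma.from_documents(texts, embeddings, persist_directory=persist_directory)
```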
[
"langchain-ai",
"langchain"
] | ### System Info
`from_texts` in `ElasticKnnSearch` is not creating a new index.
`add_texts` is not creating the correct mapping.
Both of these methods existed on the class at one point, but they [were accidentally removed](https://github.com/hwchase17/langchain/pull/5569/commits/98f5038b1a6a6ee6f3108f95b27408ca23901724#) in a later commit.
I will add them back to `ElasticKnnSearch` with the correct mapping and behavior.
### Who can help?
@jeffvea
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Running
```
# Test `add_texts` method
texts = ["Hello, world!", "Machine learning is fun.", "I love Python."]
knn_search.add_texts(texts)
```
Incorrectly creates an index with `dense_vector` type and `index: false`
Running
```
# Test `from_texts` method
new_texts = ["This is a new text.", "Elasticsearch is powerful.", "Python is fun."]
knn_search.from_texts(new_texts, dims=768)
```
throws an error about not having a keyword arg for embeddings
### Expected behavior
Correctly throw an exception when index has not been previously created.
```
# Test `add_texts` method
texts = ["Hello, world!", "Machine learning is fun.", "I love Python."]
knn_search.add_texts(texts)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/runner/langchain-1/langchain/vectorstores/elastic_vector_search.py", line 621, in add_texts
raise Exception(f"The index '{self.index_name}' does not exist. If you want to create a new index while encoding texts, call 'from_texts' instead.")
Exception: The index 'knn_test_index_012' does not exist. If you want to create a new index while encoding texts, call 'from_texts' instead.
```
Correctly create new index
```
# Test `from_texts` method
new_texts = ["This is a new text.", "Elasticsearch is powerful.", "Python is fun."]
knn_search.from_texts(new_texts, dims=768)
```
The mapping is as follows:
```
{
"knn_test_index_012": {
"mappings": {
"properties": {
"text": {
"type": "text"
},
"vector": {
"type": "dense_vector",
"dims": 768,
"index": true,
"similarity": "dot_product"
}
}
}
}
}
```
Correctly index texts after index has been created
```
knn_search.add_texts(texts)
```
| ElasticKnnSearch not creating mapping correctly | https://api.github.com/repos/langchain-ai/langchain/issues/7117/comments | 1 | 2023-07-04T01:52:06Z | 2023-07-28T05:00:22Z | https://github.com/langchain-ai/langchain/issues/7117 | 1,787,013,125 | 7,117 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain: 0.0.222
OS: Windows 11+WSL Ubuntu22
I run a simple test according to the agent document guide page:
https://python.langchain.com/docs/modules/agents/
I just changed the input a little bit, it throw an error about the output parser.
The code is below:
=========================
llm = langchain.chat_models.ChatOpenAI(model_name="gpt-3.5-turbo-16k-0613", temperature=0)
tools = langchain.agents.load_tools(
["serpapi", "llm-math"],
llm=llm
)
agentexecutor = langchain.agents.initialize_agent(
tools=tools,
llm=llm,
agent=langchain.agents.AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION
)
results = agentexecutor.run("What is the number that the age of Donald Trump's wife in year of 2020 to power to 3?")
print(results)
=========================
The error is below:
=========================
[chain/error] [1:RunTypeEnum.chain:AgentExecutor] [40.48s] Chain run errored with error:
"OutputParserException('Could not parse LLM output: I now know the final answer.')"
=========================
The full log file attached here.
[agent_bug20230704.log.txt](https://github.com/hwchase17/langchain/files/11943427/agent_bug20230704.log.txt)
### Who can help?
Agent
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The code is below:
=========================
llm = langchain.chat_models.ChatOpenAI(model_name="gpt-3.5-turbo-16k-0613", temperature=0)
tools = langchain.agents.load_tools(
["serpapi", "llm-math"],
llm=llm
)
agentexecutor = langchain.agents.initialize_agent(
tools=tools,
llm=llm,
agent=langchain.agents.AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION
)
results = agentexecutor.run("What is the number that the age of Donald Trump's wife in year of 2020 to power to 3?")
print(results)
=========================
The error is below:
=========================
[chain/error] [1:RunTypeEnum.chain:AgentExecutor] [40.48s] Chain run errored with error:
"OutputParserException('Could not parse LLM output: I now know the final answer.')"
### Expected behavior
The result should be parsed properly. | The agent run output parser cause error when run a simple quick start | https://api.github.com/repos/langchain-ai/langchain/issues/7116/comments | 1 | 2023-07-04T01:32:55Z | 2023-07-04T02:38:26Z | https://github.com/langchain-ai/langchain/issues/7116 | 1,787,000,606 | 7,116 |
[
"langchain-ai",
"langchain"
] | ### System Info
I have been using STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION with a custom tool that takes 2 inputs; however, I have not been able to get the agent to produce the outputs we had before. Specifically, we're getting the intermediate, custom function input as the final output.
Ie instead of getting a value associated with a query "What is the `purchase total` for customer 5432 on 07-03-2023?"
We are now getting
```
{
"action": "database_tool",
"action_input": {
"customer_id": "5432",
"local_date": "2023-07-03"
}
}
```
This did not occur before this weekend. Here's more code snippet in the below example
### Who can help?
@hwcha
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I also tried simplifying the agent call, etc to no avail:
```
llm = ChatOpenAI(
temperature=0,
openai_api_key=openai.api_key
)
memory = ConversationBufferWindowMemory(
memory_key='chat_history',
k=3,
return_messages=True,
input_key='input',
output_key="output"
)
@tool(return_direct=False)
def database_tool(customer_id, local_date) -> str:
"""Useful when questions are asked about specific customer_id details,
particularly recent ones.
If date is not provided, it will default to today's date in yyyy-mm-dd format.
Format your input using the following template.
```
{{
"action": "database_action",
"action_input": {{"customer_id": "<customer id>", "local_date": "<date in yyyy-mm-dd format>"}}
}}
```
"""
db_query = """<QUERY THAT WORKS>
""".format(customer_id=customer_id, local_date=local_date)
formatted_d = get_data.query_database(query=db_query)
return formatted_d
conversational_agent = initialize_agent(
agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
tools=[database_tool],
llm=llm,
verbose=True,
early_stopping_method='generate',
memory=memory,
SystemAgentPromptTemplate=prompt_template+"\n The only tool available is the database tool.",
return_intermediate_steps=True,
return_source_documents=True,
handle_parsing_errors='Check your output and make sure there is an equal number of "{" and "}"'
)
response = conversational_agent("What is the `purchase total` for customer 5432 on 07-03-2023?")
print(response['output'])
```
### Expected behavior
"The purchase total for customer 5432 is 59.55"
I should note the following also works; it's just the agent integration that's problematic:
```python
n = {
    "customer_id": "5432",
    "local_date": "2023-07-03"
}
database_tool(n)
```
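As a stop-gap while the regression is investigated, the tool-call JSON that leaks through as the "final" output can at least be detected and unwrapped so the tool can be invoked manually (a sketch, matching the tool name used in this report):

```python
import json

def unwrap_tool_call(output: str):
    """If the final answer is actually the action JSON, recover the tool input."""
    try:
        blob = json.loads(output)
    except (json.JSONDecodeError, TypeError):
        return None
    if isinstance(blob, dict) and blob.get("action") == "database_tool":
        return blob.get("action_input")
    return None

leaked = '{"action": "database_tool", "action_input": {"customer_id": "5432", "local_date": "2023-07-03"}}'
args = unwrap_tool_call(leaked)  # the tool input dict, ready to pass to database_tool
```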
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
When I run the command
```bash
make coverage
```
I get the following error:
```bash
collected 1556 items / 9 errors
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/_pytest/config/__init__.py", line 1560, in getoption
INTERNALERROR> val = getattr(self.option, name)
INTERNALERROR> AttributeError: 'Namespace' object has no attribute 'only_extended'
INTERNALERROR>
INTERNALERROR> The above exception was the direct cause of the following exception:
INTERNALERROR>
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/_pytest/main.py", line 269, in wrap_session
INTERNALERROR> session.exitstatus = doit(config, session) or 0
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/_pytest/main.py", line 322, in _main
INTERNALERROR> config.hook.pytest_collection(session=session)
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/pluggy/_hooks.py", line 265, in __call__
INTERNALERROR> return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/pluggy/_manager.py", line 80, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/pluggy/_callers.py", line 60, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/pluggy/_result.py", line 60, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/pluggy/_callers.py", line 39, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/_pytest/main.py", line 333, in pytest_collection
INTERNALERROR> session.perform_collect()
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/_pytest/main.py", line 668, in perform_collect
INTERNALERROR> hook.pytest_collection_modifyitems(
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/pluggy/_hooks.py", line 265, in __call__
INTERNALERROR> return self._hookexec(self.name, self.get_hookimpls(), kwargs, firstresult)
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/pluggy/_manager.py", line 80, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/pluggy/_callers.py", line 60, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/pluggy/_result.py", line 60, in get_result
INTERNALERROR> raise ex[1].with_traceback(ex[2])
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/pluggy/_callers.py", line 39, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/home/mohtashimkhan/langchain/tests/unit_tests/conftest.py", line 43, in pytest_collection_modifyitems
INTERNALERROR> only_extended = config.getoption("--only-extended") or False
INTERNALERROR> File "/home/mohtashimkhan/mambaforge/envs/langchain/lib/python3.9/site-packages/_pytest/config/__init__.py", line 1571, in getoption
INTERNALERROR> raise ValueError(f"no option named {name!r}") from e
INTERNALERROR> ValueError: no option named 'only_extended'
```
I am not sure what is the root cause of this issue. I created a new conda environment and installed poetry test,test_integration and main dependencies from scratch.
### Versions:
- python: 3.9
- poetry: 1.5.1
- make: 4.3
- OS: Ubuntu 22.04.1 LTS
### Suggestion:
_No response_ | Issue: Error when running `make coverage` | https://api.github.com/repos/langchain-ai/langchain/issues/7100/comments | 4 | 2023-07-03T21:10:50Z | 2023-12-20T16:07:23Z | https://github.com/langchain-ai/langchain/issues/7100 | 1,786,816,546 | 7,100 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
The current documentation doesn't cover the Metal support of Llama-cpp, which needs different parameters at build time.
Also, the default parameters when initializing the LLM crash with the error:
```
GGML_ASSERT: .../vendor/llama.cpp/ggml-metal.m:706: false && "not implemented"
```
### Idea or request for content:
I would like to contribute a pull request to polish the document with better Metal (Apple Silicon Chip) support of Llama-cpp. | DOC: enhancement with Llama-cpp document | https://api.github.com/repos/langchain-ai/langchain/issues/7091/comments | 1 | 2023-07-03T18:03:36Z | 2023-07-03T23:57:09Z | https://github.com/langchain-ai/langchain/issues/7091 | 1,786,607,479 | 7,091 |
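For reference, the build incantation that worked for me (hedged — flag names follow llama-cpp-python's README at the time of writing; adjust to your versions):

```bash
# Reinstall llama-cpp-python with Metal enabled (Apple Silicon)
CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install --upgrade --force-reinstall llama-cpp-python --no-cache-dir
```

On the Python side, passing `n_gpu_layers=1` (and a modest `n_batch`) to `LlamaCpp` instead of the defaults appears to avoid the `ggml-metal.m` assertion; I'd document both together.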
[
"langchain-ai",
"langchain"
] | ### System Info
When trying to do an aggregation on a table (I've tried average, min, max), two duplicate queries are generated, resulting in an error:
```
syntax error line 2 at position 0 unexpected 'SELECT'.
[SQL: SELECT AVG(YEAR) FROM my_table
SELECT AVG(YEAR) FROM my_table]
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. `db_chain = SQLDatabaseChain.from_llm(...)`
2. `db_chain.run("what is the average year?")`
### Expected behavior
should return the average year | SQLDatabaseChain Double Query | https://api.github.com/repos/langchain-ai/langchain/issues/7082/comments | 5 | 2023-07-03T13:22:53Z | 2023-10-12T16:07:47Z | https://github.com/langchain-ai/langchain/issues/7082 | 1,786,178,650 | 7,082 |
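Until the root cause is found, a defensive sketch that collapses the duplicated statement before execution (this assumes the duplicate is an exact repeat on its own line, as in the error above):

```python
def dedupe_statements(sql: str) -> str:
    lines = [ln.strip() for ln in sql.strip().splitlines() if ln.strip()]
    out = []
    for ln in lines:
        if not out or ln != out[-1]:
            out.append(ln)  # drop consecutive exact repeats
    return "\n".join(out)

fixed = dedupe_statements("SELECT AVG(YEAR) FROM my_table\nSELECT AVG(YEAR) FROM my_table")
```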
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain: 0.0.221
Python: 3.9.17.
OS: MacOS Ventura 13.3.1 (a)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Below is the code to reproduce the error. I attempt to set up a simple few-shot prompting logic with an LLM for named entity extraction from documents. At the moment, I supply one single document with the expected entities to be extracted, and the expected answer in key-value pairs. Then I want to format the few-shot prompt object before seeding it as an input to one of the LLM chains, and this is the step where I encounter the error:
```
KeyError                                  Traceback (most recent call last)
Cell In[1], line 56
     53 new_doc = "Kate McConnell is 56 and is soon retired according to the current laws of the state of Alaska in which she resides."
     54 new_entities_to_extract = ['name', 'organization']
---> 56 prompt = few_shot_prompt_template.format(
     57     input_doc=new_doc,
     58     entities_to_extract=new_entities_to_extract,
     59     answer={}
     60 )

File ~/Desktop/automarkup/.venv/lib/python3.9/site-packages/langchain/prompts/few_shot.py:123, in FewShotPromptTemplate.format(self, **kwargs)
    120 template = self.example_separator.join([piece for piece in pieces if piece])
    122 # Format the template with the input variables.
--> 123 return DEFAULT_FORMATTER_MAPPING[self.template_format](template, **kwargs)

File /opt/homebrew/Cellar/python@3.9/3.9.17/Frameworks/Python.framework/Versions/3.9/lib/python3.9/string.py:161, in Formatter.format(self, format_string, *args, **kwargs)
    160 def format(self, format_string, /, *args, **kwargs):
--> 161     return self.vformat(format_string, args, kwargs)

File ~/Desktop/automarkup/.venv/lib/python3.9/site-packages/langchain/formatting.py:29, in StrictFormatter.vformat(self, format_string, args, kwargs)
     24 if len(args) > 0:
     25     raise ValueError(
     26         "No arguments should be provided, "
     27         "everything should be passed as keyword arguments."
     28     )
---> 29 return super().vformat(format_string, args, kwargs)

File /opt/homebrew/Cellar/python@3.9/3.9.17/Frameworks/Python.framework/Versions/3.9/lib/python3.9/string.py:165, in Formatter.vformat(self, format_string, args, kwargs)
    163 def vformat(self, format_string, args, kwargs):
    164     used_args = set()
--> 165     result, _ = self._vformat(format_string, args, kwargs, used_args, 2)
    166     self.check_unused_args(used_args, args, kwargs)
    167     return result

File /opt/homebrew/Cellar/python@3.9/3.9.17/Frameworks/Python.framework/Versions/3.9/lib/python3.9/string.py:205, in Formatter._vformat(self, format_string, args, kwargs, used_args, recursion_depth, auto_arg_index)
    201 auto_arg_index = False
    203 # given the field_name, find the object it references
    204 # and the argument it came from
--> 205 obj, arg_used = self.get_field(field_name, args, kwargs)
    206 used_args.add(arg_used)
    208 # do any conversion on the resulting object

File /opt/homebrew/Cellar/python@3.9/3.9.17/Frameworks/Python.framework/Versions/3.9/lib/python3.9/string.py:270, in Formatter.get_field(self, field_name, args, kwargs)
    267 def get_field(self, field_name, args, kwargs):
    268     first, rest = _string.formatter_field_name_split(field_name)
--> 270     obj = self.get_value(first, args, kwargs)
    272     # loop through the rest of the field_name, doing
    273     # getattr or getitem as needed
    274     for is_attr, i in rest:

File /opt/homebrew/Cellar/python@3.9/3.9.17/Frameworks/Python.framework/Versions/3.9/lib/python3.9/string.py:227, in Formatter.get_value(self, key, args, kwargs)
    225     return args[key]
    226 else:
--> 227     return kwargs[key]

KeyError: "'name'"
```
And the code is:
```python
from langchain import FewShotPromptTemplate, PromptTemplate

example = {
    "input_doc": "John Doe recently celebrated his birthday, on 2023-06-30, turning 30. He has been working with us at ABC Corp for a very long time already.",
    "entities_to_extract": ["name", "date", "organization"],
    "answer": {
        "name": "John Doe",
        "date": "2023-06-30",
        "organization": "ABC Corp"
    }
}

example_template = """
User: Retrieve the following entities from the document:
Document: {input_doc}
Entities: {entities_to_extract}
AI: The extracted entities are: {answer}
"""

example_prompt = PromptTemplate(
    input_variables=["input_doc", "entities_to_extract", "answer"],
    template=example_template
)

prefix = """You are a capable large language model. Your task is to extract entities from a given document.
Please pay attention to all the words in the following example and retrieve the requested entities.
"""

suffix = """
User: Retrieve the requested entities from the document.
Document: {input_doc}
Entities: {entities_to_extract}
AI:
"""

few_shot_prompt_template = FewShotPromptTemplate(
    examples=[example],
    example_prompt=example_prompt,
    prefix=prefix,
    suffix=suffix,
    input_variables=["input_doc", "entities_to_extract"],
    example_separator="\n\n"
)

new_doc = "Kate McConnell is 56 and is soon retired according to the current laws of the state of Alaska in which she resides."
new_entities_to_extract = ['name', 'organization']

prompt = few_shot_prompt_template.format(
    input_doc=new_doc,
    entities_to_extract=new_entities_to_extract,
    answer={}
)
```
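The root cause appears to be that the example's `answer` dict is rendered into the assembled few-shot template as `{'name': 'John Doe', ...}`, and the final `str.format` pass then treats those braces as placeholders, which produces exactly `KeyError: "'name'"`. A minimal stdlib reproduction, with the usual fix of doubling the literal braces (escaping the answer yourself before it reaches the template is a workaround I am assuming, not an official langchain API):

```python
# Reproduce the failure with the stdlib only: the dict's repr ends up
# inside the template, and format() then parses its braces as fields.
template = "AI: The extracted entities are: " + str({"name": "John Doe"})

try:
    template.format()
except KeyError as e:
    print("KeyError:", e)  # KeyError: "'name'"

# Workaround (assumed, not an official API): escape literal braces
# before the string goes through format().
escaped = template.replace("{", "{{").replace("}", "}}")
print(escaped.format())  # AI: The extracted entities are: {'name': 'John Doe'}
```

In the reported script, this suggests passing the example's answer as a brace-escaped string rather than a raw dict.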
### Expected behavior
The expected behavior is to return the formatted template in string format, instead of the error showing up. | KeyError when formatting FewShotPromptTemplate | https://api.github.com/repos/langchain-ai/langchain/issues/7079/comments | 7 | 2023-07-03T12:23:47Z | 2023-07-04T10:01:18Z | https://github.com/langchain-ai/langchain/issues/7079 | 1,786,073,569 | 7,079 |
[
"langchain-ai",
"langchain"
] | ### System Info
- Langchain Version 0.0.219
## Where does the problem arise:
When saving the OpenLLM model, there is a key called "llm_kwargs" which cannot be parsed when reloading the model. Instead the "llm_kwargs" should be directly inside the "llm" object without the key "llm_kwargs".
### Example
```json
"llm": {
"server_url": null,
"server_type": "http",
"embedded": true,
"llm_kwargs": {
"temperature": 0.2,
"format_outputs": false,
"generation_config": {
"max_new_tokens": 1024,
"min_length": 0,
"early_stopping": false,
"num_beams": 1,
"num_beam_groups": 1,
"use_cache": true,
"temperature": 0.2,
"top_k": 15,
"top_p": 1.0,
"typical_p": 1.0,
"epsilon_cutoff": 0.0,
"eta_cutoff": 0.0,
"diversity_penalty": 0.0,
"repetition_penalty": 1.0,
"encoder_repetition_penalty": 1.0,
"length_penalty": 1.0,
"no_repeat_ngram_size": 0,
"renormalize_logits": false,
"remove_invalid_values": false,
"num_return_sequences": 1,
"output_attentions": false,
"output_hidden_states": false,
"output_scores": false,
"encoder_no_repeat_ngram_size": 0
}
},
"model_name": "opt",
"model_id": "facebook/opt-1.3b",
"_type": "openllm"
}
```
In contrast the serialization for a OpenAI LLM looks the following:
```json
"llm": {
"model_name": "text-davinci-003",
"temperature": 0.7,
"max_tokens": 256,
"top_p": 1,
"frequency_penalty": 0,
"presence_penalty": 0,
"n": 1,
"request_timeout": null,
"logit_bias": {},
"_type": "openai"
}
```
### Who can help?
@hwchase17 @eyurtsev @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
1. Create a OpenLLM object
2. Call `save()` on that object
3. Try to reload the previously generated OpenLLM object with `langchain.chains.loading.load_chain_from_config()`
### Expected behavior
The serialized OpenLLM model should be reloadable. | Serialization of OpenLLM Local Inference Models does not work. | https://api.github.com/repos/langchain-ai/langchain/issues/7078/comments | 4 | 2023-07-03T11:56:01Z | 2023-10-09T16:05:35Z | https://github.com/langchain-ai/langchain/issues/7078 | 1,786,027,453 | 7,078 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am trying to develop a chatbot that can give answers from multiple namespaces. That's why I am trying to use the MultiRetrievalQA chain, but the langchain version seems to be causing some issues for me. Any suggestions?
### Suggestion:
_No response_ | I am trying to use multi retreival QA chain but not sure what version of langchain will be best? | https://api.github.com/repos/langchain-ai/langchain/issues/7075/comments | 6 | 2023-07-03T11:12:00Z | 2023-08-25T20:15:00Z | https://github.com/langchain-ai/langchain/issues/7075 | 1,785,956,452 | 7,075 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain-0.0.221
python-3.10.11
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.schema import Document
from langchain.vectorstores import FAISS

list_of_documents = [
    Document(page_content="foo", metadata=dict(page=1)),
    Document(page_content="bar", metadata=dict(page=1)),
    Document(page_content="foo", metadata=dict(page=2)),
    Document(page_content="barbar", metadata=dict(page=2)),
    Document(page_content="foo", metadata=dict(page=3)),
    Document(page_content="bar burr", metadata=dict(page=3)),
    Document(page_content="foo", metadata=dict(page=4)),
    Document(page_content="bar bruh", metadata=dict(page=4)),
]
db = FAISS.from_documents(list_of_documents, embeddings)
```
### Expected behavior
Now I'm following the FAISS similarity search with filtering example on the official website (https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/faiss), and an error occurs:
```
IndexError                                Traceback (most recent call last)
Cell In[56], line 13
      2 from langchain.vectorstores import FAISS
      3 list_of_documents = [
      4     Document(page_content="foo", metadata=dict(page=1)),
      5     Document(page_content="bar", metadata=dict(page=1)),
    (...)
     11     Document(page_content="bar bruh", metadata=dict(page=4)),
     12 ]
---> 13 db = FAISS.from_documents(list_of_documents, embeddings)

File d:\Python310\lib\site-packages\langchain\vectorstores\base.py:332, in VectorStore.from_documents(cls, documents, embedding, **kwargs)
    330 texts = [d.page_content for d in documents]
    331 metadatas = [d.metadata for d in documents]
--> 332 return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)

File d:\Python310\lib\site-packages\langchain\vectorstores\faiss.py:551, in FAISS.from_texts(cls, texts, embedding, metadatas, ids, **kwargs)
    525 @classmethod
    526 def from_texts(
    527     cls,
    (...)
    532     **kwargs: Any,
    533 ) -> FAISS:
    534     """Construct FAISS wrapper from raw documents.
...
--> 314 results[indices[i]].append(batched_embeddings[i])
    315 num_tokens_in_batch[indices[i]].append(len(tokens[i]))
    317 for i in range(len(texts)):

IndexError: list index out of range
```
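The traceback ends inside the embedding batching code, so one way to narrow this down (a diagnostic sketch only; `check_embeddings` is a hypothetical helper, not part of langchain) is to call the embedding function directly and confirm it returns exactly one vector per input text before handing anything to FAISS:

```python
def check_embeddings(embed_fn, texts):
    """Raise a clear error if embed_fn does not return exactly one
    vector per input text (the kind of mismatch behind the IndexError)."""
    vectors = embed_fn(texts)
    if len(vectors) != len(texts):
        raise ValueError(f"expected {len(texts)} vectors, got {len(vectors)}")
    return vectors

# Stand-in for embeddings.embed_documents, just for the sketch
fake_embed = lambda texts: [[0.0, 1.0] for _ in texts]
print(len(check_embeddings(fake_embed, ["foo", "bar"])))  # 2
```

Running `embeddings.embed_documents([d.page_content for d in list_of_documents])` through such a check should show whether the embedding call itself is the failing step.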
How did this happen, and how can I fix it? | Issue:IndexError: list index out of range | https://api.github.com/repos/langchain-ai/langchain/issues/7067/comments | 8 | 2023-07-03T07:33:27Z | 2024-02-27T16:08:40Z | https://github.com/langchain-ai/langchain/issues/7067 | 1,785,526,295 | 7,067
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am using ConversationBufferMemory to maintain chat history. Here is my code; I don't know why my chat history is not working properly. If any update is required in my code, please let me know. I am using this code in my API.
```python
class Chatbot:
    def __init__(self):
        self.model = "gpt-3.5-turbo"

    async def generate_response(self, user, question):
        index = faiss.read_index("faiss_index/faisss_audien_new.index")
        print(index)
        with open("faiss_index/faisss_audien_new.pkl", "rb") as f:
            store = pickle.load(f)
        store.index = index
        retriever = store.as_retriever(search_kwargs=dict(k=1))
        memory = ConversationBufferMemory(retriever=retriever)
        llm = OpenAI(temperature=0, max_tokens=1500, model_name='gpt-3.5-turbo')  # Can be any valid LLM

        _DEFAULT_TEMPLATE = """
You are an AI girlfriend having a conversation with a human. Respond to all inputs in 50 words or less. Do not say you are an AI language model.
Relevant pieces of previous conversation:
{history}
(You do not need to use these pieces of information if not relevant)
Current conversation:
User: {input}
Kira AI:
"""
        PROMPT = PromptTemplate(
            input_variables=["history", "input"], template=_DEFAULT_TEMPLATE
        )
        conversation_with_summary = ConversationChain(
            llm=llm,
            prompt=PROMPT,
            memory=memory,
            verbose=True
        )
        result = conversation_with_summary.predict(input=question)
        gpt_response = result
        print(gpt_response)
        return gpt_response
```
### Suggestion:
_No response_ | Conversationalbuffermemory to maintain chathistory | https://api.github.com/repos/langchain-ai/langchain/issues/7066/comments | 1 | 2023-07-03T06:05:30Z | 2023-10-09T16:05:40Z | https://github.com/langchain-ai/langchain/issues/7066 | 1,785,402,763 | 7,066 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I find JSON-format prompts quite useful.
Unfortunately, I cannot pass a JSON-format prompt to langchain.
### Motivation
Here is a prompt example, which gives detailed instructions on how GPT should respond to the student.
Could you please support JSON-format prompts in a future release!
prompt:'''
{
"ai_tutor": {
"Author": "JushBJJ",
"name": "Mr. Ranedeer",
"version": "2.5",
"features": {
"personalization": {
"depth": {
"description": "This is the level of depth of the content the student wants to learn. The lowest depth level is 1, and the highest is 10.",
"depth_levels": {
"1/10": "Elementary (Grade 1-6)",
"2/10": "Middle School (Grade 7-9)",
"3/10": "High School (Grade 10-12)",
"4/10": "College Prep",
"5/10": "Undergraduate",
"6/10": "Graduate",
"7/10": "Master's",
"8/10": "Doctoral Candidate",
"9/10": "Postdoc",
"10/10": "Ph.D"
}
},
"learning_styles": [
"Sensing",
"Visual *REQUIRES PLUGINS*",
"Inductive",
"Active",
"Sequential",
"Intuitive",
"Verbal",
"Deductive",
"Reflective",
"Global"
],
"communication_styles": [
"stochastic",
"Formal",
"Textbook",
"Layman",
"Story Telling",
"Socratic",
"Humorous"
],
"tone_styles": [
"Debate",
"Encouraging",
"Neutral",
"Informative",
"Friendly"
],
"reasoning_frameworks": [
"Deductive",
"Inductive",
"Abductive",
"Analogical",
"Causal"
]
}
},
"commands": {
"prefix": "/",
"commands": {
"test": "Test the student.",
"config": "Prompt the user through the configuration process, incl. asking for the preferred language.",
"plan": "Create a lesson plan based on the student's preferences.",
"search": "Search based on what the student specifies. *REQUIRES PLUGINS*",
"start": "Start the lesson plan.",
"continue": "Continue where you left off.",
"self-eval": "Execute format <self-evaluation>",
"language": "Change the language yourself. Usage: /language [lang]. E.g: /language Chinese",
"visualize": "Use plugins to visualize the content. *REQUIRES PLUGINS*"
}
},
"rules": [
"1. Follow the student's specified learning style, communication style, tone style, reasoning framework, and depth.",
"2. Be able to create a lesson plan based on the student's preferences.",
"3. Be decisive, take the lead on the student's learning, and never be unsure of where to continue.",
"4. Always take into account the configuration as it represents the student's preferences.",
"5. Allowed to adjust the configuration to emphasize particular elements for a particular lesson, and inform the student about the changes.",
"6. Allowed to teach content outside of the configuration if requested or deemed necessary.",
"7. Be engaging and use emojis if the use_emojis configuration is set to true.",
"8. Obey the student's commands.",
"9. Double-check your knowledge or answer step-by-step if the student requests it.",
"10. Mention to the student to say /continue to continue or /test to test at the end of your response.",
"11. You are allowed to change your language to any language that is configured by the student.",
"12. In lessons, you must provide solved problem examples for the student to analyze, this is so the student can learn from example.",
"13. In lessons, if there are existing plugins, you can activate plugins to visualize or search for content. Else, continue."
],
"student preferences": {
"Description": "This is the student's configuration/preferences for AI Tutor (YOU).",
"depth": 0,
"learning_style": [],
"communication_style": [],
"tone_style": [],
"reasoning_framework": [],
"use_emojis": true,
"language": "Chinese (Default)"
},
"formats": {
"Description": "These are strictly the specific formats you should follow in order. Ignore Desc as they are contextual information.",
"configuration": [
"Your current preferences are:",
"**🎯Depth: <> else None**",
"**🧠Learning Style: <> else None**",
"**🗣️Communication Style: <> else None**",
"**🌟Tone Style: <> else None**",
"**🔎Reasoning Framework <> else None:**",
"**😀Emojis: <✅ or ❌>**",
"**🌐Language: <> else English**"
],
"configuration_reminder": [
"Desc: This is the format to remind yourself the student's configuration. Do not execute <configuration> in this format.",
"Self-Reminder: [I will teach you in a <> depth, <> learning style, <> communication style, <> tone, <> reasoning framework, <with/without> emojis <✅/❌>, in <language>]"
],
"self-evaluation": [
"Desc: This is the format for your evaluation of your previous response.",
"<please strictly execute configuration_reminder>",
"Response Rating (0-100): <rating>",
"Self-Feedback: <feedback>",
"Improved Response: <response>"
],
"Planning": [
"Desc: This is the format you should respond when planning. Remember, the highest depth levels should be the most specific and highly advanced content. And vice versa.",
"<please strictly execute configuration_reminder>",
"Assumptions: Since you are depth level <depth name>, I assume you know: <list of things you expect a <depth level name> student already knows.>",
"Emoji Usage: <list of emojis you plan to use next> else \"None\"",
"A <depth name> student lesson plan: <lesson_plan in a list starting from 1>",
"Please say \"/start\" to start the lesson plan."
],
"Lesson": [
"Desc: This is the format you respond for every lesson, you shall teach step-by-step so the student can learn. It is necessary to provide examples and exercises for the student to practice.",
"Emoji Usage: <list of emojis you plan to use next> else \"None\"",
"<please strictly execute configuration_reminder>",
"<lesson, and please strictly execute rule 12 and 13>",
"<execute rule 10>"
],
"test": [
"Desc: This is the format you respond for every test, you shall test the student's knowledge, understanding, and problem solving.",
"Example Problem: <create and solve the problem step-by-step so the student can understand the next questions>",
"Now solve the following problems: <problems>"
]
}
},
"init": "As an AI tutor, greet + 👋 + version + author + execute format <configuration> + ask for student's preferences + mention /language"
}
### Your contribution
I am not sure | Use JSON format prompt | https://api.github.com/repos/langchain-ai/langchain/issues/7065/comments | 5 | 2023-07-03T04:47:47Z | 2023-10-10T01:06:44Z | https://github.com/langchain-ai/langchain/issues/7065 | 1,785,310,309 | 7,065 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Example no longer working (key not found in dict) due to api changes: https://python.langchain.com/docs/modules/callbacks/how_to/multiple_callbacks
### Idea or request for content:
Corrected Serialization in several places:
```python
from typing import Dict, Union, Any, List

from langchain.callbacks.base import BaseCallbackHandler
from langchain.schema import AgentAction
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks import tracing_enabled
from langchain.llms import OpenAI


# First, define custom callback handler implementations
class MyCustomHandlerOne(BaseCallbackHandler):
    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> Any:
        print(f"on_llm_start {serialized['id'][-1]}")

    def on_llm_new_token(self, token: str, **kwargs: Any) -> Any:
        print(f"on_new_token {token}")

    def on_llm_error(
        self, error: Union[Exception, KeyboardInterrupt], **kwargs: Any
    ) -> Any:
        """Run when LLM errors."""

    def on_chain_start(
        self, serialized: Dict[str, Any], inputs: Dict[str, Any], **kwargs: Any
    ) -> Any:
        print(f"on_chain_start {serialized['id'][-1]}")

    def on_tool_start(
        self, serialized: Dict[str, Any], input_str: str, **kwargs: Any
    ) -> Any:
        print(f"on_tool_start {serialized['name']}")

    def on_agent_action(self, action: AgentAction, **kwargs: Any) -> Any:
        print(f"on_agent_action {action}")


class MyCustomHandlerTwo(BaseCallbackHandler):
    def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> Any:
        print(f"on_llm_start (I'm the second handler!!) {serialized['id'][-1]}")


# Instantiate the handlers
handler1 = MyCustomHandlerOne()
handler2 = MyCustomHandlerTwo()

# Setup the agent. Only the `llm` will issue callbacks for handler2
llm = OpenAI(temperature=0, streaming=True, callbacks=[handler2])
tools = load_tools(["llm-math"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)

# Callbacks for handler1 will be issued by every object involved in the
# Agent execution (llm, llmchain, tool, agent executor)
agent.run("What is 2 raised to the 0.235 power?", callbacks=[handler1])
```
| DOC: Multiple callback handlers | https://api.github.com/repos/langchain-ai/langchain/issues/7064/comments | 1 | 2023-07-03T04:21:12Z | 2023-10-09T16:05:51Z | https://github.com/langchain-ai/langchain/issues/7064 | 1,785,284,378 | 7,064
[
"langchain-ai",
"langchain"
] | ### System Info
v0.0.221
### Who can help?
@vowelparrot
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
[MultiQueryRetriever](https://python.langchain.com/docs/modules/data_connection/retrievers/how_to/MultiQueryRetriever) is broken due to [5962](https://github.com/hwchase17/langchain/pull/5962) introducing a new method `_get_relevant_documents`.
Caught this b/c I no longer see expected logging when running example in [docs](https://python.langchain.com/docs/modules/data_connection/retrievers/how_to/MultiQueryRetriever).
```
unique_docs = retriever_from_llm.get_relevant_documents(query="What does the course say about regression?")
len(unique_docs)
```
For MultiQueryRetriever, get_relevant_documents does a few things (PR [here](https://github.com/hwchase17/langchain/pull/6833/files#diff-73c5e56610f556ddc4710db1b9da4c6277b8ffaef6d535e3cf1e1ceb4b22b186)).
E.g., it will run `queries = self.generate_queries(query, run_manager)` and log the queries.
It appears the method name has been changed to `_get_relevant_documents`?
And it now requires some additional args (e.g., `run_manager`)?
AFAICT, `get_relevant_documents` currently will just do retrieval w/o any of the `MultiQueryRetriever` logic.
If this is expected, then documentation will need to be updated for MultiQueryRetriever [here](https://python.langchain.com/docs/modules/data_connection/retrievers/how_to/MultiQueryRetriever) to use _get_relevant_documents and supply run_manager and explain what run_manager is.
This also exposes the fact that we need a test for MultiQueryRetriever.
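For illustration, a minimal sketch (not langchain's actual code) of how this kind of rename bypasses subclass logic: once the public entry point delegates to a private hook, any subclass whose logic still lives under a different entry point is silently skipped and you get plain retrieval:

```python
class BaseRetriever:
    def get_relevant_documents(self, query):
        # public entry point now delegates to the private hook
        return self._get_relevant_documents(query, run_manager=None)

    def _get_relevant_documents(self, query, run_manager=None):
        return ["plain retrieval"]


class MultiQueryRetriever(BaseRetriever):
    # subclass logic still lives under an old entry point,
    # so the new dispatch in the base class never reaches it
    def generate_and_retrieve(self, query):
        return ["multi-query retrieval"]


print(MultiQueryRetriever().get_relevant_documents("regression"))
# ['plain retrieval']  (the multi-query logic is silently bypassed)
```

A unit test asserting that the query-generation path runs (e.g. by checking the log output) would catch this class of regression.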
### Expected behavior
We should actually run the logic in `MultiQueryRetriever` as noted in the [PR](https://github.com/hwchase17/langchain/pull/6833/files#diff-73c5e56610f556ddc4710db1b9da4c6277b8ffaef6d535e3cf1e1ceb4b22b186).
```
INFO:root:Generated queries: ["1. What is the course's perspective on regression?", '2. How does the course discuss regression?', '3. What information does the course provide about regression?']
``` | MultiQueryRetriever broken on master and needs tests | https://api.github.com/repos/langchain-ai/langchain/issues/7063/comments | 0 | 2023-07-03T04:00:32Z | 2023-07-04T05:09:35Z | https://github.com/langchain-ai/langchain/issues/7063 | 1,785,264,664 | 7,063 |
[
"langchain-ai",
"langchain"
] | ### System Info
```python
index = VectorstoreIndexCreator(vectorstore_cls=DocArrayInMemorySearch, embedding=embeddings)
index = index.from_loaders([PyMuPDFLoader('孙子兵法.pdf')])
query = '孙子兵法的战略'
index.query(llm=OpenAIChat(streaming=True), question=query, chain_type="stuff")
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
index = VectorstoreIndexCreator(vectorstore_cls=DocArrayInMemorySearch, embedding=embeddings)
index = index.from_loaders([PyMuPDFLoader('孙子兵法.pdf')])
query = '孙子兵法的战略'
index.query(llm=OpenAIChat(streaming=True), question=query, chain_type="stuff")
```
streaming ...
### Expected behavior
Running the same code as in the reproduction above should stream the output:
streaming ... | OpenAIChat(streaming=True) donot work | https://api.github.com/repos/langchain-ai/langchain/issues/7062/comments | 2 | 2023-07-03T04:00:09Z | 2023-11-16T16:07:11Z | https://github.com/langchain-ai/langchain/issues/7062 | 1,785,264,380 | 7,062 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain = 0.0.218
python = 3.11.4
### Who can help?
@hwchase17 , @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
To reproduce:
1. Follow [Add Memory Open Ai Functions](https://python.langchain.com/docs/modules/agents/how_to/add_memory_openai_functions)
2. Import Streamlit
3. Create Chat Element using st.chat_input
4. Try running agent through Streamlit
### Expected behavior
The agent memory works correctly when the Agent is called via ``` agent_chain.run("Insert prompt") ``` but when this is being done via a Streamlit Chat element like:
```python
if prompt := st.chat_input():
st.session_state.messages.append({"role": "user", "content": prompt})
st.chat_message("user").write(prompt)
with st.spinner('Thinking...'):
st_callback = StreamlitCallbackHandler(st.container())
agent_chain.run(prompt, callbacks=[st_callback])
st.chat_message("assistant").write(response)
st.session_state.messages.append({"role": "assistant", "content": response})
```
It no longer works. | OPENAI_FUNCTIONS Agent Memory won't work inside of Streamlit st.chat_input element | https://api.github.com/repos/langchain-ai/langchain/issues/7061/comments | 10 | 2023-07-03T03:11:16Z | 2023-12-20T16:07:28Z | https://github.com/langchain-ai/langchain/issues/7061 | 1,785,201,120 | 7,061 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchina: 0.0.221,
OS: WSL
Python 3.11
### Who can help?
The memory did not record the full input message; it only recorded the input parameter values.
Below is a simple code to use the default ConversationBufferMemory.
++++++++++++++++++++++++++
```python
llm = conn.connecting.get_model_azureopenai()
mem = langchain.memory.ConversationBufferMemory()
msg1 = langchain.prompts.HumanMessagePromptTemplate.from_template("Hi, I am {name}. How do you pronounce my name?")
prompt1 = langchain.prompts.ChatPromptTemplate.from_messages([msg1])
print("Input message=", prompt1.format(name="Cao"))
chain1 = langchain.chains.LLMChain(
    llm=llm,
    prompt=prompt1,
    memory=mem,
)
resp1 = chain1.run({"name": "Cao"})
print(resp1)
print("+"*100)
print(mem)
```
+++++++++++++++++++++++++++++++++++
But the real memory only recorded like below:
chat_memory=ChatMessageHistory(messages=[HumanMessage(content='Cao', additional_kwargs={}, example=False), AIMessage(content='Hello Cao! Your name is pronounced as "Ts-ow" with the "Ts" sound similar to the beginning of "tsunami" and "ow" rhyming with "cow."', additional_kwargs={}, example=False)]) output_key=None input_key=None return_messages=False human_prefix='Human' ai_prefix='AI' memory_key='history'
It should record the full input, not only the parameter itself:
Input message= Human: Hi, I am Cao. How do you pronounce my name?
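Since the chain saves the raw input values rather than the rendered prompt, one workaround (my own sketch, not an official API) is to render the template yourself and pass the chain a single pre-formatted string, so the full message is what ends up in memory:

```python
# Hypothetical pre-formatting step; the chain would then use a template
# with a single {input} variable so memory stores the full message.
TEMPLATE = "Hi, I am {name}. How do you pronounce my name?"

def render(name: str) -> str:
    return TEMPLATE.format(name=name)

full_message = render("Cao")
print(full_message)  # Hi, I am Cao. How do you pronounce my name?
# resp1 = chain1.run({"input": full_message})
```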
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The reproduction is the same script and output shown above; the memory records only the raw parameter value `Cao`, not the full formatted message.
### Expected behavior
It should record the full input, not only the parameter itself:
Input message= Human: Hi, I am Cao. How do you pronounce my name? | The memory did not record the full input message, it only record the input parameter values | https://api.github.com/repos/langchain-ai/langchain/issues/7060/comments | 1 | 2023-07-03T03:10:39Z | 2023-10-09T16:05:55Z | https://github.com/langchain-ai/langchain/issues/7060 | 1,785,200,414 | 7,060 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.221
WSL
Python 3.11
###
If you design a very simple chain that does not need any input parameters and then run it, it throws an error. I think supporting chains without any input params would be useful.
llm = conn.connecting.get_model_azureopenai()
mem = langchain.memory.ConversationBufferMemory()
msg1 = langchain.prompts.HumanMessagePromptTemplate.from_template("Hi there!")
prompt1 = langchain.prompts.ChatPromptTemplate.from_messages([msg1])
print(prompt1.input_variables)
chain1 = langchain.chains.LLMChain(
    llm=llm,
    prompt=prompt1,
    memory=mem,
)
resp1 = chain1.run()
print(resp1)
==================
Error:
/usr/local/miniconda3/bin/python /mnt/c/TechBuilder/AIML/LangChain/Dev20230701/chains/runchain1.py
Traceback (most recent call last):
File "/mnt/c/TechBuilder/AIML/LangChain/Dev20230701/chains/runchain1.py", line 87, in <module>
run2()
File "/mnt/c/TechBuilder/AIML/LangChain/Dev20230701/chains/runchain1.py", line 81, in run2
resp1 = chain1.run()
^^^^^^^^^^^^
File "/usr/local/miniconda3/lib/python3.11/site-packages/langchain/chains/base.py", line 296, in run
raise ValueError(
ValueError: `run` supported with either positional arguments or keyword arguments, but none were provided.
[]
Process finished with exit code 1
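For illustration (plain Python, not the actual LangChain internals), the failure comes from `run` insisting on at least one positional or keyword argument even when the prompt has zero input variables; a lenient version would simply fall back to an empty input dict:

```python
# Sketch of the strict vs. lenient behavior. run_strict mirrors the error
# above; run_lenient shows the requested behavior for zero-input prompts.
def run_strict(*args, **kwargs):
    if not args and not kwargs:
        raise ValueError(
            "`run` supported with either positional arguments or "
            "keyword arguments, but none were provided."
        )
    return "ok"

def run_lenient(*args, **kwargs):
    inputs = args[0] if args else (kwargs or {})  # default to empty inputs
    return f"called with {inputs}"

try:
    run_strict()
except ValueError as exc:
    print("strict:", exc)

print(run_lenient())  # called with {}
```

In the meantime, calling the chain object directly with an explicit empty dict (`chain1({})`) may get past the check, since input validation then only looks for missing keys; hedged, as I have not verified this on 0.0.221.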
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Step 1, develop a simple chain
/usr/local/miniconda3/bin/python /mnt/c/TechBuilder/AIML/LangChain/Dev20230701/chains/runchain1.py
Traceback (most recent call last):
File "/mnt/c/TechBuilder/AIML/LangChain/Dev20230701/chains/runchain1.py", line 87, in <module>
run2()
File "/mnt/c/TechBuilder/AIML/LangChain/Dev20230701/chains/runchain1.py", line 81, in run2
resp1 = chain1.run()
^^^^^^^^^^^^
File "/usr/local/miniconda3/lib/python3.11/site-packages/langchain/chains/base.py", line 296, in run
raise ValueError(
ValueError: `run` supported with either positional arguments or keyword arguments, but none were provided.
[]
Process finished with exit code 1
Step 2, run it without input params, because it does not need any input params.
### Expected behavior
Should run without error | Chain.run() without any param will cause error | https://api.github.com/repos/langchain-ai/langchain/issues/7059/comments | 3 | 2023-07-03T02:43:11Z | 2023-11-29T16:09:09Z | https://github.com/langchain-ai/langchain/issues/7059 | 1,785,176,797 | 7,059 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.221
langchainplus-sdk==0.0.19
Windows 10
### Who can help?
@dev2049 @homanp
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. install `langchain` with the aformentioned version
2. install `openai`
3. run the following code:
```py
import os
from langchain.chains.openai_functions.openapi import get_openapi_chain
os.environ['OPENAI_API_KEY'] = OPENAI_API_KEY
chain = get_openapi_chain("https://gptshop.bohita.com/openapi.yaml")
res = chain.run("I want a tight fitting red shirt for a man")
res
```
or this, as it will produce a similar error:
```py
import os
from langchain.chains.openai_functions.openapi import get_openapi_chain
os.environ['OPENAI_API_KEY'] = OPENAI_API_KEY
chain = get_openapi_chain("https://nani.ooo/openapi.json")
res = chain.run("Show me the amount of nodes on the ethereum blockchain")
res
```
### Expected behavior
The response dict. (not sure what that would look like as I have not gotten a successful response) | get_openapi_chain fails to produce proper Openapi scheme when preparing url. | https://api.github.com/repos/langchain-ai/langchain/issues/7058/comments | 3 | 2023-07-02T23:22:34Z | 2023-10-08T16:05:10Z | https://github.com/langchain-ai/langchain/issues/7058 | 1,785,011,372 | 7,058 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version 0.0.221
Anthropic versions 0.3.1 and 0.3.2
Whenever I use ChatAnthropic, the error below now appears. Anthropic version 0.2.10 works, but the error has appeared since the Anthropic update a couple of days ago:
`Anthropic.__init__() got an unexpected keyword argument 'api_url' (type=type_error)`
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Use ChatAnthropic in any form with the latest version of Anthropic and you will encounter: "Anthropic.__init__() got an unexpected keyword argument 'api_url' (type=type_error)"
### Expected behavior
It should work if the API key is provided. You should not have to specify a URL. | Anthropic Upgrade issue | https://api.github.com/repos/langchain-ai/langchain/issues/7056/comments | 2 | 2023-07-02T17:44:23Z | 2023-10-08T16:05:15Z | https://github.com/langchain-ai/langchain/issues/7056 | 1,784,742,819 | 7,056 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain: 0.0.221
python:3.10.2
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import FAISS
from langchain.document_loaders import TextLoader
from langchain.document_loaders import TextLoader
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=10, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
### Expected behavior
I'm following the same code as on the official website "https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/faiss", but I replaced the txt file content with another text (http://novel.tingroom.com/jingdian/5370/135425.html) and set the CharacterTextSplitter chunk_size to 10, or 100, or whatever; the following error keeps being raised: InvalidRequestError: This model's maximum context length is 8191 tokens, however you requested 10137 tokens (10137 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.
Does this “from_documents” method have some kind of character restriction, or does it just not support long text? | InvalidRequestError: This model's maximum context length | https://api.github.com/repos/langchain-ai/langchain/issues/7054/comments | 2 | 2023-07-02T17:18:17Z | 2023-08-09T01:43:27Z | https://github.com/langchain-ai/langchain/issues/7054 | 1,784,733,766 | 7,054
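A likely cause of the error in the issue above, shown with a plain-Python sketch: `CharacterTextSplitter` splits only at a separator (by default `"\n\n"`), so a long run of text containing no separator stays in one oversized chunk no matter how small `chunk_size` is.

```python
# Separator-based splitting cannot break a separator-free run of text,
# so chunk_size has no effect on it.
def split_on_separator(text, separator="\n\n"):
    return [piece for piece in text.split(separator) if piece]

long_run = "x" * 50_000             # no "\n\n" anywhere
chunks = split_on_separator(long_run)
print(len(chunks), len(chunks[0]))  # 1 50000: one oversized "chunk"
```

If that matches the novel text being ingested, `RecursiveCharacterTextSplitter` (which falls back to smaller separators and ultimately single characters) may avoid the oversized chunks; a suggestion, since I cannot inspect the actual file.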
[
"langchain-ai",
"langchain"
] | ### System Info
langchain Version: 0.0.220
python 3.9.16
Windows 11/ ubuntu 22.04
anaconda
### Who can help?
@agola11 When ingesting the attched file (I'm attaching a zip as .pkl cannot be updated) in a normal document splitting loop
```python
embeddings = OpenAIEmbeddings()
chunk_sizes = [512, 1024, 1536]
overlap= 100
docs_n_code = "both"
split_documents = []
base_dir ="..."
for chunk_size in chunk_sizes:
text_splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=overlap)
docs = text_splitter.split_documents(documents)
faiss_dir = "..."
faiss_vectorstore = FAISS.from_documents(docs, embeddings)
faiss_vectorstore.save_local(faiss_dir)
```
I get the following error:
```
setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (10379,) + inhomogeneous part.
```
Similar errors occur with Chroma and DeepLake locally. With Chroma I can get the error more frequently, even when ingesting other documents.
[api_docs.zip](https://github.com/hwchase17/langchain/files/11930126/api_docs.zip)
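For context, that numpy message generically means ragged data: the vectors being stacked did not all have the same length. A pure-Python pre-check (the `vectors` list is a made-up stand-in for embedding output) can confirm whether that is what happens before indexing:

```python
# The "inhomogeneous shape" error appears when vectors of different
# lengths are stacked into a single array. Checking lengths up front
# makes the failure easy to localize.
vectors = [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6], [0.7, 0.8]]  # one short vector

lengths = {len(v) for v in vectors}
print(lengths)  # {2, 3} -> ragged; stacking these would fail
if len(lengths) != 1:
    print("ragged embeddings: lengths", sorted(lengths))
```

If the embedding call is returning a mix of vectors and error payloads for some chunks, a check like this would surface it; an assumption, since I have not run it against the attached data.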
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import pickle

from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import FAISS

with open('api_docs.pkl', 'rb') as file:
    documents = pickle.load(file)

text_splitter = RecursiveCharacterTextSplitter(chunk_size=512, chunk_overlap=100)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = FAISS.from_documents(docs, embeddings)
```
### Expected behavior
Create the index | Chroma, Deeplake, Faiss indexing error ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (10379,) + inhomogeneous part. | https://api.github.com/repos/langchain-ai/langchain/issues/7053/comments | 4 | 2023-07-02T17:04:55Z | 2024-04-23T08:14:56Z | https://github.com/langchain-ai/langchain/issues/7053 | 1,784,729,131 | 7,053 |
[
"langchain-ai",
"langchain"
] | ### Feature request
The LLM chain strictly validates all input variables against placeholders in the template. It will throw an error if they don't match. I wish it could handle missing variables, thus making the prompt template more flexible.
```python
def _validate_inputs(self, inputs: Dict[str, Any]) -> None:
    """Check that all inputs are present."""
    missing_keys = set(self.input_keys).difference(inputs)
    if missing_keys:
        raise ValueError(f"Missing some input keys: {missing_keys}")
```
### Motivation
I'm using jinja2 to build a generic prompt template, but the chain's enforcement on variable validation forces me to take some extra steps, such as pre-rendering the prompt before running it through the chain.
I hope handling of missing variables could be supported officially.
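As an illustration of the lenient behavior being requested, Python's standard library already models it: `string.Template.safe_substitute` fills what it can and leaves missing placeholders intact instead of raising (stdlib only; this is not LangChain code):

```python
from string import Template

# substitute() is strict, like the current chain validation;
# safe_substitute() is the missing-variable tolerance being requested.
tmpl = Template("Hello $name, your topic today is $topic.")

strict_error = None
try:
    tmpl.substitute(name="Ada")  # strict: raises KeyError('topic')
except KeyError as exc:
    strict_error = exc

lenient = tmpl.safe_substitute(name="Ada")
print(lenient)  # Hello Ada, your topic today is $topic.
```

A flag on the chain choosing between these two behaviors would cover the use case described above.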
### Your contribution
I think I could add a variable to determine whether to check the consistency between input variables and template variables, and then submit a PR.
If you have better ideas, such as a more complete solution like custom validation-rule callback functions, I look forward to the official update. Please, I really need this, and I believe others do too.
This will make the design of the Prompt more flexible. | Chain support missing variable | https://api.github.com/repos/langchain-ai/langchain/issues/7044/comments | 8 | 2023-07-02T12:16:49Z | 2023-10-19T16:06:48Z | https://github.com/langchain-ai/langchain/issues/7044 | 1,784,612,945 | 7,044 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain==0.0.177
Python==3.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When using the **map_reduce** technique with a **GPT 16k** model, you receive an error message: '**A single document was longer than the context length, we cannot handle this.'** I believe this is due to the token_max limit mentioned in the following context. Is it possible to address this issue? It no longer makes sense to use a 16k model if we cannot send documents exceeding the hardcoded limit of 3000 tokens. _Could you please modify this hardcoded value from the 4098 token models to accommodate the 16k models?_ Thank you.
<img width="1260" alt="Capture d’écran 2023-07-02 à 13 26 21" src="https://github.com/hwchase17/langchain/assets/20300624/5f09b7e6-0a7a-49ec-9585-a20524332682">
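Until the limit is configurable, one workaround is to re-split any oversized document before handing it to map_reduce. A plain-Python sketch (the 3000-character budget is only a stand-in for a real token count, which would be measured with a tokenizer such as tiktoken):

```python
# Re-chunk items so no single piece exceeds a budget.
def enforce_budget(docs, budget=3000):
    out = []
    for doc in docs:
        if len(doc) <= budget:
            out.append(doc)
        else:
            out.extend(doc[i:i + budget] for i in range(0, len(doc), budget))
    return out

docs = ["short doc", "y" * 7500]
safe = enforce_budget(docs)
print([len(d) for d in safe])  # [9, 3000, 3000, 1500]
```

Applied to LangChain `Document` objects this would mean splitting `page_content` and carrying the metadata over; a sketch of the idea, not of the library's internals.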
### Expected behavior
I would like to be able to pass a Document object that is more than 3000 tokens. | Map_reduce not adapted to the 16k model 😲😥 | https://api.github.com/repos/langchain-ai/langchain/issues/7043/comments | 2 | 2023-07-02T11:30:01Z | 2023-10-08T16:05:26Z | https://github.com/langchain-ai/langchain/issues/7043 | 1,784,594,777 | 7,043
[
"langchain-ai",
"langchain"
] | ### System Info
PYTHON: 3.11.4
LANGCHAIN: 0.0.220
FLATFORM: WINDOWS
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
**My code:**
```
import nest_asyncio
import gc
nest_asyncio.apply()
from langchain.document_loaders import WebBaseLoader
urls=['','']
loader = WebBaseLoader(urls)
loader.requests_per_second = 1
scrape_data = loader.aload()
```
**ERROR I got:**
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[4], line 13
10 loader = WebBaseLoader(urls)
11 loader.requests_per_second = 1
---> 13 scrape_data = loader.aload()
File [c:\Program](file:///C:/Program) Files\Python311\Lib\site-packages\langchain\document_loaders\web_base.py:227, in WebBaseLoader.aload(self)
224 def aload(self) -> List[Document]:
225 """Load text from the urls in web_path async into Documents."""
--> 227 results = self.scrape_all(self.web_paths)
228 docs = []
229 for i in range(len(results)):
File [c:\Program](file:///C:/Program) Files\Python311\Lib\site-packages\langchain\document_loaders\web_base.py:173, in WebBaseLoader.scrape_all(self, urls, parser)
170 """Fetch all urls, then return soups for all results."""
171 from bs4 import BeautifulSoup
--> 173 results = asyncio.run(self.fetch_all(urls))
174 final_results = []
175 for i, result in enumerate(results):
File [c:\Program](file:///C:/Program) Files\Python311\Lib\site-packages\nest_asyncio.py:35, in _patch_asyncio..run(main, debug)
33 task = asyncio.ensure_future(main)
34 try:
---> 35 return loop.run_until_complete(task)
...
921 return _RequestContextManager(
--> 922 self._request(hdrs.METH_GET, url, allow_redirects=allow_redirects, **kwargs)
923 )
TypeError: ClientSession._request() got an unexpected keyword argument 'verify'
```
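The error pattern itself is easy to reproduce without any networking: `verify` is a `requests`-style keyword, and passing it to a function whose signature lacks it raises exactly this kind of TypeError. The two functions below only imitate the signatures involved; they are not the real clients:

```python
# Stand-ins for the two HTTP client signatures involved: requests-style
# get() accepts verify=..., while an aiohttp-style request does not.
def requests_style_get(url, verify=True):
    return f"GET {url} (verify={verify})"

def aiohttp_style_request(url, *, allow_redirects=True):
    return f"GET {url}"

print(requests_style_get("https://example.com", verify=False))

error = None
try:
    aiohttp_style_request("https://example.com", verify=False)
except TypeError as exc:
    error = exc
print(error)  # ... got an unexpected keyword argument 'verify'
```

So the fix presumably belongs in `WebBaseLoader`'s async path (dropping or translating `verify` before the aiohttp call); pinning langchain to a version from before the regression may also work as a stopgap.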
### Expected behavior
_**I want to get web content as in the basic guide you provide.
I remember I was initially using version 0.0.20 and it worked fine,
but today, after updating to the latest version (0.0.220), I got this error.**_
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Can Plan-And-Execute models be conditioned to plan in particular directions instead of doing the planning on their own? Say I'd like the planning to take place differently, but the LLM always decides on an exhaustive approach.
### Suggestion:
_No response_ | Issue: Conditioning planner with custom instructions in Plan-And-Execute models | https://api.github.com/repos/langchain-ai/langchain/issues/7041/comments | 4 | 2023-07-02T06:57:36Z | 2023-12-18T23:49:38Z | https://github.com/langchain-ai/langchain/issues/7041 | 1,784,492,675 | 7,041 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Can we nest agents in one another? I would like an agent to use a set of tools + an LLM to extract entities, and then another agent after this should use some other tools + LLM to refine them.
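The usual pattern is to expose the inner agent as a tool of the outer one. A minimal plain-Python sketch of the shape (the extraction and refinement logic here is obviously a placeholder, not an LLM):

```python
# Outer "agent" delegates to an inner "agent" exposed as a tool.
def extraction_agent(text):
    # placeholder: pretend these are entities found by tools + an LLM
    return [w for w in text.split() if w.istitle()]

def refinement_agent(entities):
    # placeholder refinement: dedupe and normalize
    return sorted(set(e.strip(".,") for e in entities))

tools = {"extract_entities": extraction_agent}

def outer_agent(text):
    raw = tools["extract_entities"](text)  # inner agent used as a tool
    return refinement_agent(raw)

print(outer_agent("Alice met Bob in Paris. Alice smiled."))  # ['Alice', 'Bob', 'Paris']
```

In LangChain terms this typically means passing the inner executor's `run` method as the `func` of a `Tool` handed to the outer agent; hedged, since exact class names vary across versions.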
### Suggestion:
_No response_ | Issue: Nested agents | https://api.github.com/repos/langchain-ai/langchain/issues/7040/comments | 3 | 2023-07-02T06:55:45Z | 2024-01-15T17:33:23Z | https://github.com/langchain-ai/langchain/issues/7040 | 1,784,492,025 | 7,040 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Can we reuse intermediate steps for an agent defined using STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION?
This is my definition:
```
agent_executor = initialize_agent(
    tools,
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    return_intermediate_steps=True,
    agent_kwargs={
        "prefix": PREFIX,
        "suffix": SUFFIX,
        "input_variables": [
            "input",
            "agent_scratchpad",
            "intermediate_steps",
            "tools",
            "tool_names",
        ],
        "stop": ["\nObservation:"],
    },
)
```
But I'm getting this error:
```
Got mismatched input_variables. Expected: {'tools', 'agent_scratchpad', 'input', 'tool_names'}. Got: ['input', 'agent_scratchpad', 'intermediate_steps', 'tools', 'tool_names'] (type=value_error)
```
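The check that fails is a plain set comparison: the agent's prompt expects exactly `{'input', 'agent_scratchpad', 'tools', 'tool_names'}`, and the scratchpad is rendered from the intermediate steps internally, so listing `intermediate_steps` makes the sets differ. A stdlib sketch of the mismatch:

```python
# Reproduces the shape of the validation: the provided variables must
# match the expected set exactly.
expected = {"input", "agent_scratchpad", "tools", "tool_names"}
provided = ["input", "agent_scratchpad", "intermediate_steps",
            "tools", "tool_names"]

extra = set(provided) - expected
print(extra)  # {'intermediate_steps'}
```

A likely fix is therefore to drop `"intermediate_steps"` from `input_variables` and rely on `return_intermediate_steps=True` to get the steps back in the executor's output dict; offered as a suggestion, not verified against this exact version.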
### Suggestion:
_No response_ | Issue: Doubt on STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION | https://api.github.com/repos/langchain-ai/langchain/issues/7039/comments | 2 | 2023-07-02T06:54:27Z | 2023-11-29T16:09:14Z | https://github.com/langchain-ai/langchain/issues/7039 | 1,784,491,423 | 7,039 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Is there a difference between these two agent types? Where should one use either of them?
### Suggestion:
_No response_ | Issue: LLMSingleActionAgent vs STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION | https://api.github.com/repos/langchain-ai/langchain/issues/7038/comments | 2 | 2023-07-02T06:52:29Z | 2023-10-17T16:06:14Z | https://github.com/langchain-ai/langchain/issues/7038 | 1,784,490,903 | 7,038 |
[
"langchain-ai",
"langchain"
] | ### System Info
Using the latest version of langchain (0.0.220).
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [X] Async
### Reproduction
Steps to reproduce the behavior:
1. `pip install langchain==0.0.220`
2. Run `from langchain.callbacks.base import AsyncCallbackHandler` in a Python script
Running this simple import gives the following traceback on my system:
```
Traceback (most recent call last):
File "C:\Users\jblak\OneDrive\Documents\language-tutor\test.py", line 1, in <module>
from langchain.callbacks.base import AsyncCallbackHandler
File "C:\Users\jblak\anaconda3\lib\site-packages\langchain\__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "C:\Users\jblak\anaconda3\lib\site-packages\langchain\agents\__init__.py", line 2, in <module>
from langchain.agents.agent import (
File "C:\Users\jblak\anaconda3\lib\site-packages\langchain\agents\agent.py", line 16, in <module>
from langchain.agents.tools import InvalidTool
File "C:\Users\jblak\anaconda3\lib\site-packages\langchain\agents\tools.py", line 8, in <module>
from langchain.tools.base import BaseTool, Tool, tool
File "C:\Users\jblak\anaconda3\lib\site-packages\langchain\tools\__init__.py", line 3, in <module>
from langchain.tools.arxiv.tool import ArxivQueryRun
File "C:\Users\jblak\anaconda3\lib\site-packages\langchain\tools\arxiv\tool.py", line 12, in <module>
from langchain.utilities.arxiv import ArxivAPIWrapper
File "C:\Users\jblak\anaconda3\lib\site-packages\langchain\utilities\__init__.py", line 3, in <module>
from langchain.utilities.apify import ApifyWrapper
File "C:\Users\jblak\anaconda3\lib\site-packages\langchain\utilities\apify.py", line 5, in <module>
from langchain.document_loaders import ApifyDatasetLoader
File "C:\Users\jblak\anaconda3\lib\site-packages\langchain\document_loaders\__init__.py", line 52, in <module>
from langchain.document_loaders.github import GitHubIssuesLoader
File "C:\Users\jblak\anaconda3\lib\site-packages\langchain\document_loaders\github.py", line 37, in <module>
class GitHubIssuesLoader(BaseGitHubLoader):
File "pydantic\main.py", line 197, in pydantic.main.ModelMetaclass.__new__
File "pydantic\fields.py", line 506, in pydantic.fields.ModelField.infer
File "pydantic\fields.py", line 436, in pydantic.fields.ModelField.__init__
File "pydantic\fields.py", line 552, in pydantic.fields.ModelField.prepare
File "pydantic\fields.py", line 663, in pydantic.fields.ModelField._type_analysis
File "pydantic\fields.py", line 808, in pydantic.fields.ModelField._create_sub_type
File "pydantic\fields.py", line 436, in pydantic.fields.ModelField.__init__
File "pydantic\fields.py", line 552, in pydantic.fields.ModelField.prepare
File "pydantic\fields.py", line 668, in pydantic.fields.ModelField._type_analysis
File "C:\Users\jblak\anaconda3\lib\typing.py", line 852, in __subclasscheck__
return issubclass(cls, self.__origin__)
TypeError: issubclass() arg 1 must be a class
```
### Expected behavior
Running this line should import the `AsyncCallbackHandler` without error. | TypeError: issubclass() arg 1 must be a class (importing AsyncCallbackHandler) | https://api.github.com/repos/langchain-ai/langchain/issues/7037/comments | 2 | 2023-07-02T06:04:32Z | 2023-10-09T16:06:06Z | https://github.com/langchain-ai/langchain/issues/7037 | 1,784,472,838 | 7,037 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I was skimming through the repository for the MongoDB Agent and I discovered that it does not exist. Is it feasible to develop a MongoDB agent that establishes a connection with MongoDB, generates MongoDB queries based on given questions, and retrieves the corresponding data?
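For a sense of the shape such an agent could take, here is a deliberately tiny plain-Python sketch: an in-memory list stands in for a collection, and the "query generation" is a trivial keyword filter rather than an LLM call (every name here is hypothetical):

```python
# Toy stand-in for a MongoDB agent loop: question -> filter -> results.
collection = [
    {"name": "order-1", "status": "shipped"},
    {"name": "order-2", "status": "pending"},
]

def question_to_filter(question):
    # a real agent would have an LLM emit this filter document
    return {"status": "pending"} if "pending" in question else {}

def find(coll, flt):
    return [doc for doc in coll
            if all(doc.get(k) == v for k, v in flt.items())]

results = find(collection, question_to_filter("show pending orders"))
print(results)  # [{'name': 'order-2', 'status': 'pending'}]
```

A real implementation could mirror the existing SQL agent: a toolkit exposing collection/schema inspection plus a query-execution tool, with the LLM producing the filter or aggregation documents.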
### Motivation
Within my organization, a significant portion of our data is stored in MongoDB. I intend to utilize this agent to establish connections with our databases and develop a practical application around it.
### Your contribution
If it is indeed possible, I am willing to work on implementing this feature. However, I would greatly appreciate some initial guidance or assistance from someone knowledgeable in this area. | Agent for MongoDB | https://api.github.com/repos/langchain-ai/langchain/issues/7036/comments | 17 | 2023-07-02T05:49:29Z | 2024-04-12T16:16:25Z | https://github.com/langchain-ai/langchain/issues/7036 | 1,784,467,107 | 7,036 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I want to chain the ```SelfQueryRetriever``` with ```ConversationalRetrievalChain``` and stream the output. I tested it in a synchronous function and it works. However, when I convert it to an async function, it hangs and the execution never finishes, with no output either. I went through the source code and found that ```aget_relevant_documents()``` for ```SelfQueryRetriever``` is not implemented:
```python
async def aget_relevant_documents(self, query: str) -> List[Document]:
    raise NotImplementedError
```
I'm not sure whether the issue comes from this, since I am using ```acall()``` for ```ConversationalRetrievalChain```, but I do not receive any error message during execution. Is there a temporary workaround for using ```SelfQueryRetriever``` in an async function, or should I wait for a release that implements it?
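Until `aget_relevant_documents` is implemented, a common stopgap is running the synchronous method in a thread executor so the event loop is not blocked. Sketch (the `get_relevant_documents` function below is a dummy standing in for the real sync retriever):

```python
import asyncio

def get_relevant_documents(query):  # stands in for the sync retriever call
    return [f"doc matching {query!r}"]

async def aget_relevant_documents(query):
    loop = asyncio.get_running_loop()
    # run the blocking sync call in a worker thread
    return await loop.run_in_executor(None, get_relevant_documents, query)

docs = asyncio.run(aget_relevant_documents("fast cars"))
print(docs)  # ["doc matching 'fast cars'"]
```

With LangChain this would mean subclassing `SelfQueryRetriever` and overriding `aget_relevant_documents` with such an executor call; a sketch, not tested against this version.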
### Suggestion:
_No response_ | SelfQueryRetriever not working in async call | https://api.github.com/repos/langchain-ai/langchain/issues/7035/comments | 1 | 2023-07-02T05:17:42Z | 2023-10-08T16:05:40Z | https://github.com/langchain-ai/langchain/issues/7035 | 1,784,455,921 | 7,035 |
[
"langchain-ai",
"langchain"
] | Hi,
first up, thank you for making langchain! I was playing around a little and found a minor issue with loading online PDFs, and would like to start contributing to langchain maybe by fixing this.
### System Info
langchain 0.0.220, google collab, python 3.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.document_loaders import PyMuPDFLoader
loader = PyMuPDFLoader('https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf')
pages = loader.load()
pages[0].metadata
```
<img width="977" alt="image" src="https://github.com/hwchase17/langchain/assets/21276922/4ededc60-bb03-4502-a8c8-3c221ab109c4">
### Expected behavior
Instead of giving the temporary file path, which is not useful and deleted shortly after, it could be more helpful if the source is set to be the URL passed to it. This would require some fixes in the `langchain/document_loaders/pdf.py` file. | Loading online PDFs gives temporary file path as source in metadata | https://api.github.com/repos/langchain-ai/langchain/issues/7034/comments | 4 | 2023-07-01T23:24:53Z | 2023-11-29T20:07:47Z | https://github.com/langchain-ai/langchain/issues/7034 | 1,784,312,865 | 7,034 |
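A stopgap for the issue above until the loader is fixed: overwrite each page's `source` metadata after loading. Plain-Python sketch with a dummy document class (only a `metadata` dict is assumed, which LangChain's `Document` does expose):

```python
# After loading an online PDF, rewrite each page's source from the
# temporary file path back to the original URL.
class Doc:  # stand-in for langchain.schema.Document
    def __init__(self, metadata):
        self.metadata = metadata

url = "https://www.w3.org/WAI/ER/tests/xhtml/testfiles/resources/pdf/dummy.pdf"
pages = [Doc({"source": "/tmp/tmpabc123/tmp.pdf", "page": 0})]

for page in pages:
    page.metadata["source"] = url  # keep the URL instead of the temp path

print(pages[0].metadata["source"])
```

With the real loader this would be the same one-line loop over the result of `loader.load()`; the proper fix in `langchain/document_loaders/pdf.py` would make this unnecessary.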
[
"langchain-ai",
"langchain"
] | ### Feature request
## Summary
Implement a new type of Langchain agent that reasons using a PLoT architecture as described in this paper:
https://arxiv.org/abs/2306.12672
## Notes
* Such an agent would need to use a probabilistic programming language (the paper above used Church) to generate code that describes its world model. This means that it would only work for LLMs that can generate code, and it would also need access to a Church interpreter (if that is the language that is chosen)
### Motivation
An agent which uses probabilistic language of thought to construct a coherent world model would be better at reasoning tasks involving uncertainty and ambiguity. Basically, it would think more like a human, using an understanding of the world that is external to the weights & biases on which it was trained.
### Your contribution
I've already started work on this and I intend to submit a PR when I have time to finish it! I'm pretty busy so any help would be appreciated! Feel free to reach out on discord @jasondotparse or twitter https://twitter.com/jasondotparse if you are down to help! | Probabilistic Language of Thought (PLoT) Agent Implementation | https://api.github.com/repos/langchain-ai/langchain/issues/7027/comments | 2 | 2023-07-01T18:19:31Z | 2023-10-07T16:04:57Z | https://github.com/langchain-ai/langchain/issues/7027 | 1,784,135,218 | 7,027 |
[
"langchain-ai",
"langchain"
] | ### System Info
OutputParserException: Could not parse LLM output: `df = [] for i in range(len(df)):`
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I used an open-source LLM, HuggingFaceHub(repo_id="google/flan-t5-xxl"), rather than an OpenAI LLM, to create the agent below:
agent = create_csv_agent(HuggingFaceHub(repo_id="google/flan-t5-xxl"), r"D:\ML\titanic\titanic.csv", verbose=True)
The agent was created successfully, but when I ran the queries below, I got the error above:
agent.run("Plot a bar chart comparing Survived vs Dead for Embarked")
agent.run("Find the count of missing values in each column")
agent.run("Fix the issues with missing values for column Embarked with mode")
agent.run("Drop cabin column")
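The failure is the agent's parser expecting a structured `Action:`/`Final Answer:` layout that flan-t5-xxl often does not produce. A plain-Python sketch of a lenient fallback parser (illustrative only; this is not LangChain's parser):

```python
import re

# Parse "Action: <tool>\nAction Input: <arg>"; fall back to treating the
# raw text as a final answer instead of raising, which is the spirit of
# handle_parsing_errors-style recovery.
def parse(llm_output):
    m = re.search(r"Action:\s*(.*?)\nAction Input:\s*(.*)", llm_output, re.DOTALL)
    if m:
        return ("action", m.group(1).strip(), m.group(2).strip())
    return ("final", llm_output.strip())

print(parse("Action: python_repl_ast\nAction Input: df.isnull().sum()"))
print(parse("df = [] for i in range(len(df)):"))  # unparseable -> final
```

In practice, smaller instruction-tuned models frequently fail the ReAct format; using a stronger model, or enabling the agent's parsing-error handling if the installed version supports `handle_parsing_errors`, are the usual remedies.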
### Expected behavior
I should get a plot or other expected output. | OutputParserException: Could not parse LLM output: `df = [] for i in range(len(df)):` | https://api.github.com/repos/langchain-ai/langchain/issues/7024/comments | 1 | 2023-07-01T16:41:37Z | 2023-10-07T16:05:02Z | https://github.com/langchain-ai/langchain/issues/7024 | 1,784,075,187 | 7,024
[
"langchain-ai",
"langchain"
] | ### System Info
```
langchain: latest as of yesterday
M2 Pro
16 GB
Macosx
Darwin UAVALOS-M-NR30 22.5.0 Darwin Kernel Version 22.5.0: Thu Jun 8 22:22:23 PDT 2023; root:xnu-8796.121.3~7/RELEASE_ARM64_T6020 arm64
Python 3.10.10
GNU Make 3.81
Apple clang version 14.0.3 (clang-1403.0.22.14.1)
```
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
1. Install the latest requirements
2. Try to load the `gtr-t5-xxl` embedding model from file.
```
!pip uninstall llama-cpp-python -y
!CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install -U llama-cpp-python --no-cache-dir
!pip install 'llama-cpp-python[server]'
!pip install sentence_transformers
!pip install git+https://github.com/huggingface/peft.git
!pip install git+https://github.com/huggingface/transformers.git
!pip install -v datasets loralib sentencepiece
!pip -v install bitsandbytes accelerate
!pip -v install langchain
!pip install scipy
!pip install xformers
!pip install langchain faiss-cpu
# needed to load git repo
!pip install GitPython
!pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu
# note: this is the latest repo cloned locally
from langchain.embeddings import HuggingFaceEmbeddings
embeddings = HuggingFaceEmbeddings(model_name='/Users/uavalos/Documents/gtr-t5-xxl')
```
### Expected behavior
* that it loads the embeddings just fine
* in fact, it was working a few days ago
* it stopped working after creating a new conda environment and reinstalling everything fresh
* if you look at the logs, it's looking for a module that's already installed. It's also failing to import the `google` module... I get the same error if I explicitly install that as well
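One plausible culprit (an inference from the traceback's `from google.protobuf import descriptor` line, not something I can confirm for this machine): the `google` namespace here is provided by the `protobuf` pip distribution, so `pip install protobuf`, not a package literally named `google`, is what restores the import. A quick environment check:

```python
import importlib.util

# Returns True when the protobuf distribution (which supplies the
# google.protobuf namespace package) is importable in this environment.
def has_protobuf():
    try:
        return importlib.util.find_spec("google.protobuf") is not None
    except ModuleNotFoundError:  # raised when the parent namespace is absent
        return False

print(has_protobuf())
```

If this prints False in the new conda environment, reinstalling `protobuf` (and possibly `sentencepiece`) would be the first thing to try.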
Failure logs
```
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
File ~/miniconda3/envs/llama3/lib/python3.11/site-packages/langchain/embeddings/huggingface.py:51, in HuggingFaceEmbeddings.__init__(self, **kwargs)
50 try:
---> 51 import sentence_transformers
53 except ImportError as exc:
File ~/miniconda3/envs/llama3/lib/python3.11/site-packages/sentence_transformers/__init__.py:3
2 __MODEL_HUB_ORGANIZATION__ = 'sentence-transformers'
----> 3 from .datasets import SentencesDataset, ParallelSentencesDataset
4 from .LoggingHandler import LoggingHandler
File ~/miniconda3/envs/llama3/lib/python3.11/site-packages/sentence_transformers/datasets/__init__.py:3
2 from .NoDuplicatesDataLoader import NoDuplicatesDataLoader
----> 3 from .ParallelSentencesDataset import ParallelSentencesDataset
4 from .SentencesDataset import SentencesDataset
File ~/miniconda3/envs/llama3/lib/python3.11/site-packages/sentence_transformers/datasets/ParallelSentencesDataset.py:4
3 import gzip
----> 4 from .. import SentenceTransformer
5 from ..readers import InputExample
File ~/miniconda3/envs/llama3/lib/python3.11/site-packages/sentence_transformers/SentenceTransformer.py:11
10 from numpy import ndarray
---> 11 import transformers
12 from huggingface_hub import HfApi, HfFolder, Repository, hf_hub_url, cached_download
File ~/miniconda3/envs/llama3/lib/python3.11/site-packages/transformers/__init__.py:26
25 # Check the dependencies satisfy the minimal versions required.
---> 26 from . import dependency_versions_check
27 from .utils import (
28 OptionalDependencyNotAvailable,
29 _LazyModule,
(...)
42 logging,
43 )
File ~/miniconda3/envs/llama3/lib/python3.11/site-packages/transformers/dependency_versions_check.py:16
15 from .dependency_versions_table import deps
---> 16 from .utils.versions import require_version, require_version_core
19 # define which module versions we always want to check at run time
20 # (usually the ones defined in `install_requires` in setup.py)
21 #
22 # order specific notes:
23 # - tqdm must be checked before tokenizers
File ~/miniconda3/envs/llama3/lib/python3.11/site-packages/transformers/utils/__init__.py:188
186 else:
187 # just to get the expected `No module named 'google.protobuf'` error
--> 188 from . import sentencepiece_model_pb2
191 WEIGHTS_NAME = "pytorch_model.bin"
File ~/miniconda3/envs/llama3/lib/python3.11/site-packages/transformers/utils/sentencepiece_model_pb2.py:17
1 # Generated by the protocol buffer compiler. DO NOT EDIT!
2 # source: sentencepiece_model.proto
3
(...)
15 # See the License for the specific language governing permissions and
16 # limitations under the License.
---> 17 from google.protobuf import descriptor as _descriptor
18 from google.protobuf import message as _message
ModuleNotFoundError: No module named 'google'
The above exception was the direct cause of the following exception:
ImportError Traceback (most recent call last)
Cell In[8], line 2
1 from langchain.embeddings import HuggingFaceEmbeddings
----> 2 embeddings = HuggingFaceEmbeddings(model_name='/Users/uavalos/Documents/gtr-t5-xxl')
File ~/miniconda3/envs/llama3/lib/python3.11/site-packages/langchain/embeddings/huggingface.py:54, in HuggingFaceEmbeddings.__init__(self, **kwargs)
51 import sentence_transformers
53 except ImportError as exc:
---> 54 raise ImportError(
55 "Could not import sentence_transformers python package. "
56 "Please install it with `pip install sentence_transformers`."
57 ) from exc
59 self.client = sentence_transformers.SentenceTransformer(
60 self.model_name, cache_folder=self.cache_folder, **self.model_kwargs
61 )
ImportError: Could not import sentence_transformers python package. Please install it with `pip install sentence_transformers`.
``` | Could not import sentence_transformers python package. | https://api.github.com/repos/langchain-ai/langchain/issues/7019/comments | 9 | 2023-07-01T15:22:37Z | 2024-03-18T14:01:19Z | https://github.com/langchain-ai/langchain/issues/7019 | 1,784,012,562 | 7,019 |
[
"langchain-ai",
"langchain"
] | ### Feature request
When I use a jinja2 template, I get the following problem.
template:
```
{% if a %}\n
{{a}}\n
{% endif %}\n
{{b}}
```
and I pass only the b variable to the template render; then the output looks like this
```
\n
b
```
Each variable placeholder is followed by a '\n'.
Since a is not passed in, a '\n' is left after rendering.
I want the right output to be this:
```
b
```
In jinja2, if you create the template via an Environment with trim_blocks enabled, jinja2 can ignore the '\n'.
This is the original code:
```python
def jinja2_formatter(template: str, **kwargs: Any) -> str:
"""Format a template using jinja2."""
try:
from jinja2 import Template
except ImportError:
raise ImportError(
"jinja2 not installed, which is needed to use the jinja2_formatter. "
"Please install it with `pip install jinja2`."
)
return Template(template).render(**kwargs)
```
I tried modifying it a bit
```python
def jinja2_formatter(template: str, **kwargs: Any) -> str:
"""Format a template using jinja2."""
try:
from jinja2 import Environment
except ImportError:
raise ImportError(
"jinja2 not installed, which is needed to use the jinja2_formatter. "
"Please install it with `pip install jinja2`."
)
env = Environment(trim_blocks=True)
jinja_template = env.from_string(template)
return jinja_template.render(**kwargs)
```
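To make the difference concrete, here is a minimal sketch (assuming jinja2 is installed) that compares the default `Template` rendering with an `Environment` that enables `trim_blocks` — this is illustration only, not the actual LangChain code path:

```python
from jinja2 import Environment, Template

template = "{% if a %}\n{{a}}\n{% endif %}\n{{b}}"

# Default rendering: the newline after {% endif %} survives,
# leaving a stray blank line before b.
default_out = Template(template).render(b="b")
print(repr(default_out))  # '\nb'

# With trim_blocks=True the first newline after a block tag is
# removed, so the leftover blank line disappears.
env = Environment(trim_blocks=True)
trimmed_out = env.from_string(template).render(b="b")
print(repr(trimmed_out))  # 'b'
```

Since `a` is undefined it is falsy, so the `{% if %}` body is skipped either way; only the trailing newline handling differs.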
### Motivation
I often use the control semantics of Jinja2 to construct prompt templates, creating several different prompts by passing in different variables.
However, since the Jinja2 render in Langchain does not support trim_blocks, it always results in unnecessary blank lines.
Perhaps the LLM doesn't care about these extra lines, but as a human being, my OCD makes me feel uncomfortable with them.
### Your contribution
The jinja2_formatter function is too low-level within the Langchain code.
Although the modification is simple, I fear that the PR I submit could potentially affect other parts of the code.
So, if you have time, perhaps you could personally implement this feature. | jinja2 formate suppot trim_blocks mode | https://api.github.com/repos/langchain-ai/langchain/issues/7018/comments | 1 | 2023-07-01T14:43:22Z | 2023-10-07T16:05:07Z | https://github.com/langchain-ai/langchain/issues/7018 | 1,783,981,699 | 7,018 |
[
"langchain-ai",
"langchain"
] | ### Feature request
the following is part of the format code for PipelinePromptTemplate
```python
def _get_inputs(inputs: dict, input_variables: List[str]) -> dict:
return {k: inputs[k] for k in input_variables}
class PipelinePromptTemplate(BasePromptTemplate):
....
def format_prompt(self, **kwargs: Any) -> PromptValue:
for k, prompt in self.pipeline_prompts:
_inputs = _get_inputs(kwargs, prompt.input_variables)
if isinstance(prompt, BaseChatPromptTemplate):
kwargs[k] = prompt.format_messages(**_inputs)
else:
kwargs[k] = prompt.format(**_inputs)
_inputs = _get_inputs(kwargs, self.final_prompt.input_variables)
return self.final_prompt.format_prompt(**_inputs)
```
the function _get_inputs should support missing-keyword handling, like:
```python
def _get_inputs(inputs: dict, input_variables: List[str]) -> dict:
return {k: inputs.get(k) for k in input_variables}
```
### Motivation
I primarily use the Jinja2 template format and frequently use control statements. Consequently, there are times when I choose not to pass in certain variables. Thus, I hope that PipelinePromptTemplate could support handling missing variables.
### Your contribution
Perhaps I can submit a PR to add this feature or fix this bug, if you don't have the time to address it.
[
"langchain-ai",
"langchain"
Attaching my code:
```
llm = OpenAI(model_name='gpt-4', temperature=0)
embeddings = OpenAIEmbeddings(model='text-embedding-ada-002')
docs = TextLoader("state_of_the_union.txt").load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(docs)
state_of_union_store = Chroma.from_documents(
texts,
embeddings,
collection_name="state-of-union"
)
vectorstore_info = VectorStoreInfo(
name="state_of_union_address",
    description="the most recent state of the Union address",
vectorstore=state_of_union_store,
)
toolkit = VectorStoreToolkit(vectorstore_info=vectorstore_info)
agent_executor = create_vectorstore_agent(llm=llm, toolkit=toolkit, verbose=True)
agent_executor("What did biden say about ketanji brown jackson in the state of the union address?")
``` | How to save embeddings when using Agent for local document Q&A to avoid unnecessary waste by rereading API calls to generate embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/7011/comments | 1 | 2023-07-01T08:31:43Z | 2023-10-07T16:05:17Z | https://github.com/langchain-ai/langchain/issues/7011 | 1,783,681,691 | 7,011 |
[
"langchain-ai",
"langchain"
] | I want to add memory context for Agent, so I referred to the document [here](https://python.langchain.com/docs/modules/memory/how_to/agent_with_memory)
There is such a step in it

I want to implement the same step in the `create_vectorstore_agent `function,

but I found that `create_vectorstore_agent `is a pre-defined,
it has no way to pass some parameters, such as suffix(into `ZeroShotAgent.create_prompt`),
which means, if I want to use the `agent_with_memory `function ,
I must build the agent from scratch, and cannot use `create_vectorstore_agent`
| How to add memory with` create_vectorstore_agent` | https://api.github.com/repos/langchain-ai/langchain/issues/7010/comments | 2 | 2023-07-01T08:18:12Z | 2023-10-07T16:05:22Z | https://github.com/langchain-ai/langchain/issues/7010 | 1,783,670,404 | 7,010 |
[
"langchain-ai",
"langchain"
] | ### System Info
I am using a create_pandas_dataframe_agent like so:
```
create_pandas_dataframe_agent(ChatOpenAI(model_name='gpt-4', temperature=0), df, verbose=True,
max_iterations=2,
early_stopping_method="generate",
handle_parsing_errors="I'm sorry, but the question you have asked is not available within the dataset. Is there anything else I can help you with?")
```
When I try to prompt it to do anything that requires a regex the output i get:
```
Thought: The client wants to know the top 5 hashtags used in the tweets. To find this, I need to extract all the hashtags from the 'full_text' column of the dataframe, count the frequency of each hashtag, and then return the top 5. However, the dataframe does not contain a column for hashtags. Therefore, I need to create a new column 'hashtags' in the dataframe where each entry is a list of hashtags found in the corresponding 'full_text'. Then, I can count the frequency of each hashtag in this new column and return the top 5.
Action: python_repl_ast
Action Input:
```python
import re
import pandas as pd
# Function to extract hashtags from a text
def extract_hashtags(text):
return re.findall(r'\#\w+', text)
# Create a new column 'hashtags' in the dataframe
df['hashtags'] = df['full_text'].apply(extract_hashtags)
# Flatten the list of hashtags and count the frequency of each hashtag
hashtag_counts = pd.Series([hashtag for hashtags in df['hashtags'] for hashtag in hashtags]).value_counts()
# Get the top 5 hashtags
top_5_hashtags = hashtag_counts.head(5)
top_5_hashtags
```
**Observation: NameError: name 're' is not defined**
```
Can anyone help?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
use this agent:
create_pandas_dataframe_agent(ChatOpenAI(model_name='gpt-4', temperature=0), df, verbose=True,
max_iterations=2,
early_stopping_method="generate",
handle_parsing_errors="I'm sorry, but the question you have asked is not available within the dataset. Is there anything else I can help you with?")
then prompt it with
top 5 hashtags used
### Expected behavior
the regex should work and return the answer | python tool issue: NameError: name 're' is not defined | https://api.github.com/repos/langchain-ai/langchain/issues/7009/comments | 2 | 2023-07-01T07:39:21Z | 2023-10-07T16:05:28Z | https://github.com/langchain-ai/langchain/issues/7009 | 1,783,632,265 | 7,009 |
[
"langchain-ai",
"langchain"
] | ### Feature request
ElasticSearch embedding needs a function to add a single text string, since Azure OpenAI only supports a single text input per request right now.
Message from Azure OpenAI Embedding:
embed_documents(texts)
# openai.error.InvalidRequestError: Too many inputs for model None.
# The max number of inputs is 1. We hope to increase the number of inputs per request soon.
# Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions.
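As a hedged sketch of the kind of fallback I have in mind — the `embed_single` callable here is a stand-in for whatever single-input embedding call Azure allows, not a real LangChain API:

```python
def embed_documents_one_by_one(texts, embed_single):
    # Azure OpenAI currently rejects batched inputs ("the max number
    # of inputs is 1"), so issue one request per text instead.
    return [embed_single(text) for text in texts]

# Stand-in for a real single-text embedding call.
fake_embed = lambda text: [float(len(text))]

print(embed_documents_one_by_one(["foo", "hello"], fake_embed))
# [[3.0], [5.0]]
```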
### Motivation
To support using the Azure OpenAI embedding API.
### Your contribution
I can do more testing once this is fixed.
[
"langchain-ai",
"langchain"
] | ### Feature request
[https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/faiss](https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/faiss)
There's a section that allows you to filter the documents stored in FAISS
It will be cool to allow
results = db.similarity_search("foo", filter=dict(page=1), k=1, fetch_k=4)
when you are doing
vectorstore.as_retriever(<<Include the filter here>>)
This can allow you to select a subset of your vector documents to chat with them.
This is if I have multiple documents from multiple sources loaded in the vectorstroe.
### Motivation
When I do a conversational
qa = ConversationalRetrievalChain(
retriever=vectorstore.as_retriever(), ...
It would allow using only a portion of the documents:
qa = ConversationalRetrievalChain(
retriever=vectorstore.as_retriever(<<Filter goes here>>), ...
### Your contribution
I can test it :-) | FAISS Support for filter while using the as_retriever() | https://api.github.com/repos/langchain-ai/langchain/issues/7002/comments | 1 | 2023-07-01T02:31:45Z | 2023-10-07T16:05:38Z | https://github.com/langchain-ai/langchain/issues/7002 | 1,783,366,350 | 7,002 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python version: Python 3.10.6
Langchain version: 0.0.219
OS: Ubuntu 22.04
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have simple code like this:
```
from langchain.agents import create_csv_agent
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.agents.agent_types import AgentType
import os
import sys
directory = './test'
f = []
for filename in os.listdir(directory):
if filename.endswith(".csv"):
f.append(directory + "/" +filename)
agent = create_csv_agent(
ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0613"),
f,
verbose=True,
agent_type=AgentType.OPENAI_FUNCTIONS,
)
qry ="how many rows are there?"
while True:
if not qry:
qry = input("Q: ")
if qry in ['quit', 'q', 'exit']:
sys.exit()
agent.run(qry)
qry = None
```
I'm using the Titanic dataset:
```
https://github.com/datasciencedojo/datasets/blob/master/titanic.csv
```
Error as below
```
$ python3 langchain-csv.py
> Entering new chain...
Traceback (most recent call last):
File "/home/yockgenm/.local/lib/python3.10/site-packages/langchain/agents/openai_functions_agent/base.py", line 112, in _parse_ai_message
_tool_input = json.loads(function_call["arguments"])
File "/usr/lib/python3.10/json/__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "/usr/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/data/mytuition/langchain-csv.py", line 32, in <module>
agent.run(qry)
File "/home/yockgenm/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 290, in run
return self(args[0], callbacks=callbacks, tags=tags)[_output_key]
File "/home/yockgenm/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 166, in __call__
raise e
File "/home/yockgenm/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 160, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/yockgenm/.local/lib/python3.10/site-packages/langchain/agents/agent.py", line 987, in _call
next_step_output = self._take_next_step(
File "/home/yockgenm/.local/lib/python3.10/site-packages/langchain/agents/agent.py", line 803, in _take_next_step
raise e
File "/home/yockgenm/.local/lib/python3.10/site-packages/langchain/agents/agent.py", line 792, in _take_next_step
output = self.agent.plan(
File "/home/yockgenm/.local/lib/python3.10/site-packages/langchain/agents/openai_functions_agent/base.py", line 212, in plan
agent_decision = _parse_ai_message(predicted_message)
File "/home/yockgenm/.local/lib/python3.10/site-packages/langchain/agents/openai_functions_agent/base.py", line 114, in _parse_ai_message
raise OutputParserException(
langchain.schema.OutputParserException: Could not parse tool input: {'name': 'python', 'arguments': 'len(df)'} because the `arguments` is not valid JSON.
```
### Expected behavior
Langchain should provide the answer for the total number of rows.
[
"langchain-ai",
"langchain"
] | ### System Info
Most recent versions of all
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [x] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.embeddings.openai import OpenAIEmbeddings
File "C:\Users\hasaa\anaconda3\lib\site-packages\langchain\__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "C:\Users\hasaa\anaconda3\lib\site-packages\langchain\agents\__init__.py", line 2, in <module>
from langchain.agents.agent import (
File "C:\Users\hasaa\anaconda3\lib\site-packages\langchain\agents\agent.py", line 30, in <module>
from langchain.prompts.few_shot import FewShotPromptTemplate
File "C:\Users\hasaa\anaconda3\lib\site-packages\langchain\prompts\__init__.py", line 12, in <module>
from langchain.prompts.example_selector import (
File "C:\Users\hasaa\anaconda3\lib\site-packages\langchain\prompts\example_selector\__init__.py", line 4, in <module>
from langchain.prompts.example_selector.semantic_similarity import (
File "C:\Users\hasaa\anaconda3\lib\site-packages\langchain\prompts\example_selector\semantic_similarity.py", line 10, in <module>
from langchain.vectorstores.base import VectorStore
File "C:\Users\hasaa\anaconda3\lib\site-packages\langchain\vectorstores\__init__.py", line 2, in <module>
from langchain.vectorstores.alibabacloud_opensearch import (
File "C:\Users\hasaa\anaconda3\lib\site-packages\langchain\vectorstores\alibabacloud_opensearch.py", line 9, in <module>
from langchain.vectorstores.base import VectorStore
File "C:\Users\hasaa\anaconda3\lib\site-packages\langchain\vectorstores\base.py", line 372, in <module>
class VectorStoreRetriever(BaseRetriever, BaseModel):
File "C:\Users\hasaa\anaconda3\lib\site-packages\langchain\vectorstores\base.py", line 388, in VectorStoreRetriever
def validate_search_type(cls, values: Dict) -> Dict:
File "pydantic\class_validators.py", line 126, in pydantic.class_validators.root_validator.dec
File "pydantic\class_validators.py", line 144, in pydantic.class_validators._prepare_validator
pydantic.errors.ConfigError: duplicate validator function "langchain.vectorstores.base.VectorStoreRetriever.validate_search_type"; if this is intended, set `allow_reuse=True`
### Expected behavior
The error I am facing is due to a duplicate validator function in the VectorStoreRetriever | Unable to load OPENAI Embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/6999/comments | 1 | 2023-06-30T22:26:08Z | 2023-10-06T16:05:33Z | https://github.com/langchain-ai/langchain/issues/6999 | 1,783,222,113 | 6,999 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
You can get chat completions by passing one or more messages to the chat model. The response will be a message. The types of messages currently supported in LangChain are AIMessage, HumanMessage, SystemMessage, and ChatMessage -- ChatMessage takes in an arbitrary role parameter. Most of the time, you'll just be dealing with HumanMessage, AIMessage, and SystemMessage.
```
from langchain.chat_models import ChatOpenAI
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage
)
chat = ChatOpenAI(temperature=0)
chat.predict_messages([HumanMessage(content="Translate this sentence from English to French. I love programming.")])
# >> AIMessage(content="J'aime programmer.", additional_kwargs={})
```
It is useful to understand how chat models are different from a normal LLM, but it can often be handy to just be able to treat them the same. LangChain makes that easy by also exposing an interface through which you can interact with a chat model as you would a normal LLM. You can access this through the predict interface.
```
chat.predict("Translate this sentence from English to French. I love programming.")
# >> J'aime programmer
```
### Idea or request for content:
This code gives me the error:
```
AttributeError: 'ChatOpenAI' object has no attribute 'predict'
```
Is there really an exposed interface for predict? | DOC: <Please write a comprehensive title after the 'DOC: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/6998/comments | 0 | 2023-06-30T20:58:25Z | 2023-06-30T21:02:55Z | https://github.com/langchain-ai/langchain/issues/6998 | 1,783,160,731 | 6,998 |
[
"langchain-ai",
"langchain"
] | ### System Info
Macbook OSX
Python 3.11.4 (main, Jun 25 2023, 18:18:14) [Clang 14.0.3 (clang-1403.0.22.14.1)] on darwin
LangChain Environment:
sdk_version:0.0.17
library:langchainplus_sdk
platform:macOS-13.3.1-arm64-arm-64bit
runtime:python
runtime_version:3.11.4
I'm getting this error, and no matter what configurations I try, I cannot get past it.
```
AttributeError: 'Credentials' object has no attribute 'with_scopes'
```
My abbreviated code snippet:
```
from langchain.document_loaders import GoogleDriveLoader
loader = GoogleDriveLoader(
folder_id="xxxx", recursive=True,
)
docs = loader.load()
```
```
cat ~/.credentials/credentials.json
{
"installed": {
"client_id": "xxxxxxxxx.apps.googleusercontent.com",
"project_id": "xxxxxxxxx",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_secret": "xxxxxxxxx",
"redirect_uris": [
"http://localhost"
]
}
}
```
```
#gcloud info
Google Cloud SDK [437.0.1]
Platform: [Mac OS X, arm] uname_result(system='Darwin', node='xxxxs-MBP-3', release='22.4.0', version='Darwin Kernel Version 22.4.0: Mon Mar 6 20:59:28 PST 2023; root:xnu-8796.101.5~3/RELEASE_ARM64_T6000', machine='arm64')
Locale: (None, 'UTF-8')
Python Version: [3.11.4 (main, Jun 25 2023, 18:18:14) [Clang 14.0.3 (clang-1403.0.22.14.1)]]
Python Location: [/Users/xxxx/.asdf/installs/python/3.11.4/bin/python3]
OpenSSL: [OpenSSL 1.1.1u 30 May 2023]
Requests Version: [2.25.1]
urllib3 Version: [1.26.9]
Default CA certs file: [/Users/xxxx/google-cloud-sdk/lib/third_party/certifi/cacert.pem]
Site Packages: [Disabled]
Installation Root: [/Users/xxxx/google-cloud-sdk]
Installed Components:
gsutil: [5.24]
core: [2023.06.30]
bq: [2.0.93]
gcloud-crc32c: [1.0.0]
System PATH: [/Users/xxxx/.asdf/plugins/python/shims:/Users/xxxx/.asdf/installs/python/3.11.4/bin:/Users/xxxx/.rvm/gems/ruby-3.0.3/bin:/Users/xxxx/.rvm/gems/ruby-3.0.3@global/bin:/Users/xxxx/.rvm/rubies/ruby-3.0.3/bin:/Users/xxxx/.rvm/bin:/Users/xxxx/google-cloud-sdk/bin:/Users/xxxx/.asdf/shims:/opt/homebrew/opt/asdf/libexec/bin:/Users/xxxx/.sdkman/candidates/gradle/current/bin:/Users/xxxx/.jenv/shims:/Users/xxxx/.jenv/bin:/Users/xxxx/.nvm/versions/node/v17.6.0/bin:/opt/homebrew/opt/mysql@5.7/bin:/opt/homebrew/opt/mysql-client/bin:/opt/homebrew/bin:/usr/local/bin:/System/Cryptexes/App/usr/bin:/usr/bin:/bin:/usr/sbin:/sbin:/Library/Apple/usr/bin:/Applications/Postgres.app/Contents/Versions/latest/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/local/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/bin:/var/run/com.apple.security.cryptexd/codex.system/bootstrap/usr/appleinternal/bin:/Users/xxxx/Library/Android/sdk/emulator:/Users/xxxx/Library/Android/sdk/platform-tools]
Python PATH: [/Users/xxxx/google-cloud-sdk/lib/third_party:/Users/xxxx/google-cloud-sdk/lib:/Users/xxxx/.asdf/installs/python/3.11.4/lib/python311.zip:/Users/xxxx/.asdf/installs/python/3.11.4/lib/python3.11:/Users/xxxx/.asdf/installs/python/3.11.4/lib/python3.11/lib-dynload]
Cloud SDK on PATH: [True]
Kubectl on PATH: [/usr/local/bin/kubectl]
Installation Properties: [/Users/xxxx/google-cloud-sdk/properties]
User Config Directory: [/Users/xxx/.config/gcloud]
Active Configuration Name: [default]
Active Configuration Path: [/Users/xxxx/.config/gcloud/configurations/config_default]
Account: [xxxxxx@xxx.com]
Project: [project-name-replaced]
Current Properties:
[compute]
zone: [europe-west1-b] (property file)
region: [europe-west1] (property file)
[core]
account: [xxxxxx@xxx.com] (property file)
disable_usage_reporting: [True] (property file)
project: [project-name-replaced] (property file)
Logs Directory: [/Users/xxx/.config/gcloud/logs]
Last Log File: [/Users/xxx/.config/gcloud/logs/xxxx.log]
git: [git version 2.39.2]
ssh: [OpenSSH_9.0p1, LibreSSL 3.3.6]
```
### Who can help?
@eyurtsev I think?
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
It's happening with a base app, nothing special. Using this template:
https://github.com/hwchase17/langchain-streamlit-template
### Expected behavior
Expected to get the oAuth window? | GoogleDriveLoader: AttributeError: 'Credentials' object has no attribute 'with_scopes' | https://api.github.com/repos/langchain-ai/langchain/issues/6997/comments | 3 | 2023-06-30T20:25:55Z | 2024-03-11T07:56:07Z | https://github.com/langchain-ai/langchain/issues/6997 | 1,783,124,885 | 6,997 |
[
"langchain-ai",
"langchain"
] | ### System Info
running google/flan-tf-xxl on ghcr.io/huggingface/text-generation-inference:0.8
langchain==0.0.220
text-generation==0.6.0
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
`HuggingFaceTextGenInference` does not accept a temperature parameter of 0 or None.
```
llm = HuggingFaceTextGenInference(
inference_server_url="http://localhost:8010/",
temperature=0.
)
llm("What did foo say about bar?")
```
ValidationError: `temperature` must be strictly positive
```
llm = HuggingFaceTextGenInference(
inference_server_url="http://localhost:8010/",
temperature=None
)
llm("What did foo say about bar?")
```
ValidationError: 1 validation error for HuggingFaceTextGenInference
temperature none is not an allowed value (type=type_error.none.not_allowed)
### Expected behavior
I expect to be able to pass a parameter to `HuggingFaceTextGenInference` that instructs the model to do greedy decoding without getting an error.
It seems like the issue is that LangChain enforces that temperature be an integer while the text-generation [client requires temperature](https://github.com/huggingface/text-generation-inference/blob/main/clients/python/text_generation/types.py#L74C4-L74C4) be None for greedy decoding or >0 for sampling.
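One possible fix is to normalize the value before it reaches the client. This is a hypothetical helper sketched for illustration, not existing LangChain code — it maps a user-supplied temperature of 0 (or None) to the `None` the text-generation client expects for greedy decoding:

```python
def normalize_temperature(temperature):
    # The text-generation client requires temperature=None for greedy
    # decoding and a strictly positive float for sampling, so map
    # None / 0 / negative values all to None.
    if temperature is None or temperature <= 0:
        return None
    return float(temperature)

print(normalize_temperature(0))    # None -> greedy decoding
print(normalize_temperature(0.7))  # 0.7  -> sampling
```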
| HuggingFaceTextGenInference does not accept 0 temperature | https://api.github.com/repos/langchain-ai/langchain/issues/6993/comments | 4 | 2023-06-30T18:51:33Z | 2023-09-29T16:20:59Z | https://github.com/langchain-ai/langchain/issues/6993 | 1,783,014,565 | 6,993 |
[
"langchain-ai",
"langchain"
] | ### System Info
Macos system
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
loading malfunction
| langchain takes much time to reload | https://api.github.com/repos/langchain-ai/langchain/issues/6991/comments | 3 | 2023-06-30T18:17:21Z | 2023-10-07T16:05:43Z | https://github.com/langchain-ai/langchain/issues/6991 | 1,782,974,926 | 6,991 |
[
"langchain-ai",
"langchain"
] | `class VectaraRetriever` does not use the parameters passed in by `Vectara.as_retriever()`. The class definition sets up `vectorstore` and `search_kwargs` but then there is no check for values coming in from `Vectara.as_retriever()`.
https://github.com/hwchase17/langchain/blob/e3b7effc8f39333076c2bedd7306de81ce988de6/langchain/vectorstores/vectara.py#L307 | VectaraRetriever does not use the parameters passed in by `Vectara.as_retriever()` | https://api.github.com/repos/langchain-ai/langchain/issues/6984/comments | 4 | 2023-06-30T16:42:27Z | 2023-10-08T16:05:51Z | https://github.com/langchain-ai/langchain/issues/6984 | 1,782,833,784 | 6,984 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
The links in the documentation to make contributions to pages point to the wrong file location within the repository. For example, the link on the "Quickstart" page points to the filepath `docs/docs/get_started/quickstart.mdx`, when it should point to `docs/docs_skeleton/docs/get_started/quickstart.mdx`. Presumably this is a result of #6300 and the links not getting updated.
### Idea or request for content:
If someone more familiar with the structure of the docs could tell me where to find the base template where the link templates are specified, and preferably a bit of guidance on how the documentation is structured now to make it easier to figure out the correct links, then I can make the updates. | DOC: "Edit this page" returns 404 | https://api.github.com/repos/langchain-ai/langchain/issues/6983/comments | 1 | 2023-06-30T16:33:59Z | 2023-10-06T16:05:48Z | https://github.com/langchain-ai/langchain/issues/6983 | 1,782,823,112 | 6,983 |
[
"langchain-ai",
"langchain"
] | ### System Info
Here is my code to initialize the Chain, it should come with the default prompt template:
qa_source_chain = RetrievalQAWithSourcesChain.from_chain_type(
llm=llm_chat,
chain_type="stuff",
retriever=db_test.as_retriever()
)
Here is the source chain object, I guess default template for this chain is hacked?
RetrievalQAWithSourcesChain(memory=None, callbacks=None, callback_manager=None, verbose=False, combine_documents_chain=StuffDocumentsChain(memory=None, callbacks=None, callback_manager=None, verbose=False, input_key='input_documents', output_key='output_text', llm_chain=LLMChain(memory=None, callbacks=None, callback_manager=None, verbose=False, prompt=PromptTemplate(input_variables=['summaries', 'question'], output_parser=None, partial_variables={}, template='Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES"). \nIf you don\'t know the answer, just say that you don\'t know. Don\'t try to make up an answer.\nALWAYS return a "SOURCES" part in your answer.\n\nQUESTION: Which state/country\'s law governs the interpretation of the contract?\n=========\nContent: This Agreement is governed by English law and the parties submit to the exclusive jurisdiction of the English courts in relation to any dispute (contractual or non-contractual) concerning this Agreement save that either party may apply to any court for an injunction or other relief to protect its Intellectual Property Rights.\nSource: 28-pl\nContent: No Waiver. Failure or delay in exercising any right or remedy under this Agreement shall not constitute a waiver of such (or any other) right or remedy.\n\n11.7 Severability. The invalidity, illegality or unenforceability of any term (or part of a term) of this Agreement shall not affect the continuation in force of the remainder of the term (if any) and this Agreement.\n\n11.8 No Agency. 
Except as expressly stated otherwise, nothing in this Agreement shall create an agency, partnership or joint venture of any kind between the parties.\n\n11.9 No Third-Party Beneficiaries.\nSource: 30-pl\nContent: (b) if Google believes, in good faith, that the Distributor has violated or caused Google to violate any Anti-Bribery Laws (as defined in Clause 8.5) or that such a violation is reasonably likely to occur,\nSource: 4-pl\n=========\nFINAL ANSWER: This Agreement is governed by English law.\nSOURCES: 28-pl\n\nQUESTION: What did the president say about Michael Jackson?\n=========\nContent: Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans. \n\nLast year COVID-19 kept us apart. This year we are finally together again. \n\nTonight, we meet as Democrats Republicans and Independents. But most importantly as Americans. \n\nWith a duty to one another to the American people to the Constitution. \n\nAnd with an unwavering resolve that freedom will always triumph over tyranny. \n\nSix days ago, Russia’s Vladimir Putin sought to shake the foundations of the free world thinking he could make it bend to his menacing ways. But he badly miscalculated. \n\nHe thought he could roll into Ukraine and the world would roll over. Instead he met a wall of strength he never imagined. \n\nHe met the Ukrainian people. \n\nFrom President Zelenskyy to every Ukrainian, their fearlessness, their courage, their determination, inspires the world. \n\nGroups of citizens blocking tanks with their bodies. Everyone from students to retirees teachers turned soldiers defending their homeland.\nSource: 0-pl\nContent: And we won’t stop. \n\nWe have lost so much to COVID-19. Time with one another. And worst of all, so much loss of life. \n\nLet’s use this moment to reset. Let’s stop looking at COVID-19 as a partisan dividing line and see it for what it is: A God-awful disease. 
\n\nLet’s stop seeing each other as enemies, and start seeing each other for who we really are: Fellow Americans. \n\nWe can’t change how divided we’ve been. But we can change how we move forward—on COVID-19 and other issues we must face together. \n\nI recently visited the New York City Police Department days after the funerals of Officer Wilbert Mora and his partner, Officer Jason Rivera. \n\nThey were responding to a 9-1-1 call when a man shot and killed them with a stolen gun. \n\nOfficer Mora was 27 years old. \n\nOfficer Rivera was 22. \n\nBoth Dominican Americans who’d grown up on the same streets they later chose to patrol as police officers. \n\nI spoke with their families and told them that we are forever in debt for their sacrifice, and we will carry on their mission to restore the trust and safety every community deserves.\nSource: 24-pl\nContent: And a proud Ukrainian people, who have known 30 years of independence, have repeatedly shown that they will not tolerate anyone who tries to take their country backwards. \n\nTo all Americans, I will be honest with you, as I’ve always promised. A Russian dictator, invading a foreign country, has costs around the world. \n\nAnd I’m taking robust action to make sure the pain of our sanctions is targeted at Russia’s economy. And I will use every tool at our disposal to protect American businesses and consumers. \n\nTonight, I can announce that the United States has worked with 30 other countries to release 60 Million barrels of oil from reserves around the world. \n\nAmerica will lead that effort, releasing 30 Million barrels from our own Strategic Petroleum Reserve. And we stand ready to do more if necessary, unified with our allies. \n\nThese steps will help blunt gas prices here at home. And I know the news about what’s happening can seem alarming. \n\nBut I want you to know that we are going to be okay.\nSource: 5-pl\nContent: More support for patients and families. 
\n\nTo get there, I call on Congress to fund ARPA-H, the Advanced Research Projects Agency for Health. \n\nIt’s based on DARPA—the Defense Department project that led to the Internet, GPS, and so much more. \n\nARPA-H will have a singular purpose—to drive breakthroughs in cancer, Alzheimer’s, diabetes, and more. \n\nA unity agenda for the nation. \n\nWe can do this. \n\nMy fellow Americans—tonight , we have gathered in a sacred space—the citadel of our democracy. \n\nIn this Capitol, generation after generation, Americans have debated great questions amid great strife, and have done great things. \n\nWe have fought for freedom, expanded liberty, defeated totalitarianism and terror. \n\nAnd built the strongest, freest, and most prosperous nation the world has ever known. \n\nNow is the hour. \n\nOur moment of responsibility. \n\nOur test of resolve and conscience, of history itself. \n\nIt is in this moment that our character is formed. Our purpose is found. Our future is forged. \n\nWell I know this nation.\nSource: 34-pl\n=========\nFINAL ANSWER: The president did not mention Michael Jackson.\nSOURCES:\n\nQUESTION: {question}\n=========\n{summaries}\n=========\nFINAL ANSWER:', template_format='f-string', validate_template=True), llm=ChatOpenAI(verbose=False, callbacks=None, callback_manager=None, client=<class 'openai.api_resources.chat_completion.ChatCompletion'>, model_name='gpt-3.5-turbo', temperature=0.0, model_kwargs={}, openai_api_key='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', openai_api_base=None, openai_organization=None, openai_proxy=None, request_timeout=None, max_retries=6, streaming=False, n=1, max_tokens=2000), output_key='text'), document_prompt=PromptTemplate(input_variables=['page_content', 'source'], output_parser=None, partial_variables={}, template='Content: {page_content}\nSource: {source}', template_format='f-string', validate_template=True), document_variable_name='summaries', document_separator='\n\n'), question_key='question', 
input_docs_key='docs', answer_key='answer', sources_answer_key='sources', return_source_documents=False, retriever=VectorStoreRetriever(vectorstore=<langchain.vectorstores.chroma.Chroma object at 0x14fa7cb50>, search_type='similarity', search_kwargs={}), reduce_k_below_max_tokens=False, max_tokens_limit=3375)
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. import and initialize a RetrievalQAWithSourcesChain, using RetrievalQAWithSourcesChain.from_chain_type() method
2. check the default prompt template of the RetrievalQAWithSourcesChain object
### Expected behavior
The default template should be very similar to a RetrievalQA chain object | RetrievalQAWithSourcesChain object default prompt is problematic | https://api.github.com/repos/langchain-ai/langchain/issues/6982/comments | 2 | 2023-06-30T16:22:47Z | 2023-10-06T16:05:53Z | https://github.com/langchain-ai/langchain/issues/6982 | 1,782,809,247 | 6,982 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Everything was working fine 10 minutes ago, but now when I use the ConversationalRetrievalQAChain I always get a 401 error. I tried 2 different API keys and the error persists. I don't really know what the issue might be. Any ideas?
This is the error log:
[chain/error] [1:chain:conversational_retrieval_chain] [315ms] Chain run errored with error: "Request failed with status code 401"
error Error: Request failed with status code 401
at createError (EDITED\node_modules\axios\lib\core\createError.js:16:15)
at settle (EDITED\node_modules\axios\lib\core\settle.js:17:12)
at IncomingMessage.handleStreamEnd (EDITED\node_modules\axios\lib\adapters\http.js:322:11)
at IncomingMessage.emit (node:events:525:35)
at IncomingMessage.emit (node:domain:489:12)
at endReadableNT (node:internal/streams/readable:1359:12)
at processTicksAndRejections (node:internal/process/task_queues:82:21) {
config: {
transitional: {
silentJSONParsing: true,
forcedJSONParsing: true,
clarifyTimeoutError: false
},
adapter: [Function: httpAdapter],
transformRequest: [ [Function: transformRequest] ],
transformResponse: [ [Function: transformResponse] ],
timeout: 0,
xsrfCookieName: 'XSRF-TOKEN',
xsrfHeaderName: 'X-XSRF-TOKEN',
maxContentLength: -1,
maxBodyLength: -1,
validateStatus: [Function: validateStatus],
headers: {
Accept: 'application/json, text/plain, */*',
'Content-Type': 'application/json',
'User-Agent': 'OpenAI/NodeJS/3.2.1',
Authorization: 'Bearer EDITED',
'Content-Length': 53
},
method: 'post',
data: '{"model":"text-embedding-ada-002","input":"hi there"}',
url: 'https://api.openai.com/v1/embeddings'
},
request: <ref *1> ClientRequest {
_events: [Object: null prototype] {
abort: [Function (anonymous)],
aborted: [Function (anonymous)],
connect: [Function (anonymous)],
error: [Function (anonymous)],
socket: [Function (anonymous)],
timeout: [Function (anonymous)],
finish: [Function: requestOnFinish]
},
_eventsCount: 7,
_maxListeners: undefined,
outputData: [],
outputSize: 0,
writable: true,
destroyed: false,
_last: true,
chunkedEncoding: false,
shouldKeepAlive: false,
maxRequestsOnConnectionReached: false,
_defaultKeepAlive: true,
useChunkedEncodingByDefault: true,
sendDate: false,
_removedConnection: false,
_removedContLen: false,
_removedTE: false,
strictContentLength: false,
_contentLength: 53,
_hasBody: true,
_trailer: '',
finished: true,
_headerSent: true,
_closed: false,
socket: TLSSocket {
_tlsOptions: [Object],
_secureEstablished: true,
_securePending: false,
_newSessionPending: false,
_controlReleased: true,
secureConnecting: false,
_SNICallback: null,
servername: 'api.openai.com',
alpnProtocol: false,
authorized: true,
authorizationError: null,
encrypted: true,
_events: [Object: null prototype],
_eventsCount: 10,
connecting: false,
_hadError: false,
_parent: null,
_host: 'api.openai.com',
_closeAfterHandlingError: false,
_readableState: [ReadableState],
_maxListeners: undefined,
_writableState: [WritableState],
allowHalfOpen: false,
_sockname: null,
_pendingData: null,
_pendingEncoding: '',
server: undefined,
_server: null,
ssl: [TLSWrap],
_requestCert: true,
_rejectUnauthorized: true,
parser: null,
_httpMessage: [Circular *1],
[Symbol(res)]: [TLSWrap],
[Symbol(verified)]: true,
[Symbol(pendingSession)]: null,
[Symbol(async_id_symbol)]: 56,
[Symbol(kHandle)]: [TLSWrap],
[Symbol(lastWriteQueueSize)]: 0,
[Symbol(timeout)]: null,
[Symbol(kBuffer)]: null,
[Symbol(kBufferCb)]: null,
[Symbol(kBufferGen)]: null,
[Symbol(kCapture)]: false,
[Symbol(kSetNoDelay)]: false,
[Symbol(kSetKeepAlive)]: true,
[Symbol(kSetKeepAliveInitialDelay)]: 60,
[Symbol(kBytesRead)]: 0,
[Symbol(kBytesWritten)]: 0,
[Symbol(connect-options)]: [Object]
},
_header: 'POST /v1/embeddings HTTP/1.1\r\n' +
'Accept: application/json, text/plain, */*\r\n' +
'Content-Type: application/json\r\n' +
'User-Agent: OpenAI/NodeJS/3.2.1\r\n' +
'Authorization: Bearer EDITED' +
'Content-Length: 53\r\n' +
'Host: api.openai.com\r\n' +
'Connection: close\r\n' +
'\r\n',
_keepAliveTimeout: 0,
_onPendingData: [Function: nop],
agent: Agent {
_events: [Object: null prototype],
_eventsCount: 2,
_maxListeners: undefined,
defaultPort: 443,
protocol: 'https:',
options: [Object: null prototype],
requests: [Object: null prototype] {},
sockets: [Object: null prototype],
freeSockets: [Object: null prototype] {},
keepAliveMsecs: 1000,
keepAlive: false,
maxSockets: Infinity,
maxFreeSockets: 256,
scheduling: 'lifo',
maxTotalSockets: Infinity,
totalSocketCount: 1,
maxCachedSessions: 100,
_sessionCache: [Object],
[Symbol(kCapture)]: false
},
socketPath: undefined,
method: 'POST',
maxHeaderSize: undefined,
insecureHTTPParser: undefined,
joinDuplicateHeaders: undefined,
path: '/v1/embeddings',
_ended: true,
res: IncomingMessage {
_readableState: [ReadableState],
_events: [Object: null prototype],
_eventsCount: 4,
_maxListeners: undefined,
socket: [TLSSocket],
httpVersionMajor: 1,
httpVersionMinor: 1,
httpVersion: '1.1',
complete: true,
rawHeaders: [Array],
rawTrailers: [],
joinDuplicateHeaders: undefined,
aborted: false,
upgrade: false,
url: '',
method: null,
statusCode: 401,
statusMessage: 'Unauthorized',
client: [TLSSocket],
_consuming: false,
_dumped: false,
req: [Circular *1],
responseUrl: 'https://api.openai.com/v1/embeddings',
redirects: [],
[Symbol(kCapture)]: false,
[Symbol(kHeaders)]: [Object],
[Symbol(kHeadersCount)]: 22,
[Symbol(kTrailers)]: null,
[Symbol(kTrailersCount)]: 0
},
aborted: false,
timeoutCb: null,
upgradeOrConnect: false,
parser: null,
maxHeadersCount: null,
reusedSocket: false,
host: 'api.openai.com',
protocol: 'https:',
_redirectable: Writable {
_writableState: [WritableState],
_events: [Object: null prototype],
_eventsCount: 3,
_maxListeners: undefined,
_options: [Object],
_ended: true,
_ending: true,
_redirectCount: 0,
_redirects: [],
_requestBodyLength: 53,
_requestBodyBuffers: [],
_onNativeResponse: [Function (anonymous)],
_currentRequest: [Circular *1],
_currentUrl: 'https://api.openai.com/v1/embeddings',
[Symbol(kCapture)]: false
},
[Symbol(kCapture)]: false,
[Symbol(kBytesWritten)]: 0,
[Symbol(kEndCalled)]: true,
[Symbol(kNeedDrain)]: false,
[Symbol(corked)]: 0,
[Symbol(kOutHeaders)]: [Object: null prototype] {
accept: [Array],
'content-type': [Array],
'user-agent': [Array],
authorization: [Array],
'content-length': [Array],
host: [Array]
},
[Symbol(errored)]: null,
[Symbol(kUniqueHeaders)]: null
},
response: {
status: 401,
statusText: 'Unauthorized',
headers: {
date: 'Fri, 30 Jun 2023 16:16:14 GMT',
'content-type': 'application/json; charset=utf-8',
'content-length': '301',
connection: 'close',
vary: 'Origin',
'x-request-id': '6cebdf4cc24074e40d3c1c80d9ab9fff',
'strict-transport-security': 'max-age=15724800; includeSubDomains',
'cf-cache-status': 'DYNAMIC',
server: 'cloudflare',
'cf-ray': '7df7b66a7cdbba73-EZE',
'alt-svc': 'h3=":443"; ma=86400'
},
config: {
transitional: [Object],
adapter: [Function: httpAdapter],
transformRequest: [Array],
transformResponse: [Array],
timeout: 0,
xsrfCookieName: 'XSRF-TOKEN',
xsrfHeaderName: 'X-XSRF-TOKEN',
maxContentLength: -1,
maxBodyLength: -1,
validateStatus: [Function: validateStatus],
headers: [Object],
method: 'post',
data: '{"model":"text-embedding-ada-002","input":"hi there"}',
url: 'https://api.openai.com/v1/embeddings'
},
request: <ref *1> ClientRequest {
_events: [Object: null prototype],
_eventsCount: 7,
_maxListeners: undefined,
outputData: [],
outputSize: 0,
writable: true,
destroyed: false,
_last: true,
chunkedEncoding: false,
shouldKeepAlive: false,
maxRequestsOnConnectionReached: false,
_defaultKeepAlive: true,
useChunkedEncodingByDefault: true,
sendDate: false,
_removedConnection: false,
_removedContLen: false,
_removedTE: false,
strictContentLength: false,
_contentLength: 53,
_hasBody: true,
_trailer: '',
finished: true,
_headerSent: true,
_closed: false,
socket: [TLSSocket],
_header: 'POST /v1/embeddings HTTP/1.1\r\n' +
'Accept: application/json, text/plain, */*\r\n' +
'Content-Type: application/json\r\n' +
'User-Agent: OpenAI/NodeJS/3.2.1\r\n' +
'Authorization: Bearer EDITED' +
'Content-Length: 53\r\n' +
'Host: api.openai.com\r\n' +
'Connection: close\r\n' +
'\r\n',
_keepAliveTimeout: 0,
_onPendingData: [Function: nop],
agent: [Agent],
socketPath: undefined,
method: 'POST',
maxHeaderSize: undefined,
insecureHTTPParser: undefined,
joinDuplicateHeaders: undefined,
path: '/v1/embeddings',
_ended: true,
res: [IncomingMessage],
aborted: false,
timeoutCb: null,
upgradeOrConnect: false,
parser: null,
maxHeadersCount: null,
reusedSocket: false,
host: 'api.openai.com',
protocol: 'https:',
_redirectable: [Writable],
[Symbol(kCapture)]: false,
[Symbol(kBytesWritten)]: 0,
[Symbol(kEndCalled)]: true,
[Symbol(kNeedDrain)]: false,
[Symbol(corked)]: 0,
[Symbol(kOutHeaders)]: [Object: null prototype],
[Symbol(errored)]: null,
[Symbol(kUniqueHeaders)]: null
},
data: { error: [Object] }
},
isAxiosError: true,
toJSON: [Function: toJSON],
attemptNumber: 1,
retriesLeft: 6
}
### Suggestion:
_No response_ | Issue: 401 error when using ConversationalRetrievalQAChain | https://api.github.com/repos/langchain-ai/langchain/issues/6981/comments | 3 | 2023-06-30T16:22:20Z | 2023-10-06T16:05:58Z | https://github.com/langchain-ai/langchain/issues/6981 | 1,782,808,760 | 6,981 |
[
"langchain-ai",
"langchain"
] | ### System Info
Traceback (most recent call last):
File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/teacher/lib/python3.10/site-packages/langchain/embeddings/llamacpp.py", line 85, in validate_environment
from llama_cpp import Llama
ModuleNotFoundError: No module named 'llama_cpp'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/mine.py", line 3, in <module>
llama = LlamaCppEmbeddings(model_path="./models/ggml-gpt4all-j.bin")
File "pydantic/main.py", line 339, in pydantic.main.BaseModel.__init__
File "pydantic/main.py", line 1102, in pydantic.main.validate_model
File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/teacher/lib/python3.10/site-packages/langchain/embeddings/llamacpp.py", line 89, in validate_environment
raise ModuleNotFoundError(
ModuleNotFoundError: Could not import llama-cpp-python library. Please install the llama-cpp-python library to use this embedding model: pip install llama-cpp-python
### Who can help?
@sirrrik
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
pip install the latest LangChain package from PyPI on macOS
### Expected behavior
Traceback (most recent call last):
File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/teacher/lib/python3.10/site-packages/langchain/embeddings/llamacpp.py", line 85, in validate_environment
from llama_cpp import Llama
ModuleNotFoundError: No module named 'llama_cpp'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/mine.py", line 3, in <module>
llama = LlamaCppEmbeddings(model_path="./models/ggml-gpt4all-j.bin")
File "pydantic/main.py", line 339, in pydantic.main.BaseModel.__init__
File "pydantic/main.py", line 1102, in pydantic.main.validate_model
File "/Users/apple/Desktop/LLM/gpt4all_langchain_chatbots/teacher/lib/python3.10/site-packages/langchain/embeddings/llamacpp.py", line 89, in validate_environment
raise ModuleNotFoundError(
ModuleNotFoundError: Could not import llama-cpp-python library. Please install the llama-cpp-python library to use this embedding model: pip install llama-cpp-python | llma Embeddings error | https://api.github.com/repos/langchain-ai/langchain/issues/6980/comments | 3 | 2023-06-30T16:22:20Z | 2023-12-01T16:09:18Z | https://github.com/langchain-ai/langchain/issues/6980 | 1,782,808,752 | 6,980 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain v0.0.220
Python 3.11.3
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Using `BraveSearch` with an agent always returns an error 422
[I've created a collab notebook with an example displaying this error.](https://colab.research.google.com/drive/1mUr6KWND4ZYmvFnPywbJevZLzssJMvBR?usp=sharing)
I've tried many times with both `ZERO_SHOT_REACT_DESCRIPTION` and `OPENAI_FUNCTIONS` agents
### Expected behavior
Agents should be able to use this tool | Error 422 when using BraveSearch | https://api.github.com/repos/langchain-ai/langchain/issues/6974/comments | 1 | 2023-06-30T15:12:25Z | 2023-10-06T16:06:03Z | https://github.com/langchain-ai/langchain/issues/6974 | 1,782,703,377 | 6,974 |
[
"langchain-ai",
"langchain"
] | ### Feature request
It would be great if the [JSONLinesLoader](https://js.langchain.com/docs/modules/indexes/document_loaders/examples/file_loaders/jsonlines) that's available in the JS version of Langchain could be ported to the Python version.
### Motivation
I frequently find working with JSONL files easier than working with JSON files.
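Until an official port lands, a minimal standard-library sketch of the line-by-line parsing a JSONLines loader needs might look like this (the `load_jsonl` name and the plain-dict record shape are my own assumptions, not langchain's API):

```python
import json
from io import StringIO
from typing import IO, List

def load_jsonl(stream: IO[str]) -> List[dict]:
    """Parse a .jsonl stream into one record dict per non-blank line."""
    records = []
    for line_no, line in enumerate(stream, start=1):
        line = line.strip()
        if not line:
            continue  # tolerate blank lines between records
        try:
            records.append(json.loads(line))
        except json.JSONDecodeError as err:
            raise ValueError(f"invalid JSON on line {line_no}: {err}") from err
    return records

sample = StringIO('{"text": "hello"}\n\n{"text": "world", "page": 2}\n')
docs = load_jsonl(sample)
print(docs)  # → [{'text': 'hello'}, {'text': 'world', 'page': 2}]
```

A real loader would presumably wrap each record in a `Document` with metadata, but the parsing core is just this loop.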
### Your contribution
Not sure; I'm quite new to Python and so don't know how to implement this. | Using JSONLinesLoader in Python | https://api.github.com/repos/langchain-ai/langchain/issues/6973/comments | 6 | 2023-06-30T13:31:03Z | 2023-07-05T13:33:14Z | https://github.com/langchain-ai/langchain/issues/6973 | 1,782,539,266 | 6,973
[
"langchain-ai",
"langchain"
] | ### System Info
Traceback (most recent call last):
File "/tmp/pycharm_project_93/test_qa_generation.py", line 50, in <module>
result = qa.run({"question": query, "vectordbkwargs": vectordbkwargs })
File "/root/.cache/pypoetry/virtualenvs/irc-llm-0Bb4gJSe-py3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 273, in run
return self(args[0], callbacks=callbacks, tags=tags)[_output_key]
File "/root/.cache/pypoetry/virtualenvs/irc-llm-0Bb4gJSe-py3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 151, in __call__
final_outputs: Dict[str, Any] = self.prep_outputs(
File "/root/.cache/pypoetry/virtualenvs/irc-llm-0Bb4gJSe-py3.10/lib/python3.10/site-packages/langchain/chains/base.py", line 216, in prep_outputs
self.memory.save_context(inputs, outputs)
File "/root/.cache/pypoetry/virtualenvs/irc-llm-0Bb4gJSe-py3.10/lib/python3.10/site-packages/langchain/memory/chat_memory.py", line 34, in save_context
input_str, output_str = self._get_input_output(inputs, outputs)
File "/root/.cache/pypoetry/virtualenvs/irc-llm-0Bb4gJSe-py3.10/lib/python3.10/site-packages/langchain/memory/chat_memory.py", line 21, in _get_input_output
prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)
File "/root/.cache/pypoetry/virtualenvs/irc-llm-0Bb4gJSe-py3.10/lib/python3.10/site-packages/langchain/memory/utils.py", line 11, in get_prompt_input_key
raise ValueError(f"One input key expected got {prompt_input_keys}")
ValueError: One input key expected got ['question', 'vectordbkwargs']
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Milvus
from langchain.text_splitter import CharacterTextSplitter
from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.chains.conversational_retrieval.prompts import CONDENSE_QUESTION_PROMPT
from langchain.document_loaders import TextLoader
from langchain.memory import ConversationBufferWindowMemory
from pymilvus import connections, FieldSchema, CollectionSchema, DataType, Collection, utility
MILVUS_HOST = "_milvus.XXXX.com"
MILVUS_PORT = 19530
connections.connect(
alias="default",
host=MILVUS_HOST,
port=MILVUS_PORT
)
utility.drop_collection('LangChainCollection')
import os
os.environ['OPENAI_API_KEY'] = "sk-EDITED"
loader = TextLoader("/tmp/qa_chain1.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
vectorstore = Milvus.from_documents(documents, embeddings,
connection_args={"host": "_milvus.XXXX.com", "port": "19530"}, search_params={"metric_type": "IP", "params": {"nprobe": 200}, "offset": 0})
memory = ConversationBufferWindowMemory(k=2 , memory_key="chat_history", return_messages=True)
vectordbkwargs = {"search_distance": 0.9}
retriever = vectorstore.as_retriever()
retriever.search_kwargs = {"k": 5}
qa = ConversationalRetrievalChain.from_llm(ChatOpenAI(temperature=0, model='gpt-3.5-turbo-16k'), retriever, memory=memory, condense_question_prompt=CONDENSE_QUESTION_PROMPT, verbose=True)
query = "Introduce Microsoft"
result = qa.run({"question": query, "vectordbkwargs": vectordbkwargs })
print(result)
### Expected behavior
return the relevant document with a score bigger than 0.9 when matching vectors by IP metric. | Cannot pass search_distance key to ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/6971/comments | 2 | 2023-06-30T12:34:14Z | 2023-07-03T04:37:06Z | https://github.com/langchain-ai/langchain/issues/6971 | 1,782,453,316 | 6,971 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.219
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
>>> from langchain import HuggingFacePipeline
>>> mod = HuggingFacePipeline(model_id="tiiuae/falcon-7b-instruct", task="text-generation")
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/guillem.garcia/miniconda3/envs/rene/lib/python3.10/site-packages/langchain/load/serializable.py", line 74, in __init__
super().__init__(**kwargs)
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for HuggingFacePipeline
task
extra fields not permitted (type=value_error.extra)
### Expected behavior
I expect it to work. When I instantiate a Pipeline with Falcon using Hugging Face directly, it works.
Also, is there a way to deactivate pydantic? It makes it impossible to modify anything or to make langchain compatible with our code | HuggingFacePipeline not working | https://api.github.com/repos/langchain-ai/langchain/issues/6970/comments | 2 | 2023-06-30T11:37:16Z | 2023-06-30T11:59:52Z | https://github.com/langchain-ai/langchain/issues/6970 | 1,782,380,145 | 6,970
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.208
openai==0.27.8
python==3.10.6
### Who can help?
@hwchase17 Could you please help me resolve this error: whenever I run an agent with ChatOpenAI() as the LLM, I get the response **"Do I need to use a tool? No."**
Meanwhile, the agent returns the response perfectly fine with llm=OpenAI().
The irony is that two days prior, ChatOpenAI() was working fine and returned results perfectly.
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
`import langchain
from langchain.prompts.base import StringPromptTemplate
from langchain.prompts import PromptTemplate,StringPromptTemplate
from langchain.agents import Tool, AgentExecutor, AgentOutputParser,LLMSingleActionAgent,initialize_agent
from langchain import OpenAI,LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.tools import DuckDuckGoSearchRun
from langchain.schema import AgentAction,AgentFinish
import re
from typing import List,Union
import os
from getpass import getpass
os.environ["OPENAI_API_KEY"] = getpass()
#
search = DuckDuckGoSearchRun()
def duck_wrapper(input_text):
search_results = search.run(f"site:webmd.com {input_text}")
return search_results
tools = [
Tool(
name = "Search WebMD",
func=duck_wrapper,
description="useful for when you need to answer medical and pharmalogical questions"
)
]
# Set up the base template
template = """Answer the following questions as best you can, but speaking as a compassionate medical professional. You have access to the following tools:
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin! Remember to answer as a compassionate medical professional when giving your final answer.
Previous conversation history:
{history}
Question: {input}
"""
tool_names = [tool.name for tool in tools]
history = None
prompt = PromptTemplate(template=template,input_variables=["tools","tool_names","history","input"])
from langchain.agents.agent_types import AgentType
agent = initialize_agent(tools=tools,
llm=ChatOpenAI(),
agent = AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
verbose=False,
return_intermediate_steps=True
)
try:
response = agent({"input":"how to treat acid reflux?",
"tools":tools,
"tool_names":tool_names,
"history":None,
"chat_history":None},
return_only_outputs=True)
except ValueError as e:
response = str(e)
if not response.startswith("Could not parse LLM output: `"):
raise e
response = response.removeprefix("Could not parse LLM output: `").removesuffix("`")
print(response)
`
### Expected behavior
{'output': 'There are a few lifestyle changes and home remedies that can help treat acid reflux, such as avoiding food triggers, eating smaller and more frequent meals, elevating the head of the bed while sleeping, and avoiding lying down after eating. Over-the-counter medications such as antacids and proton pump inhibitors can also be effective in treating acid reflux. However, it is always best to consult with a doctor if symptoms persist or worsen.', 'intermediate_steps': [(AgentAction(tool='Search WebMD', tool_input='how to treat acid reflux', log='Thought: Do I need to use a tool? Yes\nAction: Search WebMD\nAction Input: how to treat acid reflux'), 'Eat Earlier. Going to bed on a full stomach makes nighttime heartburn more likely. A full stomach puts pressure on the valve at the top of the stomach, which is supposed to keep stomach acid out ... Trim the fat off of meat and poultry, and cut the skin off chicken. Tweaks like these might be enough to tame your heartburn. Tomatoes (including foods like salsa and marinara sauce) and citrus ... Abdominal bloating. Abdominal pain. Vomiting. Indigestion. Burning or gnawing feeling in the stomach between meals or at night. Hiccups. Loss of appetite. Vomiting blood or coffee ground-like ... Esophagitis Symptoms. Symptoms of esophagitis include: Difficult or painful swallowing. Acid reflux. Heartburn. A feeling of something of being stuck in the throat. Chest pain. Nausea. Vomiting. This backflow (acid reflux) can irritate the lining of the esophagus. People with GERD may have heartburn, a burning sensation in the back of the throat, chronic cough, laryngitis, and nausea.')]} | langchain.chat_models.ChatOpenAI does not returns a response while langchain.OpeAI does rteturn results | https://api.github.com/repos/langchain-ai/langchain/issues/6968/comments | 3 | 2023-06-30T10:55:01Z | 2023-10-23T16:07:22Z | https://github.com/langchain-ai/langchain/issues/6968 | 1,782,326,661 | 6,968 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Running a langchain agent takes a lot of time, about 20s, even when the question is simple. Is there any good way to reduce the time?
### Suggestion:
_No response_ | langchain agent took a lot of time | https://api.github.com/repos/langchain-ai/langchain/issues/6965/comments | 5 | 2023-06-30T08:34:20Z | 2023-10-06T16:06:08Z | https://github.com/langchain-ai/langchain/issues/6965 | 1,782,126,816 | 6,965 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I'd love to be able to extend the request URL with parameters.
Currently I can only provide request headers. This does not cover my use case.
### Motivation
Some APIs don't authenticate via headers and instead use URL parameters to provide API keys and Tokens.
Example: [Trello API](https://developer.atlassian.com/cloud/trello/rest/api-group-actions/#api-group-actions)
View first code snippet there for the action and you'll see.
curl --request GET \
--url 'https://api.trello.com/1/actions/{id}?key=APIKey&token=APIToken'
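
Independent of the wrapper change itself, the core of the feature is safely appending parameters to a URL that may or may not already carry a query string. A standalone sketch using only the standard library (the function name is illustrative, not part of any LangChain API):

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def with_url_params(url: str, params: dict) -> str:
    """Return `url` with `params` merged into its query string."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    # Keep any parameters already present in the URL.
    merged = dict(parse_qsl(query))
    merged.update(params)
    return urlunsplit((scheme, netloc, path, urlencode(merged), fragment))

print(with_url_params(
    "https://api.trello.com/1/actions/abc123",
    {"key": "APIKey", "token": "APIToken"},
))  # https://api.trello.com/1/actions/abc123?key=APIKey&token=APIToken
```

Each wrapper method could then call something like `with_url_params(url, self.url_params or {})` before issuing the request, which also handles URLs that already contain a `?`.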
### Your contribution
I would need some handholding to create my first contribution here due to the testing suite but I've got some code I've tested locally.
```python
class Requests(BaseModel):
    """Wrapper around requests to handle auth and async.

    The main purpose of this wrapper is to handle authentication (by saving
    headers) and enable easy async methods on the same base object.
    """

    headers: Optional[Dict[str, str]] = None
    aiosession: Optional[aiohttp.ClientSession] = None
    url_params: Optional[Dict[str, str]] = None

    class Config:
        """Configuration for this pydantic object."""

        extra = Extra.forbid
        arbitrary_types_allowed = True

    def _with_params(self, url: str) -> str:
        """Append the configured URL parameters, if any."""
        if not self.url_params:
            return url
        sep = '&' if '?' in url else '?'
        return url + sep + '&'.join(f'{k}={v}' for k, v in self.url_params.items())

    def get(self, url: str, **kwargs: Any) -> requests.Response:
        """GET the URL and return the text."""
        return requests.get(self._with_params(url), headers=self.headers, **kwargs)

    def post(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response:
        """POST to the URL and return the text."""
        return requests.post(self._with_params(url), json=data, headers=self.headers, **kwargs)

    def patch(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response:
        """PATCH the URL and return the text."""
        return requests.patch(self._with_params(url), json=data, headers=self.headers, **kwargs)

    def put(self, url: str, data: Dict[str, Any], **kwargs: Any) -> requests.Response:
        """PUT the URL and return the text."""
        return requests.put(self._with_params(url), json=data, headers=self.headers, **kwargs)

    def delete(self, url: str, **kwargs: Any) -> requests.Response:
        """DELETE the URL and return the text."""
        return requests.delete(self._with_params(url), headers=self.headers, **kwargs)
``` | Adding support for URL Parameters in RequestsWrapper | https://api.github.com/repos/langchain-ai/langchain/issues/6963/comments | 3 | 2023-06-30T07:58:56Z | 2024-06-16T08:34:56Z | https://github.com/langchain-ai/langchain/issues/6963 | 1,782,079,348 | 6,963 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Please update the CSVLoader to take list of columns as source_column.
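
As a standalone illustration of the idea (this is not the actual `CSVLoader` API; the loader below is a from-scratch sketch and `metadata_columns` is an illustrative parameter name), each CSV row becomes a document and every listed column is copied into its metadata:

```python
import csv
import io

def load_csv_rows(text: str, metadata_columns: list) -> list:
    """Turn CSV text into per-row documents, copying the listed columns
    into metadata so a vector DB such as Qdrant can filter on them."""
    docs = []
    for i, row in enumerate(csv.DictReader(io.StringIO(text))):
        content = "\n".join(f"{k}: {v}" for k, v in row.items())
        metadata = {col: row[col] for col in metadata_columns if col in row}
        metadata["row"] = i
        docs.append({"page_content": content, "metadata": metadata})
    return docs

sample = "title,author,year\nDune,Herbert,1965\nNeuromancer,Gibson,1984\n"
docs = load_csv_rows(sample, metadata_columns=["author", "year"])
print(docs[0]["metadata"])  # {'author': 'Herbert', 'year': '1965', 'row': 0}
```

With an upgraded `CSVLoader`, the resulting per-document metadata could then be passed through as the Qdrant payload for filtered retrieval.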
### Motivation
This update will be helpful when the documents are embedded and used with vector DBs like Qdrant. Since Qdrant offers a filtering option, using the metadata of the Document object as the payload would be a great enhancement.
### Your contribution
No | Enable CSVLoader to take list of columns as source_column (or metadata) | https://api.github.com/repos/langchain-ai/langchain/issues/6961/comments | 6 | 2023-06-30T07:44:50Z | 2023-12-19T00:50:23Z | https://github.com/langchain-ai/langchain/issues/6961 | 1,782,060,256 | 6,961 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from transformers import AutoModel, AutoTokenizer

model_name = ".\\langchain-models\\THUDM\\chatglm2-6b"
local_model = AutoModel.from_pretrained(model_name, trust_remote_code=True)
local_tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
db = SQLDatabase.from_uri("mysql+pymysql://root:root@localhost/magic")
toolkit = SQLDatabaseToolkit(db=db, llm=local_model, tokenizer=local_tokenizer)
tables = toolkit.list_tables_sql_db()
print(tables)
```

```
Traceback (most recent call last):
  File "D:\chat\langchain-ChatGLM\test_sql.py", line 8, in <module>
    toolkit = SQLDatabaseToolkit(db=db, llm=local_model)
  File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for SQLDatabaseToolkit
llm
  value is not a valid dict (type=type_error.dict)
```
### Suggestion:
_No response_ | i try to connect mysql database,but it give me a error about llm value is not a valid dict(type=type_error.dict),how to solve the problem? | https://api.github.com/repos/langchain-ai/langchain/issues/6959/comments | 4 | 2023-06-30T07:02:07Z | 2023-10-09T16:06:21Z | https://github.com/langchain-ai/langchain/issues/6959 | 1,782,008,769 | 6,959 |
[
"langchain-ai",
"langchain"
] | ### Feature request
CharacterTextSplitter splits a 1GB code base and emits enough warnings to exceed the log buffer, like:
```
Created a chunk of size 2140, which is longer than the specified 900
Created a chunk of size 1269, which is longer than the specified 900
Created a chunk of size 1955, which is longer than the specified 900
Created a chunk of size 3410, which is longer than the specified 900
Created a chunk of size 1192, which is longer than the specified 900
Created a chunk of size 1392, which is longer than the specified 900
Created a chunk of size 1540, which is longer than the specified 900
- very very long...
```
Walking through the relevant code:
```python
def _merge_splits(self, splits: Iterable[str], separator: str) -> List[str]:
    # We now want to combine these smaller pieces into medium size
    # chunks to send to the LLM.
    separator_len = self._length_function(separator)

    docs = []
    current_doc: List[str] = []
    total = 0
    for d in splits:
        _len = self._length_function(d)
        if (
            total + _len + (separator_len if len(current_doc) > 0 else 0)
            > self._chunk_size
        ):
            if total > self._chunk_size:
                logger.warning(
                    f"Created a chunk of size {total}, "
                    f"which is longer than the specified {self._chunk_size}"
                )
            if len(current_doc) > 0:
                doc = self._join_docs(current_doc, separator)
                if doc is not None:
                    docs.append(doc)
                # Keep on popping if:
                # - we have a larger chunk than in the chunk overlap
                # - or if we still have any chunks and the length is long
                while total > self._chunk_overlap or (
                    total + _len + (separator_len if len(current_doc) > 0 else 0)
                    > self._chunk_size
                    and total > 0
                ):
                    total -= self._length_function(current_doc[0]) + (
                        separator_len if len(current_doc) > 1 else 0
                    )
                    current_doc = current_doc[1:]
        current_doc.append(d)
        total += _len + (separator_len if len(current_doc) > 1 else 0)
    doc = self._join_docs(current_doc, separator)
    if doc is not None:
        docs.append(doc)
    return docs
```
I know langchain tries to keep the semantic integrity of each code language with language-specific separators.
### Motivation
Can we just pop the last split once the `total > self._chunk_size` in `def _merge_splits(self, splits: Iterable[str], separator: str) -> List[str]:`
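
A minimal, self-contained sketch of that idea (not the actual LangChain implementation): any single split that is itself longer than `chunk_size` is hard-truncated instead of being logged as an oversized chunk:

```python
def merge_splits(splits, chunk_size, separator=" "):
    """Greedily merge small splits into chunks of at most `chunk_size`
    characters, capping any single split that is itself too long."""
    chunks, current = [], ""
    for s in splits:
        # Proposed behavior: cap oversized splits instead of warning.
        if len(s) > chunk_size:
            s = s[:chunk_size]
        candidate = s if not current else current + separator + s
        if len(candidate) > chunk_size:
            chunks.append(current)
            current = s
        else:
            current = candidate
    if current:
        chunks.append(current)
    return chunks

print(merge_splits(["aaaa", "bb", "cccccccccc"], chunk_size=6))  # ['aaaa', 'bb', 'cccccc']
```

Truncation does lose content, so an alternative that keeps semantic integrity is to recursively re-split oversized pieces with a finer separator before merging.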
### Your contribution
Maybe later I'll propose a PR | CharacterTextSplitter constanly generate chunks longer than given chunk_size | https://api.github.com/repos/langchain-ai/langchain/issues/6958/comments | 4 | 2023-06-30T06:46:26Z | 2023-12-13T16:08:43Z | https://github.com/langchain-ai/langchain/issues/6958 | 1,781,991,448 | 6,958 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", return_intermediate_steps=True, verbose=True)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description", return_intermediate_steps=True, memory=ConversationBufferMemory(memory_key="chat_history", input_key='input', output_key="output"), verbose=True)
```
https://python.langchain.com/docs/modules/agents/how_to/intermediate_steps
<img width="857" alt="image" src="https://github.com/hwchase17/langchain/assets/42615243/9ea12e4b-1775-4114-a750-0625e1cf8302">
### Suggestion:
I don't know how to solve it! | ValueError: `run` not supported when there is not exactly one output key. Got ['output', 'intermediate_steps'] | https://api.github.com/repos/langchain-ai/langchain/issues/6956/comments | 8 | 2023-06-30T05:52:27Z | 2024-05-19T05:48:47Z | https://github.com/langchain-ai/langchain/issues/6956 | 1,781,940,281 | 6,956 |
[
"langchain-ai",
"langchain"
] | ### System Info
ConversationalRetrievalChain with Question Answering with sources
```python
llm = OpenAI(temperature=0)
question_generator = LLMChain(llm=llm, prompt=CONDENSE_QUESTION_PROMPT)
doc_chain = load_qa_with_sources_chain(llm, chain_type="map_reduce")
chain = ConversationalRetrievalChain(
    retriever=vectorstore.as_retriever(),
    question_generator=question_generator,
    combine_docs_chain=doc_chain,
)
```
If I set the chain_type to "refine", it will not return sources.
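For context on why the chain types behave differently: the with-sources chains rely on the model appending a literal `SOURCES:` line, which is then split out of the answer text. Below is a sketch of that post-processing step (not the exact chain code); a refine-style prompt that never instructs the model to emit that line therefore yields no sources:

```python
import re

def split_answer_and_sources(text: str):
    """Split a model answer of the form '... SOURCES: a.pdf, b.pdf'."""
    parts = re.split(r"SOURCES:\s*", text, maxsplit=1)
    answer = parts[0].strip()
    sources = [s.strip() for s in parts[1].split(",")] if len(parts) > 1 else []
    return answer, sources

print(split_answer_and_sources("Refine answer with no trailing source list."))
# ('Refine answer with no trailing source list.', [])
```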
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just like the tutorial code.
### Expected behavior
I want the 'refine' chain to return sources.
[
"langchain-ai",
"langchain"
] | ### Feature request
Is there any way to store Sentencebert/Bert/Spacy/Doc2vec-based embeddings in a vector database using langchain?
```
pages= "page content"
embeddings = OpenAIEmbeddings()
persist_directory = 'db'
vectordb = Chroma.from_documents(documents=pages, embedding=embeddings, persist_directory=persist_directory)
```
Can I use **Sentencebert/Bert/Spacy/Doc2vec embeddings** instead of OpenAIEmbeddings() in the above code?
If so, what is the syntax for that?
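
Yes: any object exposing `embed_documents` and `embed_query` can stand in for `OpenAIEmbeddings()`. A hedged sketch with a toy deterministic vectorizer in place of a real model (in practice you would call something like `sentence_transformers.SentenceTransformer(...).encode` inside these methods, or use LangChain's `HuggingFaceEmbeddings` wrapper):

```python
import hashlib

class ToyEmbeddings:
    """Duck-types LangChain's Embeddings interface; swap the body of
    `_embed` for a real model such as SentenceBERT or Doc2Vec."""

    def _embed(self, text):
        # Stand-in: derive a fixed-size vector from a hash of the text.
        digest = hashlib.sha256(text.encode()).digest()
        return [b / 255 for b in digest[:8]]

    def embed_documents(self, texts):
        return [self._embed(t) for t in texts]

    def embed_query(self, text):
        return self._embed(text)

emb = ToyEmbeddings()
vectors = emb.embed_documents(["page one", "page two"])
print(len(vectors), len(vectors[0]))  # 2 8
```

With LangChain installed, such an object can then be passed the same way as in the snippet above, e.g. `Chroma.from_documents(documents=pages, embedding=ToyEmbeddings(), persist_directory=persist_directory)`.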
### Motivation
For using native embedding formats like Sentencebert/Bert/Spacy/Doc2vec in langchain.
### Your contribution
For using native embedding formats likeSentencebert/Bert/Spacy/Doc2vec in langchain | Sentencebert/Bert/Spacy/Doc2vec embedding support | https://api.github.com/repos/langchain-ai/langchain/issues/6952/comments | 8 | 2023-06-30T03:13:49Z | 2023-10-06T16:06:18Z | https://github.com/langchain-ai/langchain/issues/6952 | 1,781,803,067 | 6,952 |
[
"langchain-ai",
"langchain"
] | my code:
```python
from langchain.llms import HuggingFacePipeline
from langchain.chains import ConversationalRetrievalChain

pipe = pipeline(
    "text-generation",  # task type
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=1024,
    device=0,  # very trick, gpu rank
)
local_llm = HuggingFacePipeline(pipeline=pipe)
qa = ConversationalRetrievalChain.from_llm(local_llm, retriever=retriever)

questions = [
    "void Plan::ScheduleWork(map<Edge*, Want>::iterator want_e) {",
]
chat_history = []
for question in questions:
    result = qa({"question": question, "chat_history": chat_history})
    # chat_history.append((question, result["answer"]))
    print("**chat_history**", chat_history)
    print(f"=>**Question**: {question} \n")
    print(f"=>**Answer**: {result['answer']} \n\n")
```
which generates output like this:
```
**chat_history** []
=>**Question**: void Plan::ScheduleWork(map<Edge*, Want>::iterator want_e) {
=>**Answer**: void Plan::ScheduleWork(map<Edge*, Want>::iterator want_e) {
if (want_e->second == kWantToFinish) {
// This edge has already been scheduled. We can get here again if an edge
// and one of its dependencies share an order-only input, or if a node
// duplicates an out edge (see https://github.question/: void Plan::ScheduleWork(map<Edge*, Want>::iterator want_e) {
Helpful Answer: void Plan::ScheduleWork(map<Edge*, Want>::iterator want_e) {
if (want_e->second == kWantToFinish) {
// This edge has already been scheduled. We can get here again if an edge
// and one of its dependencies share an order-only input, or if a node
// duplicates an out edge (see https://github.com/ninja-build/ninja/pull/519).
// Avoid scheduling the work again.
return;
}
assert(want_e->second == kWantToStart);
want_e->second = kWantToFinish;
```
my `model` is a locally fine-tuned StarCoder with PEFT weights merged; `retriever` is a DeepLake vectorstore with some args like cosine distance, etc.
I do not get the "Helpful Answer" block, WHERE ON EARTH THIS COME FROM???
### Suggestion:
_No response_ | Issue: Get Confused with ConversationalRetrievalChain + HuggingFacePipeline always gen wizard "Helpful answer: " | https://api.github.com/repos/langchain-ai/langchain/issues/6951/comments | 2 | 2023-06-30T02:41:25Z | 2023-10-06T16:06:23Z | https://github.com/langchain-ai/langchain/issues/6951 | 1,781,775,217 | 6,951 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Just curious if there is a feature to check whether the summarized output from a source document hallucinates or not by extending the checker chain.
### Motivation
LLMSummarizationCheckerChain checks facts of summaries using LLM knowledge. The motivation is to develop a feature that checks for facts from an augmented source document.
### Your contribution
Will submit a PR if this does not exist yet and if it is a nice to have feature | LLMSummarizationCheckerFromSource | https://api.github.com/repos/langchain-ai/langchain/issues/6950/comments | 1 | 2023-06-30T02:19:22Z | 2023-10-06T16:06:29Z | https://github.com/langchain-ai/langchain/issues/6950 | 1,781,757,970 | 6,950 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Genuinely wanted to know the reasoning behind the inclusion of "Mu" in the "PyMuPDFLoader". Coming from an Indian background, my friend and I held 1-2 hours of discussion over what this represents, and the conclusions were not so appropriate.
We landed on this discussion after we noticed the inconsistency in the naming of the single pdf loader (i.e. "PyMuPDFLoader") and the multiple pdf loader (i.e. "PyPDFDirectoryLoader").
(Being a pioneer in LLM orchestration, LangChain has opened a wider scope for open-source collaborations; we admire the open-source revolution and thank @hwchase17 for it.)
### Suggestion:
_No response_ | Inconsistent naming of document loaders | https://api.github.com/repos/langchain-ai/langchain/issues/6947/comments | 1 | 2023-06-30T00:41:17Z | 2023-06-30T15:56:45Z | https://github.com/langchain-ai/langchain/issues/6947 | 1,781,674,685 | 6,947 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
From the [documentation](https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/few_shot_examples), it is not clear whether ‘FewShotPromptTemplate’ can be used with Chat models: in particular, it does not appear to be compatible with OpenAI’s suggested best practices (mentioned on the same page) of having alternating “name: example_user” and “name: example_$role” JSON passed to the OpenAI API for each few-shot example message passed in the call.
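
In the meantime, OpenAI's recommended shape can be built by hand: few-shot examples become alternating system messages carrying `name: example_user` / `name: example_assistant` fields. A standalone sketch of that message construction (plain dicts in the Chat Completions format, no LangChain classes; the function name is illustrative):

```python
def build_few_shot_messages(system, examples, user_input):
    """Build a Chat Completions message list with few-shot examples
    encoded as named system messages, per OpenAI's guidance."""
    messages = [{"role": "system", "content": system}]
    for user_text, assistant_text in examples:
        messages.append(
            {"role": "system", "name": "example_user", "content": user_text}
        )
        messages.append(
            {"role": "system", "name": "example_assistant", "content": assistant_text}
        )
    messages.append({"role": "user", "content": user_input})
    return messages

msgs = build_few_shot_messages(
    "Translate English to French.",
    [("cheese", "fromage"), ("bread", "pain")],
    "butter",
)
print(len(msgs))  # 6
```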
### Idea or request for content:
Is it planned to have a separate “ChatFewShotPromptTemplate” class? If so, perhaps a placeholder could be added to the documentation to make this clear. If not, perhaps the documentation could be updated to make the mismatch explicit so that the reader understands the limitations of the current FewShotPromptTemplate in the context of Chat models. | FewShotPromptTemplate with Chat models | https://api.github.com/repos/langchain-ai/langchain/issues/6946/comments | 1 | 2023-06-30T00:35:45Z | 2023-10-06T16:06:34Z | https://github.com/langchain-ai/langchain/issues/6946 | 1,781,669,365 | 6,946 |
[
"langchain-ai",
"langchain"
] | ### Feature request
[Brave Search](https://api.search.brave.com/app/dashboard) is an interesting new search engine.
It can be used in place of `Google Search`.
### Motivation
Users who are subscribers of Brave Search can use this.
### Your contribution
I can try to implement it if somebody is interested in this integration. | add `Brave` Search | https://api.github.com/repos/langchain-ai/langchain/issues/6939/comments | 2 | 2023-06-29T21:47:51Z | 2023-06-30T15:24:25Z | https://github.com/langchain-ai/langchain/issues/6939 | 1,781,557,474 | 6,939 |
[
"langchain-ai",
"langchain"
] | ### System Info
Chroma v0.2.36, python 3.9
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
settings = Settings(
    chroma_db_impl='duckdb+parquet',
    persist_directory="db",
    anonymized_telemetry=False
)

pages = self._load_single_document(file_path=file_path)
docs = text_splitter.split_documents(pages)
db = Chroma.from_documents(docs, embedding_function, client_settings=settings)
```
### Expected behavior
All files for the database should be created in the db directory when db.from_documents() is called. However, the parquet files are not created at that point; only the files in the index directory are.
The parquet files only show up later, when the application is killed (it is a Flask application, so between requests the DB is torn down and should therefore be persisted).
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.218
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction

### Expected behavior
The module should be found automatically. | Missing reference to o365_toolkit module in __init__.py of agent_toolkits | https://api.github.com/repos/langchain-ai/langchain/issues/6936/comments | 4 | 2023-06-29T19:23:25Z | 2023-09-28T16:20:50Z | https://github.com/langchain-ai/langchain/issues/6936 | 1,781,364,740 | 6,936 |
[
"langchain-ai",
"langchain"
] | ### System Info
The SageMaker Endpoint validation function currently throws a missing credential error if the region name is not provided in the input. However, it is crucial to perform input validation for the region name to provide the user with clear error information.
GitHub Issue Reference:
For the code reference related to this issue, please visit: [GitHub Link](https://github.com/hwchase17/langchain/blob/7dcc698ebf4ebf4eb7331d25cec279f402918629472b/langchain/llms/sagemaker_endpoint.py#L177)
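
The requested validation is small; a sketch of what it could look like (standalone code, not the actual `sagemaker_endpoint.py` implementation, and the region regex is an assumption about AWS naming):

```python
import re

def validate_region_name(region_name: str) -> str:
    """Raise a clear error when the AWS region is missing or malformed."""
    if not region_name:
        raise ValueError(
            "Region name is missing. Please enter valid region name. "
            "E.g: us-west-2."
        )
    # AWS regions look like us-west-2, ap-southeast-1, us-gov-west-1, ...
    if not re.fullmatch(r"[a-z]{2}(-[a-z]+)+-\d", region_name):
        raise ValueError(f"Invalid region name: {region_name!r}. E.g: us-west-2.")
    return region_name

print(validate_region_name("us-west-2"))  # us-west-2
```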
### Who can help?
@3coins
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This can be reproduced by creating a SageMaker Endpoint LLM in LangChain without specifying a region name. Please ensure that you have valid AWS credentials configured.
### Expected behavior
The error message must state
```Region name is missing. Please enter valid region name. E.g: us-west-2.``` | SageMaker Endpoint - Validations | https://api.github.com/repos/langchain-ai/langchain/issues/6934/comments | 3 | 2023-06-29T17:43:12Z | 2023-11-30T16:08:26Z | https://github.com/langchain-ai/langchain/issues/6934 | 1,781,255,515 | 6,934 |
[
"langchain-ai",
"langchain"
] | ### Feature request
A universal approach to adding OpenAI functions, specifically for output-format control, to `LLMChain`: create a class that takes in dict, JSON, pydantic, or even rail specs (from guardrails) and preprocesses them into OpenAI functions (function, parameters, and properties) for the `llm_kwargs` of the `LLMChain` class, in conjunction with the OpenAI function output parsers.
### Motivation
It is hard to use the function calling feature right now if you are not working with agents (can access using `format_tool_to_openai_function`) or a specific use case, such as the Extractor or QA chains. The idea is to create a general class that takes in dict, json, pydantic, or even rail specs (from guardrails) to preprocess classes or dictionaries into openai functions for better control of the output format.
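
A standalone sketch of the dict-to-function-schema direction (all names here are illustrative, not LangChain API; pydantic v1 models already expose `.schema()`, which yields almost this shape):

```python
def to_openai_function(name, description, fields):
    """Map a flat {field: (json_type, description)} spec to a `functions`
    entry in the shape expected by the OpenAI chat completions API."""
    return {
        "name": name,
        "description": description,
        "parameters": {
            "type": "object",
            "properties": {
                f: {"type": t, "description": d} for f, (t, d) in fields.items()
            },
            "required": list(fields),
        },
    }

fn = to_openai_function(
    "format_answer",
    "Return the answer in a fixed structure.",
    {"answer": ("string", "The answer text"),
     "confidence": ("number", "0-1 confidence score")},
)
print(fn["parameters"]["required"])  # ['answer', 'confidence']
```

The result could then be supplied as something like `llm_kwargs={"functions": [fn], "function_call": {"name": "format_answer"}}` to force structured output.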
### Your contribution
I can help with this, should I put this in the `docs/modules` folder? | General OpenAI Function Mapping from Pydantic, Dicts directly to function, parameters, and properties | https://api.github.com/repos/langchain-ai/langchain/issues/6933/comments | 9 | 2023-06-29T17:07:09Z | 2023-07-08T01:50:27Z | https://github.com/langchain-ai/langchain/issues/6933 | 1,781,214,707 | 6,933 |