issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 261k | issue_title stringlengths 1 925 | issue_comments_url stringlengths 56 81 | issue_comments_count int64 0 2.5k | issue_created_at stringlengths 20 20 | issue_updated_at stringlengths 20 20 | issue_html_url stringlengths 37 62 | issue_github_id int64 387k 2.46B | issue_number int64 1 127k |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: 0.0.320
Python 3.9.6
Issue:
The LLM output ` ```json \n{\n \"action\": \"...``` ` for a StructuredChatAgent using StructuredChatOutputParser.
The regex in the parser `re.compile(r"```(?:json)?\n(.*?)```", re.DOTALL)` does not allow for a space after `json`.
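A loosened pattern that would accept the trailing space (a sketch of one possible fix, not necessarily the right upstream change):

```python
import re

# Suggested loosened pattern: allow optional whitespace after the
# optional "json" language tag before the newline.
pattern = re.compile(r"```(?:json)?\s*\n(.*?)```", re.DOTALL)

text = '```json \n{\n  "action": "Final Answer"\n}\n```'
match = pattern.search(text)
print(match.group(1))
```

Both the spaced and unspaced variants match under this pattern.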
LLM: Claude v2
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
create an agent using StructuredChatAgent and a tool that is called by the agent.
If the output from the LLM contains a space after `json`, the regex fails to match and the parser falls back to returning `AgentFinish` instead of executing the tool.
### Expected behavior
The expected behaviour is that the output should match the regex. | StructuredChatOutputParser regex not accounting for space | https://api.github.com/repos/langchain-ai/langchain/issues/12158/comments | 1 | 2023-10-23T14:15:02Z | 2023-10-27T09:37:53Z | https://github.com/langchain-ai/langchain/issues/12158 | 1,957,280,320 | 12,158 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hi,
I am working on a student-teacher model. I am running into a problem where the agent does not generate a final answer after the expected observations. Below are my code and output.
Code
```
assignment_instruction_2 = """
1. Ask his name and then welcome him
2. Ask the topic he wants to understand
3. Ask about his understanding of the topic
4. Let him answer
5. Then take out the relevant information from the get_next_content tool
6. If it is incomplete then tell him the complete answer
"""
FORMAT_INSTRUCTIONS = f"""Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [get_topics, get_next_content, get_topic_tool]
Action Input: the input to the action
Observation: the detailed,at most comprehensive result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer based on my observation, i do not need anything more
Final Answer: the final answer to the original input question is the full detailed explanation from the Observation provided as bullet points.
You have to provide the answer maximum after 2 Thoughts.
"""
content_assignment = f"""You are the author of the book. You will be asked an assignment by the user, who is typically a student.
Help them to understand the topic according to the instructions delimited by /* */. Follow them sequentially.
/*{assignment_instruction_2} */"""
memory_assignment = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
llm_assignment = ChatOpenAI(openai_api_key=os.getenv("OPENAI_API_KEY"), temperature=0, model = "gpt-4")
agent_chain_assignment = initialize_agent(
    tools,
    llm_assignment,
    agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    memory=memory_assignment,
    max_iterations=3,
    # early_stopping_method="generate",
    format_instructions=FORMAT_INSTRUCTIONS,
    agent_kwargs={"system_message": content_assignment},
    handle_parsing_errors=_handle_error
)
print("<================= TOPIC Understand AGENT ====================>")
while True:
    user_message = input("User: ")
    if user_message == 'exit':
        break
    response = agent_chain_assignment.run(user_message)
    print("LLM : ", response)
    print("===============================")
```
Output:
````
<================= TOPIC Understand AGENT ====================>
User: hi, sagar
> Entering new AgentExecutor chain...
```json
{
  "action": "Final Answer",
  "action_input": "Hello Sagar, welcome! What topic would you like to understand better today?"
}
```
> Finished chain.
LLM :  Hello Sagar, welcome! What topic would you like to understand better today?
===============================
User: farming
> Entering new AgentExecutor chain...
Could not parse LLM output: That's a broad topic, Sagar. Could you tell me what you already understand about farming? This will help me provide you with the most relevant information.
Observation: Could not parse LLM output: That's a broad topic,
Thought:Could not parse LLM output: I'm sorry for the confusion, Sagar. You mentioned that you want to understand more about farming. Could you please specify what exactly you know about farming? This will help me to provide you with the most relevant information.
Observation: Could not parse LLM output: I'm sorry for the conf
Thought:Could not parse LLM output: I'm sorry for the confusion, Sagar. You mentioned that you want to understand more about farming. Could you please specify what exactly you know about farming? This will help me to provide you with the most relevant information.
Observation: Could not parse LLM output: I'm sorry for the conf
Thought:
> Finished chain.
LLM :  Agent stopped due to iteration limit or time limit.
===============================
````
It was supposed to output the final answer after the first answer (right after the user said "farming"). But it is running in a loop, and sometimes it also considers that the user has already answered.
Can I make some corrections in the code so that the agent produces the final answer right where it needs to?
Thanks,
Sagar
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Provided in the question itself
### Expected behavior
Provided in the explanation | Agent is not stopping after each answer | https://api.github.com/repos/langchain-ai/langchain/issues/12157/comments | 2 | 2023-10-23T13:32:40Z | 2024-02-08T16:15:15Z | https://github.com/langchain-ai/langchain/issues/12157 | 1,957,182,305 | 12,157 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hey folks, I think I stumbled on a bug (or I'm using langchain wrong).
Langchain version : 0.0.320
Platform Ubuntu 23.04
Python: 3.11.4
### Who can help?
@hwchase17 , @agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the following:
```python
from pprint import pprint

llm = llms.VertexAI(model_name="code-bison@001", max_output_tokens=1000, temperature=0.0)
prediction = llm.predict(""" write a fibonacci sequence in python""")
pprint(prediction)
```
### Expected behavior
We get a prediction
(adding more info since the form has ran out of fields)
I think the bug is in [llms/vertexai.py:301](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/llms/vertexai.py#L301C23-L301C23) . Variable res is a TextGenerationResponse as opposed to MultiCandidateTextGenerationResponse
Hence there are no "candidates" as you would expect from a chat model.
This happens because Google's SDK (`vertexai/language_models/_language_models.py`) method
`_ChatSessionBase._parse_chat_prediction_response`
returns a `MultiCandidateTextGenerationResponse`
but both `CodeChatSession` and `CodeGenerationModel` return a `TextGenerationResponse`
I think the fix might be replacing
`generations.append([_response_to_generation(r) for r in res.candidates])`
with
```
if self.is_codey_model:
    generations.append([_response_to_generation(res)])
else:
    generations.append([_response_to_generation(r) for r in res.candidates])
```
Happy to send a PR if it helps. | Langchain crashes when retrieving results from vertexai codey models | https://api.github.com/repos/langchain-ai/langchain/issues/12156/comments | 2 | 2023-10-23T13:02:18Z | 2024-02-08T16:15:20Z | https://github.com/langchain-ai/langchain/issues/12156 | 1,957,115,754 | 12,156 |
[
"langchain-ai",
"langchain"
] | ### System Info
`langchain==0.0.320`
### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```py
from langchain.vectorstores import MatchingEngine
texts = [
    "The cat sat on",
    "the mat.",
    "I like to",
    "eat pizza for",
    "dinner.",
    "The sun sets",
    "in the west.",
]
vector_store = MatchingEngine.from_components(
    texts=texts,
    project_id="<my_project_id>",
    region="<my_region>",
    gcs_bucket_uri="<my_gcs_bucket>",
    index_id="<my_matching_engine_index_id>",
    endpoint_id="<my_matching_engine_endpoint_id>",
)
vector_store.add_texts(texts=texts)
```
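An illustrative, dependency-free sketch of the suspected loop in `MatchingEngine.add_texts` (names assumed from the report, not copied from the actual source): each `json_` record is built per text, but the append is apparently missing upstream, which matches the reporter's one-line fix.

```python
def build_records(ids, texts, embed):
    # Build one record per (id, text) pair, appending each one --
    # the append is the line the reporter says is missing upstream.
    jsons = []
    for id_, text in zip(ids, texts):
        json_ = {"id": id_, "embedding": embed(text)}
        jsons.append(json_)
    return jsons

records = build_records(["0", "1"], ["The cat sat on", "the mat."], lambda t: [0.0, 0.0])
print(len(records))  # 2
```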
### Expected behavior
I think line 136 in matching_engine.py should be
```py
jsons.append(json_)
``` | MatchingEngine.add_texts() doesn't work | https://api.github.com/repos/langchain-ai/langchain/issues/12154/comments | 1 | 2023-10-23T12:28:00Z | 2023-11-19T18:10:42Z | https://github.com/langchain-ai/langchain/issues/12154 | 1,957,048,261 | 12,154 |
[
"langchain-ai",
"langchain"
] | ### System Info
`langchain==0.0.320`
Example file: https://drive.google.com/file/d/1zDj3VXohUO7x4udu9RY9KxWbzBUIrsvb/view?usp=sharing
```
from PyPDF2 import PdfReader
from langchain.text_splitter import CharacterTextSplitter
# Specify the PDF file to be processed
pdf = "Deep Learning.pdf"
# Initialize a PdfReader object to read the PDF file
pdf_reader = PdfReader(pdf)
# Initialize an empty string to store the extracted text from the PDF
text = ""
for i, page in enumerate(pdf_reader.pages):
    text += f" ### Page {i}:\n\n" + page.extract_text()
assert text.count(" ### ") == 100
text_splitter = CharacterTextSplitter(
    separator=" ### ",
)
chunks = text_splitter.split_text(text)
len(chunks) ## = 76
```
Length of chunks = 76 while there are 100 occurrences of the separator ` ### `. All the splits by ` ### ` look decent, so I'm not sure why they are being merged strangely. Please advise! Thanks!
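For what it's worth, this looks like `CharacterTextSplitter`'s intended merge step rather than a bug: after splitting on the separator, it greedily re-merges adjacent pieces up to `chunk_size` (default 4000 characters), so short pages get combined and the chunk count falls below the separator count. A rough, dependency-free sketch of that logic (not the actual implementation):

```python
def merge_splits(splits, chunk_size, separator=" ### "):
    # Greedily combine adjacent splits while the running chunk stays
    # under chunk_size, mimicking how the splitter re-merges pieces.
    chunks, current = [], ""
    for s in splits:
        candidate = s if not current else current + separator + s
        if len(candidate) <= chunk_size:
            current = candidate
        else:
            if current:
                chunks.append(current)
            current = s
    if current:
        chunks.append(current)
    return chunks

pages = [f"Page {i}: " + "x" * 300 for i in range(100)]
print(len(merge_splits(pages, chunk_size=4000)))  # far fewer than 100
```

Passing a smaller `chunk_size` to `CharacterTextSplitter` should keep the 100 splits separate.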
Thank you so much!
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from PyPDF2 import PdfReader
from langchain.text_splitter import CharacterTextSplitter
# Specify the PDF file to be processed
pdf = "Deep Learning.pdf"
# Initialize a PdfReader object to read the PDF file
pdf_reader = PdfReader(pdf)
# Initialize an empty string to store the extracted text from the PDF
text = ""
for i, page in enumerate(pdf_reader.pages):
    text += f" ### Page {i}:\n\n" + page.extract_text()
assert text.count(" ### ") == 100
text_splitter = CharacterTextSplitter(
    separator=" ### ",
)
chunks = text_splitter.split_text(text)
len(chunks) ## = 76
```
### Expected behavior
`len(chunks) == 100` | CharacterTextSplitter return incorrect chunks | https://api.github.com/repos/langchain-ai/langchain/issues/12151/comments | 2 | 2023-10-23T09:37:40Z | 2024-02-08T16:15:26Z | https://github.com/langchain-ai/langchain/issues/12151 | 1,956,756,560 | 12,151 |
[
"langchain-ai",
"langchain"
] | ### System Info
Ubuntu: 22.04.3 LTS
python: 3.10
pip: 22.0.2
langchain: 0.0.319
vector stores: faiss
llm model: llama-2-13b-chat.Q4_K_M.gguf and mistral-7b-openorca.Q4_K_M.gguf
embeddings model: thenlper/gte-large
### Who can help?
anyone
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Prompt is created via PromptTemplate.
In Prompt:
```
You are a financial consultant. Try to answer the question based on the information below.
If you cannot answer the question based on the information, say that you cannot find the answer or are unable to find the answer.
Therefore, try to understand the context of the question and answer only on the basis of the information given. Do not write answers that are irrelevant.
If you do not have enough information to end a sentence with a period, then do not write that sentence.
Context: {context}
Question: {question}
Helpful answer:
```
2. Load vectoron storage using HuggingFaceEmbeddings and FAISS:
```
embeddings = HuggingFaceEmbeddings(model_name="thenlper/gte-large", model_kwargs={'device': 'cpu'})
db = FAISS.load_local(persist_directory, embeddings)
retriever = db.as_retriever(search_kwargs={"k": 2})
```
3. Load the LLM via LlamaCpp or CTransformers
LlamaCpp code:
```
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm = LlamaCpp(
    model_path=model_path,
    n_gpu_layers=-1,
    use_mlock=True,
    n_batch=512,
    n_ctx=2048,
    temperature=0.8,
    f16_kv=True,
    callback_manager=callback_manager,
    verbose=True,
    top_k=1
)
```
CTransformers Code:
```
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm = CTransformers(
    model=model_path,
    model_type="llama",
    config={'max_new_tokens': 2048, 'temperature': 0.8},
    callbacks=callback_manager
)
```
4. Create Retriever using RetrievalQA:
```
qa = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
    return_source_documents=return_source_documents,
    chain_type_kwargs={"prompt": custom_prompt}
)
```
5. Query the created retriever: qa("your question")
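One likely culprit worth checking, independent of the steps above: langchain's `LlamaCpp` wrapper defaults `max_tokens` to 256, which cuts long completions short, and with CTransformers the model's `context_length` (512 by default for some models) must cover the prompt plus the completion. The relevant parameter tweaks, sketched as plain dicts with illustrative (not tuned) values:

```python
# Parameter names follow langchain's LlamaCpp / CTransformers wrappers;
# the values here are illustrative, not tuned.
llamacpp_kwargs = {
    "n_ctx": 2048,
    "max_tokens": 1024,  # LlamaCpp default is 256, which truncates long answers
}
ctransformers_config = {
    "max_new_tokens": 1024,
    "context_length": 2048,  # a 512 default would explain context-length errors
}
print(llamacpp_kwargs, ctransformers_config)
```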
### Expected behavior
I expect to receive a complete response from the LLM.
However, when using LlamaCpp, the response reaches about 1000 characters and then stops; the last sentence from the LLM is abruptly cut off.
When CTransformers is used, the response can reach varying lengths, after which the error "Number of tokens (513) exceeded maximum context length (512)" occurs repeatedly. After that I get a response whose beginning is correct but then the same words repeat; the repetition starts from the moment the token-count errors begin. | LlamaCpp truncates the response from LLM, and CTransformers gives a token overrun error | https://api.github.com/repos/langchain-ai/langchain/issues/12150/comments | 2 | 2023-10-23T09:32:40Z | 2024-02-13T16:10:07Z | https://github.com/langchain-ai/langchain/issues/12150 | 1,956,747,660 | 12,150 |
[
"langchain-ai",
"langchain"
] | ### System Info
python == 3.10.12
langchain == 0.0.312
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
open_llm = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY, model_name='gpt-3.5-turbo-16k', max_tokens=3000)
# test chain (LCEL)
human_message = HumanMessagePromptTemplate.from_template('test prompt: {input}')
system_message = SystemMessagePromptTemplate.from_template('system')
all_messages = ChatPromptTemplate.from_messages([system_message, human_message])
test_chain = all_messages | open_llm | StrOutputParser()
# LLMChain
class MyLLMChain(LLMChain):
    output_key: str = "output"  #: :meta private:
prompt_infos = [
    {
        "name": "mastermind",
        "description": "for general questions.",
        "chain": MyLLMChain(
            llm=openai_chatbot,
            prompt=PromptTemplate(template='''{input}''', input_variables=["input"]),
        ),
    },
    {
        "name": "test",
        "description": "for testing",
        "chain": test_chain,
    },
]
destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
destinations_str = "\n".join(destinations)
router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations_str)
router_prompt = PromptTemplate(
    template=router_template,
    input_variables=["input"],
    # output_parser=OpenAIFunctionsAgentOutputParser()
    # output_parser=RouterOutputParser(),
    output_parser=OutputFixingParser.from_llm(parser=RouterOutputParser(), llm=openai_chatbot),
)
router_chain = LLMRouterChain.from_llm(openai_chatbot, router_prompt)
destination_chains = {}
for p_info in prompt_infos:
    name = p_info["name"]
    chain = p_info["chain"]
    destination_chains[name] = chain
default_chain = ConversationChain(llm=openai_chatbot, output_key="output")
agent_yan_chain = MultiPromptChain(
    router_chain=router_chain,
    destination_chains=destination_chains,
    default_chain=default_chain,
    verbose=True
)
```
Error message:
```
ValidationError: 1 validation error for MultiPromptChain
destination_chains -> test
Can't instantiate abstract class Chain with abstract methods _call, input_keys, output_keys (type=type_error)
```
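For context: `MultiPromptChain.destination_chains` expects `Chain` instances, and an LCEL pipeline (`RunnableSequence`) is not a `Chain` subclass, hence the validation error. Below is a dependency-free sketch of one possible adapter pattern; in real code you would subclass `langchain.chains.base.Chain` and implement the same three members, but the shape is the same.

```python
from typing import Any, Dict, List

class FakeRunnable:
    """Stand-in for an LCEL RunnableSequence (anything with .invoke)."""
    def invoke(self, inputs: Dict[str, Any]) -> str:
        return f"echo: {inputs['input']}"

class RunnableChainAdapter:
    """Sketch of a Chain-like wrapper exposing a runnable through the
    classic Chain interface (input_keys / output_keys / _call)."""
    def __init__(self, runnable: Any):
        self.runnable = runnable

    @property
    def input_keys(self) -> List[str]:
        return ["input"]

    @property
    def output_keys(self) -> List[str]:
        return ["output"]

    def _call(self, inputs: Dict[str, Any]) -> Dict[str, str]:
        return {"output": self.runnable.invoke(inputs)}

adapter = RunnableChainAdapter(FakeRunnable())
print(adapter._call({"input": "hi"}))  # {'output': 'echo: hi'}
```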
### Expected behavior
LCEL should work with MultiPromptChain | LCEL not working with MultiPromptChain | https://api.github.com/repos/langchain-ai/langchain/issues/12149/comments | 7 | 2023-10-23T09:10:57Z | 2024-06-26T05:16:03Z | https://github.com/langchain-ai/langchain/issues/12149 | 1,956,707,298 | 12,149 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.11, langchain 0.0.315 and mac
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create a FastAPI endpoint/router
```python
from fastapi import APIRouter, Response
from pydantic import BaseModel
from typing import Optional
from langchain.schema import Document
router = APIRouter()
class QueryOut(BaseModel):
    answer: str
    sources: list[Document]

@router.post("/query")
async def get_answer():
    sources = [Document(page_content="foo", metadata={"baa": "baaaa"})]
    return QueryOut(answer="answer", sources=sources)
```
Then call it and boom
```bash
File "/Users/FRANCESCO.ZUPPICHINI/Documents/dbi_alphaai_knowledge_bot/.venv/lib/python3.11/site-packages/pydantic/main.py", line 164, in __init__
__pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)
TypeError: BaseModel.validate() takes 2 positional arguments but 3 were given
```
Is this linked to the fact that langchain is still using pydantic v1? If so, I love you guys, but put that 30M to good use.
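A common workaround, assuming the clash really is langchain's pydantic-v1 `Document` embedded inside a FastAPI/pydantic-v2 model: mirror the document in a plain local model and convert before returning. A sketch (`DocumentOut` is a made-up name, not a langchain class):

```python
from pydantic import BaseModel

class DocumentOut(BaseModel):
    # Local mirror of langchain's Document fields.
    page_content: str
    metadata: dict = {}

class QueryOut(BaseModel):
    answer: str
    sources: list[DocumentOut]

# Convert langchain Documents before returning, e.g.:
# sources = [DocumentOut(page_content=d.page_content, metadata=d.metadata) for d in docs]
out = QueryOut(answer="answer", sources=[DocumentOut(page_content="foo", metadata={"baa": "baaaa"})])
print(out.sources[0].page_content)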
### Expected behavior
Should not explode | Langchain Document schema explodes with FastAPI | https://api.github.com/repos/langchain-ai/langchain/issues/12147/comments | 2 | 2023-10-23T07:35:17Z | 2023-10-23T13:17:32Z | https://github.com/langchain-ai/langchain/issues/12147 | 1,956,539,711 | 12,147 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello, I have a use case where there are different types of documents. I can parse the documents using langchain document loaders, but there are also images in these documents. I want to store the images as metadata so that, when an answer is generated from a context chunk, the associated image is shown as well. Please help.
### Suggestion:
_No response_ | How to show images from PDFs, PPT, DOCs documents as part of answer? | https://api.github.com/repos/langchain-ai/langchain/issues/12144/comments | 2 | 2023-10-23T02:16:49Z | 2024-02-08T16:15:35Z | https://github.com/langchain-ai/langchain/issues/12144 | 1,956,221,942 | 12,144 |
[
"langchain-ai",
"langchain"
] | ### System Info
version:
- langchain-0.0.320
- py311
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Running the [example code](https://python.langchain.com/docs/integrations/retrievers/arxiv)
```python
from langchain.retrievers import ArxivRetriever
retriever = ArxivRetriever(load_max_docs=2)
docs = retriever.get_relevant_documents(query="1605.08386")
```
The above code gives the error:
`AttributeError: partially initialized module 'arxiv' has no attribute 'Search' (most likely due to a circular import)`
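Worth ruling out the usual cause of a "partially initialized module" error like this: a local file named `arxiv.py` shadowing the PyPI `arxiv` package. A quick, environment-agnostic check:

```python
import importlib.util

spec = importlib.util.find_spec("arxiv")
# If origin points inside your project rather than site-packages,
# a local arxiv.py is shadowing the real package; rename that file.
print(spec.origin if spec else "arxiv not installed")
```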
### Expected behavior
Should print out
```
{'Published': '2016-05-26',
'Title': 'Heat-bath random walks with Markov bases',
'Authors': 'Caprice Stanley, Tobias Windisch',
'Summary': 'Graphs on lattice points are studied whose edges come from a finite set of\nallowed moves of arbitrary length. We show that the diameter of these graphs on\nfibers of a fixed integer matrix can be bounded from above by a constant. We\nthen study the mixing behaviour of heat-bath random walks on these graphs. We\nalso state explicit conditions on the set of moves so that the heat-bath random\nwalk, a generalization of the Glauber dynamics, is an expander in fixed\ndimension.'}
``` | AttributeError: partially initialized module 'arxiv' has no attribute 'Search' (most likely due to a circular import) | https://api.github.com/repos/langchain-ai/langchain/issues/12143/comments | 2 | 2023-10-23T01:04:04Z | 2023-12-06T02:37:31Z | https://github.com/langchain-ai/langchain/issues/12143 | 1,956,153,788 | 12,143 |
[
"langchain-ai",
"langchain"
] | ### System Info
The latest version of langchain locks the peer dependency "googleapis" to version 126.0.1, and I'm wondering why it has to be googleapis v126.0.1; it introduces tech debt.
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Try to install both the latest version of langchain and the latest version of googleapis in Node.js via npm.
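As a stop-gap while the pin exists (assuming npm ≥ 8), you can force a single `googleapis` version across the dependency tree via the `overrides` field in your own `package.json`; the version number below is only an example:

```json
{
  "overrides": {
    "googleapis": "^128.0.0"
  }
}
```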
### Expected behavior
langchain shouldn't prevent the latest googleapis from being installed. | latest version langchain locks peer dependency: "googleapis" version to 126.0.1 | https://api.github.com/repos/langchain-ai/langchain/issues/12142/comments | 2 | 2023-10-23T00:55:32Z | 2024-01-31T16:33:43Z | https://github.com/langchain-ai/langchain/issues/12142 | 1,956,147,176 | 12,142 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
On this page
This example:
```
from pydantic import BaseModel, Field
from langchain.tools import tool

class SearchInput(BaseModel):
    query: str = Field(description="should be a search query")

@tool("search", return_direct=True, args_schema=SearchInput)
def search_api(query: str) -> str:
    """Searches the API for the query."""
    return "Results"
search_api
```
Fails with:
ValidationError: 1 validation error for StructuredTool
args_schema
subclass of BaseModel expected (type=type_error.subclass; expected_class=BaseModel)
### Idea or request for content:
Nothing is missing, but the example fails.
Name: langchain
Version: 0.0.311 | DOC: Example from Custom Tool documentation fails | https://api.github.com/repos/langchain-ai/langchain/issues/12138/comments | 2 | 2023-10-22T18:25:42Z | 2023-10-22T18:48:16Z | https://github.com/langchain-ai/langchain/issues/12138 | 1,956,004,892 | 12,138 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
When starting Docusaurus in a local container environment using the `make docs_build` command, it runs at `localhost:3000`, but a connection from the host machine is not possible.
The container has both `poetry` and `yarn` installed.
Even though port 3000 is forwarded to the host, the connection is not established.
https://github.com/langchain-ai/langchain/blob/master/docs/package.json#L7
I am proficient with Python but struggle with React, which might be influencing my troubleshooting.
Thank you!!
### Idea or request for content:
By adding the `--host 0.0.0.0` option to the `start` command in `package.json`, the issue might be resolved.
```diff
-"start": "rm -rf ./docs/api && docusaurus start",
+"start": "rm -rf ./docs/api && docusaurus start --host 0.0.0.0",
```
- With this change, it would be possible to access the port that's being forwarded from within the container to the host environment.
- However, using the `--host 0.0.0.0` option can have security implications, so caution is advised especially in production environments. | DOC: Issue Connecting to Docusaurus in Local Container Environment | https://api.github.com/repos/langchain-ai/langchain/issues/12127/comments | 1 | 2023-10-22T06:06:45Z | 2024-02-06T16:16:26Z | https://github.com/langchain-ai/langchain/issues/12127 | 1,955,775,678 | 12,127 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Microsoft OneNote loader like EverNoteLoader or ObsidianLoader
### Motivation
N/A
### Your contribution
N/A | Is there any loader for Microsoft OneNote like EverNoteLoader or ObsidianLoader | https://api.github.com/repos/langchain-ai/langchain/issues/12125/comments | 10 | 2023-10-22T04:36:08Z | 2024-06-07T11:49:38Z | https://github.com/langchain-ai/langchain/issues/12125 | 1,955,757,359 | 12,125 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I found that using a `RetrievalQA` chain for streaming outputs a gibberish response. For example, using a `RetrievalQA` with the code below on the `state_of_the_union.txt` example:
```
doc_chain = load_qa_chain(
    llm=ChatOpenAI(
        streaming=True,
        openai_api_key=api_key,
        callbacks=[StreamingStdOutCallbackHandler()]
    ),
    chain_type="map_reduce",
    verbose=False,
)
retrieval_qa = RetrievalQA(
    combine_documents_chain=doc_chain,
    retriever=retriever,
)
```
With this call: `retrieval_qa.run("What did the president say about Ketanji Brown Jackson")` outputs this streamed response:
```
"And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nationโs top legal minds, who will continue Justice Breyerโs legacy of excellence.""And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nationโs top legal minds, who will continue Justice Breyerโs legacy of excellence.""And I did that 4 days ago, when I nominated Circuit Court of Appeals Judge Ketanji Brown Jackson. One of our nationโs top legal minds, who will continue Justice Breyerโs legacy of excellence."The given portion of the document does not mention anything about what the president said about Ketanji Brown Jackson.The given portion of the document does not provide any information about what the president said about Ketanji Brown Jackson.
```
While using `VectorDBChain`:
```
qa = VectorDBQA.from_chain_type(
    llm=ChatOpenAI(
        streaming=True,
        openai_api_key=api_key,
        callbacks=[StreamingStdOutCallbackHandler()]
    ),
    chain_type="map_reduce",
    vectorstore=db
)
```
Outputs this:
```
The President said that he nominated Circuit Court of Appeals Judge Ketanji Brown Jackson, who is one of our nation's top legal minds and will continue Justice Breyer's legacy of excellence.
```
Is there any reason as to why the `VectorDBQA` chains have been deprecated in favour of the `RetrievalQA` chains? At first I thought the gibberish streaming was related to using PGVector, but I even tried it with Chroma and am having the same issue.
### Suggestion:
_No response_ | Issue: `VectorDBQA` is a better chain than `RetrievalQA` when it comes to streaming model response | https://api.github.com/repos/langchain-ai/langchain/issues/12124/comments | 2 | 2023-10-22T04:31:46Z | 2024-02-06T16:16:31Z | https://github.com/langchain-ai/langchain/issues/12124 | 1,955,756,498 | 12,124 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Feature request to Integrate a new Langchain tool that make recommandtions on steam games based on user's Steam ID and provide game information based on given Steam game name.
### Motivation
We recognize the current challenges users face when discovering games on Steamโcluttered search results and lack of tailored information. To revolutionize this experience, we propose integrating Langchain, offering personalized game recommendations based on a user's Steam ID and comprehensive game details based on specific titles.
Langchain's algorithm will provide personalized game suggestions aligned with users' preferences, eliminating irrelevant options. It'll also furnish detailed game insightsโprices, discounts, popularity, and latest newsโfor informed decision-making.
This integration aims to streamline game discovery, enhance user satisfaction, and foster a vibrant gaming community on Steam. We're eager to discuss this enhancement and its potential to transform the Steam experience.
### Your contribution
We are a group of 4 looking to contribute to Langchain as a course project and will be creating a PR for this issue in mid November. | Feature: Steam game recommendation tool | https://api.github.com/repos/langchain-ai/langchain/issues/12120/comments | 8 | 2023-10-21T20:44:10Z | 2023-12-15T04:37:11Z | https://github.com/langchain-ai/langchain/issues/12120 | 1,955,656,927 | 12,120 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.285, python 3.11.5, Windows 11
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.embeddings import HuggingFaceInstructEmbeddings
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.llms import OpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.document_loaders import TextLoader
from langchain.memory import ConversationBufferMemory
from mysecrets import secrets
loader = TextLoader("./sotu.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.split_documents(documents)
model_name = "hkunlp/instructor-large"
model_kwargs = {'device': 'cpu'}
encode_kwargs = {'normalize_embeddings': True}
embed_instruction = "Represent the document for summary"
embedding_function = HuggingFaceInstructEmbeddings(
    model_name=model_name,
    model_kwargs=model_kwargs,
    encode_kwargs=encode_kwargs,
    embed_instruction=embed_instruction,
)
vectorstore = Chroma.from_documents(documents, embedding_function)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
llm = OpenAI(openai_api_key=secrets["OPENAI-APIKEY"],temperature=0)
qa = ConversationalRetrievalChain.from_llm(llm=llm, retriever=vectorstore.as_retriever(), memory=memory, return_source_documents=True)
query = "What did the president say about Ketanji Brown Jackson"
result = qa({"question": query})
```
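For reference, the failing check can be modeled in plain Python. This is a simplified sketch of the logic in `chat_memory.py`, not the actual source: without an explicit output key, the memory expects exactly one output, so a chain returning both `answer` and `source_documents` trips it.

```python
def get_output_key(outputs, output_key=None):
    # Simplified model of langchain's chat_memory check: with no explicit
    # output_key, exactly one output is expected.
    if output_key is None:
        if len(outputs) != 1:
            raise ValueError(f"One output key expected, got {sorted(outputs)}")
        return next(iter(outputs))
    return output_key


outputs = {"answer": "...", "source_documents": []}
try:
    get_output_key(outputs)  # two keys -> ValueError
except ValueError as err:
    print(err)

# Pinning the key is what the commonly suggested workaround does, e.g.
# ConversationBufferMemory(..., output_key="answer"):
print(get_output_key(outputs, output_key="answer"))  # answer
```

If that workaround applies here, passing `output_key="answer"` when constructing the memory should let it ignore the extra `source_documents` key.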
### Expected behavior
When `return_source_documents` is set to False, code runs as intended. When ConversationalRetrievalChain uses the default value for `memory`, code runs as intended. If ConversationalRetrievalChain is used with memory and source documents are to be returned, the code fails since chat_memory.py (https://github.com/langchain-ai/langchain/blame/master/libs/langchain/langchain/memory/chat_memory.py) is expecting only one key. | ConversationalRetrievalChain cannot return source documents when using ConversationBufferWindowMemory | https://api.github.com/repos/langchain-ai/langchain/issues/12118/comments | 2 | 2023-10-21T20:06:18Z | 2023-10-22T00:11:00Z | https://github.com/langchain-ai/langchain/issues/12118 | 1,955,647,569 | 12,118 |
[
"langchain-ai",
"langchain"
] | ### Feature request
There seems to be a conflation between configuration code and inheritance trees that makes it nearly impossible to extend Langchain functionality without recreating the entire class inheritance. I ran into this almost immediately when trying to customize agents and executors to my application's needs while still taking advantage of the pre-assembled styles (e.g. zero-shot ReAct). I had to recreate my style of zero-shot executor and agent by copying the parameters from the subclasses into my forked instances of the base classes.
My proposal is to decouple these two concepts and have a single inheritance tree for functionality (e.g. agents, chains) and a separate notion of a "config" class that can hold the parameterization for an object. The factory methods on the core classes can then accept a config parameter to create an instance with the desired parameterization. This enables extension in two dimensions: any core object (agent, chain, etc.) x any configuration. Furthermore, new configurations can easily be created by end users, or existing ones extended (maybe you want to swap out one parameter from a predefined config without copy-pasting the whole code).
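A rough sketch of the proposed shape, using hypothetical names (this is not langchain's actual API):

```python
from dataclasses import dataclass, replace


@dataclass(frozen=True)
class AgentConfig:
    """Parameterization lives here, outside the class hierarchy."""
    prompt_prefix: str = "Answer the question using the tools below."
    max_iterations: int = 5


class Agent:
    """One functional class; behavior flavors come from configs."""

    def __init__(self, config: AgentConfig):
        self.config = config

    @classmethod
    def from_config(cls, config: AgentConfig) -> "Agent":
        return cls(config)


# A predefined "zero-shot"-style config...
ZERO_SHOT = AgentConfig(prompt_prefix="Think step by step.", max_iterations=3)

# ...that end users can tweak without subclassing or copy-pasting:
patched = replace(ZERO_SHOT, max_iterations=10)
agent = Agent.from_config(patched)
print(agent.config.max_iterations)  # 10
```

This keeps one inheritance tree for functionality while configs multiply freely; swapping a single parameter is a `replace()` call rather than a new subclass.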
### Motivation
Architectural improvement and future extensibility of the framework
### Your contribution
PR forthcoming with a POC when I have some time to abstract what I did for my projects into the langchain source but wanted to seed this for discussion! | Architecture: Decouple Configuration from Inheritance for better extensibility | https://api.github.com/repos/langchain-ai/langchain/issues/12117/comments | 2 | 2023-10-21T19:33:24Z | 2024-02-06T16:16:36Z | https://github.com/langchain-ai/langchain/issues/12117 | 1,955,638,250 | 12,117 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Feature request to integrate the SendGrid API into Langchain for enhanced email communication and notifications.
**Relevant links:**
1. [SendGrid API Documentation](https://sendgrid.com/docs/API_Reference/index.html): The official documentation for the SendGrid API, offering comprehensive information on how to use the API for sending emails and managing email communication.
2. [SendGrid GitHub Repository](https://github.com/sendgrid/sendgrid-python): The GitHub repository for SendGrid's official Python library, which can be utilized for API integration.
### Motivation
Langchain is committed to offering a seamless platform for language and communication-related tools. By integrating the SendGrid API, we aim to bring advanced email communication capabilities to the platform, enhancing its utility and user experience.
SendGrid is a renowned email platform that provides a reliable and efficient way to send, receive, and manage emails. By incorporating the SendGrid API into Langchain, we enable users to send emails, manage notifications, and enhance their communication within the platform.
Our goal is to provide Langchain users with a feature-rich email system that allows them to send notifications, updates, and alerts directly from Langchain. This integration will streamline communication, offering users a seamless experience and the ability to keep stakeholders informed and engaged through email.
In summary, the SendGrid API integration project is designed to extend Langchain's capabilities by incorporating a powerful email communication tool. This will help users effectively manage email notifications and stay connected with stakeholders within the platform.
### Your contribution
We are 4 University of Toronto students and are very interested in contributing to Langchain for a course project. We will be
creating a PR that implements this feature sometime in mid November. | SendGrid API Integration For Enhanced Email Communication | https://api.github.com/repos/langchain-ai/langchain/issues/12116/comments | 2 | 2023-10-21T19:00:04Z | 2024-02-09T16:13:23Z | https://github.com/langchain-ai/langchain/issues/12116 | 1,955,627,328 | 12,116 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Request to Integrate Stack Exchange API into Langchain for enhanced information access and interaction.
**Relevant links:**
1. **Stack Exchange API Documentation**: The official documentation for the Stack Exchange API, providing detailed information on how to interact with the API and retrieve data.
[Stack Exchange API Documentation](https://api.stackexchange.com/docs)
2. **Stack Exchange API Client GitHub Repository**: Community-supported libraries or tools that interact with the API.
[Stack Exchange API Client GitHub Repository](https://github.com/benatespina/StackExchangeApiClient)
### Motivation
Langchain aims to provide a comprehensive and seamless platform for accessing and interacting with a wide array of linguistic and language-related resources. In this context, integrating the Stack Exchange API would greatly enhance the utility of the platform.
Stack Exchange hosts a plethora of specialized communities, each dedicated to specific domains of knowledge and expertise. These communities generate valuable content in the form of questions, answers, and discussions. By integrating the Stack Exchange API into Langchain, we can enable users to access this wealth of information directly from within the platform.
What we aim to provide is a convenient gateway for Langchain users to explore Stack Exchange communities, search for relevant questions and answers, and actively participate in discussions. This integration would not only simplify the process of finding authoritative information on various topics but also facilitate direct engagement with experts and enthusiasts in their respective fields.
In summary, our Stack Exchange API integration project is designed to enrich the Langchain experience by offering a direct link to the vast knowledge repositories of Stack Exchange communities. It will empower users to seamlessly navigate between these platforms, harnessing the collective wisdom of these communities for their linguistic and language-related endeavors.
### Your contribution
We are 4 University of Toronto students and are very interested in contributing to Langchain for a course project. We will be
creating a PR that implements this feature sometime in mid November. | Stack Exchange API Integration | https://api.github.com/repos/langchain-ai/langchain/issues/12115/comments | 2 | 2023-10-21T18:49:17Z | 2024-02-06T16:16:46Z | https://github.com/langchain-ai/langchain/issues/12115 | 1,955,624,167 | 12,115 |
[
"langchain-ai",
"langchain"
] | ### System Info
#### Description:
I encountered an issue with the UnstructuredURLLoader class from the "langchain" library, specifically in the libs/langchain/langchain/document_loaders/url.py module. When trying to handle exceptions for a failed request, I observed that the exception raised by the library doesn't inherit from the base class Exception. This makes it challenging to handle exceptions properly in my code.
#### **Actual Behavior**:
The exception raised by the **`UnstructuredURLLoader`** doesn't inherit from the base class **`Exception`**, causing difficulty in handling exceptions effectively.
#### **Error Message**:
```Error fetching or processing https://en.wikipesdfdia.org/, exception: HTTPSConnectionPool(host='en.wikipesdfdia.org', port=443): Max retries exceeded with url: / (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x000002113AD12F10>: Failed to resolve 'en.wikipesdfdia.org' ([Errno 11001] getaddrinfo failed)"))```
### Who can help?
@eyurtsev @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
```py
from langchain.document_loaders import UnstructuredURLLoader
from urllib3 import HTTPSConnectionPool
url = 'https://en.wikipesdfdia.org/'
try:
unstructured_loader = UnstructuredURLLoader([url])
all_content = unstructured_loader.load()
print("Request was successful")
except HTTPSConnectionPool as e:
# Handle HTTPConnectionPool exceptions
print(f"An HTTPConnectionPool exception occurred: {e}")
except Exception as e:
# Handle other exceptions
print(f"An unexpected error occurred: {e}")
```
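For context, the `except` clauses above are never reached: the loader appears to catch request failures internally and only log them (hence the printed "Error fetching or processing ..." message). As a sketch of the behavior being asked for, here is a hypothetical loader loop (`load_urls` and `fake_fetch` are illustrations, not library code) that lets the caller decide:

```python
def load_urls(urls, fetch, continue_on_failure=True):
    """Fetch each URL; either re-raise failures or collect them."""
    docs, errors = [], []
    for url in urls:
        try:
            docs.append(fetch(url))
        except Exception as exc:
            if not continue_on_failure:
                raise  # propagates an ordinary Exception subclass
            errors.append((url, exc))
    return docs, errors


def fake_fetch(url):
    if "wikipesdfdia" in url:
        raise ConnectionError(f"failed to resolve host for {url}")
    return f"<content of {url}>"


docs, errors = load_urls(
    ["https://en.wikipedia.org/", "https://en.wikipesdfdia.org/"], fake_fetch
)
print(len(docs), len(errors))  # 1 1
```

Note also that urllib3's `HTTPSConnectionPool` is a connection-pool class, not an exception, so that `except` clause would never match even if the error did propagate; catching `requests.exceptions.RequestException` (or urllib3's exception types, which do inherit from `Exception`) is likely closer to the intent.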
### Expected behavior
I expected the exception raised by the **`UnstructuredURLLoader`** to be derived from the base class **`Exception`**, which would allow for proper exception handling. | Issue with Exception Handling: UnstructuredURLLoader Does Not Raise Exceptions Inheriting from Base Class Exception | https://api.github.com/repos/langchain-ai/langchain/issues/12112/comments | 2 | 2023-10-21T16:52:54Z | 2024-02-06T16:16:51Z | https://github.com/langchain-ai/langchain/issues/12112 | 1,955,575,850 | 12,112 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain:0.0.319
Python:3.11.2
System: macOS 13.5.2 arm64
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I implemented a generic module to handle many types of chat models. I executed the following script after I set `OPENAI_API_KEY=xxx` in `.env` file.
```python
from dataclasses import dataclass, field
from langchain.schema.language_model import BaseLanguageModel
from langchain.chat_models import ChatOpenAI
from dotenv import load_dotenv
# Load environment variables
load_dotenv()
@dataclass
class ModelConfig:
    """Model Configuration"""

    model: BaseLanguageModel
    arguments: dict = field(
        default_factory=dict
    )  # This could be any arguments for a BaseLanguageModel child class

    def init_model(self) -> BaseLanguageModel:
        return self.model(**self.arguments)


if __name__ == "__main__":
    model = ModelConfig(model=ChatOpenAI, arguments=dict())
    print(model)
```
And then the output contains **a leaked API key**!
```sh
<class 'openai.api_resources.chat_completion.ChatCompletion'> openai_api_key=xxxx>
```
### Expected behavior
My idea off the top of my head is to use, say, a `__str__` override inside the original class. But that seems like just a workaround for the `ChatOpenAI` class, so a more generic feature in `BaseLanguageModel` would be more decent: the base class could have a common property that handles the API token across all LLM models, where that property is a special class that can hide the value when other classes and functions access it.
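As an illustration of that direction, here is a minimal stdlib sketch of such a wrapper (pydantic's `SecretStr` takes the same approach); the names are hypothetical:

```python
from dataclasses import dataclass


class Secret:
    """Wraps a sensitive value so repr()/str() never leak it."""

    def __init__(self, value):
        self._value = value

    def get_secret_value(self):
        return self._value

    def __repr__(self):
        return "Secret('**********')"

    __str__ = __repr__


@dataclass
class ModelConfig:
    api_key: Secret


cfg = ModelConfig(api_key=Secret("sk-super-secret"))
print(cfg)  # ModelConfig(api_key=Secret('**********'))
assert "sk-super-secret" not in repr(cfg)
```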
https://github.com/langchain-ai/langchain/blob/08f5e6b8012f5eda2609103f33676199a3781a15/libs/langchain/langchain/chat_models/openai.py#L181-L238
## Similar issue.
#8499 | API Key Leakage in the BaseLanguageModel | https://api.github.com/repos/langchain-ai/langchain/issues/12110/comments | 3 | 2023-10-21T16:08:21Z | 2024-02-10T16:11:07Z | https://github.com/langchain-ai/langchain/issues/12110 | 1,955,553,596 | 12,110 |
[
"langchain-ai",
"langchain"
] | ### System Info
When upgrading from version 0.317 to 0.318, there is a bug regarding the Pydantic model validations. This issue doesn't happen in versions before 0.318.
File "...venv/lib/python3.10/site-packages/langchain/retrievers/google_vertex_ai_search.py", line 268, in __init__
self._serving_config = self._client.serving_config_path(
File "pydantic/main.py", line 357, in pydantic.main.BaseModel.__setattr__
ValueError: "GoogleVertexAISearchRetriever" object has no field "_serving_config"
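For context, the `ValueError` comes from pydantic v1's `BaseModel.__setattr__`, which rejects assignment to attributes that were not declared as fields (private attributes must be declared, e.g. via `PrivateAttr`). A minimal stdlib mimic of that check, purely for illustration:

```python
class StrictModel:
    """Simplified stand-in for pydantic v1's __setattr__ behavior."""

    __fields__ = {"project_id", "search_engine_id"}

    def __setattr__(self, name, value):
        if name not in self.__fields__:
            raise ValueError(
                f'"{type(self).__name__}" object has no field "{name}"'
            )
        object.__setattr__(self, name, value)


m = StrictModel()
m.project_id = "my-project"       # declared field: fine
try:
    m._serving_config = "config"  # undeclared: reproduces the ValueError
except ValueError as err:
    print(err)  # "StrictModel" object has no field "_serving_config"
```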
### Who can help?
I tagged both of you, @eyurtsev and @kreneskyp, regarding this version 0.318 change, in case it is related: https://github.com/langchain-ai/langchain/pull/11936.
Thanks for this amazing library!
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Execute the following code:
```
from google.cloud import aiplatform
from langchain.chains import RetrievalQA
from langchain.llms import VertexAI
from langchain.retrievers import GoogleVertexAISearchRetriever


def main():
    aiplatform.init(project=PROJECT_ID)
    llm = VertexAI(model_name=MODEL, temperature=0.0)
    retriever = GoogleVertexAISearchRetriever(
        project_id=PROJECT_ID, search_engine_id=DATA_STORE_ID
    )
    search_query = "Who was the CEO of DeepMind in 2021?"
    retrieval_qa = RetrievalQA.from_chain_type(
        llm=llm, chain_type="stuff", retriever=retriever
    )
    answer = retrieval_qa.run(search_query)
    print(answer)


if __name__ == "__main__":
    main()
```
### Expected behavior
To output the answer from the model, as it does in versions before 0.318 | "GoogleVertexAISearchRetriever" object has no field "_serving_config". It's a Pydantic-related bug. | https://api.github.com/repos/langchain-ai/langchain/issues/12100/comments | 2 | 2023-10-21T11:31:24Z | 2023-10-24T08:20:04Z | https://github.com/langchain-ai/langchain/issues/12100 | 1,955,451,984 | 12,100 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Feature request to Integrate a new Langchain tool that summarizes reddit user interactions/discussion based on user input.
### Motivation
Reddit is a vast platform with numerous discussions and threads covering a wide range of topics. Users often seek information, insights, and opinions on specific subjects of interest within these discussions. However, navigating through these discussions and identifying the most valuable insights can be a time-consuming and sometimes overwhelming process.
What we aim to provide is a summarization toolkit that simplifies the experience of exploring Reddit. With this toolkit, users can input a topic or subject they are interested in, and it will not only aggregate relevant discussions (posts) from Reddit but also generate concise and coherent summaries of these discussions. This tool streamlines the process of distilling key information, popular opinions, and diverse perspectives from Reddit threads, making it easier for users to stay informed and engaged with the platform's wealth of content.
In summary, our Reddit summarization toolkit is designed to save users time and effort by delivering informative and easily digestible summaries of Reddit discussions, ultimately enhancing their experience in accessing and comprehending the vast amount of information and opinions present on the platform.
### Your contribution
We are a group of 4 looking to contribute to Langchain as a course project and will be creating a PR for this issue in mid/late November. | feature: Reddit API Tool | https://api.github.com/repos/langchain-ai/langchain/issues/12097/comments | 3 | 2023-10-21T05:49:41Z | 2023-12-03T17:29:57Z | https://github.com/langchain-ai/langchain/issues/12097 | 1,955,338,588 | 12,097 |
[
"langchain-ai",
"langchain"
] | When running on colab, it has following error:
```
[nltk_data] Downloading package punkt to /root/nltk_data...
[nltk_data] Unzipping tokenizers/punkt.zip.
[nltk_data] Downloading package averaged_perceptron_tagger to
[nltk_data] /root/nltk_data...
[nltk_data] Unzipping taggers/averaged_perceptron_tagger.zip.
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-2-3092bb1466a5>](https://localhost:8080/#) in <cell line: 7>()
5
6 # Get elements
----> 7 raw_pdf_elements = partition_pdf(filename=path+"LLaVA.pdf",
8 # Using pdf format to find embedded image blocks
9 extract_images_in_pdf=True,
8 frames
[/usr/local/lib/python3.10/dist-packages/unstructured_inference/models/base.py](https://localhost:8080/#) in get_model(model_name, **kwargs)
70 else:
71 raise UnknownModelException(f"Unknown model type: {model_name}")
---> 72 model.initialize(**initialize_params)
73 models[model_name] = model
74 return model
TypeError: UnstructuredYoloXModel.initialize() got an unexpected keyword argument 'extract_images_in_pdf'
```
https://github.com/langchain-ai/langchain/blob/5dbe456aae755e3190c46316102e772dfcb6e148/cookbook/Semi_structured_and_multi_modal_RAG.ipynb#L103 | How to run Semi_structured_and_multi_modal_RAG.ipynb | https://api.github.com/repos/langchain-ai/langchain/issues/12096/comments | 2 | 2023-10-21T02:51:14Z | 2024-02-08T16:15:50Z | https://github.com/langchain-ai/langchain/issues/12096 | 1,955,247,057 | 12,096 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Cohere's upcoming (currently in beta) Co.Chat + RAG API endpoint will offer additional parameters, such as "documents" and "connectors", that can be used for RAG. This is the major new feature of the Cohere Coral models (RAG).
See https://docs.cohere.com/reference/chat
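For reference, the extra parameters look roughly like this; the payload below is illustrative, based on the linked beta docs, and field names may change before GA:

```python
payload = {
    "message": "What does our Q3 report say about churn?",
    # Inline grounding documents for RAG:
    "documents": [
        {"id": "doc-1", "title": "Q3 report", "snippet": "Churn fell to 3%."},
    ],
    # Or let a managed connector do the retrieval:
    "connectors": [{"id": "web-search"}],
}
print(sorted(payload))  # ['connectors', 'documents', 'message']
```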
I am sure GPT-4 (and the other chat models following soon after) will have a similar ability to offer RAG with their various new chat models.
Would the ability to add additional parameters to chat API endpoints (such as a document folder, a URL, etc.) be supported via the LangChain Runnable interface once these new chat endpoint APIs are GA?
This support would be needed for external RAG, for both the input type (_PromptValue?_) and the output type (ChatMessage).
### Motivation
The main motivation for this proposal to update the input and output types in the LangChain Runnable interface _(if that is indeed the correct place in the code for this to be supported)_ is to support RAG inputs and outputs (HumanMessage and AIMessage) with the newer chat models from Cohere, OpenAI, etc., and in the very near future to support multimodal LLMs, which are fast becoming the norm.
### Your contribution
I would need a bit of help with the PR. | Chat API Endpoints with RAG | https://api.github.com/repos/langchain-ai/langchain/issues/12094/comments | 3 | 2023-10-21T00:44:36Z | 2024-02-01T21:05:22Z | https://github.com/langchain-ai/langchain/issues/12094 | 1,955,167,241 | 12,094 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Implement MemGPT for better memory management with local and hosted models:
https://github.com/cpacker/MemGPT
### Motivation
This can help automate the manual creation and retrieval of vector stores.
### Your contribution
I'm not really familiar with the langchain repo at the moment, but I could provide feedback on the implementation from a user's perspective. | Integrate MemGPT | https://api.github.com/repos/langchain-ai/langchain/issues/12091/comments | 8 | 2023-10-20T20:33:10Z | 2024-07-10T16:05:21Z | https://github.com/langchain-ai/langchain/issues/12091 | 1,954,978,462 | 12,091
[
"langchain-ai",
"langchain"
] | ### Feature request
Many of the non-OpenAI LLMs struggle to complete the first "Thought" part of the prompt. Falcon, Llama, Mistral, Zephyr, and many others will successfully choose the action and action input, but leave out an initial thought.
### Motivation
For the very first interaction, the agent scratchpad is empty, and the first thought isn't always necessarily helpful, especially if the first action and action input are valid.
Interestingly, removing the "thought" portion of the prompt often results in the LLM properly completing the thought. However, doing so breaks Langchain because Langchain is expecting the agent_scratchpad variable to be present in the template for completion.
An alternative could be to make the entire "Thought" part of the prompt only present if agent_scratchpad isn't empty. However, that would require "building" the prompt template conditionally.
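The conditional-template idea can be sketched in a few lines (`render_suffix` is a hypothetical helper, not langchain's prompt machinery):

```python
def render_suffix(question, agent_scratchpad):
    """Only cue "Thought:" once there is prior history; on the first turn
    the model is free to open directly with a valid Action/Action Input."""
    prompt = f"Question: {question}\n{agent_scratchpad}"
    if agent_scratchpad.strip():
        prompt += "Thought:"
    return prompt


print(render_suffix("What is 2 + 2?", ""))
print(render_suffix(
    "What is 2 + 2?",
    "Thought: I should use the calculator.\n"
    "Action: calc\nAction Input: 2 + 2\nObservation: 4\n",
))
```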
### Your contribution
Most of this takes place in either the MRKL agent:
https://github.com/langchain-ai/langchain/tree/master/libs/langchain/langchain/agents/mrkl
Or potentially the ReAct Docstore agent:
https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/agents/react/base.py | Improve flexibility of ReAct agent on the first iteration | https://api.github.com/repos/langchain-ai/langchain/issues/12087/comments | 1 | 2023-10-20T19:18:11Z | 2023-10-25T15:54:28Z | https://github.com/langchain-ai/langchain/issues/12087 | 1,954,895,641 | 12,087 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain 0.0.317
atlassian-python-api 3.41.3
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When playing with documents from Confluence (Cloud hosting in my case), I have noticed that I am getting multiple copies of the same document in my vector store and from my Confluence retriever (a custom implementation, not in LangChain). In both cases, the vector store is populated with LangChain's Confluence document loader, and my retriever also uses that same document loader. The problem happens in `ConfluenceLoader.paginate_request()`, in the `while` loop that runs until `max_pages` is reached. In short, you keep getting copies of the same documents until `max_pages` is reached. For example, if your query (I'm using CQL in my use case) returns 10 documents and `max_pages` is set to `200`, you will get 20 copies of each document. This also makes the search process much slower.
The previous pagination system of the Confluence REST server relied on an index controlled by the `start` parameter. According to this [page](https://developer.atlassian.com/cloud/confluence/change-notice-moderize-search-rest-apis/), it has been deprecated in favor of a cursor system. It is now recommended to follow `_links.next` instead of relying on the `start` parameter and waiting for an empty result once the documents to be returned have been exhausted.
This change is now in effect for the Cloud hosting of Confluence, not sure about private deployments. You can see for yourself by running a script that looks like this:
```python
from atlassian import Confluence
site = "templates" # Public site
confluence = Confluence(url=f"https://{site}.atlassian.net/")
cql='type=page AND text~"is"'
response_1 = confluence.cql(cql=cql, start=0)
print(response_1) # Easier to see in debug mode
next_start = len(response_1["results"])*1000
response_2 = confluence.cql(cql=cql, start=next_start)
print(response_2) # Notice returned documents are the same as in response_1
```
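For comparison, a cursor-aware pagination loop (a sketch of the expected behavior, not the loader's actual code) would stop as soon as `_links.next` disappears instead of re-requesting the same pages until `max_pages`:

```python
def paginate(fetch, first_url, max_pages=200):
    """Follow `_links.next` cursors when present; stop on an empty page
    (the legacy behavior). Sketch only, not the loader's actual code."""
    docs, url = [], first_url
    while url and len(docs) < max_pages:
        page = fetch(url)
        results = page.get("results", [])
        if not results:
            break  # legacy start-based servers signal exhaustion this way
        docs.extend(results)
        url = page.get("_links", {}).get("next")  # cursor-based servers
    return docs[:max_pages]


pages = {
    "/search?cql=q": {"results": [1, 2], "_links": {"next": "/search?cursor=abc"}},
    "/search?cursor=abc": {"results": [3], "_links": {}},
}
print(paginate(pages.__getitem__, "/search?cql=q"))  # [1, 2, 3]
```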
### Expected behavior
The Confluence document loader should properly handle pagination for both the old way (using `start`) and the new way (cursor-based) to maintain backward compatibility with private Confluence deployments. It should also stop correctly once `max_pages` has been reached. | Confluence document loader not handling pagination correctly anymore (when using Confluence Cloud) | https://api.github.com/repos/langchain-ai/langchain/issues/12082/comments | 5 | 2023-10-20T15:41:10Z | 2024-05-20T07:16:20Z | https://github.com/langchain-ai/langchain/issues/12082 | 1,954,551,450 | 12,082
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello,
I am using langchain offline on a local machine. I'd like to split documents by tokens using TokenTextSplitter.
Unfortunately, I can't get the class to use a local tokenizer.
I tried to do
`text_splitter = TokenTextSplitter(model_name='/my/path/to/my/tokenizer/', chunk_size = 50, chunk_overlap = 10`
like I did for HuggingFaceEmbeddings (and it worked pretty well).
But I get the following error:
Could not automatically map '/my/path/to/my/tokenizer/' to a tokenizer. Please use 'tiktoken.get_encoding' to explicitly get the tokenizer you expect
Couldn't find any info in the documentation about setup an offline / local tokenizer.
### Suggestion:
_No response_ | Issue: TokenTextSplitter with local tokenizer ? | https://api.github.com/repos/langchain-ai/langchain/issues/12078/comments | 4 | 2023-10-20T13:46:35Z | 2023-10-20T14:30:20Z | https://github.com/langchain-ai/langchain/issues/12078 | 1,954,330,346 | 12,078 |
[
"langchain-ai",
"langchain"
] | ### System Info
Running SQLDatabaseChain with LangChain version 0.0.319 and Snowflake returns a SQL query which is to be executed on the Snowflake database in the next step. But the returned query contains the prefix "SQLQuery:\n", which breaks the whole chain when the query gets executed on Snowflake. How do I get rid of this "SQLQuery:\n" prefix from the query?
Note: Using AWS Bedrock endpoint with Anthropic Claudev2 LLM.
Using default prompt provided in the langchain:
`Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer. Unless the user specifies in his question a specific number of examples he wishes to obtain, always limit your query to at most {top_k} results. You can order the results by a relevant column to return the most interesting examples in the database.
Never query for all the columns from a specific table, only ask for a the few relevant columns given the question.
Pay attention to use only the column names that you can see in the schema description. Be careful to not query for columns that do not exist. Also, pay attention to which column is in which table.
Use the following format:
Question: Question here
SQLQuery: SQL Query to run
SQLResult: Result of the SQLQuery
Answer: Final answer here
`
Getting following response in debug/verbose mode:
`"text": " SQLQuery:\nSELECT top 5 p.productname, sum(od.quantity) as total_sold\nFROM products p\nJOIN orderdetails od ON p.productid = od.productid \nGROUP BY p.productname\nORDER BY total_sold DESC\n"
}`
Any help would be appreciated.
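Until the chain handles this itself, one workaround is to strip the label from the generated text before execution; a sketch (`strip_sql_label` is a hypothetical helper):

```python
import re


def strip_sql_label(text):
    """Remove a leading "SQLQuery:" label (and surrounding whitespace)
    that some models emit before the actual statement."""
    return re.sub(r"^\s*SQLQuery:\s*", "", text)


raw = " SQLQuery:\nSELECT top 5 p.productname, sum(od.quantity) as total_sold\nFROM products p"
print(strip_sql_label(raw).splitlines()[0])
# SELECT top 5 p.productname, sum(od.quantity) as total_sold
```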
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just use SQLDatabaseChain with AWS Bedrock Anthropic Claude 2 to produce this one.
### Expected behavior
The chain should not prefix "SQLQuery:\n" in front of the returned SQL query. | Running SQLDatabaseChain adds prefix "SQLQuery:\n" infront of returned SQL by LLM, causing invalid query when ran on Database using chain | https://api.github.com/repos/langchain-ai/langchain/issues/12077/comments | 11 | 2023-10-20T13:14:25Z | 2024-04-14T16:17:56Z | https://github.com/langchain-ai/langchain/issues/12077 | 1,954,271,029 | 12,077
[
"langchain-ai",
"langchain"
] | ### System Info
MacOS Ventura 13.6
Python 3.10.13
langchain 0.0.306
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
llm = AzureOpenAI(temperature=0, model="gpt-35-turbo")
compressor = LLMChainExtractor.from_llm(llm)
base_retriever = vectorstores.as_retriever()
compression_retriever = ContextualCompressionRetriever(base_compressor=compressor, base_retriever=base_retriever)
compression_retriever.get_relevant_documents("Owner")
```
### Expected behavior
Expected to return the docs compressed from the vectorstore, but I'm getting the `AttributeError: 'str' object has no attribute 'get'` | Function get_relevant_docs() returning AttributeError | https://api.github.com/repos/langchain-ai/langchain/issues/12076/comments | 4 | 2023-10-20T12:49:04Z | 2024-02-21T16:08:04Z | https://github.com/langchain-ai/langchain/issues/12076 | 1,954,222,876 | 12,076 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
#### **Description**
In various places within the documentation, the import statement is being used:
```python
from langchain.llms import OpenAI
```
However, as hinted in https://github.com/langchain-ai/langchain/commit/779790167e37f49b3eec5d04dfd30b0447d4a32a, this statement has been deprecated. The correct and updated import statement should be:
```python
from langchain.chat_models import ChatOpenAI
```
#### **Problem**
The use of the deprecated import statement leads to problems, particularly when interfacing with Pydantic. Users following the outdated documentation might experience unexpected errors or issues due to this.
| DOC: Deprecated Import Statement in Documentation for OpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/12074/comments | 1 | 2023-10-20T12:31:33Z | 2024-02-06T16:17:06Z | https://github.com/langchain-ai/langchain/issues/12074 | 1,954,194,603 | 12,074 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain Version: 0.0.316 - Python 3.8.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
@hwchase17 @agola11
I am using StructuredTool for multi input support. Below is my initialize_agent
```
sys_msg = "Assistant's main duty is to decide which tool to use......"
system_message = SystemMessage(content=sys_msg)
agent_executor = initialize_agent(
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    tools=tools,
    llm=llm,
    verbose=True,
    max_iterations=3,
    early_stopping_method='generate',
    handle_parsing_errors=True,
    agent_kwargs={"system_message": system_message},
)
```
I don't see the above (sys_msg) as the system_message in the OpenAI query. Instead, I see this in the debug logs:
```
2023-10-20 08:54:01 DEBUG api_version=None data='{"messages": [{"role": "system", "content": "Respond to the human as helpfully and accurately as possible. You have access to the following tools:\\n\\n
```
I believe this is the default system message. How do I change this to my custom system message in AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION?
I see a similar ticket for OpenAI functions AgentType : https://github.com/langchain-ai/langchain/issues/6334
I also tried specifying system_message=system_message as discussed in the above ticket. That did not help either
### Expected behavior
The custom system_message that I provide in agent_kwargs should replace the default one going out to openAI. | system_message in agent_kwargs not updating System Message in AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION | https://api.github.com/repos/langchain-ai/langchain/issues/12072/comments | 3 | 2023-10-20T11:21:19Z | 2023-11-06T08:27:49Z | https://github.com/langchain-ai/langchain/issues/12072 | 1,954,083,040 | 12,072 |
[
"langchain-ai",
"langchain"
] | ### System Info
Platform: local development on MacOS Ventura
Python version: 3.9.7
langchain.version: 0.0.315
faiss.version: 1.7.4
openai.version: 0.28.1
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This is my code:
```python
class TestGPT(object):
    def __init__(self):
        self.llm_model = ChatOpenAI(temperature=0.1, streaming=True, model="gpt-3.5-turbo")
        self.store = self.load_store_data()
        self.prompt = f"""Use 3 sentences at most."""

    @staticmethod
    def load_store_data():
        with open(f"faiss_store.pkl", "rb") as f:
            store = pickle.load(f)
        store.index = faiss.read_index(f"docs.index")
        return store

    def ask(self, user_prompt):
        content = self.prompt + "\n\n" + "Question: " + user_prompt + "\n\n"
        chain = RetrievalQAWithSourcesChain.from_chain_type(llm=self.llm_model,
                                                            retriever=self.store.as_retriever())
        gpt_response = chain(content)['answer']
        return self.control_response_has_source(gpt_response)


TestGPT().ask("hello")
```
Error is **"AttributeError: 'OpenAIEmbeddings' object has no attribute 'skip_empty'"**
Traceback:
```
Traceback (most recent call last):
  File "/Users/md/Desktop/support_gpt.py", line 72, in <module>
    res = SupportGPT().ask("uygulama nasıl kullanılır")
  File "/Users/md/Desktop/support_gpt.py", line 45, in ask
    gpt_response = chain(content)['answer']
  File "/Users/md/Desktop/venv/lib/python3.9/site-packages/langchain/chains/base.py", line 310, in __call__
    raise e
  File "/Users/md/Desktop/venv/lib/python3.9/site-packages/langchain/chains/base.py", line 304, in __call__
    self._call(inputs, run_manager=run_manager)
  File "/Users/md/Desktop/venv/lib/python3.9/site-packages/langchain/chains/qa_with_sources/base.py", line 151, in _call
    docs = self._get_docs(inputs, run_manager=_run_manager)
  File "/Users/md/Desktop/venv/venv/lib/python3.9/site-packages/langchain/chains/qa_with_sources/retrieval.py", line 50, in _get_docs
    docs = self.retriever.get_relevant_documents(
  File "/Users/md/Desktop/venv/venv/lib/python3.9/site-packages/langchain/schema/retriever.py", line 211, in get_relevant_documents
    raise e
  File "/Users/md/Desktop/venv/lib/python3.9/site-packages/langchain/schema/retriever.py", line 204, in get_relevant_documents
    result = self._get_relevant_documents(
  File "/Users/md/Desktop/venv/lib/python3.9/site-packages/langchain/schema/vectorstore.py", line 585, in _get_relevant_documents
    docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
  File "/Users/md/Desktop/venv/lib/python3.9/site-packages/langchain/vectorstores/faiss.py", line 364, in similarity_search
    docs_and_scores = self.similarity_search_with_score(
  File "/Users/md/Desktop/venv/lib/python3.9/site-packages/langchain/vectorstores/faiss.py", line 305, in similarity_search_with_score
    embedding = self._embed_query(query)
  File "/Users/md/Desktop/venv/lib/python3.9/site-packages/langchain/vectorstores/faiss.py", line 138, in _embed_query
    return self.embedding_function(text)
  File "/Users/md/Desktop/venv/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 518, in embed_query
    return self.embed_documents([text])[0]
  File "/Users/md/Desktop/venv/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 490, in embed_documents
    return self._get_len_safe_embeddings(texts, engine=self.deployment)
  File "/Users/md/Desktop/venv/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 374, in _get_len_safe_embeddings
    response = embed_with_retry(
  File "/Users/md/Desktop/venv/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 107, in embed_with_retry
    return _embed_with_retry(**kwargs)
  File "/Users/md/Desktop/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 289, in wrapped_f
    return self(f, *args, **kw)
  File "/Users/md/Desktop/venv/python3.9/site-packages/tenacity/__init__.py", line 379, in __call__
    do = self.iter(retry_state=retry_state)
  File "/Users/md/Desktop/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 314, in iter
    return fut.result()
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/concurrent/futures/_base.py", line 438, in result
    return self.__get_result()
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/concurrent/futures/_base.py", line 390, in __get_result
    raise self._exception
  File "/Users/md/Desktop/venv/lib/python3.9/site-packages/tenacity/__init__.py", line 382, in __call__
    result = fn(*args, **kwargs)
  File "/Users/md/Desktop/venv/lib/python3.9/site-packages/langchain/embeddings/openai.py", line 105, in _embed_with_retry
    return _check_response(response, skip_empty=embeddings.skip_empty)
AttributeError: 'OpenAIEmbeddings' object has no attribute 'skip_empty'
```
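For what it's worth, the crash could be avoided with a defensive attribute lookup. This is a hypothetical sketch, not the actual library code — `DummyEmbeddings` just stands in for an embeddings object unpickled from an older langchain version that predates the `skip_empty` field:

```python
class DummyEmbeddings:
    """Stand-in for an OpenAIEmbeddings object unpickled from an older
    langchain version, where the skip_empty field did not exist yet."""
    pass

emb = DummyEmbeddings()

# Direct attribute access reproduces the crash:
#   emb.skip_empty  ->  AttributeError
# A getattr with a default sidesteps it:
skip_empty = getattr(emb, "skip_empty", False)
print(skip_empty)  # False
```

Re-creating the pickled store with the current langchain version would also give the object the new attribute.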
### Expected behavior
This function should not throw an error and also I do not get any error when I downgrade the langchain to 0.0.251 | AttributeError: 'OpenAIEmbeddings' object has no attribute 'skip_empty' | https://api.github.com/repos/langchain-ai/langchain/issues/12071/comments | 2 | 2023-10-20T11:01:54Z | 2024-02-13T16:10:13Z | https://github.com/langchain-ai/langchain/issues/12071 | 1,954,054,770 | 12,071 |
[
"langchain-ai",
"langchain"
] | ### Feature request
At the moment huggingface_hub.py supports only sentence-transformers models because of this validation:
```
if not repo_id.startswith("sentence-transformers"):
raise ValueError(
"Currently only 'sentence-transformers' embedding models "
f"are supported. Got invalid 'repo_id' {repo_id}."
)
```
### Motivation
At the moment there are other higher-performing embedders on the hub, like e5 or bge family.
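Since `str.startswith` accepts a tuple of prefixes, widening the whitelist is a one-line change. A quick self-contained illustration (`is_supported_repo` is a hypothetical stand-in for the validation):

```python
def is_supported_repo(repo_id: str) -> bool:
    # str.startswith with a tuple matches any of the listed prefixes.
    return repo_id.startswith(("sentence-transformers", "intfloat", "BAAI"))

print(is_supported_repo("sentence-transformers/all-MiniLM-L6-v2"))  # True
print(is_supported_repo("intfloat/e5-large-v2"))                    # True
print(is_supported_repo("BAAI/bge-large-en"))                       # True
print(is_supported_repo("gpt2"))                                    # False
```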
### Your contribution
I think you could relax the constraint to also include embedders supported by sentence-transformers:
```
if not repo_id.startswith(("sentence-transformers", "intfloat", "BAAI")):
raise ValueError(
"Currently only 'sentence-transformers' embedding models "
f"are supported. Got invalid 'repo_id' {repo_id}."
)
``` | Support other embedder in Hugginface Hub | https://api.github.com/repos/langchain-ai/langchain/issues/12069/comments | 1 | 2023-10-20T08:39:34Z | 2024-01-30T05:53:04Z | https://github.com/langchain-ai/langchain/issues/12069 | 1,953,813,752 | 12,069 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Textract released the [LAYOUT](https://docs.aws.amazon.com/textract/latest/dg/layoutresponse.html) feature, which identifies different layout elements like tables, lists, figures, text paragraphs, and titles. This should be used by the AmazonTextractPDFParser to generate a linearized output and improve downstream LLMs' accuracy with those hints.
The text output should render tables, key/value pairs, and multi-column text in reading order, and prefix list items with a `*`, when features like LAYOUT, TABLES, and FORMS are passed to the Textract call.
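To illustrate, a minimal plain-Python sketch of what linearized output could look like (the block structure and `linearize` helper are hypothetical, not the actual Textract response format):

```python
# Hypothetical layout blocks, loosely modeled on what a layout-aware
# parser could hand back for one page.
blocks = [
    {"type": "title", "text": "Quarterly Report"},
    {"type": "list_item", "text": "Revenue up 10%"},
    {"type": "list_item", "text": "Costs flat"},
    {"type": "text", "text": "Overall a good quarter."},
]

def linearize(blocks):
    out = []
    for block in blocks:
        if block["type"] == "list_item":
            out.append("* " + block["text"])  # prefix list items with *
        else:
            out.append(block["text"])
    return "\n".join(out)

print(linearize(blocks))
```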
### Motivation
Improve downstream LLM accuracy
### Your contribution
I'll submit a PR for this feature. | feat: Add Linearized output to Textract PDFLoader | https://api.github.com/repos/langchain-ai/langchain/issues/12068/comments | 1 | 2023-10-20T08:28:07Z | 2023-10-31T01:02:11Z | https://github.com/langchain-ai/langchain/issues/12068 | 1,953,794,419 | 12,068 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
The current documentation for RAG does not cover adding metadata to chunks, which ensures that only the relevant documents with the correct metadata are retrieved from the vector DB before any similarity search is even run.
The issue is that open-source embedding models mostly have a sequence length of only 512 tokens.
A lot of context is lost if the correct metadata is not present in each chunk.
Adding the right metadata improves the results of a local model by an order of magnitude. We tested this with a locally hosted embedding model and CodeLlama to improve queries on Mermaid, and we are currently getting results on par with GPT-4.
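The pattern can be sketched in plain Python (`prefilter` is a hypothetical helper; real vector stores expose equivalent metadata filters that run before similarity scoring):

```python
# Each chunk carries metadata alongside its text.
chunks = [
    {"text": "flowchart TD; A-->B", "metadata": {"topic": "mermaid", "section": "flowchart"}},
    {"text": "SELECT * FROM users;", "metadata": {"topic": "sql", "section": "queries"}},
]

def prefilter(chunks, **wanted):
    # Keep only chunks whose metadata matches every requested key/value,
    # so similarity search never sees irrelevant documents.
    return [c for c in chunks
            if all(c["metadata"].get(k) == v for k, v in wanted.items())]

candidates = prefilter(chunks, topic="mermaid")
print([c["text"] for c in candidates])  # ['flowchart TD; A-->B']
```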
### Idea or request for content:
This notebook covers the same.
https://github.com/unoplat/unoplat-lamp/blob/1-mermaid-expert/llm-rag/Mermaid%20Expert%20RAG.ipynb
The comparison is done against CodeLlama without the knowledge base, and against GPT-4.
[
"langchain-ai",
"langchain"
] | ### System Info
accelerate==0.23.0
aiohttp==3.8.6
aiosignal==1.3.1
altair==5.1.2
annotated-types==0.6.0
anyio==3.7.1
appdirs==1.4.4
asgiref==3.7.2
asttokens==2.4.0
async-timeout==4.0.3
attrs==23.1.0
auto-gptq==0.4.2
backcall==0.2.0
bentoml==1.1.7
bitsandbytes==0.41.1
blinker==1.6.3
build==1.0.3
cachetools==5.3.1
cattrs==23.1.2
certifi==2023.7.22
cffi==1.16.0
chardet==5.2.0
charset-normalizer==3.3.0
circus==0.18.0
click==8.1.7
click-option-group==0.5.6
cloudpickle==3.0.0
cmake==3.27.7
colorama==0.4.6
coloredlogs==15.0.1
comm==0.1.4
contextlib2==21.6.0
contourpy==1.1.1
cryptography==41.0.4
cssselect==1.2.0
cuda-python==12.2.0
cycler==0.12.1
Cython==3.0.4
dataclasses-json==0.6.1
datasets==2.14.5
debugpy==1.8.0
decorator==5.1.1
deepmerge==1.1.0
Deprecated==1.2.14
dill==0.3.7
distro==1.8.0
et-xmlfile==1.1.0
executing==2.0.0
fastcore==1.5.29
filelock==3.12.4
filetype==1.2.0
fonttools==4.43.1
frozenlist==1.4.0
fs==2.4.16
fsspec==2023.6.0
ghapi==1.0.4
gitdb==4.0.10
GitPython==3.1.40
greenlet==3.0.0
grpcio==1.59.0
grpcio-health-checking==1.59.0
grpcio-tools==1.59.0
h11==0.14.0
h2==4.1.0
hpack==4.0.0
httpcore==0.18.0
httpx==0.25.0
huggingface-hub==0.17.3
humanfriendly==10.0
hyperframe==6.0.1
idna==3.4
importlib-metadata==6.8.0
inflection==0.5.1
InstructorEmbedding==1.0.1
ipykernel==6.25.2
ipython==8.16.1
ipywidgets==8.1.1
jedi==0.19.1
Jinja2==3.1.2
joblib==1.3.2
JPype1==1.4.1
jsonpatch==1.33
jsonpointer==2.4
jsonschema==4.19.1
jsonschema-specifications==2023.7.1
jupyter_client==8.4.0
jupyter_core==5.4.0
jupyterlab-widgets==3.0.9
kiwisolver==1.4.5
langchain==0.0.318
langsmith==0.0.47
lark==1.1.7
lxml==4.9.3
markdown-it-py==3.0.0
MarkupSafe==2.1.3
marshmallow==3.20.1
matplotlib==3.8.0
matplotlib-inline==0.1.6
mdurl==0.1.2
mpmath==1.3.0
multidict==6.0.4
multiprocess==0.70.15
mypy-extensions==1.0.0
nest-asyncio==1.5.8
networkx==3.2
ninja==1.11.1.1
nltk==3.8.1
numexpr==2.8.7
numpy==1.26.1
openai==0.28.1
openapi-schema-pydantic==1.2.4
openllm==0.3.9
openllm-client==0.3.9
openllm-core==0.3.9
openpyxl==3.1.2
opentelemetry-api==1.20.0
opentelemetry-instrumentation==0.41b0
opentelemetry-instrumentation-aiohttp-client==0.41b0
opentelemetry-instrumentation-asgi==0.41b0
opentelemetry-instrumentation-grpc==0.41b0
opentelemetry-sdk==1.20.0
opentelemetry-semantic-conventions==0.41b0
opentelemetry-util-http==0.41b0
optimum==1.13.2
orjson==3.9.9
packaging==23.2
pandas==2.1.1
parso==0.8.3
pathspec==0.11.2
pdfminer.six==20221105
pdfquery==0.4.3
peft==0.5.0
pickleshare==0.7.5
Pillow==10.1.0
pip-autoremove==0.10.0
pip-requirements-parser==32.0.1
pip-review==1.3.0
pip-tools==7.3.0
platformdirs==3.11.0
portalocker==2.8.2
prometheus-client==0.17.1
prompt-toolkit==3.0.39
protobuf==4.24.4
psutil==5.9.6
pure-eval==0.2.2
pyarrow==13.0.0
pycparser==2.21
pycryptodome==3.19.0
pydantic==2.4.2
pydantic_core==2.10.1
pydeck==0.8.0
Pygments==2.16.1
Pympler==1.0.1
PyMuPDF==1.23.5
pymupdf-fonts==1.0.5
PyMuPDFb==1.23.5
pynvml==11.5.0
pyparsing==3.1.1
pyproject_hooks==1.0.0
pyquery==2.0.0
pyreadline3==3.4.1
python-dateutil==2.8.2
python-dotenv==1.0.0
python-json-logger==2.0.7
python-multipart==0.0.6
pytz==2023.3.post1
pytz-deprecation-shim==0.1.0.post0
pywin32==306
PyYAML==6.0.1
pyzmq==25.1.1
qdrant-client==1.6.3
referencing==0.30.2
regex==2023.10.3
requests==2.31.0
rich==13.6.0
roman==4.1
rouge==1.0.1
rpds-py==0.10.6
safetensors==0.4.0
schema==0.7.5
scikit-learn==1.3.1
scipy==1.11.3
sentence-transformers==2.2.2
sentencepiece==0.1.99
sigfig==1.3.3
simple-di==0.1.5
six==1.16.0
smmap==5.0.1
sniffio==1.3.0
sortedcontainers==2.4.0
spyder-kernels==2.4.4
SQLAlchemy==2.0.22
stack-data==0.6.3
starlette==0.31.1
streamlit==1.27.2
streamlit-chat==0.1.1
sympy==1.12
tabula-py==2.8.2
tabulate==0.9.0
tenacity==8.2.3
threadpoolctl==3.2.0
tiktoken==0.5.1
tokenizers==0.14.1
toml==0.10.2
toolz==0.12.0
torch==2.1.0
torchaudio==2.1.0
torchvision==0.16.0
tornado==6.3.3
tqdm==4.66.1
traitlets==5.11.2
transformers @ git+https://github.com/huggingface/transformers@43bfd093e1817c0333a1e10fcbdd54f1032baad0
typing-inspect==0.9.0
typing_extensions==4.8.0
tzdata==2023.3
tzlocal==5.1
urllib3==1.26.18
uvicorn==0.23.2
validators==0.22.0
watchdog==3.0.0
watchfiles==0.21.0
wcwidth==0.2.8
widgetsnbextension==4.0.9
wrapt==1.15.0
xformers==0.0.22.post4
xlrd==2.0.1
xxhash==3.4.1
yarl==1.9.2
zipp==3.17.0
using python 3.11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from dotenv import load_dotenv
import os
from langchain.chat_models import ChatOpenAI
from qdrant_client import QdrantClient as qcqc
from langchain.embeddings import HuggingFaceInstructEmbeddings
from langchain.vectorstores import Qdrant
from langchain.retrievers.self_query.base import SelfQueryRetriever
from langchain.chains.query_constructor.base import AttributeInfo

load_dotenv()
openai_key = os.getenv('OPENAI_API_KEY')
db_path = os.getenv('vectordb_local_path')
key = openai_key

llm = ChatOpenAI(
    temperature=0,
    model='gpt-3.5-turbo',
    streaming=True)

text_metadata = [AttributeInfo(name='book name',
                               description="name of the book.",
                               type="string"),
                 AttributeInfo(name='author',
                               description='Author of the book',
                               type='string'),
                 AttributeInfo(name='creation data',
                               description='the date the book was written',
                               type='list[int]'),
                 AttributeInfo(name='page',
                               description="page number.",
                               type="int"),
                 AttributeInfo(name='images',
                               description="dictionary whose keys are the name and description of images on the page, \
and whose contents are image references in PDFs",
                               type="dict{string:string}"),
                 AttributeInfo(name='tables',
                               description='list of tables from the page',
                               type='list[dataframe]')
                 ]


def retreive_conversation_construct(store, store_content_description, metadata_format=text_metadata, verbose=False):
    ''' this is the first part of this function, and is the first problem i ran into
    '''
    retriever = SelfQueryRetriever.from_llm(llm=llm,
                                            vectorstore=store,
                                            document_contents=store_content_description,
                                            metadata_field_info=metadata_format,
                                            enable_limit=True,
                                            fix_invalid=True,
                                            verbose=verbose)
    return retriever


client = qcqc(path=db_path)

model_name = "hkunlp/instructor-xl"
model_kwargs = {'device': 'cuda'}
encode_kwargs = {'normalize_embeddings': True}
load_dotenv()
path = os.getenv('instructor_local_dir')
os.environ['CURL_CA_BUNDLE'] = ''
embed_instruction = 'Represent the document for retrieval: '
embeddings = HuggingFaceInstructEmbeddings(
    model_name=model_name,
    model_kwargs=model_kwargs,
    encode_kwargs=encode_kwargs,
    cache_folder=path,
    embed_instruction=embed_instruction)

vector_store = Qdrant(client=client, collection_name='my cluster', embeddings=embeddings)
store_content_description = 'this is a paper about generating training data for large language models.'

retreive_conversation_construct(vector_store, store_content_description)
```
### Expected behavior
The retriever should get generated.
I found that in self_query.py, the .from_llm() method eventually leads to _get_builtin_translator being called, which returns QdrantTranslator(metadata_key=vectorstore.metadata_payload_key) as structured_query_translator.
But later, when structured_query_translator.allowed_operators is accessed from qdrant.py, the QdrantTranslator doesn't define allowed_operators and thus returns a None object.
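The failure boils down to joining `None` instead of a sequence, which a few lines of plain Python reproduce:

```python
allowed_operators = None  # what the translator effectively hands back

try:
    " | ".join(allowed_operators)
except TypeError as exc:
    message = str(exc)

print(message)  # can only join an iterable
```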
this results in the following error:
```
File d:\ai_dev\research_assistant\testing.py:74
    retreive_conversation_construct(vector_store,store_content_description)
File d:\ai_dev\research_assistant\testing.py:50 in retreive_conversation_construct
    retriever = SelfQueryRetriever.from_llm(llm = llm,
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\retrievers\self_query\base.py:214 in from_llm
    query_constructor = load_query_constructor_runnable(
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\query_constructor\base.py:317 in load_query_constructor_runnable
    prompt = get_query_constructor_prompt(
File ~\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\chains\query_constructor\base.py:203 in get_query_constructor_prompt
    allowed_operators=" | ".join(allowed_operators),
TypeError: can only join an iterable
```
| qdrant.py doesn't contain any allowed_operators | https://api.github.com/repos/langchain-ai/langchain/issues/12061/comments | 3 | 2023-10-20T03:00:01Z | 2024-02-12T16:10:44Z | https://github.com/langchain-ai/langchain/issues/12061 | 1,953,416,596 | 12,061 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.319, mac, python 3.10
### Who can help?
@hwchase17 @agola11
I'm trying to use this exact example from: https://python.langchain.com/docs/expression_language/cookbook/memory
```
model = ChatOpenAI()
prompt = ChatPromptTemplate.from_messages([
("system", "You are a helpful chatbot"),
MessagesPlaceholder(variable_name="history"),
("human", "{input}")
])
memory = ConversationBufferMemory(return_messages=True)
memory.load_memory_variables({})
chain = RunnablePassthrough.assign(
memory=RunnableLambda(memory.load_memory_variables) | itemgetter("history")
) | prompt | model
inputs = {"input": "hi im bob"}
response = chain.invoke(inputs)
```
and getting:
```
File "/Users/name/.pyenv/versions/3.10.10/lib/python3.10/site-packages/langchain/schema/prompt_template.py", line 60, in <dictcomp>
**{key: inner_input[key] for key in self.input_variables}
KeyError: 'history'
```
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Please run the code above.
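For reference, here is a pure-Python reconstruction of the failing dict comprehension from the traceback (my sketch, not the library code): the prompt declares `history` as an input variable, but the passthrough assigned the value under the key `memory`.

```python
inner_input = {"input": "hi im bob", "memory": []}   # keys actually present
input_variables = ["history", "input"]               # keys the prompt expects

try:
    {key: inner_input[key] for key in input_variables}
except KeyError as exc:
    missing = exc.args[0]

print(missing)  # history
```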
### Expected behavior
To have it work like the documentation. | The LCEL memory example returns KeyError | https://api.github.com/repos/langchain-ai/langchain/issues/12057/comments | 9 | 2023-10-20T00:04:37Z | 2024-05-29T07:56:57Z | https://github.com/langchain-ai/langchain/issues/12057 | 1,953,239,021 | 12,057 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Currently the Bedrock and BedrockChat models do not support async calls or streaming.
It would be very useful to have working methods like ChatOpenAI's _acall and _astream in the Bedrock LLMs too, so that we can use Claude 2 and other Bedrock models in production easily.
### Motivation
Without async functionality it is hard to build production-level chatbots using Bedrock models, including Claude 2, which is one of the most desired models.
### Your contribution
I am trying myself, but I am having difficulties. If someone can help, it would be very appreciated by many, | Add Async _acall and _astream to Bedrock | https://api.github.com/repos/langchain-ai/langchain/issues/12054/comments | 4 | 2023-10-19T21:16:32Z | 2024-02-05T22:56:23Z | https://github.com/langchain-ai/langchain/issues/12054 | 1,953,068,062 | 12,054 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
In the [documentation](https://python.langchain.com/docs/modules/agents/tools/custom_tools), it's mentioned that expected input parameters can be defined through `args_schema` for a custom tool:
```
class CalculatorInput(BaseModel):
question: str = Field()
class CustomCalculatorTool(BaseTool):
name = "Calculator"
description = "useful for when you need to answer questions about math"
args_schema: Type[BaseModel] = CalculatorInput
```
The output of my custom function is a complex table with multiple columns and diverse data types. I would like to provide clearer descriptions for each column, including the possible values that can be found in each column. I assume that this way, the agent can utilize the data more effectively.
Can you please clarify if it is possible to describe the output from the custom tool in a similar manner as `args_schema` for the agent?
### Suggestion:
Would something like this work for the output description? (Of course, the need for such a solution would be for more complex outputs.)
```
class CalculatorInput(BaseModel):
question: str = Field()
class CalculatorOutput(BaseModel):
answer: str = Field()
class CustomCalculatorTool(BaseTool):
name = "Calculator"
description = "useful for when you need to answer questions about math"
args_schema: Type[BaseModel] = CalculatorInput
output_schema: Type[BaseModel] = CalculatorOutput
```
Thank you!
| Issue: description for the custom tool's output | https://api.github.com/repos/langchain-ai/langchain/issues/12050/comments | 1 | 2023-10-19T20:56:28Z | 2024-02-06T16:17:16Z | https://github.com/langchain-ai/langchain/issues/12050 | 1,953,042,836 | 12,050 |
[
"langchain-ai",
"langchain"
] | ### System Info
I filed an issue with llama-cpp here https://github.com/ggerganov/llama.cpp/issues/3689
langchain
```Name: langchain
Version: 0.0.208
Summary: Building applications with LLMs through composability
Home-page: https://www.github.com/hwchase17/langchain
Author:
Author-email:
License: MIT
Location: Work\SHARK\shark.venv\Lib\site-packages
Requires: aiohttp, dataclasses-json, langchainplus-sdk, numexpr, numpy, openapi-schema-pydantic, pydantic, PyYAML, requests, SQLAlchemy, tenacity
```
llama-cpp-python
```Name: llama_cpp_python
Version: 0.2.11
Summary: Python bindings for the llama.cpp library
Home-page:
Author:
Author-email: Andrei Betlen <abetlen@gmail.com>
License: MIT
Location: Work\SHARK\shark.venv\Lib\site-packages
Requires: diskcache, numpy, typing-extensions
Required-by:
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The toy code is adapted from https://learn.activeloop.ai/courses/take/langchain/multimedia/46317643-langchain-101-from-zero-to-hero
It's the first toy vector db embedding example with "Napoleon"
Here is the code to reproduce the error:
```
import os
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
from langchain.embeddings import LlamaCppEmbeddings
from langchain.vectorstores import DeepLake
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.llms import LlamaCpp
from langchain.chains import RetrievalQA
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
# instantiate the LLM and embeddings models
llm = LlamaCpp(model_path="llama-2-13b-chat.Q5_K_M.gguf",
temperature=0,
max_tokens=1000, # this was lowered from the original value of 2000, but did not fix it
top_p=1,
Verbose=True)
embeddings = LlamaCppEmbeddings(model_path="llama-2-13b-chat.Q5_K_M.gguf")
# create our documents
texts = [
"Napoleon Bonaparte was born in 15 August 1769",
"Louis XIV was born in 5 September 1638"
]
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.create_documents(texts)
# create Deep Lake dataset
# TODO: use your organization id here. (by default, org id is your username)
my_activeloop_org_id = "<SOME_ID>"
my_activeloop_dataset_name = "langchain_llama_00"
dataset_path = f"hub://{my_activeloop_org_id}/{my_activeloop_dataset_name}"
db = DeepLake(dataset_path=dataset_path, embedding_function=embeddings)
# add documents to our Deep Lake dataset
db.add_documents(docs)
retrieval_qa = RetrievalQA.from_chain_type(
llm=llm,
chain_type="stuff",
retriever=db.as_retriever())
from langchain.agents import initialize_agent, Tool
from langchain.agents import AgentType
tools = [
Tool(
name="Retrieval QA System",
func=retrieval_qa.run,
description="Useful for answering questions."
),
]
agent = initialize_agent(
tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True
)
response = agent.run("When was Napoleone born?")
print(response)
```
In the `agent.run(...)` line, llama-cpp says it's running out of memory:
```
ggml_allocr_alloc: not enough space in the buffer (needed 442368, largest block available 290848)
GGML_ASSERT: C:\Users\jason\AppData\Local\Temp\pip-install-4x0xr_93\llama-cpp-python_fec9a526add744f5b2436cab2e2c4c28\vendor\llama.cpp\ggml-alloc.c:173: !"not enough space in the buffer"
```
I don't know enough about how LlamaCppEmbeddings works to know if this is an error on my end, or a bug in llama-cpp.
Any guidance is appreciated.
Thank you
### Expected behavior
I expect it to work like the OpenAI example!
| Toy vectordb embedding example adopted to llama-cpp-python causes failure | https://api.github.com/repos/langchain-ai/langchain/issues/12049/comments | 4 | 2023-10-19T20:43:27Z | 2024-02-12T16:10:49Z | https://github.com/langchain-ai/langchain/issues/12049 | 1,953,025,897 | 12,049 |
[
"langchain-ai",
"langchain"
] | ### Feature request
It would be nice to have agents that could access dictionary APIs such as the Merriam-Webster API or Urban Dictionary API (for slang).
### Motivation
It can be useful to look up definitions for words using a dictionary to provide additional context. Since no dictionary tools are currently available, it would be beneficial to have one implemented.
### Your contribution
We will open a PR that adds a new tool for accessing the Merriam-Webster Collegiate Dictionary API (https://dictionaryapi.com/products/api-collegiate-dictionary), which provides definitions for English words, as soon as possible. In the future this could be extended to support other Merriam-Webster APIs such as their Medical Dictionary API (https://dictionaryapi.com/products/api-medical-dictionary) or Spanish-English Dictionary API (https://dictionaryapi.com/products/api-spanish-dictionary).
We may also open another PR for Urban Dictionary API integration. | Tools for Dictionary APIs | https://api.github.com/repos/langchain-ai/langchain/issues/12039/comments | 1 | 2023-10-19T18:31:45Z | 2023-11-30T01:28:30Z | https://github.com/langchain-ai/langchain/issues/12039 | 1,952,840,501 | 12,039 |
[
"langchain-ai",
"langchain"
] | ### System Info
Name: langchain
Version: 0.0.317
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [X] Async
### Reproduction
Hello @agola11
I got this runtime warning:
```
RuntimeWarning: coroutine 'AsyncCallbackManagerForLLMRun.on_llm_new_token' was never awaited
  run_manager.on_llm_new_token(chunk.text, chunk=chunk)
```
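The warning itself just means an async handler was invoked from synchronous code without being awaited. A pure-Python illustration, unrelated to the library internals:

```python
import asyncio

async def on_llm_new_token(token: str) -> str:
    return token

# Calling the async handler without awaiting it only creates a coroutine
# object; the body never runs (hence "was never awaited").
coro = on_llm_new_token("hi")
kind = type(coro).__name__
coro.close()  # dispose of it so this demo itself emits no warning

# Awaiting it (here via asyncio.run) actually executes the handler.
result = asyncio.run(on_llm_new_token("hi"))
print(kind, result)  # coroutine hi
```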
I try to stream the generated tokens over a websocket. When I try to add an AsyncCallbackHandler to manage this streaming and run acall, the warning occurs and nothing is streamed out.
```python
class StreamingLLMCallbackHandler(AsyncCallbackHandler):
    def __init__(self, websocket):
        self.websocket = websocket
        self.intermediate_result = []

    async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
        self.intermediate_result.append(token)
        await self.websocket.send_text(token)

    async def on_llm_end(self, token: str, **kwargs: Any) -> None:
        await self.websocket.send_text("[END]")


stream_handler = StreamingLLMCallbackHandler()

model_kwargs = {
    "max_tokens_to_sample": 8000,
    "temperature": 0.7,
    # "top_k": 250,
    # "top_p": 1,
    "stop_sequences": ['STOP_LLM']
}

llm = Bedrock(
    client=bedrock,
    model_id="anthropic.claude-v2",
    # provider_stop_sequence_key_name_map={'anthropic': 'stop_sequences'},
    streaming=True,
    callbacks=[stream_handler],
    model_kwargs=model_kwargs
)

prompt_template = f"""
Human:
{{context}} {{question}}
Assistant:"""

PROMPT = PromptTemplate(template=prompt_template, input_variables=["context", "question"])
chain_type_kwargs = {"prompt": PROMPT}

pinecone.init(api_key=PINECONE_API_KEY, environment=PINECONE_API_ENV)
chat_vectorstore = Pinecone.from_existing_index(index_name='intelligencechat1',
                                                embedding=OpenAIEmbeddings(openai_api_key=OPENAI_API_KEY),
                                                namespace='146')

chain = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    return_source_documents=True,
    chain_type_kwargs=chain_type_kwargs,
    retriever=chat_vectorstore.as_retriever(search_kwargs={'k': 4}),
)

query = 'summarize'
result = await chain.acall({"query": query})
```
### Expected behavior
The expected behavior is that each token is streamed sequentially over the websocket.
The chain works in synchronous mode without 'acall'. | Bedrock chain not working with AsyncCallbackHandler | https://api.github.com/repos/langchain-ai/langchain/issues/12035/comments | 11 | 2023-10-19T18:09:39Z | 2024-03-29T00:45:02Z | https://github.com/langchain-ai/langchain/issues/12035 | 1,952,806,485 | 12,035 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Hi, I'd love it if we could create conversational retrieval agents using BedrockChat LLMs!
### Motivation
This feature would be very useful for many users. | create_conversational_retrieval_agent with BedrockChat models | https://api.github.com/repos/langchain-ai/langchain/issues/12028/comments | 4 | 2023-10-19T15:57:59Z | 2024-05-07T16:06:13Z | https://github.com/langchain-ai/langchain/issues/12028 | 1,952,599,482 | 12,028 |
[
"langchain-ai",
"langchain"
] | ### Feature request
We want some output parser or feature that will restrict the LLM to generating a specific number of words, in JSON or any other format.
Sometimes a user wants a 10-line output, sometimes only 2 words, etc.
So this would be a very helpful feature. Thanks!
### Motivation
I want to build a next-word auto-suggestion model using an LLM, but my LLM gives me extra output every time
instead of giving only 1 word.
Example: my input is
input: what is
output: your name
input: good
output: morning
input: what is the meaning
output: of
So it should give me only a 1-word suggestion, instead of all the other stuff. I want to restrict the LLM to a specific number of words.
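As far as I know there is no built-in hard word limit in LangChain, but two levers usually work together: cap the completion length via the model's max-token/stop parameters, and post-process the returned text. A small, hypothetical post-processing helper (not a LangChain API):

```python
import re

def first_words(completion: str, n: int = 1) -> str:
    """Keep only the first n words of a model completion (post-processing guard)."""
    words = re.findall(r"[A-Za-z0-9']+", completion)
    return " ".join(words[:n])

print(first_words("morning! Have a nice day."))       # -> morning
print(first_words("of the matter, as requested", 2))  # -> of the
```

Combined with a strong prompt ("answer with exactly one word") and a small max-token setting, this reliably trims whatever extra text the model still emits.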
### Your contribution
I will help with the prompt, but a few quantised LLMs are not good at giving only one word. | how can i get only 1 or 2 words output from my llm? | https://api.github.com/repos/langchain-ai/langchain/issues/12024/comments | 4 | 2023-10-19T13:06:06Z | 2024-02-11T16:10:01Z | https://github.com/langchain-ai/langchain/issues/12024 | 1,952,216,333 | 12,024
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```
import os
from langchain.llms import AzureOpenAI
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2023-07-01-preview"
os.environ["OPENAI_API_BASE"] = "https://myurlid.openai.azure.com/"
os.environ["OPENAI_API_KEY"] = "my key"
llm = AzureOpenAI(
deployment_name="gpt35",
openai_api_version="2023-07-01-preview",
)
# Run the LLM
llm("Tell me a joke")
print(llm)
```
This is my demo code; trying to run it shows this error info:
```
$ /bin/python /home/good/langchain/cookbook/test/azure_hello.py
Traceback (most recent call last):
File "/home/good/langchain/cookbook/test/azure_hello.py", line 22, in <module>
llm("Tell me a joke")
File "/home/good/.local/lib/python3.9/site-packages/langchain/llms/base.py", line 866, in __call__
self.generate(
File "/home/good/.local/lib/python3.9/site-packages/langchain/llms/base.py", line 646, in generate
output = self._generate_helper(
File "/home/good/.local/lib/python3.9/site-packages/langchain/llms/base.py", line 534, in _generate_helper
raise e
File "/home/good/.local/lib/python3.9/site-packages/langchain/llms/base.py", line 521, in _generate_helper
self._generate(
File "/home/good/.local/lib/python3.9/site-packages/langchain/llms/openai.py", line 401, in _generate
response = completion_with_retry(
File "/home/good/.local/lib/python3.9/site-packages/langchain/llms/openai.py", line 115, in completion_with_retry
return _completion_with_retry(**kwargs)
File "/home/good/.local/lib/python3.9/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
File "/home/good/.local/lib/python3.9/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
File "/home/good/.local/lib/python3.9/site-packages/tenacity/__init__.py", line 314, in iter
return fut.result()
File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 439, in result
return self.__get_result()
File "/usr/lib64/python3.9/concurrent/futures/_base.py", line 391, in __get_result
raise self._exception
File "/home/good/.local/lib/python3.9/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
File "/home/good/.local/lib/python3.9/site-packages/langchain/llms/openai.py", line 113, in _completion_with_retry
return llm.client.create(**kwargs)
File "/home/good/.local/lib/python3.9/site-packages/openai/api_resources/completion.py", line 25, in create
return super().create(*args, **kwargs)
File "/home/good/.local/lib/python3.9/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 155, in create
response, _, api_key = requestor.request(
File "/home/good/.local/lib/python3.9/site-packages/openai/api_requestor.py", line 299, in request
resp, got_stream = self._interpret_response(result, stream)
File "/home/good/.local/lib/python3.9/site-packages/openai/api_requestor.py", line 710, in _interpret_response
self._interpret_response_line(
File "/home/good/.local/lib/python3.9/site-packages/openai/api_requestor.py", line 775, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: The completion operation does not work with the specified model, gpt-35-turbo. Please choose different model and try again. You can learn more about which models can be used with each operation here: https://go.microsoft.com/fwlink/?linkid=2197993.
```
$ python -V
Python 3.9.9
I'm sure my Azure key and other information are correct, because they work normally elsewhere.
### Suggestion:
_No response_ | Issue:The completion operation does not work with the specified model for azure openai api | https://api.github.com/repos/langchain-ai/langchain/issues/12019/comments | 6 | 2023-10-19T09:49:05Z | 2024-02-11T16:10:06Z | https://github.com/langchain-ai/langchain/issues/12019 | 1,951,757,935 | 12,019 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.312
Python 3.11.6
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from bs4 import SoupStrainer
from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import CharacterTextSplitter
def load_html():
only_body = SoupStrainer('body')
loader = WebBaseLoader(['https://example.com/'], bs_kwargs={'parse_only': only_body})
docs = loader.load()
text_splitter = CharacterTextSplitter(
separator = "\n",
chunk_size = 300,
chunk_overlap = 50,
length_function = len,
)
print(text_splitter.split_documents(docs))
# -> get all texts in html, not filtered by bs_kwargs passed in WebBaseLoader
```
### Expected behavior
Expected the texts to be filtered according to the parse_only in bs_kwargs passed when instantiating WebBaseLoader.
https://github.com/langchain-ai/langchain/blob/12f8e87a0e89a8ff50fc7dbab612ac6770f3d258/libs/langchain/langchain/document_loaders/web_base.py#L245
In the lazy_load method, self._scrape is called with the path but without the other parameters.
| self._scrape in lazy_load method is not taken any parameters except path given instantiating WebBaseLoader | https://api.github.com/repos/langchain-ai/langchain/issues/12018/comments | 2 | 2023-10-19T08:23:38Z | 2024-02-06T16:17:36Z | https://github.com/langchain-ai/langchain/issues/12018 | 1,951,581,376 | 12,018 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'm currently working on accessing a Confluence space using LangChain and performing question answering on its data. The embeddings of this data are stored in a Chromadb vector database once I provide a user name, API key and space key.
However, I'm looking for a way to automatically generate embeddings for any documents that change in real-time within the Confluence space and enable real-time question answering on the updated data. Any suggestions or solutions on how to achieve this would be greatly appreciated!
### Suggestion:
_No response_ | Issue: How to Automatically Generate Embeddings for Updated Documents in a Confluence Space and Enable Real-Time Question Answering on the Updated Data? | https://api.github.com/repos/langchain-ai/langchain/issues/12013/comments | 2 | 2023-10-19T06:09:01Z | 2024-02-06T16:17:42Z | https://github.com/langchain-ai/langchain/issues/12013 | 1,951,360,189 | 12,013 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
**Below code is for generating embeddings from pdf**
```python
loader = PyPDFLoader(f"{file_path}")
document = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents=document)
embedding = OpenAIEmbeddings()
```
**Below code is for generating embeddings from confluence**
```python
embedding = OpenAIEmbeddings()
loader = ConfluenceLoader(
    url=confluence_url,
    username=username,
    api_key=api_key
)
for key in space_key:  # iterate the list of space keys without shadowing it
    documents.extend(loader.load(space_key=key, limit=100))
# Split the texts
text_splitter = CharacterTextSplitter(chunk_size=6000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
text_splitter = TokenTextSplitter(chunk_size=1000, chunk_overlap=10, encoding_name="cl100k_base")
texts = text_splitter.split_documents(texts)
```
I would like to inquire about the feasibility of generating embeddings for both PDF documents and Confluence content and storing them in a single 'embeddings' folder. This would allow us to have the flexibility to perform question answering from either Confluence or multiple PDF sources without switching between different folders. Can you provide guidance on how to achieve this integrated storage approach?
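This should be feasible: split both sources, concatenate the document lists, and build one vector store pointed at a single persist directory. A sketch with placeholder strings standing in for the `Document` lists built above (`Chroma.from_documents` with a `persist_directory` is the call I would expect to use; tagging each document's metadata with its source would also let you filter later):

```python
# Placeholders standing in for the two split-document lists built above; in real
# code these would be lists of langchain Document objects.
pdf_texts = ["pdf chunk 1", "pdf chunk 2"]
confluence_texts = ["confluence chunk 1"]

all_texts = pdf_texts + confluence_texts  # one corpus, two sources

# One store, one folder (Chroma persists everything under persist_directory):
# db = Chroma.from_documents(all_texts, embedding, persist_directory="embeddings")
# db.persist()

print(len(all_texts))  # -> 3
```

Question answering then runs against the single store, so answers can draw on Confluence and PDF content at the same time without switching folders.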
### Suggestion:
_No response_ | Issue:doubt about generating embeddings for both PDF documents and Confluence content and storing them in a single 'embeddings' folder | https://api.github.com/repos/langchain-ai/langchain/issues/12012/comments | 5 | 2023-10-19T06:05:13Z | 2024-02-11T16:10:11Z | https://github.com/langchain-ai/langchain/issues/12012 | 1,951,355,497 | 12,012 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hi, I am facing an issue when attempting to run the "Semi_structured_multi_modal_RAG_LLaMA2.ipynb" notebook from the cookbook.

**Environment Details**
Langchain Version: 0.0.317
I would appreciate any assistance in resolving this issue. Thank you.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
TypeError: UnstructuredYoloXModel.initialize() got an unexpected keyword argument 'extract_images_in_pdf'
### Expected behavior
The notebook should run without any issues and produce the expected output as documented in the cookbook | TypeError: UnstructuredYoloXModel.initialize() got an unexpected keyword argument 'extract_images_in_pdf' while running Semi_structured_multi_modal_RAG_LLaMA2.ipynb | https://api.github.com/repos/langchain-ai/langchain/issues/12010/comments | 9 | 2023-10-19T05:14:22Z | 2024-02-14T16:09:13Z | https://github.com/langchain-ai/langchain/issues/12010 | 1,951,253,426 | 12,010 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hello! Even though [API](https://api.python.langchain.com/en/latest/retrievers/langchain.retrievers.kay.KayAiRetriever.html) mentions a metadata param, it's not found in [code](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/retrievers/kay.py#L9).
Without metadata filtering, querying "Tell me about the returns of Palantir Technologies Inc." returns docs with 'company_name': 'ETSY INC'.
Thank you
### Who can help?
@eyurtsev?
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.retrievers import KayAiRetriever
retriever = KayAiRetriever.create(dataset_id="company", data_types=["10-K"], num_contexts=1)
retriever.get_relevant_documents("Tell me about the returns of Palantir Technologies Inc.?")
### Expected behavior
[Document(page_content="Company Name: ETSY INC \n Company Industry: SERVICES-BUSINESS SERVICES, NEC \n Form Title: 10-K 2020-FY \n Form Section: Risk Factors \n Text: Any events causing ... abroad.", metadata={'chunk_type': 'text', 'chunk_years_mentioned': [], 'company_name': 'PALANTIR TECHNOLOGIES INC', 'company_sic_code_description': 'SERVICES-BUSINESS SERVICES, NEC', 'data_source': '10-K', 'data_source_link': 'https://www.sec.gov/Archives/edgar/data/1370637/000137063721000012', 'data_source_publish_date': '2020-01-01T00:00:00Z', 'data_source_uid': '0001370637-21-000012', 'title': 'ETSY INC | 10-K 2020-FY '})] | KayAiRetriever: without metadata filtering, wrong results | https://api.github.com/repos/langchain-ai/langchain/issues/12008/comments | 8 | 2023-10-19T04:29:06Z | 2023-10-24T15:12:17Z | https://github.com/langchain-ai/langchain/issues/12008 | 1,951,178,914 | 12,008 |
[
"langchain-ai",
"langchain"
] | I have 1 folder called data; in it there are 2 .txt files, obama.txt and trump.txt. Each file contains a summary of that person from Wikipedia, and in the root of the folder I have anthropic.py. Below is the code:
```
import boto3

from langchain.document_loaders import DirectoryLoader
from langchain.indexes import VectorstoreIndexCreator
from langchain.embeddings import BedrockEmbeddings
from langchain.llms.bedrock import Bedrock
boto3_bedrock = boto3.client("bedrock-runtime")
llm = Bedrock(
model_id="anthropic.claude-v2",
client=boto3_bedrock,
model_kwargs={"max_tokens_to_sample": 200},
)
bedrock_embeddings = BedrockEmbeddings(
model_id="amazon.titan-embed-text-v1", client=boto3_bedrock
)
loader = DirectoryLoader("./data/", glob="*.txt")
index = VectorstoreIndexCreator(embedding=bedrock_embeddings).from_loaders([loader])
print(index.query("who is george washington", llm=llm))
```
As you can see, I am using the anthropic.claude-v2 LLM, and for the query I asked "who is george washington". During my first run, I got a response describing who George Washington is, which shouldn't have happened, because the code says to look only at the context provided in obama.txt and trump.txt, and George Washington is not mentioned in either of them. However, when I re-ran the code a second time with **no changes** anywhere, I got a response saying that it does not know who George Washington is, which is what should have happened. Why is this the case? Why do the answers change drastically when I run the code with 0 changes? I attached a screenshot of the terminal output.

### Suggestion:
_No response_ | langchain answers change drastically | https://api.github.com/repos/langchain-ai/langchain/issues/12005/comments | 10 | 2023-10-19T03:18:24Z | 2024-02-14T16:09:18Z | https://github.com/langchain-ai/langchain/issues/12005 | 1,951,117,494 | 12,005 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have checked the documentation; although both are supported in LangChain, I could not find a way to stream output.
### Suggestion:
_No response_ | Is there any way to stream output for VLLM and Together ai? | https://api.github.com/repos/langchain-ai/langchain/issues/12004/comments | 3 | 2023-10-19T02:37:53Z | 2024-02-13T12:41:33Z | https://github.com/langchain-ai/langchain/issues/12004 | 1,951,079,170 | 12,004 |
[
"langchain-ai",
"langchain"
] | hi team,
Can I return source documents when using MultiRetrievalQAChain?
I want to fetch the metadata of the sources.
thx | MultiRetrievalQAChain return source documents | https://api.github.com/repos/langchain-ai/langchain/issues/12002/comments | 3 | 2023-10-19T01:16:15Z | 2024-02-10T16:11:52Z | https://github.com/langchain-ai/langchain/issues/12002 | 1,951,006,205 | 12,002 |
[
"langchain-ai",
"langchain"
] | This is the default signature of VectorstoreIndexCreator:
```
class VectorstoreIndexCreator(
*,
vectorstore_cls: type[VectorStore] = Chroma,
embedding: Embeddings = OpenAIEmbeddings,
text_splitter: TextSplitter = _get_default_text_splitter,
vectorstore_kwargs: dict = dict
)
```
The default is to use OpenAIEmbeddings as its embedding. What I'm trying to do is to use BedrockEmbeddings; below is my code:
```
loaders = TextLoader('data.txt')
index = VectorstoreIndexCreator(embedding=BedrockEmbeddings).from_loaders([loaders])
```
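A hedged reading of the error shown below: pydantic's "instance of Embeddings expected" check fails because `BedrockEmbeddings` (the class object itself) is passed where an *instance* is needed — so the fix would be `embedding=BedrockEmbeddings()` (note the parentheses), optionally with a configured Bedrock client. A stdlib-only demo with stand-in classes:

```python
class Embeddings:  # stand-in for langchain's Embeddings base class
    pass

class BedrockEmbeddings(Embeddings):  # stand-in for the real class
    pass

# Passing the class object fails an "instance of Embeddings" check;
# passing an instance (note the parentheses) succeeds.
print(isinstance(BedrockEmbeddings, Embeddings))    # -> False
print(isinstance(BedrockEmbeddings(), Embeddings))  # -> True
```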
The error that I got:
```
VectorstoreIndexCreator(embedding=BedrockEmbeddings).from_loaders([loaders])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for VectorstoreIndexCreator
embedding
instance of Embeddings expected (type=type_error.arbitrary_type; expected_arbitrary_type=Embeddings)
``` | Overriding VectorstoreIndexCreator() embedding | https://api.github.com/repos/langchain-ai/langchain/issues/12001/comments | 4 | 2023-10-19T01:04:25Z | 2023-10-19T03:07:29Z | https://github.com/langchain-ai/langchain/issues/12001 | 1,950,996,772 | 12,001 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain_version: "0.0.306"
library: "langchain"
library_version: "0.0.306"
platform: "Windows-10-10.0.22621-SP0"
py_implementation: "CPython"
runtime: "python"
runtime_version: "3.9.0rc2"
sdk_version: "0.0.41"
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Streaming the response of a simple route created between 2 Runnables doesn't work. When I stream each Runnable independently, it streams perfectly.
When I create a route for those Runnables to choose which one of them should be the next step, it doesn't stream the result.
Here's the example:
```
# ----------------- Runnable 1 -----------------
DEFAULT_DOCUMENT_PROMPT = PromptTemplate.from_template(template="{page_content}")
def _combine_documents(docs, document_prompt = DEFAULT_DOCUMENT_PROMPT, document_separator="\n\n"):
doc_strings = [format_document(doc, document_prompt) for doc in docs]
return document_separator.join(doc_strings)
def _format_chat_history(chat_history: List[Tuple]) -> str:
buffer = ""
for dialogue_turn in chat_history:
human = "Human: " + dialogue_turn[0]
ai = "Assistant: " + dialogue_turn[1]
buffer += "\n" + "\n".join([human, ai])
return buffer
_inputs = RunnableMap({
"standalone_question": {
"question": lambda x: x["question"],
"chat_history": lambda x: _format_chat_history(x['chat_history'])
} | PromptFactory.CONDENSE_QUESTION_PROMPT | ChatOpenAI(temperature=0, streaming=True) | StrOutputParser(),
})
_context = {
"context": itemgetter("standalone_question") | vector_retriever | _combine_documents,
"question": lambda x: x["standalone_question"]
}
Runnable1 = _inputs | _context | PromptFactory.PROMPT | ChatOpenAI(temperature=0, streaming=True)
# ----------------- Stream the response: -----------------
for s in Runnable1.stream({"question": "Hello!", "chat_history": []}):
print(s, end="", flush=True)
```
The above stream works
```
# ----------------- Runnable 2 -----------------
Runnable2 = RunnableMap({
"response": {
"question": lambda x: x["question"],
"chat_history": lambda x: _format_chat_history(x['chat_history']),
"context": lambda x: QueriesContext.get_result()
} | PromptTemplate.from_template(PromptFactory.followup_context_template) | ChatOpenAI(temperature=0, streaming=True) | StrOutputParser(),
})
# ----------------- Stream the response: -----------------
for s in Runnable2.stream({"question": "Hello!", "chat_history": [], "context": ['initialvalue']}):
print(s, end="", flush=True)
```
The above stream also works
```
def get_result(text):
return ["newvalue"]
router_chain = PromptTemplate.from_template(PromptFactory.router_template_test) | ChatOpenAI(temperature=0, streaming=True) | StrOutputParser()
def route(info):
if "runnable1" in info["topic"].lower():
return Runnable1
elif "runnable2" in info["topic"].lower():
return Runnable2
else:
raise Exception("Invalid topic")
full_runnable_router_chain = {
"topic": router_chain,
"question": lambda x: x["question"],
"chat_history": lambda x: x["chat_history"],
"context": lambda x: x["context"]
} | RunnableLambda(route)
# ----------------- Stream the response: -----------------
for s in full_runnable_router_chain.stream({"question": "Hello!", "chat_history": [], "context": ['initialvalue']}):
print(s.content, end="", flush=True)
```
The above setup doesn't stream.
### Expected behavior
Stream from "full_runnable_router_chain" | Streaming not working when routing between Runnables in LCEL | https://api.github.com/repos/langchain-ai/langchain/issues/11998/comments | 10 | 2023-10-19T00:24:22Z | 2023-12-26T20:49:20Z | https://github.com/langchain-ai/langchain/issues/11998 | 1,950,952,387 | 11,998 |
[
"langchain-ai",
"langchain"
] | ### Feature request
We propose the integration of a new tool into Langchain that will provide comprehensive support for queries on the AlphaVantage Trading API. AlphaVantage offers a wide range of financial data and services, and this integration will enhance Langchain's capabilities for financial data analysis.
Here is the list of AlphaVantage APIs that will be integrated into the new tool:
- [TIME_SERIES_DAILY](https://www.alphavantage.co/documentation/#daily)
- [TIME_SERIES_WEEKLY](https://www.alphavantage.co/documentation/#weekly)
- [Quote Endpoint](https://www.alphavantage.co/documentation/#latestprice)
- [Search Endpoint](https://www.alphavantage.co/documentation/#symbolsearch)
- [Market News & Sentiment](https://www.alphavantage.co/documentation/#news-sentiment)
- [Top Gainers, Losers, and Most Actively Traded Tickers (US Market)](https://www.alphavantage.co/documentation/#gainer-loser)
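For context, a minimal sketch of how the tool would hit one of the listed endpoints (`TIME_SERIES_DAILY`); the request shape follows Alpha Vantage's public documentation, and the `demo` key only works for the documented sample symbols:

```python
import urllib.parse

BASE = "https://www.alphavantage.co/query"
params = {"function": "TIME_SERIES_DAILY", "symbol": "IBM", "apikey": "demo"}
url = BASE + "?" + urllib.parse.urlencode(params)
print(url)
# -> https://www.alphavantage.co/query?function=TIME_SERIES_DAILY&symbol=IBM&apikey=demo

# resp = requests.get(url).json()  # JSON payload with the daily time series
```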
### Motivation
The integration of AlphaVantage Trading API support in Langchain will provide users with access to a wealth of financial data, enabling them to perform in-depth analysis, develop trading strategies, and make informed financial decisions, all with real-time information
### Your contribution
I am a University of Toronto Student, working in a group and plan to submit a PR for this issue in November | Add Alpha Vantage API Tool | https://api.github.com/repos/langchain-ai/langchain/issues/11994/comments | 8 | 2023-10-18T20:15:47Z | 2024-03-13T19:58:04Z | https://github.com/langchain-ai/langchain/issues/11994 | 1,950,574,553 | 11,994 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I want to interact with my database, so I'm using SQLDatabaseChain and a SQL Agent to convert a natural-language query to a SQL query and then execute it on the database.
I want to save the chat history here, so that if a user asks anything related to a previous question/answer, the chain picks it up and then answers.
So my doubt is: which memory type do I need to use?
Can you show me a code example?
### Suggestion:
_No response_ | Issue: Which memory type i need to use for db-backed history | https://api.github.com/repos/langchain-ai/langchain/issues/11985/comments | 6 | 2023-10-18T16:15:42Z | 2024-02-13T16:10:27Z | https://github.com/langchain-ai/langchain/issues/11985 | 1,950,139,790 | 11,985 |
[
"langchain-ai",
"langchain"
] | ### System Info
**Description:**
It's not possible to use the ParentDocumentRetriever and MultiVectorRetriever at the same time.
But it's a good idea to generate multiple vectors for one fragment, then manage the life cycle of all versions with the ParentDocumentRetriever.
I think the `MultiVectorRetriever` is not a good idea. I think it's better to create a MultiVectorStore, to be compatible with the whole vectorstore interface. Then it's possible to get a Retriever with vs.as_retriever().
I started to implement this scenario, but, because all my pull requests were never read, I prefer not to submit any code and maintain it endlessly.
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
MultiVectorRetriever needs a VectorStore.
But MultiVectorRetriever IS NOT a VectorStore.
### Expected behavior
An API to create multiple vectors for the same fragment, and, at the same time, a solution to manage all the vectors with the lifecycle of the original document.
If I remove/update the original document, all the associated vectors must be removed/updated.
| ParentDocumentRetriever is incompatible with MultiVectorRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/11983/comments | 4 | 2023-10-18T15:48:45Z | 2024-03-13T19:58:53Z | https://github.com/langchain-ai/langchain/issues/11983 | 1,950,071,972 | 11,983 |
[
"langchain-ai",
"langchain"
] | ### System Info
**description**
With parent_splitter, it's not possible to know the number of IDs before the split.
So, it's not possible to know the ID of each fragment.
Then, it's not possible to manage the life cycle of the fragment because it's impossible to know the list of IDs associated with the original big document.
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import ParentDocumentRetriever
from langchain.schema import Document
from langchain.storage import InMemoryStore
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores.chroma import Chroma
vectorstore = Chroma(
collection_name="full_documents",
embedding_function=OpenAIEmbeddings()
)
store = InMemoryStore()
docs = [Document(page_content=txt, metadata={"id": id}) for txt, id in [("aaaaaa", 1), ("bbbbbb", 2)]]
ParentDocumentRetriever(
vectorstore=vectorstore,
docstore=store,
id_key="id",
parent_splitter=RecursiveCharacterTextSplitter(
chunk_size = 2,
chunk_overlap = 0,
length_function = len,
add_start_index = True,
),
child_splitter=RecursiveCharacterTextSplitter(
chunk_size = 1,
chunk_overlap = 0,
length_function = len,
add_start_index = True,
),
).add_documents(docs,ids=[doc.metadata["id"] for doc in docs])
```
Produce:
```
ValueError: Got uneven list of documents and ids. If `ids` is provided, should be same length as `documents`.
```
### Expected behavior
No error. | ParentDocumentRetriever: parent_splitter and ids are incompatible | https://api.github.com/repos/langchain-ai/langchain/issues/11982/comments | 4 | 2023-10-18T15:39:29Z | 2024-03-13T19:58:37Z | https://github.com/langchain-ai/langchain/issues/11982 | 1,950,052,502 | 11,982 |
[
"langchain-ai",
"langchain"
] | ### System Info
latest version of langchain. python=3.11.4
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
How can I connect to a SQL view in the database, instead of using all the tables from the DB? As the DB is huge (6 GB), I'm unable to get a correct output: I'm getting token errors, and when it performs any join it's unable to fetch a correct answer. But when I use a selected table, it works efficiently in terms of accuracy and time taken.
Can I get an appropriate method by which I can deal with the huge data without getting token or parser errors,
and how to work with the SQL views in the database?
Here is the code for the connection string I have tried, which includes the tables and is working fine:
```python
from urllib.parse import quote_plus
driver = 'ODBC Driver 17 for SQL Server'
host = '########'
user = '#####'
database = 'HR_Git'
password = '#########'
encoded_password = quote_plus(password)
db = SQLDatabase.from_uri(f"mssql+pyodbc://{user}:{encoded_password}@{host}/{database}?driver={quote_plus(driver)}", include_tables = ['EGV_emp_departments'], sample_rows_in_table_info=2)
```
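If I read the API right, `SQLDatabase` accepts a `view_support` flag (off by default) that makes SQLAlchemy reflection include views, after which a view name can be listed in `include_tables`. A sketch — the view name `my_emp_view` and the credentials are placeholders, not real names:

```python
from urllib.parse import quote_plus

driver = "ODBC Driver 17 for SQL Server"
password = "p@ss"  # illustrative; '@' must be URL-encoded in the connection URI
uri = f"mssql+pyodbc://user:{quote_plus(password)}@host/HR_Git?driver={quote_plus(driver)}"

# Views are skipped by default; view_support=True asks reflection to include
# them, after which a view can be exposed instead of the big raw tables:
# db = SQLDatabase.from_uri(uri, include_tables=["my_emp_view"],
#                           view_support=True, sample_rows_in_table_info=2)

print("p%40ss" in uri)  # -> True
```

Pointing the chain at a small, pre-joined view also sidesteps the token-limit and join problems, since the LLM only sees the view's compact schema.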
Thank you.
### Expected behavior
How to work with SQL views inside the database, and a way to avoid token and parser errors. | how to connect to sql view in database | https://api.github.com/repos/langchain-ai/langchain/issues/11980/comments | 9 | 2023-10-18T14:22:33Z | 2024-04-22T16:39:16Z | https://github.com/langchain-ai/langchain/issues/11980 | 1,949,883,986 | 11,980
[
"langchain-ai",
"langchain"
] | # Issue:
UnstructuredEmailLoader only returning first or no `element` of the mail irrespective of the `mode`.
### System Info
langchain version: 0.0.316
unstructured version: 0.10.18
### Who can help?
@eyurtsev
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This is the [link](https://colab.research.google.com/drive/105l79buRBNsWelBIec0E1yUaowyfbVju?usp=sharing) to the code to reproduce the response
### Expected behavior
It should have returned the email's message text as the output, with its metadata | UnstructuredEmailLoader just returning the first element | https://api.github.com/repos/langchain-ai/langchain/issues/11978/comments | 4 | 2023-10-18T13:40:37Z | 2023-10-19T06:05:51Z | https://github.com/langchain-ai/langchain/issues/11978 | 1,949,786,869 | 11,978
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am using the LLMChain with the ConversationBufferMemory and it works pretty well. There is a case where the chain throws this exception: ValueError: unexpected '{' in field name. This happens only when I use the word "field" in my question. The code I have written is down below:
FUNCTION TO RETRIEVE LLMCHAIN:
```python
def get_llm_chain(OPENAI_API_KEY, context):
global glob_llm_chain
if glob_llm_chain is None:
llm = ChatOpenAI(temperature=0, openai_api_key=OPENAI_API_KEY, model_name="gpt-4")
sysprompttemplate = PromptTemplate.from_template("""Given the following extracted parts of a long document and a question, create a final answer in english with references SOURCES.
If you dont know the answer, just say that you don't know. Dont try to make up an answer. ALWAYS return a SOURCES part in your answer.
You also have the ability to remember the previous conversation you had with the human. Answer the question based on context provided.
============
{context}
============""").format(context=context)
messages = [ SystemMessagePromptTemplate.from_template(sysprompttemplate), MessagesPlaceholder(variable_name="chat_history"), HumanMessagePromptTemplate.from_template("{question}")]
prompt = ChatPromptTemplate.from_messages(messages=messages)
MEMORY = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
llm_chain = LLMChain(prompt=prompt,
llm=llm,
verbose=True,
memory=MEMORY)
glob_llm_chain = llm_chain
return glob_llm_chain
else:
return glob_llm_chain
```
FUNCTION THAT RETURNS THE ANSWER
```python
def get_answer(question):
    embeddings = OpenAIEmbeddings()
    milvus_connection_properties = get_milvus_connection_properties()
    vector_store = retrieve_colection(embeddings, "langchain", milvus_connection_properties)
    docs = vector_store.similarity_search(question)
    context = get_full_context(docs)
    llm_chain = get_llm_chain(OPENAI_API_KEY, context)
    #result = llm_chain.predict(input={"question": question, "context":context})
    result = llm_chain({"question":question})
    return result
```
So, for example, when I ask "What is a Field Dependency Map?", I get:
```
ERROR:root:unexpected '{' in field name
Traceback (most recent call last):
File "/home/ardit/projects/evolutivoAI/langchain/app.py", line 29, in <module>
st.session_state.results = get_answer(question)
^^^^^^^^^^^^^^^^^^^^
File "/home/ardit/projects/evolutivoAI/langchain/scripts/retrieval.py", line 68, in get_answer
llm_chain = get_llm_chain(OPENAI_API_KEY, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ardit/projects/evolutivoAI/langchain/scripts/retrieval.py", line 41, in get_llm_chain
messages = [ SystemMessagePromptTemplate.from_template(sysprompttemplate), MessagesPlaceholder(variable_name="chat_history"), HumanMessagePromptTemplate.from_template("{question}")]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ardit/projects/evolutivoAI/langchain/.venv/lib/python3.11/site-packages/langchain/prompts/chat.py", line 151, in from_template
prompt = PromptTemplate.from_template(template, template_format=template_format)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/ardit/projects/evolutivoAI/langchain/.venv/lib/python3.11/site-packages/langchain/prompts/prompt.py", line 204, in from_template
input_variables = {
^
File "/home/ardit/projects/evolutivoAI/langchain/.venv/lib/python3.11/site-packages/langchain/prompts/prompt.py", line 204, in <setcomp>
input_variables = {
^
ValueError: unexpected '{' in field name
```
Any suggestion why this happens?
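For what it's worth, a hedged guess (not verified against the library internals): the crash may come not from the word "field" itself, but from literal `{`/`}` characters inside the retrieved `context`. Because the context is baked into the template with `.format(...)` and the result is passed through `from_template` a second time, any brace in the documents is re-parsed as a placeholder. A minimal, self-contained sketch of a possible workaround is to double the braces first:

```python
# Hedged workaround sketch: double any literal braces in the retrieved
# context so that a later template-parsing/format pass treats them as
# plain characters instead of placeholder delimiters.
def escape_braces(text: str) -> str:
    return text.replace("{", "{{").replace("}", "}}")

context = 'A Field Dependency Map stores {"field": "value"} pairs.'
embedded = "Context: " + escape_braces(context)
# `embedded` can now safely go through another formatting pass:
print(embedded.format())  # Context: A Field Dependency Map stores {"field": "value"} pairs.
```

Applied before the first `format(context=context)` call, this would keep a second round of template parsing from choking on braces in the retrieved documents.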
| Specific word crashing LLMChain | https://api.github.com/repos/langchain-ai/langchain/issues/11977/comments | 2 | 2023-10-18T13:27:44Z | 2024-02-06T16:18:01Z | https://github.com/langchain-ai/langchain/issues/11977 | 1,949,759,853 | 11,977 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi!
I was taking a look at the Confluence integration at https://python.langchain.com/docs/integrations/document_loaders/confluence, and in our team we have the following doubt:
```python
from langchain.document_loaders import ConfluenceLoader

loader = ConfluenceLoader(
    url="https://yoursite.atlassian.com/wiki",
    username="me",
    api_key="12345"
)
documents = loader.load(space_key="SPACE", include_attachments=True, limit=100, max_pages=800)
```
Can we fetch multiple spaces at a time, generate embeddings for them, and do question answering over any of the spaces fetched by the Confluence loader?
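Not an authoritative answer, but since `loader.load()` returns a plain Python list, one hedged sketch for combining several spaces is to loop over the space keys and concatenate the results before embedding. A stand-in loader is used below so the snippet is self-contained; in practice the real `ConfluenceLoader` instance would take its place:

```python
# Stand-in mimicking ConfluenceLoader's load(space_key=...) -> list shape;
# swap in the real loader in practice.
class StubLoader:
    def load(self, space_key, **kwargs):
        return [f"{space_key}-page-1", f"{space_key}-page-2"]

loader = StubLoader()
all_documents = []
for space_key in ["SPACE1", "SPACE2"]:
    all_documents.extend(loader.load(space_key=space_key, limit=100))
print(len(all_documents))  # 4
```

With all spaces merged into one list, a single embedding pass and vector store can then serve question answering across every fetched space.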
### Suggestion:
_No response_ | Issue: Doubt about Confluence Loader | https://api.github.com/repos/langchain-ai/langchain/issues/11976/comments | 1 | 2023-10-18T10:32:57Z | 2023-10-18T13:00:26Z | https://github.com/langchain-ai/langchain/issues/11976 | 1,949,413,017 | 11,976 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: v0.0.316
I am following the langchain documentation to add memory to the chat with LLMChain: https://python.langchain.com/docs/modules/memory/adding_memory
```py
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
CYPHER_GENERATION_TEMPLATE = """Task: Generate Cypher statement to query a graph database.
Instructions: ...
Schema:
{schema}
The question is:
{question}
Chat History:
{chat_history}
"""
CYPHER_GENERATION_PROMPT = PromptTemplate(
    input_variables=["schema", "question", "chat_history"],
    template=CYPHER_GENERATION_TEMPLATE,
)
chain = GraphCypherQAChain.from_llm(
    AzureChatOpenAI(
        deployment_name=OPENAI_DEPLOYMENT_NAME,
        model_name=OPENAI_MODEL_NAME,
        openai_api_base=OPENAI_API_BASE,
        openai_api_version=OPENAI_API_VERSION,
        openai_api_key=OPENAI_API_KEY,
        openai_api_type=OPENAI_API_TYPE,
        temperature=0
    ),
    graph=graph, verbose=True,
    return_intermediate_steps=False,
    cypher_prompt=CYPHER_GENERATION_PROMPT,
    include_types=property_include_list,
    memory=memory,
)
return chain.run(question)
```
So, when I call `chain.run()`, I get the error:
```bash
> Entering new GraphCypherQAChain chain...
Traceback (most recent call last):
File "/home/sudobhat/workspaces/llm/openai-chatgpt-sample-code/neo4j-use-case/neo4j_query_filtered_schema.py", line 171, in <module>
answer = get_openai_answer(question)
File "/home/sudobhat/workspaces/llm/openai-chatgpt-sample-code/neo4j-use-case/neo4j_query_filtered_schema.py", line 162, in get_openai_answer
return chain.run(question)
File "/home/sudobhat/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 501, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/home/sudobhat/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 306, in __call__
raise e
File "/home/sudobhat/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 300, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/sudobhat/.local/lib/python3.10/site-packages/langchain/chains/graph_qa/cypher.py", line 185, in _call
generated_cypher = self.cypher_generation_chain.run(
File "/home/sudobhat/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 501, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/home/sudobhat/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 282, in __call__
inputs = self.prep_inputs(inputs)
File "/home/sudobhat/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 439, in prep_inputs
self._validate_inputs(inputs)
File "/home/sudobhat/.local/lib/python3.10/site-packages/langchain/chains/base.py", line 191, in _validate_inputs
raise ValueError(f"Missing some input keys: {missing_keys}")
ValueError: Missing some input keys: {'chat_history'}
```
This is being done almost exactly as in the documentation. Is this a bug or am I missing something?
I also tried an approach with partial variables like this, by looking at the answer in this issue https://github.com/langchain-ai/langchain/issues/8746:
```python
CYPHER_GENERATION_PROMPT = PromptTemplate(
    input_variables=["schema", "question"],
    template=CYPHER_GENERATION_TEMPLATE,
    partial_variables={"chat_history": chat_history}
)
```
This does not throw any error, but when I print the final prompt, there is nothing in the chat history.
Also, it seems to work for a normal LLMChain, but not for GraphCypherQAChain.
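Until memory is wired through properly, one hedged workaround (not the official API) is to render the chat history into the question string yourself, so the cypher-generation prompt only needs the `{schema}` and `{question}` variables it already receives:

```python
# Hedged workaround sketch: fold prior turns into the question text so the
# GraphCypherQAChain prompt only needs the input keys it already receives.
def with_history(question: str, history_pairs) -> str:
    rendered = "\n".join(f"Human: {q}\nAI: {a}" for q, a in history_pairs)
    return f"{question}\n\nChat History:\n{rendered}" if rendered else question

print(with_history("Who directed it?", [("Find sci-fi movies", "Inception")]))
```

The augmented string would then be passed to `chain.run(...)` in place of the raw question.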
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce:
1. Run the code provided in the documentation: https://python.langchain.com/docs/modules/memory/adding_memory, but for GraphCypherQAChain
### Expected behavior
The expected behavior is no error thrown like: ValueError: Missing some input keys: {'chat_history'} when chat_history is passed in the prompt template. | ValueError: Missing some input keys: {'chat_history'} when adding memory to GraphCypherQAChain | https://api.github.com/repos/langchain-ai/langchain/issues/11975/comments | 7 | 2023-10-18T10:23:57Z | 2024-06-03T12:12:30Z | https://github.com/langchain-ai/langchain/issues/11975 | 1,949,395,838 | 11,975 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi developers, I am trying to run SeleniumURLLoader inside a Docker container.
When I try to do web scraping on the URLs, I am met with this error:
`ERROR MESSAGE: Message: Service /root/.cache/selenium/chromedriver/linux64/118.0.5993.70/chromedriver unexpectedly exited. Status code was: 127`
I tried running `apt-get update` and installing all the necessary dependencies, as well as running this command in the Dockerfile:
`RUN chmod +x /root/.cache/selenium/chromedriver/linux64/118.0.5993.70/chromedriver` but I was met with this error:
`chmod: cannot access '/root/.cache/selenium/chromedriver/linux64/118.0.5993.70/chromedriver': No such file or directory`
Any advice would be greatly appreciated.
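For what it's worth, exit status 127 from chromedriver usually indicates a missing shared library rather than a missing binary. A hedged Dockerfile sketch (a Debian/Ubuntu base image is assumed; package names vary by distribution) that installs a matching chromium/chromedriver pair from apt instead of relying on Selenium Manager's cached download:

```dockerfile
# Hedged sketch, assuming a Debian/Ubuntu base image.
RUN apt-get update && apt-get install -y --no-install-recommends \
        chromium chromium-driver \
    && rm -rf /var/lib/apt/lists/*
# Then point Selenium at the system driver, e.g. in Python:
#   webdriver.Chrome(service=Service("/usr/bin/chromedriver"))
```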
### Suggestion:
_No response_ | SeleniumURLLoader in Docker Container | https://api.github.com/repos/langchain-ai/langchain/issues/11974/comments | 6 | 2023-10-18T09:52:41Z | 2024-05-08T10:53:28Z | https://github.com/langchain-ai/langchain/issues/11974 | 1,949,335,904 | 11,974 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Being able to set a SelfQueryRetriever's kwargs from the output of the condense_question chain, or any chain that runs just before it.
### Motivation
The motivation behind this is enabling the ConversationalRetrievalChain based on a vectordb to tackle a big range of queries in a more specific way, for example:
- When asking about a specific topic, I want **k** kwarg to be no more than 10 and to have **lambda_mult** close to 0, meaning that documents taken from **fetch_k** should be similar.
- On the other hand when I am asking it to compare 2 entities I want **k** to be 20 for example and the **lambda_mult** to be really close to 1, meaning documents taken should have a little bit of variance
This will ensure that the passed context is good enough
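As a self-contained illustration of the idea (the rule, thresholds, and kwargs below are made up for the example, not a recommended policy), the selection could be as simple as a function that maps the condensed question to retriever search kwargs:

```python
# Hedged sketch: choose retriever kwargs from a lightweight classification
# of the (condensed) question; values are illustrative only.
def pick_search_kwargs(question: str) -> dict:
    if "compare" in question.lower():
        return {"k": 20, "lambda_mult": 0.9}   # comparison: favor diversity
    return {"k": 10, "lambda_mult": 0.1}       # single topic: favor similarity

print(pick_search_kwargs("Compare product A and product B"))
```

The resulting dict could then, for example, be assigned to the retriever's `search_kwargs` before the retrieval step runs (hedged; the wiring depends on the chain setup).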
### Your contribution
For now I am trying to output these variables from the condense_question sub-chain, but I am not sure it's going to work out. | ConversationalRetrievalChain : make the condense_question chain choose the SelfQueryRetriever kwargs | https://api.github.com/repos/langchain-ai/langchain/issues/11971/comments | 4 | 2023-10-18T08:55:23Z | 2024-02-10T16:12:03Z | https://github.com/langchain-ai/langchain/issues/11971 | 1,949,221,723 | 11,971
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version: 0.0.316
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import json
from typing import Any, List, Mapping, Optional

import mlflow
import requests
from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM
from langchain.prompts.chat import (
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)
from mlflow.models.signature import infer_signature


class CustomLLM(LLM):
    endpoint_name: str
    token: str

    @property
    def _llm_type(self) -> str:
        return "custom"

    def _score_model(self, dataset, temperature=0.75, max_new_tokens=100):
        url = f'https://XXXXXXXXXXX/{self.endpoint_name}/XXXXXX'
        headers = {'Authorization': f'Bearer {self.token}', 'Content-Type': 'application/json'}
        temp_dict = {'index': [0], 'columns': ['prompt', 'temperature', 'max_new_tokens'], 'data': [[dataset, temperature, max_new_tokens]]}
        data_json = json.dumps({'dataframe_split': temp_dict}, allow_nan=True)
        response = requests.request(method='POST', headers=headers, url=url, data=data_json)
        return response.json()['predictions']['candidates'][0]

    def _call(self, prompt: str, stop: Optional[List[str]] = None,
              run_manager: Optional[CallbackManagerForLLMRun] = None, **kwargs: Any):
        if stop is not None:
            raise ValueError("stop kwargs are not permitted.")
        return self._score_model(prompt)

    @property
    def _identifying_params(self) -> Mapping[str, Any]:
        """Get the identifying parameters."""
        return {"endpoint_name": self.endpoint_name, "token": self.token}


class MLflowQABot(mlflow.pyfunc.PythonModel):
    def __init__(self, llm, retriever, chat_prompt):
        # The QABot class just calls the custom LLM class and customizes the output.
        self.qabot = QABot(llm, retriever, chat_prompt)

    def predict(self, context, inputs):
        questions = list(inputs['question'])
        return [self.qabot.get_answer(q) for q in questions]


system_message_prompt = SystemMessagePromptTemplate.from_template(config['system_message_template'])
human_message_prompt = HumanMessagePromptTemplate.from_template(config['human_message_template'])
chat_prompt = ChatPromptTemplate.from_messages([system_message_prompt, human_message_prompt])

# Works fine (i.e., the model logs successfully) when I switch the LLM to ChatOpenAI:
# llm = ChatOpenAI(model_name=config['openai_chat_model'], temperature=config['temperature'])
llm = CustomLLM(endpoint_name="llama2-7b", token=token)  # pydantic-based LLMs need keyword arguments
model = MLflowQABot(llm, retriever, chat_prompt)
input_columns = [{"type": "string", "name": input_key} for input_key in qa_chain.input_keys]
with mlflow.start_run() as run:
    mlflow_result = mlflow.pyfunc.log_model(
        python_model=model,
        extra_pip_requirements=['langchain', 'tiktoken', 'openai',
                                'faiss-gpu', 'typing-inspect', 'typing_extensions'],
        artifact_path='model',
        #registered_model_name=config['registered_model_name'],
        signature=infer_signature(input_columns, "This is prediction"))
```
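A hedged workaround sketch for this class of failure (cloudpickle struggling with pydantic-based objects): keep only plain strings on the logged model and rebuild the heavyweight LLM lazily on first use, so `log_model` never has to serialize the pydantic object. The class below is a self-contained stand-in; in practice `_ensure_bot` would construct `CustomLLM`/`QABot`:

```python
import pickle

# Self-contained stand-in showing the lazy-construction pattern: only the
# two strings get pickled; the heavyweight object is rebuilt on demand.
class LazyQABot:
    def __init__(self, endpoint_name: str, token: str):
        self.endpoint_name = endpoint_name
        self.token = token
        self._bot = None  # never pickled with a value

    def __getstate__(self):
        state = self.__dict__.copy()
        state["_bot"] = None  # drop the unpicklable member
        return state

    def _ensure_bot(self):
        if self._bot is None:
            # stand-in for: QABot(CustomLLM(endpoint_name=..., token=...), ...)
            self._bot = ("bot-for", self.endpoint_name)
        return self._bot

restored = pickle.loads(pickle.dumps(LazyQABot("llama2-7b", "tok")))
print(restored._ensure_bot())  # ('bot-for', 'llama2-7b')
```

Whether this resolves the specific `bool_validator` error depends on which object cloudpickle chokes on, so treat it as one avenue to try rather than a confirmed fix.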
### Expected behavior
### This is the expected result.
``` python
(1) MLflow run
Logged [1 run](XXXXXb7d8e14c08964c988339b7c8a7) to an [experiment](XXXXXX80369415271497) in MLflow. [Learn more](https://docs.microsoft.com/azure/databricks/applications/mlflow/tracking#tracking-machine-learning-training-runs)
```
### But getting this error:
```python
---------------------------------------------------------------------------
PicklingError Traceback (most recent call last)
File <command-1869824020018208>, line 3
1 # persist model to mlflow
2 with mlflow.start_run() as run:
----> 3 mlflow_result = mlflow.pyfunc.log_model(
4 python_model = model,
5 extra_pip_requirements = ['langchain', 'tiktoken', 'openai',
6 'faiss-gpu', 'typing-inspect', 'typing_extensions'],
7 artifact_path = 'model',
8 #registered_model_name=config['registered_model_name'],
9 signature = signature)
File /databricks/python/lib/python3.10/site-packages/mlflow/pyfunc/__init__.py:1931, in log_model(artifact_path, loader_module, data_path, code_path, conda_env, python_model, artifacts, registered_model_name, signature, input_example, await_registration_for, pip_requirements, extra_pip_requirements, metadata)
1773 @format_docstring(LOG_MODEL_PARAM_DOCS.format(package_name="scikit-learn"))
1774 def log_model(
1775 artifact_path,
(...)
1788 metadata=None,
1789 ):
1790 """
1791 Log a Pyfunc model with custom inference logic and optional data dependencies as an MLflow
1792 artifact for the current run.
(...)
1929 metadata of the logged model.
1930 """
-> 1931 return Model.log(
1932 artifact_path=artifact_path,
1933 flavor=mlflow.pyfunc,
1934 loader_module=loader_module,
1935 data_path=data_path,
1936 code_path=code_path,
1937 python_model=python_model,
1938 artifacts=artifacts,
1939 conda_env=conda_env,
1940 registered_model_name=registered_model_name,
1941 signature=signature,
1942 input_example=input_example,
1943 await_registration_for=await_registration_for,
1944 pip_requirements=pip_requirements,
1945 extra_pip_requirements=extra_pip_requirements,
1946 metadata=metadata,
1947 )
File /databricks/python/lib/python3.10/site-packages/mlflow/models/model.py:572, in Model.log(cls, artifact_path, flavor, registered_model_name, await_registration_for, metadata, **kwargs)
566 if (
567 (tracking_uri == "databricks" or get_uri_scheme(tracking_uri) == "databricks")
568 and kwargs.get("signature") is None
569 and kwargs.get("input_example") is None
570 ):
571 _logger.warning(_LOG_MODEL_MISSING_SIGNATURE_WARNING)
--> 572 flavor.save_model(path=local_path, mlflow_model=mlflow_model, **kwargs)
573 mlflow.tracking.fluent.log_artifacts(local_path, mlflow_model.artifact_path)
574 try:
File /databricks/python/lib/python3.10/site-packages/mlflow/pyfunc/__init__.py:1759, in save_model(path, loader_module, data_path, code_path, conda_env, mlflow_model, python_model, artifacts, signature, input_example, pip_requirements, extra_pip_requirements, metadata, **kwargs)
1748 return _save_model_with_loader_module_and_data_path(
1749 path=path,
1750 loader_module=loader_module,
(...)
1756 extra_pip_requirements=extra_pip_requirements,
1757 )
1758 elif second_argument_set_specified:
-> 1759 return mlflow.pyfunc.model._save_model_with_class_artifacts_params(
1760 path=path,
1761 signature=signature,
1762 hints=hints,
1763 python_model=python_model,
1764 artifacts=artifacts,
1765 conda_env=conda_env,
1766 code_paths=code_path,
1767 mlflow_model=mlflow_model,
1768 pip_requirements=pip_requirements,
1769 extra_pip_requirements=extra_pip_requirements,
1770 )
File /databricks/python/lib/python3.10/site-packages/mlflow/pyfunc/model.py:189, in _save_model_with_class_artifacts_params(path, python_model, signature, hints, artifacts, conda_env, code_paths, mlflow_model, pip_requirements, extra_pip_requirements)
187 saved_python_model_subpath = "python_model.pkl"
188 with open(os.path.join(path, saved_python_model_subpath), "wb") as out:
--> 189 cloudpickle.dump(python_model, out)
190 custom_model_config_kwargs[CONFIG_KEY_PYTHON_MODEL] = saved_python_model_subpath
192 if artifacts:
File /databricks/python/lib/python3.10/site-packages/cloudpickle/cloudpickle_fast.py:57, in dump(obj, file, protocol, buffer_callback)
45 def dump(obj, file, protocol=None, buffer_callback=None):
46 """Serialize obj as bytes streamed into file
47
48 protocol defaults to cloudpickle.DEFAULT_PROTOCOL which is an alias to
(...)
53 compatibility with older versions of Python.
54 """
55 CloudPickler(
56 file, protocol=protocol, buffer_callback=buffer_callback
---> 57 ).dump(obj)
File /databricks/python/lib/python3.10/site-packages/cloudpickle/cloudpickle_fast.py:602, in CloudPickler.dump(self, obj)
600 def dump(self, obj):
601 try:
--> 602 return Pickler.dump(self, obj)
603 except RuntimeError as e:
604 if "recursion" in e.args[0]:
PicklingError: Can't pickle <cyfunction bool_validator at 0x7f95076a4450>: it's not the same object as pydantic.validators.bool_validator
``` | ISSUE: Not able to log CustomLLM using mlflow.pyfunc.log_model | https://api.github.com/repos/langchain-ai/langchain/issues/11966/comments | 2 | 2023-10-18T08:11:09Z | 2024-02-08T16:16:56Z | https://github.com/langchain-ai/langchain/issues/11966 | 1,949,137,515 | 11,966 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I got this error when I built a chatbot with LangChain using VertexAI. I couldn't find any details about it so far.
```
File /opt/conda/lib/python3.10/site-packages/langchain/llms/vertexai.py:100, in completion_with_retry.<locals>._completion_with_retry(*args, **kwargs)
     98 @retry_decorator
     99 def _completion_with_retry(*args: Any, **kwargs: Any) -> Any:
--> 100     return llm.client.predict(*args, **kwargs)
TypeError: TextGenerationModel.predict() got an unexpected keyword argument 'candidate_count'
```
### Suggestion:
_No response_ | Issue: TypeError: TextGenerationModel.predict() got an unexpected keyword argument 'candidate_count' | https://api.github.com/repos/langchain-ai/langchain/issues/11961/comments | 11 | 2023-10-18T07:06:38Z | 2024-02-15T16:08:25Z | https://github.com/langchain-ai/langchain/issues/11961 | 1,949,009,892 | 11,961 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Currently, the streaming documentation page gives no guidance on how to interrupt streaming once the model starts generating. I want to implement the same "Stop generation" button functionality as the ChatGPT web UI, which should stop the streaming generation. I tried to use try/except, but it is not working.
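For reference, the interruption pattern that streaming interfaces generally allow is to consume chunks from a generator and `break` out early; with a real model the loop would be over something like `llm.stream(prompt)` (hedged, self-contained sketch below):

```python
# Self-contained sketch of the interruption pattern: consume chunks from a
# generator and break out early (here `fake_stream` stands in for the model).
def fake_stream():
    for tok in ["one ", "two ", "three ", "four "]:
        yield tok

collected = []
for tok in fake_stream():
    collected.append(tok)
    if len(collected) == 2:  # e.g. the user pressed "Stop generation"
        break
print("".join(collected))
```

Documenting this loop-and-break pattern (and what happens to the underlying HTTP connection when the generator is abandoned) would cover the "Stop generation" use case.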
### Idea or request for content:
There is no documentation on how to interrupt the streaming generation once the model has started generating. Even if an error happens during new-token generation, the program does not stop running; it just raises an error. How can the generation be stopped if an error happens? | DOC: There is no documentation about how to interrupt the streaming generation once the model started generation. | https://api.github.com/repos/langchain-ai/langchain/issues/11959/comments | 17 | 2023-10-18T06:27:34Z | 2024-06-21T20:54:46Z | https://github.com/langchain-ai/langchain/issues/11959 | 1,948,952,565 | 11,959
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'm experimenting with natural language to SQL conversion using SQLDatabaseChain and SQLDatabaseAgent. In this experiment, I'm utilizing ConversationBufferWindowMemory. However, I've encountered an issue where the memory is not functioning as expected: when I ask a question related to a previous question or answer, the chain/agent doesn't use the memory and instead responds independently of the earlier conversation.
How can I fix this?
@dosubot
### Suggestion:
_No response_ | Issue: <ConversationBufferWindowMemory doesn't work with db based chat history> | https://api.github.com/repos/langchain-ai/langchain/issues/11958/comments | 7 | 2023-10-18T03:44:58Z | 2024-04-24T16:37:15Z | https://github.com/langchain-ai/langchain/issues/11958 | 1,948,740,305 | 11,958 |
[
"langchain-ai",
"langchain"
] | ### System Info
It was unexpected that I had to provide the access_token when using QianfanLLMEndpoint
Name: langchain
Version: 0.0.312
Name: qianfan
Version: 0.0.6
### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.llms import QianfanLLMEndpoint
llm = QianfanLLMEndpoint(qianfan_ak=client_id,qianfan_sk=client_secret,model="ERNIE-Bot-turbo")
res = llm("hi")
print(res)
```
error msg:
860 if not isinstance(prompt, str):
861 raise ValueError(
862 "Argument `prompt` is expected to be a string. Instead found "
863 f"{type(prompt)}. If you want to run the LLM on multiple prompts, use "
864 "`generate` instead."
865 )
866 return (
--> 867 self.generate(
868 [prompt],
869 stop=stop,
870 callbacks=callbacks,
871 tags=tags,
872 metadata=metadata,
873 **kwargs,
874 )
875 .generations[0][0]
...
166 )
167 AuthManager().register(self._ak, self._sk, self._access_token)
168 else:
InvalidArgumentError: both ak and sk must be provided, otherwise access_token should be provided
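One hedged workaround reported for this class of error is to provide the credentials through environment variables before the endpoint object is constructed, since the underlying `qianfan` SDK reads them at client-creation time. The variable names below are an assumption based on the SDK's naming; verify them against the qianfan documentation:

```python
import os

# Hedged sketch: set the credentials in the environment *before* constructing
# QianfanLLMEndpoint; the variable names are an assumption.
os.environ["QIANFAN_AK"] = "your-client-id"
os.environ["QIANFAN_SK"] = "your-client-secret"
# llm = QianfanLLMEndpoint(model="ERNIE-Bot-turbo")  # then construct as usual
print(os.environ["QIANFAN_AK"])
```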
### Expected behavior
Normal operation | It was unexpected that I had to provide the access_token when using QianfanLLMEndpoint | https://api.github.com/repos/langchain-ai/langchain/issues/11957/comments | 3 | 2023-10-18T03:27:29Z | 2024-05-07T16:06:08Z | https://github.com/langchain-ai/langchain/issues/11957 | 1,948,721,975 | 11,957
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.316
langserve 0.0.10
python 3.11.4 on darwin
### Who can help?
@agola11 @hwchase17
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
Langserve example here (https://github.com/langchain-ai/langserve-launch-example/blob/main/langserve_launch_example/chain.py) in which I want to use ConversationChain instead of ChatOpenAI.
server.py
```python
#!/usr/bin/env python
"""A server for the chain above."""
from fastapi import FastAPI
from langserve import add_routes

from chain import conversation_chain

app = FastAPI(title="My App")
add_routes(app, conversation_chain)

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="localhost", port=8000)
```
chain.py
```python
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.prompts import PromptTemplate
from langchain.memory import ConversationBufferMemory

template = """Assistant is a large language model trained by OpenAI.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
{history}
Human: {input}
Assistant:"""
prompt = PromptTemplate(input_variables=["history", "input"], template=template)

conversation_chain = ConversationChain(
    llm=OpenAI(temperature=0),
    prompt=prompt,
    verbose=True,
    memory=ConversationBufferMemory(),
)
```
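For context, token streaming in this stack is driven by the callback interface: the model fires `on_llm_new_token` for each chunk. Below is a self-contained sketch of that mechanism with a stand-in handler; the real thing would subclass `BaseCallbackHandler` and be passed via `callbacks=[...]`, and whether `ConversationChain` forwards it end-to-end is exactly what appears broken here:

```python
# Stand-in collector mimicking the on_llm_new_token callback hook.
class CollectingHandler:
    def __init__(self):
        self.tokens = []

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        self.tokens.append(token)

handler = CollectingHandler()
for chunk in ["Hel", "lo", ", world"]:  # pretend these arrive from the LLM
    handler.on_llm_new_token(chunk)
print("".join(handler.tokens))  # Hello, world
```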
### Expected behavior
I was expecting a streaming response, as with ChatOpenAI. It seems to me that ConversationChain doesn't support streaming responses. | Streaming support with ConversationChain | https://api.github.com/repos/langchain-ai/langchain/issues/11945/comments | 13 | 2023-10-17T20:36:50Z | 2024-06-21T13:11:39Z | https://github.com/langchain-ai/langchain/issues/11945 | 1,948,210,943 | 11,945
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
When looking at the documentation for agents, memory, and the Agent Executor, I noticed you pass the tools to both the agent and the executor. What's the purpose of passing the tools to both? Shouldn't you only need to pass them to one?
https://python.langchain.com/docs/use_cases/question_answering/conversational_retrieval_agents#the-agent
**The Agent:**
`agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt)`
**The Agent Executor:**
`agent_executor = AgentExecutor(agent=agent, tools=tools, memory=memory, verbose=True,
return_intermediate_steps=True)`
Perhaps I don't understand what AgentExecutor is doing under the hood? If someone could explain, that would be great.
### Idea or request for content:
_No response_ | DOC: Agent & AgentExecutioner | https://api.github.com/repos/langchain-ai/langchain/issues/11937/comments | 2 | 2023-10-17T18:29:39Z | 2024-02-08T16:17:01Z | https://github.com/langchain-ai/langchain/issues/11937 | 1,948,013,473 | 11,937 |
[
"langchain-ai",
"langchain"
] | ### System Info
I compared the speed of indexing with and without the Indexing API, and I noticed a significant difference: using the Indexing API is 30-50% slower. As an experiment, I also tried to index the exact same data twice, and the Indexing API took an extremely long time the second time. Any help would be appreciated. Thank you.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
With Indexing API:
```
vdb = PGVector(
connection_string,
collection_name,
embedding_function
)
record_manager = SQLRecordManager(
namespace, engine=postgres_engine
)
record_manager.create_schema()
index(
doc_chunks,
record_manager,
vdb,
cleanup=None,
source_id_key="source",
)
```
Without Indexing API:
```
vdb = PGVector.from_texts(
    texts,
    embedding,
    collection_name=collection_name,
    connection_string=connection_string,
)
```
### Expected behavior
I expect the speeds to be comparable with and without the Indexing API. | Indexing API slow | https://api.github.com/repos/langchain-ai/langchain/issues/11935/comments | 10 | 2023-10-17T17:36:09Z | 2024-02-14T03:47:19Z | https://github.com/langchain-ai/langchain/issues/11935 | 1,947,924,672 | 11,935
[
"langchain-ai",
"langchain"
] | ### Feature request
My feature proposal involves the integration of both a greeting module and a gratitude module into the Langchain SQLDatabaseToolkit. The greeting module is designed to deliver an introductory message about the SQL-helpful bot, and the gratitude module aims to express appreciation when users interact with it.
### Motivation
The motivation behind this proposal is to enhance the user experience and make interactions with the SQLDatabaseToolkit more friendly and engaging. By adding these modules, we can create a welcoming and user-focused environment, improving user satisfaction and the overall utility of the toolkit.
I'm always frustrated when I encounter impersonal and uninspiring interactions with SQL database tools. By implementing these modules, we can address this issue and provide a more human-like and engaging experience for users.
### Your contribution
I am currently able to improve and thoroughly test the module. | Enhancing Human-Level Interaction: Incorporating Greeting and Gratitude Modules into the Langchain SQLDatabaseToolkit | https://api.github.com/repos/langchain-ai/langchain/issues/11931/comments | 2 | 2023-10-17T17:07:58Z | 2024-02-06T16:18:21Z | https://github.com/langchain-ai/langchain/issues/11931 | 1,947,880,043 | 11,931
[
"langchain-ai",
"langchain"
] | ### System Info
latest version of all modules
### Who can help?
here's my PROMPT and code:
from langchain.prompts.chat import ChatPromptTemplate
updated_prompt = ChatPromptTemplate.from_messages(
[
("system",
"""
You are a knowledgeable AI assistant specializing in extracting information from the 'inquiry' table in the MySQL Database.
Your primary task is to perform a single query on the 'inquiry' table and retrieve the data using SQL.
When formulating SQL queries, keep the following context in mind:
- Filter records based on exact column value matches.
- If the user inquires about the Status of the inquiry fetch all these columns: status, name, and time values, and inform the user about these specific values.
- Limit query results to a maximum of 3 unless the user specifies otherwise.
- Only query necessary columns.
- Avoid querying for non-existent columns.
- Place the 'ORDER BY' clause after 'WHERE.'
- Do not add a semicolon at the end of the SQL.
If the query results in an empty set, respond with "information not found"
Use this format:
Question: The user's query
Thought: Your thought process
Action: SQL Query
Action Input: SQL query
Observation: Query results
... (repeat for multiple queries)
Thought: Summarize what you've learned
Final Answer: Provide the final answer
Begin!
"""),
("user", "{question}\n ai: "),
]
)
llm = ChatOpenAI(model=os.getenv("OPENAI_CHAT_MODEL"), temperature=0) # best result
sql_toolkit = SQLDatabaseToolkit(db=db, llm=llm)
sql_toolkit.get_tools()
sqldb_agent = create_sql_agent(
llm=llm,
toolkit=sql_toolkit,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
handle_parsing_errors=True,
)
sqldb_agent.run(updated_prompt.format(
question="What is the status of inquiry 123?"
))
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The same prompt and agent setup code as shown in the section above.
### Expected behavior
It should be comprehensible and generate succinct results | Langchain prompt not working as expected , it's not consistence and not able to understand examples | https://api.github.com/repos/langchain-ai/langchain/issues/11929/comments | 3 | 2023-10-17T16:44:09Z | 2024-02-08T16:17:05Z | https://github.com/langchain-ai/langchain/issues/11929 | 1,947,841,518 | 11,929
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
It seems the default prompts do not support passing a JSON Schema to create_json_agent to provide descriptions for each field, including nested fields. I wanted to understand whether this simply hasn't been deemed necessary?
In addition, if there is a list of JSON objects, with some fields in each JSON object having nested objects or arrays, is the JSON agent the right fit, or should one try to use custom agents (perhaps Pandas) for such cases?
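While the default prompts lack schema support, one workaround (a sketch, not part of the LangChain API) is to flatten the schema's per-field `description` entries into text and prepend that to the agent's prompt prefix yourself:

```python
def schema_descriptions(schema: dict, path: str = "") -> list[str]:
    """Collect 'description' entries from a JSON Schema, including nested
    objects and arrays, as dotted-path lines suitable for a prompt prefix."""
    lines = []
    if "description" in schema:
        lines.append(f"{path or '<root>'}: {schema['description']}")
    for name, sub in schema.get("properties", {}).items():
        lines.extend(schema_descriptions(sub, f"{path}.{name}".lstrip(".")))
    if "items" in schema:
        lines.extend(schema_descriptions(schema["items"], f"{path}[]"))
    return lines
```

Joining the returned lines with newlines gives a field glossary you can pass as (or append to) the agent's `prefix`.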
### Suggestion:
_No response_ | Passing a JSON-Schema in create_json_agent | https://api.github.com/repos/langchain-ai/langchain/issues/11927/comments | 3 | 2023-10-17T16:20:36Z | 2024-02-08T16:17:10Z | https://github.com/langchain-ai/langchain/issues/11927 | 1,947,798,052 | 11,927 |
[
"langchain-ai",
"langchain"
] | ### Feature request
A tool to allow agents to search and retrieve data from IMDb (https://www.imdb.com/).
### Motivation
IMDb is one of the largest movie databases available online. Adding this tool would allow agents to intelligently retrieve up-to-date movie information from IMDb, enhancing the experience of its users. For example, with this tool, users could utilize LangChain through easily accessible prompts to find movies based on favorite genres, relations to other movies, or other information such as actors and producers.
### Your contribution
If all goes well, a PR can be ready sometime in November with this feature implemented. | Adding an IMDb tool | https://api.github.com/repos/langchain-ai/langchain/issues/11926/comments | 2 | 2023-10-17T15:52:11Z | 2024-03-13T20:01:05Z | https://github.com/langchain-ai/langchain/issues/11926 | 1,947,748,350 | 11,926 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am calling LLM `ChatOpenAI` models through `load_qa_with_sources_chain` as shown in the following code snippet.
```python
llm=ChatOpenAI(
model_name=....,
temperature=0,
openai_api_key=...,
max_tokens=...,
)
chain = load_qa_with_sources_chain(
llm, chain_type=chain_type, verbose=verbose, prompt=prompt
)
response = chain(
{"input_documents": docs, "question": query},
)
```
Is there an easy way to somehow obtain the raw `OpenAI` response for debugging purposes? I am especially interested in the `finish_reason` value (retrieved in the `LLMResult` object), to know whether the OpenAI response is complete!
Thank you in advance for your help
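For reference, `finish_reason` comes from each entry of the `choices` array in the raw OpenAI-style response; a small helper (hypothetical, for illustration only) over a captured response dict shows where it lives:

```python
def finish_reasons(raw_response: dict) -> list[str]:
    """Pull finish_reason out of each choice of an OpenAI-style response dict."""
    return [choice.get("finish_reason", "unknown")
            for choice in raw_response.get("choices", [])]
```

In LangChain the same values end up in `LLMResult.generations[i][j].generation_info`, which you can reach by calling the chat model's `generate` method directly or by inspecting the result in a callback (hedged — exact attribute paths depend on your version).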
### Suggestion:
_No response_ | Is there a way to obtain `finish_reason` value from OpenAI response when using `load_qa_with_sources_chain` | https://api.github.com/repos/langchain-ai/langchain/issues/11924/comments | 2 | 2023-10-17T15:32:03Z | 2024-02-06T16:18:36Z | https://github.com/langchain-ai/langchain/issues/11924 | 1,947,709,822 | 11,924 |
[
"langchain-ai",
"langchain"
] | ### System Info
Here chain example:
Thought:I can query the 'information_enquiry' table to find out who is assigned to a job.
Action: SQL Query
Action Input: SELECT name FROM nformation_enquiry WHERE job_id = '123'
Observation: SQL Query is not a valid tool, try one of [sql_db_query, sql_db_schema, sql_db_list_tables, sql_db_query_checker].
Thought:I made a mistake in the action. I should use the `sql_db_query` tool to execute the SQL query.
Action: sql_db_query
Action Input: SELECT name FROM customer_information_enquiry WHERE job_no = '123'
Observation:
Thought:I have the information about the person assigned to the job.
Final Answer: The job was assigned to Helena. --> this is wrong; it fabricated an answer from the first row
I have instructed the prompt as follows:
Begin!
Question: Who is assigned to 123?
Thought: I need to find the status of a specific job.
Action: SQL Query
Action Input: SELECT name FROM information_enquiry WHERE job_no ='123'
Observation:
Thought: I don't have the information about the job status.
Final Answer: information not found
but it's not working
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
llm = ChatOpenAI(model=os.getenv("OPENAI_CHAT_MODEL"), temperature=0)
sql_toolkit = SQLDatabaseToolkit(db=db, llm=llm)
sql_toolkit.get_tools()
sqldb_agent = create_sql_agent(
    llm=llm,
    toolkit=sql_toolkit,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    verbose=True,
    handle_parsing_errors=True,
)
formatted_prompt = updated_prompt.format(question=query)
result = sqldb_agent.run(formatted_prompt)
sqldb_agent.run(updated_prompt.format(
    question="Who is assigned to the job 123"))
```
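Beyond prompt wording, a defensive option is to post-process the tool's observation so an empty result set is stated explicitly before the model sees it — a minimal sketch (a hypothetical helper, not part of the toolkit):

```python
def render_observation(rows: list) -> str:
    """Make an empty SQL result explicit so the model cannot invent a row."""
    if not rows:
        return "information not found"
    return "; ".join(str(row) for row in rows)
```

Wrapping the SQL tool so its output passes through such a function removes the ambiguity that lets the model hallucinate a name from the sampled rows.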
### Expected behavior
it should return 'information not found' | Langchain SQLDatabaseToolkit providing incorrect results. it's faking the results using the top k rows | https://api.github.com/repos/langchain-ai/langchain/issues/11922/comments | 3 | 2023-10-17T15:02:23Z | 2024-02-09T16:14:38Z | https://github.com/langchain-ai/langchain/issues/11922 | 1,947,647,017 | 11,922 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I want to modify the prompt of the OpenAI functions agent so I can add more parameters to pass as inputs during execution:
initialize_agent(tools=tool_items,
llm=llm,
agent=AgentType.OPENAI_FUNCTIONS)
and during execution such as
result = agent_chain.run({"input": "Whats the distance between Malmo and Stockholm"})
i want to be able to pass several input parameters so that for example it becomes like this
agent_chain.run({"input": "Whats the distance between Malmo and Stockholm","language":"French"}) etc
I have been able to do that successfully for other chains, but not for this one
How do I achieve this?
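A common workaround (not the only route — `agent_kwargs` with extra prompt messages is another) is to fold the additional parameters into the single `input` string before calling `run`; a sketch with a hypothetical template:

```python
def build_agent_input(template: str, **params: str) -> str:
    """Merge several user-supplied parameters into the one `input` string
    the OPENAI_FUNCTIONS agent accepts."""
    return template.format(**params)
```

Usage would look roughly like `agent_chain.run({"input": build_agent_input("{question} Answer in {language}.", question="Whats the distance between Malmo and Stockholm", language="French")})`.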
### Suggestion:
_No response_ | Issue: How to modify the actual prompt to add in new input parameters of the open ai functions agent, so that during the running of the agent, we pass those parameters also | https://api.github.com/repos/langchain-ai/langchain/issues/11921/comments | 4 | 2023-10-17T15:00:56Z | 2024-02-12T16:11:14Z | https://github.com/langchain-ai/langchain/issues/11921 | 1,947,643,981 | 11,921 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: 0.0.272
Python Version: 3.11.0
### Who can help?
@hwchase17
@ag
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The `save_context` method of `ConversationTokenBufferMemory` performs pop operations directly on `chat_memory.messages` ([link to code](https://github.com/langchain-ai/langchain/blob/31f264169db4ab23689f2e179983f1cfdfd1a33a/libs/langchain/langchain/memory/token_buffer.py#L48-L49)).
It works only with in-memory histories, not with DB-backed ones like `PostgresChatMessageHistory`, because there the `buffer` (`self.chat_memory.messages`) is effectively immutable:
```python
def save_context(self, inputs: Dict[str, Any], outputs: Dict[str, str]) -> None:
"""Save context from this conversation to buffer. Pruned."""
super().save_context(inputs, outputs)
# Prune buffer if it exceeds max token limit
buffer = self.chat_memory.messages
curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
if curr_buffer_length > self.max_token_limit:
pruned_memory = []
while curr_buffer_length > self.max_token_limit:
pruned_memory.append(buffer.pop(0))
curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
```
### Expected behavior
`ChatMessageHistory` should work like an interface with `ConversationTokenBufferMemory`.
Steps to resolve:
- remove direct calls to `self.chat_memory.messages`
- add a private variable to hold chat history | ConversationTokenBufferMemory doesn't work with db based chat history | https://api.github.com/repos/langchain-ai/langchain/issues/11919/comments | 4 | 2023-10-17T14:28:09Z | 2024-02-11T16:10:42Z | https://github.com/langchain-ai/langchain/issues/11919 | 1,947,569,327 | 11,919 |
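One shape the fix could take — pruning into a local copy instead of popping the (possibly immutable) backing store — sketched here with a pluggable token counter, as an assumption about the eventual design rather than the actual patch:

```python
def prune_messages(messages, count_tokens, max_tokens):
    """Return the newest suffix of `messages` that fits in `max_tokens`,
    without mutating the original history (e.g. a DB-backed one)."""
    kept = list(messages)
    while kept and count_tokens(kept) > max_tokens:
        kept.pop(0)  # drop the oldest message from the *copy* only
    return kept
```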
[
"langchain-ai",
"langchain"
We are trying to use the LangChain S3 loader to load files from a bucket using Python. Once we create any subfolder, we get a "no such directory" error, and we are also unable to load the files from those subfolders.
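For background, S3 has no real directories: a "folder" created in the console is just a zero-byte object whose key ends in `/`. A loader walking a bucket typically needs to skip those placeholder keys and keep everything under the prefix — roughly (hypothetical helper, not the loader's code):

```python
def loadable_keys(all_keys: list[str], prefix: str) -> list[str]:
    """Keep object keys under `prefix`, skipping zero-length 'folder'
    placeholder keys that end with '/'."""
    return [k for k in all_keys
            if k.startswith(prefix) and not k.endswith("/")]
```

If the loader is choking on a placeholder key, filtering keys this way before downloading is one assumption-laden mitigation.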
How to fix these errors? | langchain s3 loader not able to load files from subfolders | https://api.github.com/repos/langchain-ai/langchain/issues/11917/comments | 4 | 2023-10-17T13:00:11Z | 2024-02-09T16:14:53Z | https://github.com/langchain-ai/langchain/issues/11917 | 1,947,371,287 | 11,917 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.11
Lanchain 315
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import openai
import os
from langchain.llms import OpenAI

llm = OpenAI(temperature=0.3)
print(llm.predict("What is the capital of India?"))
```
### Expected behavior
When OpenAI quotas are reached (or no payment method is defined), requests should not be retried but should raise an appropriate error.
Error in console :
```
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised RateLimitError: You exceeded your current quota, please check your plan and billing details..
```
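The TS fix distinguishes a permanent quota failure from a transient rate limit; the same decision in a Python sketch (error codes are assumed from OpenAI's error payloads, not guaranteed by LangChain):

```python
PERMANENT_CODES = {"insufficient_quota", "invalid_api_key"}

def should_retry(error_code: str) -> bool:
    """Retry transient throttling, but fail fast on exhausted quota."""
    return error_code not in PERMANENT_CODES
```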
Equivalent TS issue : https://github.com/langchain-ai/langchainjs/issues/1929
Fix in TS land : https://github.com/langchain-ai/langchainjs/pull/1934 | Support for OpenAI quotas | https://api.github.com/repos/langchain-ai/langchain/issues/11914/comments | 2 | 2023-10-17T10:02:45Z | 2024-02-06T16:19:01Z | https://github.com/langchain-ai/langchain/issues/11914 | 1,947,039,933 | 11,914 |
[
"langchain-ai",
"langchain"
] | ### Discussed in https://github.com/langchain-ai/langchain/discussions/11846
<div type='discussions-op-text'>
<sup>Originally posted by **Marbyun** October 16, 2023</sup>
Hi folks!
I have a use case: create a chatbot that will use two sources as its dataset (a text file and SQLite). I use the Multiple Retrieval Sources [document](https://python.langchain.com/docs/use_cases/question_answering/multiple_retrieval) as my main code. But the code cannot hold a continuous conversation like this:
> q:'what's name of employee id xx.xx'
> a:'his name is xxx xxx'
> q:'what's his email?'
> a:'his email is xxx@email.com'
and then I found code in this [update](https://github.com/langchain-ai/langchain/pull/8597/files) by @keenborder786 that can run my Q&A, but right now I am confused about how to combine the two. Can you help me, please? I am very new here...
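One way to combine the two sources is a small router that dispatches each question to the right backend while sharing one chat history; a stdlib sketch with hypothetical stand-in backends (the real ones would be the SQL chain and the text retriever):

```python
def route_question(question: str, history: list, backends: dict):
    """Send employee-style questions to the SQL backend, everything else
    to the text retriever; the shared history enables follow-ups."""
    kind = "sql" if "employee" in question.lower() else "text"
    answer = backends[kind](question, history)
    history.append((question, answer))
    return answer
```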
> Ps:
> - I have 2 sources Text file have data about company profile and SQLite have employee data
> - Need to combine the sources and can do continuous conversation</div> | How to use Multiple Retrieaval Sources and Added Memory at SQLDatabaseChain | https://api.github.com/repos/langchain-ai/langchain/issues/11908/comments | 2 | 2023-10-17T07:13:01Z | 2024-02-06T16:19:07Z | https://github.com/langchain-ai/langchain/issues/11908 | 1,946,710,273 | 11,908 |
[
"langchain-ai",
"langchain"
] | @dosu-bot
My code worked fine when I used Chroma, but with DeepLake I am facing this error, even though everything else is exactly the same:
creating embeddings: 12%|โโโโโโโโโโโ | 3/26 [00:02<00:17, 1.32it/s]Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised RateLimitError: Rate limit reached for text-embedding-ada-002 in organization org-m0YReKtLXxUATOVCwzcBNfqm on requests per min. Limit: 3 / min. Please try again in 20s. Contact us through our help center at help.openai.com if you continue to have issues. Please add a payment method to your account to increase your rate limit. Visit https://platform.openai.com/account/billing to add a payment method..
Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised RateLimitError: Rate limit reached for text-embedding-ada-002 in organization org-m0YReKtLXxUATOVCwzcBNfqm on requests per min. Limit: 3 / min. Please try again in 20s. Contact us through our help center at help.openai.com if you continue to have issues. Please add a payment method to your account to increase your rate limit. Visit https://platform.openai.com/account/billing to add a payment method..
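The limit in the message is 3 requests/min (the free-trial tier), so the likely difference is just that DeepLake's ingestion submits embedding batches faster than Chroma's did. A client-side throttle along these lines (hypothetical helper) keeps any embedding loop under the cap:

```python
import time

def rate_limited_map(fn, items, max_per_minute: float = 3):
    """Apply `fn` to each item, spacing calls to stay under the rate cap."""
    interval = 60.0 / max_per_minute
    last = None
    results = []
    for item in items:
        if last is not None:
            wait = interval - (time.monotonic() - last)
            if wait > 0:
                time.sleep(wait)
        last = time.monotonic()
        results.append(fn(item))
    return results
```

Adding a payment method (raising the tier) is the other fix OpenAI's message suggests.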
Below is my code:
from langchain.chat_models import ChatOpenAI
from langchain.prompts.prompt import PromptTemplate
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain
from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Qdrant
from langchain.callbacks import StreamingStdOutCallbackHandler
from langchain.embeddings import OpenAIEmbeddings
from langchain.callbacks import get_openai_callback
from langchain.vectorstores import Chroma
from langchain.vectorstores import DeepLake
from langchain.document_loaders import DirectoryLoader, PyPDFLoader
from dotenv import load_dotenv
import time
import warnings
# warnings.filterwarnings("ignore")
load_dotenv()
directory_path = "C:\\Users\\Asus\\Documents\\Vendolista"
pdf_loader = DirectoryLoader(directory_path,
glob="**/*.pdf",
show_progress=True,
use_multithreading=True,
silent_errors=True,
loader_cls = PyPDFLoader)
documents = pdf_loader.load()
print(str(len(documents))+ " documents loaded")
def print_letter_by_letter(text):
for char in text:
print(char, end='', flush=True)
time.sleep(0.02)
# Create embeddings
def langchain(customer_prompt, chat_history):
llm = ChatOpenAI(temperature = 0, model_name='gpt-3.5-turbo', callbacks=[StreamingStdOutCallbackHandler()], streaming = True)
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=800,
chunk_overlap=80,
)
chunks = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
persist_directory = "C:\\Users\\Asus\\OneDrive\\Documents\\Vendolista"
knowledge_base = Chroma.from_documents(chunks, embeddings, persist_directory = persist_directory)
#save to disk
knowledge_base.persist()
#To delete the DB we created at first so that we can be sure that we will load from disk as fresh db
knowledge_base = None
new_knowledge_base = Chroma(persist_directory = persist_directory, embedding_function = embeddings)
# Create a custom prompt for your use case
prompt_template = """
Answer the Question as a AI assistant that is answering based on the documents only. If the question is unrelated
then say "sorry this question is completely not related. If you think it is, email the staff
and they will get back to you: yazanrisheh@hotmail.com." Do not ever answer with "I don't know" to any question.
You either give an answer or mention it's not related.
Text: {context}
Question: {question}
Answer :
"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
memory = ConversationBufferMemory(llm=llm, memory_key='chat_history', input_key='question', output_key='answer', return_messages=False)
conversation = ConversationalRetrievalChain.from_llm(
llm=llm,
retriever=new_knowledge_base.as_retriever(),
memory=memory,
chain_type="stuff",
verbose=True,
combine_docs_chain_kwargs={"prompt":PROMPT}
)
return conversation({"question": customer_prompt, "chat_history": memory})
def main():
chat_history = []
while True:
customer_prompt = input("Ask me anything about the files (type 'exit' to quit): ")
if customer_prompt.lower() in ["exit"] and len(customer_prompt) == 4:
end_chat = "Thank you for visiting us! Have a nice day"
print_letter_by_letter(end_chat)
break
if customer_prompt:
with get_openai_callback() as cb:
response = (langchain(customer_prompt, chat_history))
print(response['answer'])
print(cb)
if __name__ == "__main__":
main() | RateLimitError | https://api.github.com/repos/langchain-ai/langchain/issues/11907/comments | 2 | 2023-10-17T06:43:59Z | 2024-02-08T16:17:35Z | https://github.com/langchain-ai/langchain/issues/11907 | 1,946,665,722 | 11,907 |
[
"langchain-ai",
"langchain"
] | ### Feature request
There should be a callback handler like [OpenAICallbackHandler](https://github.com/langchain-ai/langchain/blob/31f264169db4ab23689f2e179983f1cfdfd1a33a/libs/langchain/langchain/callbacks/openai_info.py#L120) for AWS Bedrock models, so that we can easily get the token usage and monitor cost.
Unlike OpenAI, the AWS Bedrock API does not appear to return token usage.
Can we have a similar feature, perhaps [using the internal Anthropic tokenizer to count](https://github.com/langchain-ai/langchain/blob/31f264169db4ab23689f2e179983f1cfdfd1a33a/libs/langchain/langchain/llms/bedrock.py#L405)?
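The shape such a handler could take, sketched with stdlib only — the whitespace split is a stand-in for Anthropic's real tokenizer, and nothing here reflects Bedrock's actual pricing or API:

```python
class BedrockUsageTracker:
    """Accumulate prompt/completion token counts across calls."""

    def __init__(self, count_tokens=lambda text: len(text.split())):
        self.count_tokens = count_tokens  # swap in the real tokenizer here
        self.prompt_tokens = 0
        self.completion_tokens = 0

    def record(self, prompt: str, completion: str) -> None:
        self.prompt_tokens += self.count_tokens(prompt)
        self.completion_tokens += self.count_tokens(completion)

    @property
    def total_tokens(self) -> int:
        return self.prompt_tokens + self.completion_tokens
```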
### Motivation
Cost monitoring
### Your contribution
PR | Enable token usage count for AWS Bedrock | https://api.github.com/repos/langchain-ai/langchain/issues/11906/comments | 7 | 2023-10-17T05:16:38Z | 2024-03-30T14:02:02Z | https://github.com/langchain-ai/langchain/issues/11906 | 1,946,564,023 | 11,906 |
[
"langchain-ai",
"langchain"
] | @dosu-bot
Below is my Python code for creating a Q&A over documents using LangChain with the OpenAI API. I have three issues I want to fix:
1) The answer is always printed twice.
2) I am using ConversationalRetrievalChain from LangChain therefore, I want to retrieve the source of the document when I get my answer.
3) I want to change the chain_type = "stuff" into Map Re Rank.
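On issue (1): with `StreamingStdOutCallbackHandler` the answer is streamed to stdout once, and then `print(response['answer'])` prints it again. Collecting tokens instead of printing them removes the duplicate; the class below is a stdlib sketch of that shape, not LangChain's handler:

```python
class CollectingStreamHandler:
    """Accumulate streamed tokens instead of writing them to stdout."""

    def __init__(self):
        self._tokens = []

    def on_llm_new_token(self, token: str) -> None:
        self._tokens.append(token)

    @property
    def text(self) -> str:
        return "".join(self._tokens)
```

For (2), `ConversationalRetrievalChain.from_llm` accepts `return_source_documents=True`, and for (3) `chain_type="map_rerank"` should be accepted in place of `"stuff"` (hedged — check your LangChain version).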
from langchain.chat_models import ChatOpenAI
from langchain.prompts.prompt import PromptTemplate
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain
from langchain.document_loaders import WebBaseLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Qdrant
from langchain.callbacks import StreamingStdOutCallbackHandler
from langchain.embeddings import OpenAIEmbeddings
from langchain.callbacks import get_openai_callback
from langchain.vectorstores import Chroma
from langchain.document_loaders import DirectoryLoader, PyPDFLoader
from dotenv import load_dotenv
import time
import warnings
# warnings.filterwarnings("ignore")
load_dotenv()
def print_letter_by_letter(text):
for char in text:
print(char, end='', flush=True)
time.sleep(0.02)
# Create embeddings
def langchain(customer_prompt, chat_history):
directory_path = "C:\\Users\\Asus\\Documents\\Vendolista"
pdf_loader = DirectoryLoader(directory_path,
glob="**/*.pdf",
show_progress=True,
use_multithreading=True,
silent_errors=True,
loader_cls = PyPDFLoader)
documents = pdf_loader.load()
print(str(len(documents))+ " documents loaded")
llm = ChatOpenAI(temperature = 0, model_name='gpt-3.5-turbo', callbacks=[StreamingStdOutCallbackHandler()], streaming = True)
# Split into chunks
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=800,
chunk_overlap=80,
)
chunks = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
persist_directory = "C:\\Users\\Asus\\OneDrive\\Documents\\Vendolista"
knowledge_base = Chroma.from_documents(chunks, embeddings, persist_directory = persist_directory)
#save to disk
knowledge_base.persist()
#To delete the DB we created at first so that we can be sure that we will load from disk as fresh db
knowledge_base = None
new_knowledge_base = Chroma(persist_directory = persist_directory, embedding_function = embeddings)
# Create a custom prompt for your use case
prompt_template = """
Answer the Question as a AI assistant that is answering based on the documents only. If the question is unrelated
then say "sorry this question is completely not related. If you think it is, email the staff
and they will get back to you: yazanrisheh@hotmail.com." Do not ever answer with "I don't know" to any question.
You either give an answer or mention it's not related.
Text: {context}
Question: {question}
Answer :
"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
memory = ConversationBufferMemory(llm=llm, memory_key='chat_history', input_key='question', output_key='answer', return_messages=True)
conversation = ConversationalRetrievalChain.from_llm(
llm=llm,
retriever=new_knowledge_base.as_retriever(),
memory=memory,
chain_type="stuff",
combine_docs_chain_kwargs={"prompt":PROMPT}
)
return conversation({"question": customer_prompt, "chat_history": memory})
def main():
chat_history = []
while True:
customer_prompt = input("Ask me anything about the files (type 'exit' to quit): ")
if customer_prompt.lower() in ["exit"] and len(customer_prompt) == 4:
end_chat = "Thank you for visiting us! Have a nice day"
print_letter_by_letter(end_chat)
break
if customer_prompt:
with get_openai_callback() as cb:
response = langchain(customer_prompt, chat_history)
print(response['answer'])
print(cb)
if __name__ == "__main__":
main() | Repetitive answer and not getting source of documents | https://api.github.com/repos/langchain-ai/langchain/issues/11905/comments | 2 | 2023-10-17T04:03:05Z | 2024-02-06T16:19:17Z | https://github.com/langchain-ai/langchain/issues/11905 | 1,946,493,093 | 11,905 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I want to load a .doc file using UnstructuredFileLoader, and I installed all required libraries, but I got this error:
```
Traceback (most recent call last):
  File "D:\soft\Anaconda3\lib\site-packages\unstructured\partition\doc.py", line 67, in partition_doc
    convert_office_doc(
  File "D:\soft\Anaconda3\lib\site-packages\unstructured\partition\common.py", line 294, in convert_office_doc
    logger.info(output.decode().strip())
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xd0 in position 42: invalid continuation byte.
```
Then I looked at the code of the `convert_office_doc` function, which cannot change the encoding type; the problem is that 'utf-8' is not a valid encoding for this output, so I changed it to 'gb2312' and it worked.
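For reference, a decode-with-fallback along these lines (a hypothetical helper, not the library's API) avoids hard-coding any single encoding:

```python
def safe_decode(data: bytes, encodings=("utf-8", "gb2312")) -> str:
    """Try a list of encodings before falling back to replacement chars."""
    for enc in encodings:
        try:
            return data.decode(enc)
        except UnicodeDecodeError:
            continue
    return data.decode("utf-8", errors="replace")
```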
So, should a parameter be added for the encoding type when reading the file?
### Idea or request for content:
So, should a parameter be added for the encoding type when reading the file?
For example, UnstructuredFileLoader(file_path, file_name, coding='utf-8'). | logger.info(output.decode().strip()) of common.py will raise a mistake when use convert_office_doc to convert .doc file to .docx file because it is not encode by 'utf-8' | https://api.github.com/repos/langchain-ai/langchain/issues/11898/comments | 2 | 2023-10-17T01:16:58Z | 2024-02-07T00:57:56Z | https://github.com/langchain-ai/langchain/issues/11898 | 1,946,344,621 | 11,898 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Combine LangChain with Stable Diffusion to generate an image related to a given text.
### Motivation
The motivation behind this feature proposal is to enhance the capabilities of LangChain by integrating it with Stable Diffusion to enable the generation of images based on text inputs. This integration serves several purposes:
Enriched User Experience: The ability to generate images from text can significantly enrich user experiences in various applications. It can be applied to chatbots, content generation, creative tools, and more.
Creative Content Generation: This feature can empower users to generate creative content, artwork, or visualizations based on their textual ideas or descriptions.
Enhanced Language Model Integration: By integrating with Stable Diffusion, LangChain can harness the power of state-of-the-art generative models to create meaningful images that complement the textual context.
Research and Innovation: This integration can also serve as a research platform to explore the synergy between language models and generative image models, advancing the field of AI and creative content generation.
### Your contribution
Identify Relevant Repositories, Submit a Pull Request, Engage with the Community | LangChain with Stable Diffusion | https://api.github.com/repos/langchain-ai/langchain/issues/11894/comments | 8 | 2023-10-16T21:53:29Z | 2024-03-18T16:05:49Z | https://github.com/langchain-ai/langchain/issues/11894 | 1,946,164,422 | 11,894 |
[
"langchain-ai",
"langchain"
] | ### System Info
I'm using jupyter notebook and Azure OpenAI
Python 3.11.5
langchain==0.0.315
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I got the error:
InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.
When I run
```
from langchain.embeddings.openai import OpenAIEmbeddings
embedding = OpenAIEmbeddings()
sentence1 = "i like dogs"
embedding1 = embedding.embed_query(sentence1)
```
But if I run the equivalent call directly - not using LangChain - it works fine:
```
response = openai.Embedding.create(
input="Your text string goes here",
model="text-embedding-ada-002",
engine="embeddingstest"
)
embeddings = response['data'][0]['embedding']
```
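A likely cause (hedged — inferred from the working `openai.Embedding.create` call needing `engine="embeddingstest"`) is that the LangChain embeddings object doesn't know the Azure API type or deployment name. The Azure settings the OpenAI client reads from the environment would look roughly like this, with the resource name as a placeholder:

```shell
export OPENAI_API_TYPE=azure
export OPENAI_API_BASE="https://<your-resource>.openai.azure.com/"
export OPENAI_API_VERSION=2023-05-15
# and in Python, pass the deployment explicitly, e.g.
# OpenAIEmbeddings(deployment="embeddingstest", chunk_size=1)
```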
### Expected behavior
I would expect the embeddings of my string. | API deployment not found when using Azure with embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/11893/comments | 3 | 2023-10-16T21:00:04Z | 2023-10-17T14:16:56Z | https://github.com/langchain-ai/langchain/issues/11893 | 1,946,085,853 | 11,893 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.11.6
Langchain 0.0.315
Device name Precision7760
Processor 11th Gen Intel(R) Core(TM) i9-11950H @ 2.60GHz 2.61 GHz
Installed RAM 32.0 GB (31.2 GB usable)
Device ID 049EB0D9-D534-47A1-9F59-62B1F3D578D4
Product ID 00355-60713-95419-AAOEM
System type 64-bit operating system, x64-based processor
Pen and touch No pen or touch input is available for this display
Edition Windows 11 Pro
Version 22H2
Installed on 10/7/2023
OS build 22621.2428
Experience Windows Feature Experience Pack 1000.22674.1000.0
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.schema import AIMessage, HumanMessage
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationChain

chat = ChatOpenAI()
memory = ConversationBufferMemory()
conversation = ConversationChain(llm=chat, memory=memory)
user_input = "Hola atenea yo me llamo franks y estoy interesado en adquirir un vehículo"
response = conversation.run([HumanMessage(content=str(user_input))])
print(response)
```
I am getting this error:
```
Traceback (most recent call last):
  File "F:\Audi\Chatbot_Demo\mini_example.py", line 12, in <module>
    response = conversation.run([HumanMessage(content=str(user_input))])
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\Python\Python311\Lib\site-packages\langchain\chains\base.py", line 503, in run
    return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\Python\Python311\Lib\site-packages\langchain\chains\base.py", line 310, in __call__
    final_outputs: Dict[str, Any] = self.prep_outputs(
                                    ^^^^^^^^^^^^^^^^^^
  File "E:\Python\Python311\Lib\site-packages\langchain\chains\base.py", line 406, in prep_outputs
    self.memory.save_context(inputs, outputs)
  File "E:\Python\Python311\Lib\site-packages\langchain\memory\chat_memory.py", line 36, in save_context
    self.chat_memory.add_user_message(input_str)
  File "E:\Python\Python311\Lib\site-packages\langchain\schema\chat_history.py", line 46, in add_user_message
    self.add_message(HumanMessage(content=message))
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\Python\Python311\Lib\site-packages\langchain\load\serializable.py", line 97, in __init__
    super().__init__(**kwargs)
  File "E:\Python\Python311\Lib\site-packages\pydantic\v1\main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for HumanMessage
content
```
### Expected behavior
The argument passed to HumanMessage is a str, nevertheless an error is produced (as it wasn't). | HumanMessage error expecting a str type in content | https://api.github.com/repos/langchain-ai/langchain/issues/11882/comments | 4 | 2023-10-16T19:31:43Z | 2024-06-03T00:42:51Z | https://github.com/langchain-ai/langchain/issues/11882 | 1,945,944,555 | 11,882 |
[
"langchain-ai",
"langchain"
] | ### System Info
I am using the code provided in the Chat-LangChain implementation
```python
def load_langchain_docs():
return SitemapLoader(
"https://python.langchain.com/sitemap.xml",
filter_urls=["https://python.langchain.com/"],
parsing_function=langchain_docs_extractor,
default_parser="lxml",
bs_kwargs={
"parse_only": SoupStrainer(
name=("article", "title", "html", "lang", "content")
),
},
meta_function=metadata_extractor,
).load()
```
When fetching of the pages completes, the error appears.
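Two distinct things show up in the log below: the harmless "Event loop is closed" noise from aiohttp's Proactor transports on Windows, and the real failure — `RecursiveUrlLoader` creating an `asyncio.Lock()` while the current loop has already been closed by the previous loader. A commonly used mitigation (an assumption that the default Windows Proactor loop and a stale closed loop are the culprits) is to adjust asyncio before the loaders run:

```python
import asyncio
import sys

# Windows: avoid Proactor teardown noise from aiohttp transports.
if sys.platform == "win32":
    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())

# Give the next loader a fresh loop in case a previous one closed it.
asyncio.set_event_loop(asyncio.new_event_loop())
```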
```
Fetching pages: 100%|###################################################################################################################| 944/944 [04:17<00:00, 3.66it/s]
Exception ignored in: <function _ProactorBasePipeTransport.__del__ at 0x00000202B4D78CA0>
Traceback (most recent call last):
File "C:\Users\meetg\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 116, in __del__
self.close()
File "C:\Users\meetg\AppData\Local\Programs\Python\Python39\lib\asyncio\proactor_events.py", line 108, in close
self._loop.call_soon(self._call_connection_lost, None)
File "C:\Users\meetg\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 746, in call_soon
self._check_closed()
File "C:\Users\meetg\AppData\Local\Programs\Python\Python39\lib\asyncio\base_events.py", line 510, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
โญโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโ Traceback (most recent call last) โโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฎ
โ C:\python\Projects\Langchain\save_langchain_docs.py: โ
โ 201 in <module> โ
โ โ
โ 198 โ working_memory.save(documents=docs_transformed) โ
โ 199 โ
โ 200 if __name__ == "__main__": โ
โ โฑ 201 โ ingest_docs() โ
โ 202 โ
โ โ
โ C:\python\Projects\Langchain\save_langchain_docs.py: โ
โ 182 in ingest_docs โ
โ โ
โ 179 def ingest_docs(): โ
โ 180 โ docs_from_documentation = load_langchain_docs() โ
โ 181 โ logger.info(f"Loaded {len(docs_from_documentation)} docs from documentation") โ
โ โฑ 182 โ docs_from_api = load_api_docs() โ
โ 183 โ logger.info(f"Loaded {len(docs_from_api)} docs from API") โ
โ 184 โ โ
โ 185 โ text_splitter = RecursiveCharacterTextSplitter(chunk_size=4000, chunk_overlap=200) โ
โ โ
โ C:\python\Projects\Langchain\save_langchain_docs.py: โ
โ 159 in load_api_docs โ
โ โ
โ 156 โ
โ 157 โ
โ 158 def load_api_docs(): โ
โ โฑ 159 โ return RecursiveUrlLoader( โ
โ 160 โ โ url="https://api.python.langchain.com/en/latest/", โ
โ 161 โ โ max_depth=8, โ
โ 162 โ โ extractor=simple_extractor, โ
โ โ
โ C:\python\Projects\Gesture โ
โ Scrolling\keyvenv\lib\site-packages\langchain\document_loaders\recursive_url_loader.py:112 in โ
โ __init__ โ
โ โ
โ 109 โ โ self.timeout = timeout โ
โ 110 โ โ self.prevent_outside = prevent_outside if prevent_outside is not None else True โ
โ 111 โ โ self.link_regex = link_regex โ
โ โฑ 112 โ โ self._lock = asyncio.Lock() if self.use_async else None โ
โ 113 โ โ self.headers = headers โ
โ 114 โ โ self.check_response_status = check_response_status โ
โ 115 โ
โ โ
โ C:\Users\meetg\AppData\Local\Programs\Python\Python39\lib\asyncio\locks.py:81 in __init__ โ
โ โ
โ 78 โ โ self._waiters = None โ
โ 79 โ โ self._locked = False โ
โ 80 โ โ if loop is None: โ
โ โฑ 81 โ โ โ self._loop = events.get_event_loop() โ
โ 82 โ โ else: โ
โ 83 โ โ โ self._loop = loop โ
โ 84 โ โ โ warnings.warn("The loop argument is deprecated since Python 3.8, " โ
โ โ
โ C:\Users\meetg\AppData\Local\Programs\Python\Python39\lib\asyncio\events.py:642 in โ
โ get_event_loop โ
โ โ
โ 639 โ โ โ self.set_event_loop(self.new_event_loop()) โ
โ 640 โ โ โ
โ 641 โ โ if self._local._loop is None: โ
โ โฑ 642 โ โ โ raise RuntimeError('There is no current event loop in thread %r.' โ
โ 643 โ โ โ โ โ โ โ % threading.current_thread().name) โ
โ 644 โ โ โ
โ 645 โ โ return self._local._loop โ
โฐโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโโฏ
RuntimeError: There is no current event loop in thread 'MainThread'.
```
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [x] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [x] Async
### Reproduction
By executing the function `load_langchain_docs()` from the above code snippet, error will come.
### Expected behavior
All the pages should be scraped and loaded to `Document` class | No current event loop in thread 'MainThread' error while using SitemapLoader() | https://api.github.com/repos/langchain-ai/langchain/issues/11879/comments | 10 | 2023-10-16T19:06:17Z | 2024-02-13T16:10:38Z | https://github.com/langchain-ai/langchain/issues/11879 | 1,945,907,441 | 11,879 |
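A hedged, stdlib-only sketch of the usual workaround for `RuntimeError: There is no current event loop in thread 'MainThread'` — create and register a loop before constructing the loader. Whether this is the right fix for `RecursiveUrlLoader` specifically is an assumption:

```python
import asyncio


def ensure_event_loop() -> asyncio.AbstractEventLoop:
    """Return the thread's event loop, creating and registering one if absent."""
    try:
        return asyncio.get_event_loop()
    except RuntimeError:
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
        return loop
```

Calling `ensure_event_loop()` before `RecursiveUrlLoader(...)` would then keep `asyncio.Lock()` from failing in `__init__`.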
[
"langchain-ai",
"langchain"
] | ### System Info
Name: kuzu
Version: 0.0.10 (latest)
Name: langchain
Version: 0.0.311
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am using the example from langchain documentation:
https://python.langchain.com/docs/use_cases/graph/graph_kuzu_qa
This line fails:
graph = KuzuGraph(db)
```
File [~/.local/lib/python3.10/site-packages/langchain/graphs/kuzu_graph.py:71](https://file+.vscode-resource.vscode-cdn.net/home/steve/Projects/homelab/homeapi/notebooks/~/.local/lib/python3.10/site-packages/langchain/graphs/kuzu_graph.py:71), in KuzuGraph.refresh_schema(self)
69 for table in rel_tables:
70 current_table_schema = {"properties": [], "label": table["name"]}
---> 71 properties_text = self.conn._connection.get_rel_property_names(
72 table["name"]
73 ).split("\n")
74 for i, line in enumerate(properties_text):
75 # The first 3 lines defines src, dst and name, so we skip them
76 if i < 3:
AttributeError: 'kuzu._kuzu.Connection' object has no attribute 'get_rel_property_names'
```
### Expected behavior
It should work as in the documented example. | KuzuGraph not working | https://api.github.com/repos/langchain-ai/langchain/issues/11874/comments | 5 | 2023-10-16T18:10:46Z | 2024-02-15T19:17:09Z | https://github.com/langchain-ai/langchain/issues/11874 | 1,945,812,797 | 11,874
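Not from the issue, but a hypothetical compatibility shim illustrating how a removed or renamed attribute like this is usually bridged — probe for the old private method and fail with a clear message otherwise. The method and attribute names mirror the traceback; whether newer kuzu releases expose an equivalent is an assumption:

```python
def rel_property_names(conn, table_name):
    """Call the old private kuzu API if present; raise a clear error otherwise."""
    inner = getattr(conn, "_connection", conn)
    fn = getattr(inner, "get_rel_property_names", None)
    if fn is None:
        raise RuntimeError(
            "kuzu connection lacks get_rel_property_names(); "
            "pin kuzu to a version matching this langchain release"
        )
    return fn(table_name).split("\n")
```

In practice, pinning `kuzu` to the version the installed `langchain` was written against is the more likely short-term fix.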
[
"langchain-ai",
"langchain"
] | ### Feature request
Connect LangChain to a Denodo database. While support for MySQL, Postgres, etc. exists, there is currently none for more niche platforms like Denodo. Something like this:


### Motivation
Many organizations now use Denodo for storing massive datasets; providing Denodo connectivity would greatly enhance the reach of llamaindex and add immense value.
### Your contribution
I can perform thorough testing of the feature and then update the documentation. | Denodo connector for langchain | https://api.github.com/repos/langchain-ai/langchain/issues/11873/comments | 1 | 2023-10-16T17:37:46Z | 2024-02-06T16:19:31Z | https://github.com/langchain-ai/langchain/issues/11873 | 1,945,756,011 | 11,873 |
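As a sketch only: LangChain's SQL support rides on SQLAlchemy connection URIs, so Denodo support would presumably hinge on a Denodo SQLAlchemy dialect. The URI shape below is a hypothetical illustration — the `denodo://` scheme, default port, and field layout are assumptions, not a documented interface:

```python
from urllib.parse import quote_plus


def denodo_uri(user: str, password: str, host: str, database: str,
               port: int = 9996) -> str:
    """Build a SQLAlchemy-style connection URI for a hypothetical Denodo dialect."""
    return f"denodo://{quote_plus(user)}:{quote_plus(password)}@{host}:{port}/{database}"
```

Such a URI could then feed something like `SQLDatabase.from_uri(...)`, mirroring how the existing MySQL/Postgres connectors are wired up.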
[
"langchain-ai",
"langchain"
] | ### System Info
N/A (issue pertains to the web tutorial, [Retrieval-augmented generation (RAG)](https://python.langchain.com/docs/use_cases/question_answering/#quickstart))
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Navigate to [Retrieval-augmented generation (RAG)](https://python.langchain.com/docs/use_cases/question_answering/#quickstart).
2. Attempt to access the links provided below:
- "Open in Colab" link, which is intended to direct users to a Colab notebook: [Open in Colab](https://colab.research.google.com/github/langchain-ai/langchain/blob/master/docs/docs/use_cases/question_answering/qa.ipynb)
- "See details in here on the multi-vector retriever for this purpose" link, which should provide more information on the multi-vector retriever: [See details in here on the multi-vector retriever for this purpose](https://python.langchain.com/docs/use_cases/question_answering/docs/modules/data_connection/retrievers/multi_vector)
3. Observe that these links do not lead to the intended resources and instead result in 404 errors or lead to pages that no longer exist.
### Expected behavior
All links in the tutorial should lead to active, relevant pages. | Dead Links in Retrieval-Augmented Generation (RAG) Tutorial | https://api.github.com/repos/langchain-ai/langchain/issues/11871/comments | 2 | 2023-10-16T16:19:47Z | 2024-02-06T16:19:37Z | https://github.com/langchain-ai/langchain/issues/11871 | 1,945,627,705 | 11,871
[
"langchain-ai",
"langchain"
] | ### System Info
Subject: Bug Report - Langchain API Query Limitation
I am currently utilizing the following code snippet:
```typescript
let data: JsonObject;
try {
const jsonFile = await fs.readFileSync(
'./src/openAI/data/formattedResults.json',
'utf8',
);
data = JSON.parse(jsonFile) as JsonObject;
if (!data) {
throw new Error('Failed to load JSON spec');
}
} catch (e) {
console.error(e);
return;
}
const toolkit = new JsonToolkit(new JsonSpec(data));
const executor = await createJsonAgent(model, toolkit);
const res = await executor.call({ input: question }).catch((err) => {
console.log(err);
});
```
While interacting with the Langchain API, I noticed an issue: whenever I asked a question, it returned answers based only on the top few projects. For instance, when I asked the agent 'How many projects are there?' — the dataset actually contains 289 projects — it incorrectly responded with 'There are 5 projects' instead of providing the accurate count.
Upon further investigation, I realized that the Langchain API appears to be limiting the results to the top 5 data entries, rather than considering the entire dataset of 289 projects. This issue is affecting the accuracy of the responses.
I kindly request assistance in resolving this limitation so that the Langchain API can process and return the complete dataset accurately. Please advise on the necessary steps to address this issue and ensure that all relevant data is considered in responses to queries.
Thank you for your prompt attention to this matter.
### Who can help?
@eyurtsev
@hwchase17
@agola11
@dosu-beta
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```typescript
let data: JsonObject;
try {
  const jsonFile = await fs.readFileSync(
    './src/openAI/data/formattedResults.json',
    'utf8',
  );
  data = JSON.parse(jsonFile) as JsonObject;
  if (!data) {
    throw new Error('Failed to load JSON spec');
  }
} catch (e) {
  console.error(e);
  return;
}
const toolkit = new JsonToolkit(new JsonSpec(data));
const executor = await createJsonAgent(model, toolkit);
const res = await executor.call({ input: question }).catch((err) => {
  console.log(err);
});
```
<img width="735" alt="image" src="https://github.com/langchain-ai/langchain/assets/82230052/a0d70ea4-b236-4bb7-9450-52838aff97c5">
Totally there were 289 projects
<img width="178" alt="image" src="https://github.com/langchain-ai/langchain/assets/82230052/881bff40-0ca4-4729-83c5-83f2fe982fe9">
### Expected behavior
The agent should take all of the data into consideration, rather than only the top few entries. | createJsonAgent returns answer based on top 5 data only | https://api.github.com/repos/langchain-ai/langchain/issues/11867/comments | 2 | 2023-10-16T15:15:52Z | 2024-02-08T16:17:50Z | https://github.com/langchain-ai/langchain/issues/11867 | 1,945,481,349 | 11,867