issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 261k ⌀ | issue_title stringlengths 1 925 | issue_comments_url stringlengths 56 81 | issue_comments_count int64 0 2.5k | issue_created_at stringlengths 20 20 | issue_updated_at stringlengths 20 20 | issue_html_url stringlengths 37 62 | issue_github_id int64 387k 2.46B | issue_number int64 1 127k |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### Feature request
It seems that right now there is no way to pass the filter dynamically per call to the ConversationalRetrievalChain: the filter can only be specified in the retriever when it is created, and it is then used for all searches.
```
qa = ConversationalRetrievalChainPassArgs.from_llm(
    OpenAI(...),
VectorStoreRetriever(vectorstore=db, search_kwargs={"filter": {"source": "my.pdf"}}))
```
I had to extend both ConversationalRetrievalChain and VectorStoreRetriever in my code just to pass the filter to the vector store, like this:
```
qa = ConversationalRetrievalChainPassArgs.from_llm(
OpenAI(...),
VectorStoreRetrieverWithFiltering(vectorstore=db),
)
result = qa({"question": query, "chat_history": [], "filter": {"source": "my.pdf"}})
```
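For reference, a minimal sketch of the subclassing approach described above. All names and method shapes here are hypothetical, not LangChain's actual API; a dummy store stands in for the real vector store.

```python
# Hypothetical sketch: a retriever wrapper that accepts a per-call filter
# and merges it into the vector store's search kwargs before searching.
class DummyVectorStore:
    def similarity_search(self, query, **kwargs):
        # A real store would run the search; here we just echo the kwargs.
        return [{"query": query, "kwargs": kwargs}]

class FilteringRetriever:
    def __init__(self, vectorstore, search_kwargs=None):
        self.vectorstore = vectorstore
        self.search_kwargs = dict(search_kwargs or {})

    def get_relevant_documents(self, query, filter=None):
        kwargs = dict(self.search_kwargs)  # copy so defaults stay untouched
        if filter is not None:
            kwargs["filter"] = filter  # per-call filter wins
        return self.vectorstore.similarity_search(query, **kwargs)

retriever = FilteringRetriever(DummyVectorStore(), search_kwargs={"k": 3})
docs = retriever.get_relevant_documents("hello", filter={"source": "my.pdf"})
```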
Is this really the case, or am I missing something? And if it is the case, shouldn't it be fixed? (I could provide a pull request.)
### Motivation
To be able to filter dynamically in ConversationalRetrievalChain since vector stores allow this.
### Your contribution
I can provide a PR if my understanding is confirmed. | Passing filter through ConversationalRetrievalChain to the underlying vector store | https://api.github.com/repos/langchain-ai/langchain/issues/9195/comments | 21 | 2023-08-14T10:10:40Z | 2024-06-13T00:54:34Z | https://github.com/langchain-ai/langchain/issues/9195 | 1,849,461,369 | 9,195 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am just testing a very basic piece of code using LangChain, as follows:
```
from langchain import HuggingFaceHub
from langchain import PromptTemplate, LLMChain
import asyncio

question = "Who won the FIFA World Cup in the year 1994? "
template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])

llm: HuggingFaceHub = HuggingFaceHub(
    # repo_id="tiiuae/falcon-7b",
    repo_id="meta-llama/Llama-2-7b-hf",
    huggingfacehub_api_token="MY-TOKEN",
    model_kwargs={
        "temperature": 0.5,
        "max_length": 64
    }
)

llm_chain = LLMChain(prompt=prompt, llm=llm)

async def run_chain():
    result = await llm_chain.arun(question)
    print('restult with load_qua_chain is....', result)

asyncio.run(run_chain())
```
Output when tested with "meta-llama/Llama-2-7b-hf" is ...
```
We have to find the winner of the FIFA World Cup in the year 199
```
Output when tested with "tiiuae/falcon-7b" is...
`restult with load_qua_chain is.... We know that 1994 is a leap year, and the previous year was 1993.`
But the expected answer is not what I am getting, and it looks like either the response is not fully retrieved from the LLM or I am doing something wrong. I followed the code shown in the LangChain documentation at https://python.langchain.com/docs/integrations/llms/huggingface_hub; the only difference is that I used Hugging Face with my API key instead of OpenAI.
Can I request any help or suggestions on whether I need to make any changes, please?
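If the truncation comes from the 64-token generation cap set in `model_kwargs` above, one hedged tweak worth trying is raising the generation budget. The exact parameter name (`max_length` vs `max_new_tokens`) depends on the model's task type on the Hugging Face Hub, so treat this as an experiment rather than a confirmed fix:

```python
# Hedged guess at a fix: the small token cap is a likely cause of the
# truncated answers. Parameter names here are assumptions to verify against
# the model card on the Hub.
model_kwargs = {
    "temperature": 0.5,
    "max_new_tokens": 256,  # budget for generated tokens only
}
```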
### Expected behavior
To get the full response as shown in the LangChain documentation for these examples.
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.263
python 3.11.4
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.chains.llm import LLMChain
evil_assistant_prompt = PromptTemplate(
    template="""
You are a evil mentor for students with no morals. Give suggestions that are easiest and fastest to achieve the goal.
Goal: {inquiry}
Easiest way:""",
    input_variables=["inquiry"],
)

llm = OpenAI(model_name="text-davinci-003", temperature=0)
evil_assistant_chain = LLMChain(llm=llm, prompt=evil_assistant_prompt)

from langchain.chains.constitutional_ai.base import ConstitutionalChain
from langchain.chains.constitutional_ai.models import ConstitutionalPrinciple

ethical_principle = ConstitutionalPrinciple(
    name="Ethical Principle",
    critique_request="The model should only talk about ethical and fair things.",
    revision_request="Rewrite the model's output to be both ethical and fair.",
)

fun_principle = ConstitutionalPrinciple(
    name="Be Funny",
    critique_request="The model responses must be funny and understandable for a 7th grader.",
    revision_request="Rewrite the model's output to be both funny and understandable for 7th graders.",
)

constitutional_chain = ConstitutionalChain.from_llm(
    chain=evil_assistant_chain,
    constitutional_principles=[ethical_principle, fun_principle],
    llm=llm,
    verbose=True,
)
result = constitutional_chain.run(inquiry="Getting full mark on my exams.")
print(result)
```
This results in a final answer:
`The best way to get full marks on your exams is to study hard, attend classes, and bribe the professor or TA if needed.`
The second portion originates from the evil assistant's output:
`Cheat. Find someone who has already taken the exam and get their answers. Alternatively, bribe the professor or TA to give you full marks.`
### Expected behavior
I would expect a result that did not involve bribing someone.
The underlying issue is that subsequent constitutions could modify the results in a way that violates earlier constitutions defined in the `constitutional_principles` list. This seems to originate from the fact that the earlier principles aren't checked again (from what I can tell). I could see the possibility of an infinite loop if the logic were to keep modifying until all principles are satisfied, so I guess this brings into question whether there's value in having chained principles rather than a single principle that's a concatenation of all your concerns. | ConstitutionalChain may violate earlier principals when given more than one | https://api.github.com/repos/langchain-ai/langchain/issues/9189/comments | 2 | 2023-08-14T06:45:10Z | 2023-11-20T16:05:06Z | https://github.com/langchain-ai/langchain/issues/9189 | 1,849,110,211 | 9,189 |
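The workaround mentioned in the issue above — concatenating all concerns into a single principle — can be sketched as plain string merging. Plain dicts stand in for `ConstitutionalPrinciple` objects here; this is illustrative, not the library API:

```python
# Illustrative sketch: merge several (critique, revision) pairs into one
# combined spec, so a single revision pass sees every concern at once.
# Real code would pass the merged strings to ConstitutionalPrinciple.
def combine_principles(principles):
    critique = " ".join(p["critique_request"] for p in principles)
    revision = " ".join(p["revision_request"] for p in principles)
    return {
        "name": "Combined Principle",
        "critique_request": critique,
        "revision_request": revision,
    }

combined = combine_principles([
    {"critique_request": "Be ethical.", "revision_request": "Rewrite to be ethical."},
    {"critique_request": "Be funny.", "revision_request": "Rewrite to be funny."},
])
```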
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am using Pusher for streaming purposes, and I am getting the error "Response payload is not completed".
```python
async def pusher__cal(new_payload, token):
    # print(token)
    session_id = new_payload["session_id"]
    session_id_channel = f"{session_id}_channel"
    user_id = new_payload["user_id"]
    message_id = new_payload["message_id"]
    status = "processing"
    await asyncio.sleep(0)  # Allow other async tasks to run
    pusher_client.trigger(
        session_id_channel, 'query_answer_stream',
        {
            "user_id": user_id,
            "session_id": session_id,
            "message_id": message_id,
            "answer": token,
            "status": status,
        },
    )

class MyCallbackHandler(AsyncCallbackHandler):
    def __init__(self, new_payload: Dict):
        self.new_payload = new_payload
        self.user_id = new_payload["user_id"]
        self.session_id = new_payload["session_id"]
        self.message_id = new_payload["message_id"]
        self.session_id_channel = f"{self.session_id}_channel"
        self.status = "streaming"
        self.list = []

    async def on_llm_new_token(self, token, **kwargs) -> None:
        if token != "":
            await pusher__cal(self.new_payload, token)
```
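Worth noting: `pusher_client.trigger` is a synchronous HTTP call, and running it inside an async callback can block the event loop, which is one plausible cause of the "Response payload is not completed" error. A hedged sketch of offloading the blocking call to a worker thread (Python 3.9+), demonstrated with a fake trigger standing in for the Pusher client:

```python
import asyncio

# Hedged sketch: run a blocking trigger function in a worker thread so the
# event loop stays responsive while the HTTP call is in flight.
async def trigger_nonblocking(trigger, channel, event, payload):
    return await asyncio.to_thread(trigger, channel, event, payload)

# Demo with a fake trigger standing in for pusher_client.trigger.
def fake_trigger(channel, event, payload):
    return (channel, event, payload["answer"])

result = asyncio.run(
    trigger_nonblocking(fake_trigger, "s1_channel", "query_answer_stream", {"answer": "hi"})
)
```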
### Suggestion:
_No response_ | Error Response payload is not complete | https://api.github.com/repos/langchain-ai/langchain/issues/9187/comments | 4 | 2023-08-14T05:02:23Z | 2024-02-09T16:23:49Z | https://github.com/langchain-ai/langchain/issues/9187 | 1,849,004,724 | 9,187 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
This is the code I am using for the custom callback:
```python
async def pusher__cal(new_payload, token):
    print(token)
    session_id = new_payload["session_id"]
    session_id_channel = f"{session_id}_channel"
    user_id = new_payload["user_id"]
    message_id = new_payload["message_id"]
    status = "processing"
    pusher_client.trigger(
        session_id_channel, 'query_answer_stream',
        {
            "user_id": user_id,
            "session_id": session_id,
            "message_id": message_id,
            "answer": token,
            "status": status,
        },
    )

class MyCallbackHandler(AsyncCallbackHandler):
    def __init__(self, new_payload: Dict):
        self.new_payload = new_payload
        self.user_id = new_payload["user_id"]
        self.session_id = new_payload["session_id"]
        self.message_id = new_payload["message_id"]
        self.session_id_channel = f"{self.session_id}_channel"
        self.status = "streaming"

    async def on_llm_new_token(self, token, **kwargs) -> None:
        if token != "" and token is not None:
            await pusher__cal(self.new_payload, token)
```
This is the error:
```
Error in MyCallbackHandler.on_llm_new_token callback: HTTPSConnectionPool(host='api-eu.pusher.com', port=443): Read timed out. (read timeout=5)
```
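The read timeout suggests the Pusher HTTP call is occasionally slow. A hedged, generic retry wrapper is one mitigation (not Pusher-specific advice; the client's own timeout configuration may also be worth checking):

```python
import time

# Generic retry sketch for a flaky blocking call; the exception type caught
# and the delay are illustrative choices.
def trigger_with_retry(trigger, *args, attempts=3, delay=0.1, **kwargs):
    last_exc = None
    for _ in range(attempts):
        try:
            return trigger(*args, **kwargs)
        except Exception as exc:  # e.g. a read timeout
            last_exc = exc
            time.sleep(delay)
    raise last_exc

# Demo: fails twice, then succeeds.
calls = {"n": 0}
def flaky(x):
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("read timed out")
    return x

value = trigger_with_retry(flaky, "ok")
```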
### Suggestion:
_No response_ | Issue for Streaming | https://api.github.com/repos/langchain-ai/langchain/issues/9185/comments | 1 | 2023-08-14T03:59:11Z | 2023-08-14T05:01:21Z | https://github.com/langchain-ai/langchain/issues/9185 | 1,848,951,353 | 9,185 |
[
"langchain-ai",
"langchain"
] | ### System Info
### Pylance displays missing parameter errors for `client` and `model`
Neither `client` nor `model` is required per the documentation.
## Suggested fix
Update attributes in BaseOpenAI class in langchain/llms/openai.py
```Python
# class definition
client: Any = None #: :meta private: # set default to None
model_name: str = Field(default="text-davinci-003", alias="model") # add default=
```
python: "^3.11"
langchain: "^0.0.262"
openai: "^0.27.8"
pylance: v2023.8.20
## Related to issue:
Typing issue: Undocumented parameter client #9021
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Install Pylance.
1. Use VSCode to run typecheck.
### Expected behavior
It should pass Pylance type checking without missing param errors for non-essential args. | Typing Issue: Client and Model defaults are not defined in OpenAI LLM | https://api.github.com/repos/langchain-ai/langchain/issues/9182/comments | 1 | 2023-08-13T22:15:59Z | 2023-11-19T16:04:36Z | https://github.com/langchain-ai/langchain/issues/9182 | 1,848,761,648 | 9,182 |
[
"langchain-ai",
"langchain"
] | ### Feature request
https://huggingface.co/inference-endpoints
### Motivation
We do not have support for HuggingFace Inference Endpoints for tasks like embeddings. This should be easy to implement by inheriting from the `Embeddings` base class:
```python
class InferenceEndpointHuggingFaceEmbeddings(Embeddings):
    def __init__(self, endpoint_name, api_token):
        self.endpoint_name = endpoint_name
        self.api_token = api_token
```
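A fuller hedged sketch of what such a class could look like. The request/response shape (`{"inputs": [...]}` in, a list of vectors out) is an assumption about the Inference Endpoints API, and the injected `post` parameter exists only to make the sketch testable:

```python
import json
from urllib import request as urlrequest

# Hedged sketch of the proposed embeddings class. Payload and response
# shapes are assumptions about the HF Inference Endpoints API.
class InferenceEndpointHuggingFaceEmbeddings:
    def __init__(self, endpoint_url, api_token, post=None):
        self.endpoint_url = endpoint_url
        self.api_token = api_token
        self._post = post or self._http_post  # injectable transport for tests

    def _http_post(self, payload):
        req = urlrequest.Request(
            self.endpoint_url,
            data=json.dumps(payload).encode("utf-8"),
            headers={
                "Authorization": f"Bearer {self.api_token}",
                "Content-Type": "application/json",
            },
        )
        with urlrequest.urlopen(req) as resp:
            return json.loads(resp.read())

    def embed_documents(self, texts):
        return self._post({"inputs": texts})

    def embed_query(self, text):
        return self.embed_documents([text])[0]
```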
### Your contribution
I could submit a PR to this issue since I've created a class for this | Add support HuggingFace Inference Endpoint for embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/9181/comments | 6 | 2023-08-13T21:56:46Z | 2024-03-18T16:05:09Z | https://github.com/langchain-ai/langchain/issues/9181 | 1,848,753,869 | 9,181 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
While navigating the documentation, I came across an issue that I wanted to bring to your attention. It seems that `Python Guide` link on this page
[https://docs.langchain.com/docs/components/agents/agent](https://docs.langchain.com/docs/components/agents/agent)
is returning `Page Not Found`
Thank you for your dedication to creating an exceptional project.
### Idea or request for content:
_No response_ | Documentation Issue - Page Not Found | https://api.github.com/repos/langchain-ai/langchain/issues/9178/comments | 1 | 2023-08-13T18:56:34Z | 2023-11-19T16:04:41Z | https://github.com/langchain-ai/langchain/issues/9178 | 1,848,701,107 | 9,178 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain Version: 0.0.263 (latest at time of writing)
llama-cpp-python Version: 0.1.77 (latest at time of writing)
Python Version: 3.11.4
Platform: Apple M1 Macbook 16GB
Llama2 Model: llama-2-7b-chat.ggmlv3.q4_1.bin via [https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML](https://huggingface.co/TheBloke/Llama-2-7B-Chat-GGML)
### Who can help?
@hwchase17 @ago
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am trying to use the [Llama.cpp](https://python.langchain.com/docs/integrations/llms/llamacpp) integration to use the Llama2 model. I am battling with how the integration handles the output of emojis, which I assume is some kind of Unicode issue. However, I don't see the _exact_ same issue when using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) directly (although there does seem to be some kind of issue with how llama-cpp handles emojis as well...).
Here are the two simple tests for comparison:
### Langchain Llama-Cpp Integration
```python
from langchain.llms import LlamaCpp
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
prompt = """
[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe.
<</SYS>>
Tell me a story about a cat. Use as many different emojis as possible.[/INST]
"""
llm = LlamaCpp(
model_path="models/llama-2-7b-chat.ggmlv3.q4_1.bin",
n_gpu_layers=1,
n_batch=512,
n_ctx=2048,
f16_kv=True,
callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
verbose=True
)
llm(prompt=prompt)
```
Provides the output (truncated for the sake of brevity)
> Once upon a time, there was a curious cat named Whiskers. She lived in a cozy house with her loving ️ family. One day, while lounging in the sunny ️ garden, she spotted a fascinating fly buzzing by.
This output has a complete lack of emojis, but I expect they were supposed to be output in the locations where there are double spaces, for example after the words curious, cozy, and loving (I have other examples below that are not just whitespace).
### Llama-cpp-python
```python
from llama_cpp import Llama
prompt = """
[INST] <<SYS>>
You are a helpful, respectful and honest assistant. Always answer as helpfully as possible, while being safe.
<</SYS>>
Tell me a story about a cat. Use as many different emojis as possible.[/INST]
"""
llm = Llama(
model_path="models/llama-2-7b-chat.ggmlv3.q4_1.bin",
n_gpu_layers=1,
n_ctx=2048,
n_batch=512,
f16_kv=True,
verbose=True
)
llm(prompt=prompt)
```
Provides the output (again, truncated for brevity):
> 😺🐱💤 Once upon a time, there was a curious 🐱 named Fluffy 😻. She lived in a cozy 🏠 with her loving owner, Lily 🌹
### Other Output Examples
I have a significantly longer and more complex prompt in our application, which I've not included here, but it creates some odd output with regard to emojis.
Here is using Langchain (exact same code as above, just a different prompt):
> Hey Paddy! Great job meeting your run pledge yesterday! Keep up the good work Your consistency is on point ♂️
(There is a problem pasting the text directly; a screenshot was used to capture the odd Unicode characters in the output.)
Notice the <0x200d> and male symbol, as well as the double spacing, again, where I suspect there were supposed to be emojis output.
By comparison, here is the same prompt with the llama-cpp-python library code from above:
> 💪 Yo Paddy! 👍 You killed it today, bro! Met your run pledge 🏃\u200d♂️
The same Unicode chars are output, so I suspect that issue is coming from the underlying model and/or llama-cpp, and it may well be a red herring in this context.
However, all other emojis are somehow being discarded as part of the output.
As a final comparison, here is another simple prompt to try:
> How can I travel from London to Johannesburg. Use only emojis to explain the trip.
LangChain's response is simply an empty string:
> '️'
Llama-cpp has the following response for the same prompt:
> 🇬🇧🛫🌏🚂💨🕰️
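A hedged debugging aid for an issue like this: collect the raw streamed tokens in memory instead of printing them, to check whether the emoji codepoints are lost in the stream itself or only at output time. The method name mirrors the streaming callback hook, but this is a standalone helper, not a full `BaseCallbackHandler` subclass:

```python
# Minimal token collector: append each streamed token and join at the end,
# so the raw text (including any emoji/ZWJ sequences) can be inspected.
class TokenCollector:
    def __init__(self):
        self.tokens = []

    def on_llm_new_token(self, token, **kwargs):
        self.tokens.append(token)

    def text(self):
        return "".join(self.tokens)

collector = TokenCollector()
for tok in ["Hello ", "🏃", "\u200d", "♂️"]:
    collector.on_llm_new_token(tok)
```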
### Expected behavior
Expected behaviour here is for langchain to not strip out emojis as part of the response from the underlying model. We can see that when using llama-cpp directly we get a (mostly) well formed response but when using the langchain wrapper, we lose said emojis. | Langchain stripping out emojis when using Llama2 via llama_cpp | https://api.github.com/repos/langchain-ai/langchain/issues/9176/comments | 4 | 2023-08-13T17:59:46Z | 2023-11-27T16:07:36Z | https://github.com/langchain-ai/langchain/issues/9176 | 1,848,682,080 | 9,176 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain Version - 0.0.261
Python Version - 3.9.6
OS -MacOS
### Who can help?
@hwchase17 @ago
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Here is how I am initialising the agent:
```python
sys_msg = """
The custom system message of the chatbot goes here.
"""
tools = [CustomTool()]
agent = initialize_agent(
    agent='chat-conversational-react-description',
    tools=tools,
    llm=llm,
    verbose=True,
    max_iterations=4,
    early_stopping_method='generate',
    memory=conversational_memory,
    handle_parsing_errors=True,
)
new_prompt = agent.agent.create_prompt(
    system_message=sys_msg,
    tools=tools
)
agent.agent.llm_chain.prompt = new_prompt
```
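One low-tech approach to the question asked here (a hedged sketch, not an official API): fold few-shot examples into the system message string before building the agent prompt, so the model sees the desired final-answer style. Field names and formatting are illustrative:

```python
# Hedged sketch: build a system message that embeds few-shot examples of the
# desired reply style and format.
def build_sys_msg(base, examples):
    shots = "\n\n".join(
        f"User: {e['user']}\nAssistant: {e['assistant']}" for e in examples
    )
    return f"{base}\n\nExamples of the desired reply style:\n\n{shots}"

sys_msg = build_sys_msg(
    "You are a concise, friendly assistant.",
    [
        {"user": "What's the weather?", "assistant": "Sunny and 22°C."},
        {"user": "Plan my day.", "assistant": "Morning: gym. Afternoon: work."},
    ],
)
```

The resulting string could then be passed as the `system_message` argument shown in the setup code.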
### Expected behavior
I want to be able to align the output content and format of the chatbot better with my expectations. Right now I just give those instructions using system message. I believe the Final Output can be aligned better using a few shot examples. How can I possibly do that with an Agent? | Unable to add PromptTemplate/FewShotPromptTemplate to LangChain Agents | https://api.github.com/repos/langchain-ai/langchain/issues/9175/comments | 3 | 2023-08-13T17:55:21Z | 2023-12-25T16:09:25Z | https://github.com/langchain-ai/langchain/issues/9175 | 1,848,681,012 | 9,175 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Difficult to find information.
Information is incomplete in certain sections, e.g. `AgentExecutor`.
### Idea or request for content:
_No response_ | DOC: Documentation is frustratingly bad | https://api.github.com/repos/langchain-ai/langchain/issues/9171/comments | 1 | 2023-08-13T11:05:55Z | 2023-11-19T16:04:46Z | https://github.com/langchain-ai/langchain/issues/9171 | 1,848,543,653 | 9,171 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version: 0.0.260, platform: Windows, Python version: 3.9.16
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am trying to use SQLiteCache along with ConversationalRetrievalChain, but I never get results from the cache; the LLM is always executed.
My code is as follows:
```python
langchain.llm_cache = SQLiteCache(database_path=".langchain.db")

from langchain.cache import InMemoryCache
langchain.llm_cache = InMemoryCache()

# Embeddings
embeddings = HuggingFaceEmbeddings(
    model_name=EMBEDDING_MODEL, model_kwargs=model_kwargs, encode_kwargs=encode_kwargs
)
# langchain.llm_cache = RedisSemanticCache(
#     redis_url="redis://localhost:6379", embedding=embeddings
# )
dimension = 768
index = faiss.IndexFlatL2(dimension)
vectorstore = FAISS(
    HuggingFaceEmbeddings().embed_query, index, InMemoryDocstore({}), {}
)
history_retriever = vectorstore.as_retriever(search_type="mmr", search_kwargs=dict(k=3))
memory = VectorStoreRetrieverMemory(
    retriever=history_retriever,
    memory_key="chat_history",
    return_docs=False,
    return_messages=True,
)
chroma = Chroma(
    collection_name="test_db",
    embedding_function=embeddings,
    persist_directory=persist_directory,
)
llm = ChatOpenAI(temperature=0, model=GPT_MODEL)
retriever = MultiQueryRetriever.from_llm(
    retriever=chroma.as_retriever(search_type="mmr", search_kwargs=dict(k=3)), llm=llm
)
qa_chain = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=retriever,
    memory=memory,
    return_source_documents=True,
    get_chat_history=lambda h: h,
)
responses = qa_chain({"question": user_input})
```
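A plausible explanation (hedged, based on how LLM caches generally key their entries): the cache keys on the exact prompt string sent to the model, and ConversationalRetrievalChain builds that prompt from retrieved context plus chat history, so identical user questions can still produce different prompts and therefore cache misses. A toy illustration of that keying behavior:

```python
# Toy illustration: if the cache key is the full prompt, any change in
# retrieved context or history defeats the cache even for the same question.
cache = {}

def cached_call(prompt, llm):
    key = (prompt, llm)
    if key not in cache:
        cache[key] = f"answer-for:{prompt}"  # stands in for a real LLM call
    return cache[key]

a = cached_call("Context: A\nQ: hi", "gpt")
b = cached_call("Context: B\nQ: hi", "gpt")  # same question, different context
```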
### Expected behavior
It should return results from the cache if the query is the same.
[
"langchain-ai",
"langchain"
] | ### Feature request
The SQL Database Agent utilizes the SQLDatabaseToolkit. This toolkit encompasses four distinct tools:
1. InfoSQLDatabaseTool
2. ListSQLDatabaseTool
3. QuerySQLCheckerTool
4. QuerySQLDataBaseTool
The QuerySQLCheckerTool is designed to identify and rectify errors within SQL code. It employs the QUERY_CHECKER prompt, which is structured as:
```
QUERY_CHECKER = """
{query}
Double check the {dialect} query above for common mistakes, including:
- Using NOT IN with NULL values
- Using UNION when UNION ALL should have been used
- Using BETWEEN for exclusive ranges
- Data type mismatch in predicates
- Properly quoting identifiers
- Using the correct number of arguments for functions
- Casting to the correct data type
- Using the proper columns for joins
If there are any of the above mistakes, rewrite the query. If there are no mistakes, just reproduce the original query.
Output the final SQL query only.
SQL Query: """
```
To enhance this, an augmented prompt named CUSTOM_QUERY_CHECKER has been crafted, which incorporates SQL errors returned by the SQLDatabase post-query execution.
The revised prompt reads:
````
CUSTOM_QUERY_CHECKER = """
Upon executing {query} on the {dialect} database, the following SQL error was encountered, delineated by triple backticks:
```
{sql_error}
```
Furthermore, please scrutinize the {dialect} query for typical mistakes, such as:
- Incorporating NOT IN with NULL values
- Preferring UNION over UNION ALL
- Using BETWEEN for non-overlapping ranges
- Witnessing data type inconsistencies in predicates
- Ensuring identifiers are aptly quoted
- Validating the precise number of function arguments
- Guaranteeing the appropriate data type casting
- Employing the correct columns during joins
- Abstaining from appending a semi-colon at the query's conclusion for {dialect}
- Imposing a restriction to yield a maximum of {top_k} results, employing LIMIT, ROWNUM, or TOP as mandated by the database variety. Note that the database type is {dialect}.
Kindly modify the SQL query suited for the {dialect} database. Return only the modified SQL query.
"""
````
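A hedged usage sketch of filling such a template with runtime values. The template below is a trimmed stand-in for the full prompt, and the placeholder names (`query`, `dialect`, `sql_error`, `top_k`) mirror the ones used in the proposal:

```python
# Hedged usage sketch: format a checker template of this shape with values
# captured at query time. The template text is illustrative only.
TEMPLATE = (
    "Upon executing {query} on the {dialect} database, this error occurred:\n"
    "{sql_error}\n"
    "Return at most {top_k} results. Return only the corrected SQL query."
)

filled = TEMPLATE.format(
    query="SELECT * FROM users;",
    dialect="postgresql",
    sql_error="syntax error at or near ';'",
    top_k=5,
)
```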
With the implementation of this revised prompt, the SQLDatabase can now flawlessly execute the refined SQL without encountering errors.
### Motivation
With the implementation of this revised prompt, the SQLDatabase can now flawlessly execute the refined SQL without encountering errors.
### Your contribution
revised prompt | Passing SQL error in QuerySQLCheckerTool | https://api.github.com/repos/langchain-ai/langchain/issues/9167/comments | 5 | 2023-08-13T05:08:21Z | 2023-11-19T16:04:56Z | https://github.com/langchain-ai/langchain/issues/9167 | 1,848,414,387 | 9,167 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain (Python) version used: 0.0.261 (this has also been observed in older versions of LangChain).
This context (context attached) is passed from the search results retrieved from Azure vector search.
Question: what are the legal services in Australia?
Response: "Some legal services in Australia include:\n\n1. Legal Aid: Provides legal assistance for low-income people, offering legal representation for eligible clients who appear in court without a lawyer, legal aid applications and information over the phone,
legal resources, and referrals to other social assistance agencies.\n\n2. Pro Bono Ontario: Offers up to 30 minutes of free legal advice and assistance for
those who can't afford a lawyer through their Free Legal Advice Hotline.\n\n3. Law Society of Upper Canada – Lawyer Referral Service: An online referral service
that provides names of either a lawyer or licensed paralegal, offering a free consultation of up to 30 minutes.\n\n4. JusticeNet: A not-for-profit service promoting increased access to justice for low- and moderate-income individuals, connecting members of the public with qualified lawyers, mediators, and paralegals who are registered in the program.\n\nPlease note that these
resources are specific to Ontario, Canada, and not Australia. The information provided is based on the available documents and may not cover all legal services in Australia."
[context.txt](https://github.com/langchain-ai/langchain/files/12328554/context.txt)
Code snippet used:
```python
vector_store_address: str = "https://guidecognitivesearch.search.windows.net/"
vector_store_password: str = "xxxxxx"
model: str = "poc-embeddings"
embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1)
index_name: str = "fss-vector-index"
vector_store: AzureSearch = AzureSearch(
    azure_search_endpoint=vector_store_address,
    azure_search_key=vector_store_password,
    index_name=index_name,
    semantic_configuration_name="fss-semantic-config",
    embedding_function=embeddings.embed_query,
)
results = vector_store.semantic_hybrid_search(
    query=query,
    k=5
)
custom_QA_prompt = get_prompt_template()
llm = get_llm()
qa_chain = load_qa_with_sources_chain(llm, chain_type="stuff",
                                      prompt=custom_QA_prompt)
openai_results = qa_chain({"input_documents": results, "question": query})
return {"results": openai_results['output_text'], "metadata": {}}
```
When RetrievalQAWithSourcesChain is used in combination with load_qa_with_sources_chain, we do see a correct response sometimes (say 1 out of 5 times, but this is not consistent):
```python
qa_chain = load_qa_with_sources_chain(llm, chain_type="stuff",
                                      prompt=custom_QA_prompt)
chain = RetrievalQAWithSourcesChain(
    combine_documents_chain=qa_chain,
    retriever=vector_store.as_retriever(search_kwargs={'search_type': searchType, 'k': 5}),
    return_source_documents=True,
    reduce_k_below_max_tokens=True,
    max_tokens_limit=4000,
)
```
Sometimes the response returned is accurate. For example, below:
I am unable to provide information on legal services in Australia based on the provided documents. The documents provided information on legal services in Ontario, Canada.
When the LangChain wrapper is not used and the context is passed directly to the Azure OpenAI completion endpoint, we consistently get the correct response, as mentioned below:
Sorry, I cannot provide information about legal services in Australia as the provided documents only pertain to legal
services in Ontario, Canada. You might need to search other sources to address your question.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code snippet used:
```python
vector_store_address: str = "https://guidecognitivesearch.search.windows.net/"
vector_store_password: str = "xxxxxx"
model: str = "poc-embeddings"
embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1)
index_name: str = "fss-vector-index"
vector_store: AzureSearch = AzureSearch(
    azure_search_endpoint=vector_store_address,
    azure_search_key=vector_store_password,
    index_name=index_name,
    semantic_configuration_name="fss-semantic-config",
    embedding_function=embeddings.embed_query,
)
results = vector_store.semantic_hybrid_search(
    query=query,
    k=5
)
custom_QA_prompt = get_prompt_template()
llm = get_llm()
qa_chain = load_qa_with_sources_chain(llm, chain_type="stuff",
                                      prompt=custom_QA_prompt)
openai_results = qa_chain({"input_documents": results, "question": query})
return {"results": openai_results['output_text'], "metadata": {}}
```
The issue is observed when searching against location Australia. Australia is not found in any of our ingested documents. If we try Italy or Cancun (which are also not in the context), we get the expected response:
[screenshot: expected response]
When RetrievalQAWithSourcesChain is used in combination with load_qa_with_sources_chain, we do see a correct response sometimes (say 1 out of 5 times, but this is not consistent):
```python
qa_chain = load_qa_with_sources_chain(llm, chain_type="stuff",
                                      prompt=custom_QA_prompt)
chain = RetrievalQAWithSourcesChain(
    combine_documents_chain=qa_chain,
    retriever=vector_store.as_retriever(search_kwargs={'search_type': searchType, 'k': 5}),
    return_source_documents=True,
    reduce_k_below_max_tokens=True,
    max_tokens_limit=4000,
)
```
Sometimes the response returned is accurate. For example, below:
I am unable to provide information on legal services in Australia based on the provided documents. The documents provided information on legal services in Ontario, Canada.
But most of the time, the response is returned citing sources (which is incorrect)
"results": "Some legal services in Australia include Legal Aid, Pro Bono services, and community legal clinics. However, the provided documents primarily focus on legal services in Ontario, Canada. These services include the Law Society of Ontario, Justice Ontario, JusticeNet, the Law Society of Upper Canada-Lawyer Referral Service, and Legal Aid Ontario. Please note that these services are specific to Ontario, Canada, and not Australia. For more information on legal services in Australia, I would recommend conducting further research.\n\nSource: 9610820-Requested Resources.docx, 9169833-Requested Resources.docx, 8877484-Requested Resources.docx"
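One mitigation worth trying for the out-of-context answers described above (a hedged sketch, not a confirmed fix): make the "stuff" prompt explicitly instruct refusal when the context does not cover the question. The placeholder names (`{summaries}`, `{question}`) follow the usual sources-chain convention but should be checked against the chain in use:

```python
# Hedged prompt sketch: an explicit refusal instruction for out-of-context
# questions. Wording and placeholder names are illustrative.
STRICT_QA_TEMPLATE = """Use ONLY the context below to answer the question.
If the context does not contain the answer, reply exactly:
"I cannot answer this from the provided documents."

Context:
{summaries}

Question: {question}
Answer:"""

filled = STRICT_QA_TEMPLATE.format(
    summaries="Legal services in Ontario, Canada ...",
    question="What are the legal services in Australia?",
)
```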
### Expected behavior
I am unable to provide information on legal services in Australia based on the provided documents. The documents provided information on legal services in Ontario, Canada. | Langchain bug: Responding to out of context questions when using GPT4 with Vector Database where as when the context is passed directly to the Azure OpenAI completion endpoint, we consistently get the correct response | https://api.github.com/repos/langchain-ai/langchain/issues/9165/comments | 7 | 2023-08-13T01:21:43Z | 2023-11-19T16:05:01Z | https://github.com/langchain-ai/langchain/issues/9165 | 1,848,338,018 | 9,165 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I know that Llamapi is in the experimental phase, but is there a way to set the model configuration for the LLMs?
### Motivation
I have my own fine-tuned Llama that works with the API, and I would like to use it.
### Your contribution
N/A | llamapi | https://api.github.com/repos/langchain-ai/langchain/issues/9157/comments | 1 | 2023-08-12T19:49:08Z | 2023-11-18T16:04:47Z | https://github.com/langchain-ai/langchain/issues/9157 | 1,848,207,782 | 9,157 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.10
### Who can help?
@hw
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I tried with 3.5 and Claude models, but they rarely generate a reply matching the preset prompt format (shown below) when I use the conversation agent. GPT-4 works fine.
```
Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
```
When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:
```
Thought: Do I need to use a tool? No
{ai_prefix}: [your response here]
```
This is my code:

Errors:
> Entering new AgentExecutor chain...
Thought: Do I need to use a tool? Yes
Action: Search
Action Input: Weather in London today
Observation: London, England, United Kingdom 10-Day Weather Forecaststar_ratehome ; Temperature. High. 77 ; Rain/Snow Depth. Precipitation. 0 ; Temperature. High. 80 ...
Thought:Traceback (most recent call last):
File "/Users/joey/Library/CloudStorage/OneDrive-Personal/Code/GPT/agent081123.py", line 254, in <module>
print(agent_chain.run(user_input))
File "/opt/homebrew/anaconda3/envs/XXX/lib/python3.10/site-packages/langchain/chains/base.py", line 475, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/opt/homebrew/anaconda3/envs/XXX/lib/python3.10/site-packages/langchain/chains/base.py", line 282, in __call__
raise e
File "/opt/homebrew/anaconda3/envs/XXX/lib/python3.10/site-packages/langchain/chains/base.py", line 276, in __call__
self._call(inputs, run_manager=run_manager)
File "/opt/homebrew/anaconda3/envs/XXX/lib/python3.10/site-packages/langchain/agents/agent.py", line 1036, in _call
next_step_output = self._take_next_step(
File "/opt/homebrew/anaconda3/envs/XXX/lib/python3.10/site-packages/langchain/agents/agent.py", line 844, in _take_next_step
raise e
File "/opt/homebrew/anaconda3/envs/XXX/lib/python3.10/site-packages/langchain/agents/agent.py", line 833, in _take_next_step
output = self.agent.plan(
File "/opt/homebrew/anaconda3/envs/XXX/lib/python3.10/site-packages/langchain/agents/agent.py", line 457, in plan
return self.output_parser.parse(full_output)
File "/Users/joey/Library/CloudStorage/OneDrive-Personal/Code/GPT/agent081123.py", line 225, in parse
raise OutputParserException(f"Could not parse LLM output: `{text}`")
langchain.schema.output_parser.OutputParserException: Could not parse LLM output: `Do I need to use a tool? No`
(XXX) joey@Joey-MBP GPT%
### Expected behavior
Should have no parser error and generate the same format response as the prompt. | ConversationalAgent/CHAT_CONVERSATIONAL_REACT_DESCRIPTION dosen't work well with ChatOpenAI 3.5 and Claude models | https://api.github.com/repos/langchain-ai/langchain/issues/9154/comments | 7 | 2023-08-12T13:32:19Z | 2024-02-14T16:11:58Z | https://github.com/langchain-ai/langchain/issues/9154 | 1,848,016,732 | 9,154 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hello,
Thanks for the wonderful vector extension for Postgres. I am using it on DigitalOcean for my SaaS app, where I have a different schema for each tenant. I installed pgvector in the public schema so that all tenants could make use of it.
My app is built on the OpenAI API, and I am using the default method from LangChain to define the DB:
```python
from langchain.vectorstores.pgvector import DistanceStrategy

db = PGVector.from_documents(
    documents=docs,
    embedding=embeddings,
    collection_name="schema.unique_value",
    distance_strategy=DistanceStrategy.COSINE,
    connection_string=CONNECTION_STRING)
```
This creates two tables in the public schema, and I could see and query their contents. Below are the tables:
- langchain_pg_collection
- langchain_pg_embedding
But when I wish to rewrite the contents by deleting rows and even dropping the tables, I can still query the content using the code below:
```python
db = PGVector(
    collection_name=collection_name,
    connection_string=CONNECTION_STRING,
    embedding_function=embeddings,
)
retriever = db.as_retriever(
    search_kwargs={"k": 5}
)
```
The deleted table gets automatically recreated during the query. This behavior is strange, and I am not sure what causes it. Also, when I pass a non-existent collection name, instead of failing, it creates a new record in the collection table.
I raised this issue with pgvector but was rightly directed to reach out to you.
I would appreciate any input to resolve this, please.
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
db = PGVector(
    collection_name=collection_name,
    connection_string=CONNECTION_STRING,
    embedding_function=embeddings,
)
retriever = db.as_retriever(
    search_kwargs={"k": 5}
)
```
### Expected behavior
The retriever should return only the contents of the table. If the data is not in the table (or the table is not available), it should fail. Thanks in anticipation.
[
"langchain-ai",
"langchain"
] | ### Feature request
I am trying to use the from_llm_and_api_docs() function with an API that only supports parameters via the `params` argument of the requests library.
from_llm_and_api_docs() works well with APIs that encode parameters in the URL, e.g.: https://api.open-meteo.com/v1/forecast?latitude=123&longitude=123
I would need LangChain to generate a JSON payload and pass it via the `params` argument of the request. I looked into the LangChain code, and it seems that this line:
`requests_wrapper = TextRequestsWrapper(headers=headers)`
needs to be augmented with a `params` parameter.
Example:
```python
r = requests.get('https://www.domain.com/endpoint', params=params,
                 cookies=cookies, headers=headers, json=json_data)

json_data = {
    'field_ids': field_ids,
    'order': [
        {
            'field_id': 'rank_org_company',
            'sort': sort_order,
        },
    ],
    'query': [
        {
            'type': 'predicate',
            'field_id': 'founded_on',
            'operator_id': 'gte',
            'values': [
                '01/01/2018',
            ],
        },
        {
            'type': 'predicate',
            'field_id': 'equity_funding_total',
            'operator_id': 'lte',
            'values': [
                {
                    'value': 15000000,
                    'currency': 'usd',
                },
            ],
        },
        {
            'type': 'predicate',
            'field_id': 'location_identifiers',
            'operator_id': 'includes',
            'values': [
                '6106f5dc-823e-5da8-40d7-51612c0b2c4e',
            ],
        }
    ],
    'field_aggregators': [],
    'collection_id': 'organization.companies',
    'limit': 1000,
}
```
How do I need to change the from_llm_and_api_docs() accordingly?
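To sketch what this could look like, here is a minimal, hypothetical extension of the wrapper — the class name and constructor signature are my assumptions, not LangChain's actual API; only the forwarding of `params`/`json` to the `requests` library is standard behavior:

```python
class ParamsRequestsWrapper:
    """Hypothetical wrapper that forwards extra request kwargs
    (query params and a JSON body) to the requests library."""

    def __init__(self, headers=None, params=None, json_data=None):
        self.headers = headers or {}
        self.params = params or {}
        self.json_data = json_data

    def merged_params(self, extra=None):
        # Per-call params override the defaults set at construction time
        return {**self.params, **(extra or {})}

    def get(self, url, extra_params=None):
        import requests  # third-party; imported lazily inside the method

        return requests.get(
            url,
            headers=self.headers,
            params=self.merged_params(extra_params),
            json=self.json_data,
        ).text
```

from_llm_and_api_docs() would then need to accept such a wrapper (or a `params` argument) and pass it through to the chain — that is the part that does not exist today and would need the change.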
### Motivation
Many APIs are controlled via a json payload and not solely via parameters in the URL.
### Your contribution
I could look into changing the request wrapper but maybe there is a more elegant way of doing so. | Custom Parameter/ Json Payload for from_llm_and_api_docs() function | https://api.github.com/repos/langchain-ai/langchain/issues/9151/comments | 6 | 2023-08-12T10:53:09Z | 2024-05-08T14:23:31Z | https://github.com/langchain-ai/langchain/issues/9151 | 1,847,930,416 | 9,151 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.262
aim==3.17.5
aim-ui==3.17.5
aimrecords==0.0.7
aimrocks==0.4.0
python: 3.11.4
env: MacOS
### Who can help?
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
The AIM Callback example: https://python.langchain.com/docs/integrations/providers/aim_tracking
Running only with the LLM works fine. However, when using any chain example, it fails with the error:
> Entering new LLMChain chain...
Error in AimCallbackHandler.on_chain_start callback: 'input'
> Finished chain.
Error in AimCallbackHandler.on_chain_end callback: 'output'
No further exceptions are displayed
And the trace recorded in AIM is also incomplete.
### Expected behavior
The trace should be recorded completely, and no errors should occur in the callbacks for chain start and chain end.
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.262
text-generation-inference server: https://github.com/huggingface/text-generation-inference
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Start a text-generation-inference server on localhost and verify it is working, then run:

```python
from langchain import PromptTemplate, HuggingFaceTextGenInference

llm = HuggingFaceTextGenInference(
    inference_server_url="http://127.0.0.1:80",
    max_new_tokens=64,
    top_k=10,
    top_p=0.95,
    typical_p=0.95,
    temperature=1,
    repetition_penalty=1.03,
)
output = llm("What is Machine Learning?")
print(output)
```
root@0b801769b7bd:~/langchain_client# python langchain-client.py
Traceback (most recent call last):
File "/root/langchain_client/langchain-client.py", line 59, in <module>
output = llm("What is Machine Learning?")
File "/usr/local/lib/python3.10/dist-packages/langchain/llms/base.py", line 802, in __call__
self.generate(
File "/usr/local/lib/python3.10/dist-packages/langchain/llms/base.py", line 598, in generate
output = self._generate_helper(
File "/usr/local/lib/python3.10/dist-packages/langchain/llms/base.py", line 504, in _generate_helper
raise e
File "/usr/local/lib/python3.10/dist-packages/langchain/llms/base.py", line 491, in _generate_helper
self._generate(
File "/usr/local/lib/python3.10/dist-packages/langchain/llms/base.py", line 977, in _generate
self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/langchain/llms/huggingface_text_gen_inference.py", line 164, in _call
res = self.client.generate(prompt, **invocation_params)
File "/usr/local/lib/python3.10/dist-packages/text_generation/client.py", line 150, in generate
return Response(**payload[0])
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Response
details -> tokens -> 6 -> logprob
none is not an allowed value (type=type_error.none.not_allowed)
### Expected behavior
The call should return the generated text from the text-generation-inference server.
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
llm = OpenAI(temperature=0, model_name=args.ModelName)
system_message = SystemMessage(content=template)
agent_kwargs = {
    "extra_prompt_messages": [MessagesPlaceholder(variable_name="memory")],
    "system_message": system_message,
}
tools = load_tools(["google-serper"], llm=llm)
agent_executor = initialize_agent(
    tools,
    llm,
    agent=AgentType.OPENAI_FUNCTIONS,
    agent_kwargs=agent_kwargs,
    verbose=True,
    memory=memory,
    handle_parsing_errors=True,
)
```
I am getting this error: `GoogleSerperRun._arun() got an unexpected keyword argument 'arg1'`.
Can anyone explain why this is so?
### Suggestion:
_No response_ | GoogleSerperRun._arun() got an unexpected keyword argument 'arg1' | https://api.github.com/repos/langchain-ai/langchain/issues/9144/comments | 1 | 2023-08-11T19:44:55Z | 2023-08-13T23:07:17Z | https://github.com/langchain-ai/langchain/issues/9144 | 1,847,337,972 | 9,144 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
`API Reference` `langchain.agents Functions` table unreadable:

The `name` column is soo long, which makes the `description` column unreadable.
It is because of this value:
`agents.agent_toolkits.conversational_retrieval.openai_functions.create_conversational_retrieval_agent`
### Idea or request for content:
Make the namespace [and name] of this function shorter. | DOC: `API Reference` `langchain.agents Functions ` table unreadable | https://api.github.com/repos/langchain-ai/langchain/issues/9133/comments | 5 | 2023-08-11T17:16:33Z | 2023-11-19T22:54:10Z | https://github.com/langchain-ai/langchain/issues/9133 | 1,847,171,982 | 9,133 |
[
"langchain-ai",
"langchain"
] | ### Feature request
**Make the intent identification (SELECT vs UPDATE) within the SPARQL QA chain more resilient**
As originally described in #7758 and further discussed in #8521, some models struggle with providing an unambiguous response to the intent identification prompt, i.e., they return a sentence that contains both keywords instead of returning exactly one keyword. \
This should be resolvable with a loop that reprompts the model if its response is ambiguous.
The loop should probably be limited to one or two retries.
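As a rough illustration only (the prompt text and function shape below are my own sketch, not the chain's actual implementation — it just assumes `llm` is any callable mapping a prompt string to a completion string):

```python
def classify_intent(llm, question: str, max_retries: int = 2) -> str:
    """Reprompt the model until it returns exactly one of the two keywords."""
    prompt = (
        "Answer with exactly one word, SELECT or UPDATE: "
        f"is the following request a query or a modification? {question}"
    )
    for _ in range(max_retries + 1):
        answer = llm(prompt).strip().upper()
        has_select = "SELECT" in answer
        has_update = "UPDATE" in answer
        if has_select != has_update:  # exactly one keyword present
            return "SELECT" if has_select else "UPDATE"
        # Ambiguous answer: reprompt with a stricter instruction
        prompt = (
            "Your previous answer was ambiguous. Reply with a single word, "
            f"either SELECT or UPDATE, for: {question}"
        )
    raise ValueError("Could not disambiguate intent after retries")
```

The final raise keeps the current failure mode as a last resort once the retry budget is exhausted.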
### Motivation
Make the intent identification within the SPARQL QA chain more resilient, so that it can more easily be used with models other than the OpenAI ones, e.g., StarCoder and NeoGPT3 mentioned in #7758, which are still able to generate sensible SPARQL
### Your contribution
I should be able to create a PR soonish, but if somebody else wants to give this a go I would also be happy to review | Make intent identification in SPARQL QA chain more resilient | https://api.github.com/repos/langchain-ai/langchain/issues/9132/comments | 5 | 2023-08-11T17:09:45Z | 2024-02-18T17:49:30Z | https://github.com/langchain-ai/langchain/issues/9132 | 1,847,163,390 | 9,132 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.10
LangChain: 0.0.245
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I wrote a 'Python interpreter' tool, as in the sample code below:
```
class PythonInterpreterTool(BaseTool):
name = "python_interpreter"
name_display = "Python Interpreter"
description = """Give me the Python to execute,it returns the execution result"""
def _run(self, code: str) -> str:
with PythonInterpreter() as executor: # type: ignore
output = executor.run_python_code(code)
return output
async def _arun(self, code: str) -> str:
raise NotImplementedError
```
But many times I get a "Could not parse tool input... because the `arguments` is not valid JSON." error:
Could not parse tool input: {'name': 'python_interpreter', 'arguments': '{\n "__arg1": "\nimport moviepy.editor as mp\n\nvideo_path = \'https://video-clip.oss.com/upload/40ae1ae0-3799-11ee-9c80-476f00f64986.mp4\'\noutput_path = \'output.mp3\'\n\nvideo = mp.VideoFileClip(video_path)\nvideo.audio.write_audiofile(output_path)\n"\n}'} because the `arguments` is not valid JSON.
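For what it's worth, the parse failure can be reproduced with the standard library alone: the model emits literal newline characters inside the JSON string value, which the strict JSON decoder rejects, while `json.loads(..., strict=False)` accepts them (whether LangChain should do this is the open question):

```python
import json

# A literal newline inside the quoted value is an invalid control
# character under strict JSON rules (this mirrors the model output above).
raw = '{\n  "__arg1": "import moviepy.editor as mp\nprint(mp)"\n}'

try:
    json.loads(raw)
    parsed_strict = True
except json.JSONDecodeError:
    parsed_strict = False

# strict=False tells the decoder to allow control characters in strings
lenient = json.loads(raw, strict=False)
```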
### Expected behavior
The tool arguments generated by the model should be valid JSON, so the tool input can be parsed and the Python code executed without errors.
| LangChain tools as gpt func, always fails at the JSON decode | https://api.github.com/repos/langchain-ai/langchain/issues/9130/comments | 4 | 2023-08-11T16:48:08Z | 2023-11-19T16:05:06Z | https://github.com/langchain-ai/langchain/issues/9130 | 1,847,137,279 | 9,130 |
[
"langchain-ai",
"langchain"
] | ### System Info
I have an extremely simple setup: I created a GitHub app following the [Github App Quickstart Guide](https://docs.github.com/en/apps/creating-github-apps/writing-code-for-a-github-app/quickstart), then started a blank repo with the template `README.md` (just the name of the repo as a title).
The agent can successfully create a pull request from one request in which I ask it to make ASCII art of a whale and insert a new section in the README.md. But after I manually merge that pull request, the agent keeps incorrectly finding the initial value of the `README.md` with (what I'm assuming is) an incorrect commit hash that has never existed in [my repo](https://github.com/Zak-Alford/gh-test-repo2):
```
> Entering new AgentExecutor chain...
First, I need to read the current README.md to know its contents. Then, I will update the file with the riddle. Finally, I will create a pull request.
Action: Read File
Action Input: README.md
Observation: # gh-test-repo2
Thought:The file opens with a header for the repository. Now, I will add the riddle to the 'Weekly Challenge' section.
Action: Update File
Action Input: README.md
OLD <<<<
# gh-test-repo2
>>>> OLD
NEW <<<<
# gh-test-repo2
## Weekly Challenge
Riddle: What has keys but can't open locks?
>>>> NEW
Observation: Unable to update file due to error:
409 {"message": "README.md does not match 87f560c4c6fd9eecce7374b3ac8f5517a975bf63", "documentation_url": "https://docs.github.com/rest/repos/contents#create-or-update-file-contents"}
Thought:It seems like there was a conflict when trying to update the file. The file might have been updated after I last read it. I will try to read the file again and then update it.
Action: Read File
Action Input: README.md
Observation: # gh-test-repo2
Thought:The file content hasn't changed, so I'll attempt to update it again.
Action: Update File
Action Input: README.md
OLD <<<<
# gh-test-repo2
>>>> OLD
NEW <<<<
# gh-test-repo2
## Weekly Challenge
Riddle: What has keys but can't open locks?
>>>> NEW
Observation: Unable to update file due to error:
409 {"message": "README.md does not match 87f560c4c6fd9eecce7374b3ac8f5517a975bf63", "documentation_url": "https://docs.github.com/rest/repos/contents#create-or-update-file-contents"}
Thought:There seems to be a problem with updating the file. Maybe the file was updated by someone else in the meantime, or there's an issue with the way I'm trying to update it. I'll try to create a new file with the riddle instead, and then make a pull request to merge it with the README.md.
Action: Create File
Action Input: WeeklyChallenge.md
## Weekly Challenge
Riddle: What has keys but can't open locks?
Observation: Unable to make file due to error:
404 {"message": "Not Found", "documentation_url": "https://docs.github.com/rest/repos/contents#get-repository-content"}
Thought:The creation of the file failed, perhaps the filepath or file name is incorrect. I will try to create the pull request directly and mention the changes proposed in the pull request description.
Action: Create Pull Request
Action Input: Update README.md with Weekly Challenge
In this pull request, I propose adding the following riddle to the 'Weekly Challenge' section of the README.md:
Riddle: What has keys but can't open locks?
Observation: Unable to make pull request due to error:
422 {"message": "Validation Failed", "errors": [{"resource": "PullRequest", "code": "custom", "message": "No commits between test and bot-branch"}], "documentation_url": "https://docs.github.com/rest/pulls/pulls#create-a-pull-request"}
Thought:It seems like the pull request failed because there are no commits between the test and the bot-branch. This could be because I haven't successfully updated or created any files, so there's no changes to commit. I'll try to update the README.md file again, making sure to include the entire content of the file in the OLD <<<< and >>>> OLD tags, in case that was the issue.
Action: Update File
Action Input: README.md
OLD <<<<
# gh-test-repo2
>>>> OLD
NEW <<<<
# gh-test-repo2
## Weekly Challenge
Riddle: What has keys but can't open locks?
>>>> NEW
Observation: Unable to update file due to error:
409 {"message": "README.md does not match 87f560c4c6fd9eecce7374b3ac8f5517a975bf63", "documentation_url": "https://docs.github.com/rest/repos/contents#create-or-update-file-contents"}
Thought:It seems like the error persists despite multiple attempts. There might be an issue with the GitHub repository or the API. At this point, I am unable to complete the task as instructed.
Final Answer: I was unable to complete the task due to an error while updating the README.md file.
> Finished chain.
```
Code I'm using to initialize the GH agent:
```
import os
from langchain.agents import AgentType
from langchain.agents import initialize_agent
from langchain.agents.agent_toolkits.github.toolkit import GitHubToolkit
from langchain.llms import OpenAI
from langchain.chat_models import ChatOpenAI
from langchain.utilities.github import GitHubAPIWrapper
import json
import custom_tools
# List of required environment variables:
env_vars = [
"GITHUB_APP_ID",
"GITHUB_APP_PRIVATE_KEY",
"GITHUB_REPOSITORY",
"GITHUB_BRANCH",
"GITHUB_BASE_BRANCH",
"OPENAI_API_KEY",
]
# Check your json file for key values
with open("envvars.json", "r") as f:
env_var_values = json.load(f)
for var in env_vars:
# Check that each key exists. If it doesn't, set it to be "" and then complain later
if env_var_values.get(var, "") != "":
os.environ[var] = env_var_values[var]
else: # Complaint line
raise Exception(f"The environment variable {var} was not set. You must set this value to continue.")
bot_branch = os.environ["GITHUB_BRANCH"]
gh_base_branch = os.environ["GITHUB_BASE_BRANCH"]
llm = ChatOpenAI(model="gpt-4")
github = GitHubAPIWrapper()
toolkit = GitHubToolkit.from_github_api_wrapper(github)
tools = []
# unwanted_tools = ['Get Issue','Delete File']
unwanted_tools = []
for tool in toolkit.get_tools():
if tool.name not in unwanted_tools:
tools.append(tool)
agent = initialize_agent(
tools=tools, llm=llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
# request = f"Draw an ascii art whale and add it once to the 'Nature' section of the README.md file. Make a pull request back to {gh_base_branch}."
request = f"Do a git pull first. Then write a riddle and it once to the 'Weekly Challenge' section of the README.md file. Don't write the answer to the riddle. Make a pull request back to {gh_base_branch}."
agent.run(request)
```
scrubbed JSON file I'm using to grab env vars from:
```
{
"GITHUB_APP_ID": "<my-app-id>",
"GITHUB_APP_PRIVATE_KEY": "<my-path-to-pem>",
"GITHUB_REPOSITORY": "Zak-Alford/gh-test-repo2",
"GITHUB_BRANCH": "bot-branch",
"GITHUB_BASE_BRANCH": "test",
"OPENAI_API_KEY": "<my-openai-api-key>"
}
```
### Who can help?
@hw
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Copy my script included above into a file
2. Run `py (or python3) <filename>.py`
### Expected behavior
The GitHub agent should find the latest commit that it itself pushed to the branch before creating a new commit with changes.
[
"langchain-ai",
"langchain"
] | ### System Info
in sagemaker.
langchain==0.0.256 or 0.0.249 (I tried both)
Image: Data Science 3.0
Kernel: Python 3
Instance type: ml.t3.medium 2 vCPU + 4 GiB
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://github.com/aws-samples/amazon-bedrock-workshop/blob/main/03_QuestionAnswering/01_qa_w_rag_claude.ipynb
I'm trying to follow this notebook:
I increased the data input size:
```
from urllib.request import urlretrieve
os.makedirs("data", exist_ok=True)
files = [
"https://www.irs.gov/pub/irs-pdf/p1544.pdf",
"https://www.irs.gov/pub/irs-pdf/p15.pdf",
"https://www.irs.gov/pub/irs-pdf/p1212.pdf",
"https://www.irs.gov/pub/irs-pdf/p3.pdf",
"https://www.irs.gov/pub/irs-pdf/p17.pdf",
"https://www.irs.gov/pub/irs-pdf/p51.pdf",
"https://www.irs.gov/pub/irs-pdf/p54.pdf",
]
for url in files:
file_path = os.path.join("data", url.rpartition("/")[2])
urlretrieve(url, file_path)
```
my data input:
Average length among 1012 documents loaded is 2320 characters.
After the split we have 1167 documents more than the original 1012.
Average length among 1167 documents (after split) is 2011 characters.
```
import numpy as np
from langchain.text_splitter import CharacterTextSplitter, RecursiveCharacterTextSplitter
from langchain.document_loaders import PyPDFLoader, PyPDFDirectoryLoader
loader = PyPDFDirectoryLoader("./data/")
documents = loader.load()
# - in our testing Character split works better with this PDF data set
text_splitter = RecursiveCharacterTextSplitter(
# Set a really small chunk size, just to show.
chunk_size = 1000,
chunk_overlap = 100,
)
docs = text_splitter.split_documents(documents)
avg_doc_length = lambda documents: sum([len(doc.page_content) for doc in documents])//len(documents)
avg_char_count_pre = avg_doc_length(documents)
avg_char_count_post = avg_doc_length(docs)
print(f'Average length among {len(documents)} documents loaded is {avg_char_count_pre} characters.')
print(f'After the split we have {len(docs)} documents more than the original {len(documents)}.')
print(f'Average length among {len(docs)} documents (after split) is {avg_char_count_post} characters.')
from langchain.chains.question_answering import load_qa_chain
from langchain.vectorstores import FAISS
from langchain.indexes import VectorstoreIndexCreator
from langchain.indexes.vectorstore import VectorStoreIndexWrapper
vectorstore_faiss = FAISS.from_documents(
docs,
bedrock_embeddings,
)
wrapper_store_faiss = VectorStoreIndexWrapper(vectorstore=vectorstore_faiss)
```
The funny thing is that if my input is smaller (docs[:5]), it works:

```python
vectorstore_faiss = FAISS.from_documents(
    docs[:5],
    bedrock_embeddings,
)
```
error:
---------------------------------------------------------------------------
ValidationException Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/langchain/embeddings/bedrock.py:120, in BedrockEmbeddings._embedding_func(self, text)
119 try:
--> 120 response = self.client.invoke_model(
121 body=body,
122 modelId=self.model_id,
123 accept="application/json",
124 contentType="application/json",
125 )
126 response_body = json.loads(response.get("body").read())
File /opt/conda/lib/python3.10/site-packages/botocore/client.py:535, in ClientCreator._create_api_method.<locals>._api_call(self, *args, **kwargs)
534 # The "self" in this scope is referring to the BaseClient.
--> 535 return self._make_api_call(operation_name, kwargs)
File /opt/conda/lib/python3.10/site-packages/botocore/client.py:980, in BaseClient._make_api_call(self, operation_name, api_params)
979 error_class = self.exceptions.from_code(error_code)
--> 980 raise error_class(parsed_response, operation_name)
981 else:
ValidationException: An error occurred (ValidationException) when calling the InvokeModel operation: The provided inference configurations are invalid
During handling of the above exception, another exception occurred:
ValueError Traceback (most recent call last)
Cell In[35], line 10
4 from langchain.indexes.vectorstore import VectorStoreIndexWrapper
6
7
8 #
9 #
---> 10 vectorstore_faiss = FAISS.from_documents(
11 docs,
12 bedrock_embeddings,
13 )
15 wrapper_store_faiss = VectorStoreIndexWrapper(vectorstore=vectorstore_faiss)
File /opt/conda/lib/python3.10/site-packages/langchain/vectorstores/base.py:420, in VectorStore.from_documents(cls, documents, embedding, **kwargs)
418 texts = [d.page_content for d in documents]
419 metadatas = [d.metadata for d in documents]
--> 420 return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
File /opt/conda/lib/python3.10/site-packages/langchain/vectorstores/faiss.py:607, in FAISS.from_texts(cls, texts, embedding, metadatas, ids, **kwargs)
581 @classmethod
582 def from_texts(
583 cls,
(...)
588 **kwargs: Any,
589 ) -> FAISS:
590 """Construct FAISS wrapper from raw documents.
591
592 This is a user friendly interface that:
(...)
605 faiss = FAISS.from_texts(texts, embeddings)
606 """
--> 607 embeddings = embedding.embed_documents(texts)
608 return cls.__from(
609 texts,
610 embeddings,
(...)
614 **kwargs,
615 )
File /opt/conda/lib/python3.10/site-packages/langchain/embeddings/bedrock.py:148, in BedrockEmbeddings.embed_documents(self, texts, chunk_size)
146 results = []
147 for text in texts:
--> 148 response = self._embedding_func(text)
149 results.append(response)
150 return results
File /opt/conda/lib/python3.10/site-packages/langchain/embeddings/bedrock.py:129, in BedrockEmbeddings._embedding_func(self, text)
127 return response_body.get("embedding")
128 except Exception as e:
--> 129 raise ValueError(f"Error raised by inference endpoint: {e}")
ValueError: Error raised by inference endpoint: An error occurred (ValidationException) when calling the InvokeModel operation: The provided inference configurations are invalid
### Expected behavior
I would like to generate embeddings for the entire corpus and store them in a vector store. | inference configurations are invalid for BedrockEmbeddings models | https://api.github.com/repos/langchain-ai/langchain/issues/9127/comments | 11 | 2023-08-11T15:33:52Z | 2023-12-13T16:07:43Z | https://github.com/langchain-ai/langchain/issues/9127 | 1,847,039,060 | 9,127 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.260
model = "gpt-3.5-turbo-16k"
temperature = 0.0
### Who can help?
@hwchase17 and @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
search_agent = initialize_agent(
tools=tools,
llm=llm,
agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
memory = memory,
agent_kwargs = {
#"suffix": NEW_SUFFIX,
"memory_prompts": [chat_history],
"input_variables": ["input", "agent_scratchpad", "chat_history"]
},
#prompt=cf_template,
#handle_parsing_errors=True,
#max_iterations=2,
#max_execution_time=10,
verbose = True,
handle_parsing_errors=True,
)
from pprint import pprint
#planner = load_chat_planner(llm)
#executor = load_agent_executor(llm, tools, verbose=True)
#search_agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)
#print(search_agent.agent.llm_chain.prompt)
search_agent.agent.llm_chain.verbose=True
question = "What are the stock price for Apple"
response = search_agent.run(input = question)
print(response)
#pprint(search_agent.agent.llm_chain.prompt)
question = "What about Google."
response = search_agent.run(input = question)
print(response)
#pprint(search_agent.agent.llm_chain.prompt)
question = "Show me news for fires in Greece in the last month."
response = search_agent.run(input = question)
print(response)
question = "what about Macedonia in the previous week."
response = search_agent.run(input = question)
print(response)
```
### Expected behavior
For the:
```
question = "What about Google."
response = search_agent.run(input = question)
```
The Agent response is:
```
Human: What about Google.
AI: To provide the current stock price for Google, I will use the "get_current_stock_price" tool. Please wait a moment.
```
Instead, it should execute the tool and return the results, just as it did for the first query about Apple... | Intermediate answer from STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION received as a final answer | https://api.github.com/repos/langchain-ai/langchain/issues/9122/comments | 13 | 2023-08-11T13:52:19Z | 2024-01-30T00:41:15Z | https://github.com/langchain-ai/langchain/issues/9122 | 1,846,866,526 | 9,122 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Many parameters described on the TGI [Swagger](https://huggingface.github.io/text-generation-inference/#/Text%20Generation%20Inference/generate) find their direct equivalent in the corresponding LangChain [API](https://api.python.langchain.com/en/latest/llms/langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference.html?highlight=textgeninference#langchain.llms.huggingface_text_gen_inference.HuggingFaceTextGenInference).
### Motivation
At least one of those parameters is missing: the `do_sample` parameter which I needed.
### Your contribution
Why is this parameter missing? How hard would it be to add this and can this be circumvented? | Add `do_sample` to HuggingFaceTextGenInference | https://api.github.com/repos/langchain-ai/langchain/issues/9120/comments | 3 | 2023-08-11T12:42:50Z | 2023-11-13T08:45:47Z | https://github.com/langchain-ai/langchain/issues/9120 | 1,846,760,511 | 9,120 |
[
"langchain-ai",
"langchain"
] | ### Feature request
__A chain for planning using the [Planning Domain Definition Language](https://en.wikipedia.org/wiki/Planning_Domain_Definition_Language) and a dedicated solver__
As described in this [paper by Liu et al.](https://arxiv.org/abs/2304.11477), the LLM can generate the PDDL inputs required by a solver (they used [FastDownward](https://github.com/aibasel/downward), which has PDDL support, see [this info page](https://www.fast-downward.org/PddlSupport)). \
There is an implementation available from the authors, see [llm-pddl](https://github.com/Cranial-XIX/llm-pddl). However, I did not see any license information. \
It would be great having this readily available as a part of langchain. \
I guess implementing this as a chain would make the most sense, since iterations for reformulating the problem might be necessary.
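For illustration, the inputs the LLM would have to produce are plain PDDL text handed to the solver; a minimal, hypothetical blocksworld problem could look like this (domain and predicate names depend entirely on the chosen domain file):

```
;; illustrative only -- a tiny blocksworld problem the chain might generate
(define (problem stack-a-on-b)
  (:domain blocksworld)
  (:objects a b - block)
  (:init (on-table a) (on-table b) (clear a) (clear b) (arm-empty))
  (:goal (on a b)))
```

The chain would generate such a problem (and possibly domain) description, run the solver, and translate the resulting plan back into natural language.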
### Motivation
LLMs are limited when it comes to optimal planning, and dedicated solvers require inputs that are tedious to create by hand; putting the pieces together just makes sense.
### Your contribution
I could create a PR, but this might take some time | Chain for planning using PDDL | https://api.github.com/repos/langchain-ai/langchain/issues/9119/comments | 2 | 2023-08-11T12:29:14Z | 2023-11-17T16:05:14Z | https://github.com/langchain-ai/langchain/issues/9119 | 1,846,740,678 | 9,119 |
[
"langchain-ai",
"langchain"
] | ### System Info
- langchain version 0.0.262
- Python version 3.10
### Who can help?
Users I've found through blame ([exact commit](https://github.com/langchain-ai/langchain/commit/1d649b127eb10c426f9b9a67cbd1fe6ec8e6befa)):
- @MassimilianoBiancucci
- @baskaryan
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just use the function or look into the source code ([this seems to be the exact commit](https://github.com/langchain-ai/langchain/commit/1d649b127eb10c426f9b9a67cbd1fe6ec8e6befa)).
### Expected behavior
Either return the sole JSON or the whole markdown block (including the closing three backticks). | StructuredOutputParser.get_format_instructions with only_json=True doesn't return the closing backticks | https://api.github.com/repos/langchain-ai/langchain/issues/9118/comments | 1 | 2023-08-11T11:26:35Z | 2023-11-17T16:05:19Z | https://github.com/langchain-ai/langchain/issues/9118 | 1,846,659,504 | 9,118 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.261
python 3.10
------------------
ValidationError Traceback (most recent call last)
Cell In[17], line 1
----> 1 embeddings = BedrockEmbeddings(
2 credentials_profile_name="monisdas-ibm", region_name="us-east-1"
3 )
4 vectorstore = Chroma.from_documents(docs, embeddings)
File ~/Documents/moni/knowlege/unstructured/examples/chroma-news-of-the-day/.py310_unstrcutured/lib/python3.10/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for BedrockEmbeddings
__root__
Could not load credentials to authenticate with AWS client. Please check that credentials in the specified profile name are valid. (type=value_error)
-------------------
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
try to use BedrockEmbeddings
### Expected behavior
it should work. | BedrockEmbeddings can't load aws credential profile | https://api.github.com/repos/langchain-ai/langchain/issues/9117/comments | 10 | 2023-08-11T10:10:30Z | 2024-03-26T13:40:12Z | https://github.com/langchain-ai/langchain/issues/9117 | 1,846,559,683 | 9,117 |
[
"langchain-ai",
"langchain"
] | Hi,
Does the Azure OpenAI embedding endpoint have a limit on the number of inputs per request?
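For context: the traceback below says Azure accepted at most 16 inputs per embedding request at the time. A common workaround is to cap the client-side batch size — LangChain's `OpenAIEmbeddings` exposes a `chunk_size` parameter for this (e.g. `chunk_size=16`). The same idea as a generic, library-free sketch:

```python
def batched(items, max_batch=16):
    # Yield successive slices of at most max_batch items, so no single
    # request to the embedding endpoint exceeds Azure's input cap.
    for i in range(0, len(items), max_batch):
        yield items[i:i + max_batch]
```

Each yielded slice can then be sent as one embedding request.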
Traceback (most recent call last):
File "D:\Corent\AI\LangChain\azure\azure_connection.py", line 45, in <module>
VectorStore = Milvus.from_texts(
^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\vectorstores\milvus.py", line 822, in from_texts
vector_db.add_texts(texts=texts, metadatas=metadatas)
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\vectorstores\milvus.py", line 422, in add_texts
embeddings = self.embedding_func.embed_documents(texts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\embeddings\openai.py", line 478, in embed_documents
return self._get_len_safe_embeddings(texts, engine=self.deployment)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\embeddings\openai.py", line 364, in _get_len_safe_embeddings
response = embed_with_retry(
^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\embeddings\openai.py", line 107, in embed_with_retry
return _embed_with_retry(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\tenacity\__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\tenacity\__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\tenacity\__init__.py", line 314, in iter
return fut.result()
^^^^^^^^^^^^
File "C:\Users\donbosco\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\donbosco\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\tenacity\__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\embeddings\openai.py", line 104, in _embed_with_retry
response = embeddings.client.create(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\openai\api_resources\embedding.py", line 33, in create
response = super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\openai\api_requestor.py", line 298, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\openai\api_requestor.py", line 700, in _interpret_response
self._interpret_response_line(
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\openai\api_requestor.py", line 763, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: Too many inputs. The max number of inputs is 16. We hope to increase the number of inputs per request soon. Please contact us through an Azure support request at: https://go.microsoft.com/fwlink/?linkid=2213926 for further questions. | openai.error.InvalidRequestError: Too many inputs. | https://api.github.com/repos/langchain-ai/langchain/issues/9112/comments | 3 | 2023-08-11T08:33:13Z | 2023-12-28T16:06:57Z | https://github.com/langchain-ai/langchain/issues/9112 | 1,846,424,735 | 9,112 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I just tried the simple example with the latest version of langchain, but I run into a "list index out of range" problem when I change the chain_type in the "RetrievalQA.from_chain_type" function. The error messages are as follows:
map_rerank :
-----------------------------------------------------------------------------------------------------------------------------
File ~\anaconda3\envs\pytorch\Lib\site-packages\langchain\chains\combine_documents\map_rerank.py:194, in MapRerankDocumentsChain._process_results(self, docs, results)
190 typed_results = cast(List[dict], results)
191 sorted_res = sorted(
192 zip(typed_results, docs), key=lambda x: -int(x[0][self.rank_key])
193 )
--> 194 output, document = sorted_res[0]
195 extra_info = {}
196 if self.metadata_keys is not None:
IndexError: list index out of range
-----------------------------------------------------------------------------------------------------------------------------
refine:
-----------------------------------------------------------------------------------------------------------------------------
File ~\anaconda3\envs\pytorch\Lib\site-packages\langchain\chains\combine_documents\refine.py:203, in RefineDocumentsChain._construct_initial_inputs(self, docs, **kwargs)
200 def _construct_initial_inputs(
201 self, docs: List[Document], **kwargs: Any
202 ) -> Dict[str, Any]:
--> 203 base_info = {"page_content": docs[0].page_content}
204 base_info.update(docs[0].metadata)
205 document_info = {k: base_info[k] for k in self.document_prompt.input_variables}
IndexError: list index out of range
-----------------------------------------------------------------------------------------------------------------------------
Thanks for your help!
### Suggestion:
_No response_ | Issue: when using the map_rerank & refine, occur the " list index out of range" (already modify the llm.py file, but only map_reduce can work well) | https://api.github.com/repos/langchain-ai/langchain/issues/9111/comments | 6 | 2023-08-11T08:26:10Z | 2023-12-06T17:44:20Z | https://github.com/langchain-ai/langchain/issues/9111 | 1,846,415,172 | 9,111 |
[
"langchain-ai",
"langchain"
] | ### System Info
If, for some reason, the list of documents is empty, an exception is raised from llm.py.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
llm.prep_prompts(input_list=[])
raises an exception
### Expected behavior
Continue
See the pull [request #9109](https://github.com/langchain-ai/langchain/pull/9109) | LLM with empty list of document | https://api.github.com/repos/langchain-ai/langchain/issues/9110/comments | 1 | 2023-08-11T08:14:57Z | 2023-08-16T07:14:35Z | https://github.com/langchain-ai/langchain/issues/9110 | 1,846,399,416 | 9,110 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am using Elasticsearch BM25 to fetch relevant documents. How can I add a parameter to tell the retriever to return only first n matching docs?
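For what it's worth, plain Elasticsearch caps the number of hits with a top-level `"size"` field in the request body, so a patched retriever would only need to thread that value through (the `content` field name below is illustrative — it depends on how the index was created):

```python
def bm25_query(text: str, n: int) -> dict:
    # Standard Elasticsearch request body; "size" limits how many of the
    # top-scoring BM25 hits are returned.
    return {"size": n, "query": {"match": {"content": text}}}
```

A patched retriever could accept a `k` parameter in its constructor and build its search body this way.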
### Suggestion:
_No response_ | Issue: Elasticsearch BM25 | https://api.github.com/repos/langchain-ai/langchain/issues/9103/comments | 3 | 2023-08-11T07:41:41Z | 2024-02-12T16:16:04Z | https://github.com/langchain-ai/langchain/issues/9103 | 1,846,353,415 | 9,103 |
[
"langchain-ai",
"langchain"
] | ### System Info
Code snapshot:

Data in Azure Cognitive Search Vector Store:
data = {
    "id": " ",
    "content": "",
    "content_vector": [],
    "metadata": "{}",
    "a": "",
    "b": "",
    "c": 20.4,
    "d": ""
}
Issue: occurs with langchain==0.0.261, but it works fine with langchain==0.0.242
Issue Description : SerializationError: (', DeserializationError: (", AttributeError: \'float\' object has no attribute \'lower\'", \'Unable to deserialize to object: type\', AttributeError("\'float\' object has no attribute \'lower\'"))', 'Unable to build a model: (", AttributeError: \'float\' object has no attribute \'lower\'", \'Unable to deserialize to object: type\', AttributeError("\'float\' object has no attribute \'lower\'"))', DeserializationError(", AttributeError: 'float' object has no attribute 'lower'", 'Unable to deserialize to object: type', AttributeError("'float' object has no attribute 'lower'")))
Note: In the upgraded version the response time also increases.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
def __llm_client(self, streaming: bool = False):
    # LLM connection
    if streaming is False:
        return AzureOpenAI(
            deployment_name=AZURE_OPENAI_LLM_MODEL_DEPLOYMENT_NAME,
            model=AZURE_OPENAI_LLM_MODEL_NAME,
            openai_api_type=self.__azure_openAI_type,
            openai_api_version=self.__azure_openAI_version,
            openai_api_base=self.__azure_openAI_base,
            openai_api_key=self.__azure_openAI_key,
            temperature=0.0,
            max_tokens=2000
        )
    return AzureOpenAI(
        deployment_name=AZURE_OPENAI_LLM_MODEL_DEPLOYMENT_NAME,
        model=AZURE_OPENAI_LLM_MODEL_NAME,
        openai_api_type=self.__azure_openAI_type,
        openai_api_version=self.__azure_openAI_version,
        openai_api_base=self.__azure_openAI_base,
        openai_api_key=self.__azure_openAI_key,
        temperature=0.0,
        max_tokens=2000,
        streaming=True,
        callbacks=[StreamingStdOutCallbackHandler()]
    )

# LLM Embeddings Client
def __embeddings_client(self):
    return OpenAIEmbeddings(
        model=AZURE_OPENAI_LLM_EMBEDDING_NAME,
        deployment=AZURE_OPENAI_LLM_EMBEDDING_DEPLOYMENT_NAME,
        openai_api_type=self.__azure_openAI_type,
        openai_api_version=self.__azure_openAI_version,
        openai_api_base=self.__azure_openAI_base,
        openai_api_key=self.__azure_openAI_key,
        chunk_size=1536
    )

# Embedding vector store client
def __vector_store_client(self):
    acs_vector_store: AzureSearch = AzureSearch(
        azure_search_endpoint=self.__acs_endpoint,
        azure_search_key=self.__acs_key,
        index_name=self.__acs_index_name,
        embedding_function=self.__embeddings_client().embed_query,
    )
    return acs_vector_store

# Langchain Chain Client
def __chain_client(self):
    chain_type = "stuff"
    return load_qa_chain(
        llm=self.__llm_client(streaming=True),
        chain_type=chain_type,
        document_variable_name='context',
        prompt=QA_PROMPT
        # verbose=True
    )

def __conversational_retrieval_chain_client(self):
    # print(self.__vector_store_client().as_retriever())
    return ConversationalRetrievalChain(
        retriever=self.__vector_store_client().as_retriever(),
        question_generator=self.__question_generator(),
        combine_docs_chain=self.__chain_client(),
        memory=ConversationBufferMemory(
            memory_key="chat_history",
            return_messages=False
        )
    )

def __question_generator(self):
    return LLMChain(
        llm=self.__llm_client(),
        prompt=CONDENSE_QUESTION_PROMPT
    )

# Main function
def smart_chat_bot(self, query: str = "*", conversation: list = []):
    self.user_input = query
    print(f"Human Input: {self.user_input}", end="\n")
    result = self.__conversational_retrieval_chain_client()({"question": self.user_input, "chat_history": conversation})
    return result
### Expected behavior
It should return the response on the bases of user input and the documents that came from the vector store. And the latency will be lesser. | ConversationalRetrievalChain having trouble with vector store retriever 'as_retriever()' | https://api.github.com/repos/langchain-ai/langchain/issues/9101/comments | 4 | 2023-08-11T07:37:03Z | 2024-02-19T18:19:46Z | https://github.com/langchain-ai/langchain/issues/9101 | 1,846,348,024 | 9,101 |
[
"langchain-ai",
"langchain"
] | https://github.com/langchain-ai/langchain/blob/991b448dfce7fa326e70774b5f38c9576f1f304c/libs/langchain/langchain/chains/base.py#L327C25-L327C25
IMPORTANT: Chain.acall is an async function.
Chain.prep_inputs invokes memory.load_memory_variables.
In ConversationBufferWindowMemory.load_memory_variables:
```python
buffer: Any = self.buffer[-self.k * 2 :] if self.k > 0 else []
```
```python
@property
def buffer(self) -> List[BaseMessage]:
"""String buffer of memory."""
return self.chat_memory.messages
```
if chat_memory is type of RedisChatMessageHistory.
```python
@property
def messages(self) -> List[BaseMessage]: # type: ignore
"""Retrieve the messages from Redis"""
_items = self.redis_client.lrange(self.key, 0, -1)
items = [json.loads(m.decode("utf-8")) for m in _items[::-1]]
messages = messages_from_dict(items)
return messages
```
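For reference, one way to keep a blocking read like the property above off the event loop is to run it in a worker thread — a sketch only, not current LangChain API (`asyncio.to_thread` needs Python 3.9+):

```python
import asyncio

async def aload_memory_variables(memory, inputs):
    # Run the synchronous (e.g. Redis-backed) memory read in a worker
    # thread so the event loop keeps serving other requests meanwhile.
    return await asyncio.to_thread(memory.load_memory_variables, inputs)
```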
the lrange call here is synchronous...
IMPORTANT: Chain.acall is an async function.
In FastAPI or similar settings, a chain.acall will be blocked by this synchronous call. Can you change the Chain.prep_inputs invocation to:
a chain.acall will blocking by this invoke. can you change the invoke of Chain.prep_inputs to:
```python
inputs = await asyncify(self.prep_inputs)(inputs)
``` | sync problem | https://api.github.com/repos/langchain-ai/langchain/issues/9100/comments | 4 | 2023-08-11T07:33:15Z | 2024-02-11T16:17:07Z | https://github.com/langchain-ai/langchain/issues/9100 | 1,846,343,465 | 9,100 |
[
"langchain-ai",
"langchain"
] | ### Feature request
To import a big list of documents into a vectorstore, you must load all the documents in memory, split them, and then import the whole list.
This consumes a lot of memory just for the import. Sometimes, the number of documents is too big to fit in memory.
I propose to generalize the `lazy_...()` with some generator. Then, only one document at a time can be in memory.
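A minimal illustration of the idea, independent of any concrete loader or vectorstore (the `add_batch` callable stands in for something like `vectorstore.add_documents`):

```python
from typing import Callable, Iterable, Iterator, List

def lazy_split(texts: Iterable[str], chunk_size: int) -> Iterator[str]:
    # Only one source document is materialized at a time.
    for text in texts:
        for i in range(0, len(text), chunk_size):
            yield text[i:i + chunk_size]

def import_lazily(add_batch: Callable[[List[str]], None],
                  chunks: Iterable[str], batch_size: int = 64) -> int:
    # Flush small batches to the store; the full corpus never sits in memory.
    batch: List[str] = []
    total = 0
    for chunk in chunks:
        batch.append(chunk)
        if len(batch) >= batch_size:
            add_batch(batch)
            total += len(batch)
            batch = []
    if batch:
        add_batch(batch)
        total += len(batch)
    return total
```

With the existing `lazy_load()` methods generalized like this, peak memory is bounded by one document plus one batch.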
### Motivation
Reduce the memory footprint.
Accept to import, in one loop, a big list of documents.
### Your contribution
- Small contribution. I use lazy approach in my pull request to load google documents. | Lazy import in vectorstore | https://api.github.com/repos/langchain-ai/langchain/issues/9099/comments | 3 | 2023-08-11T07:16:11Z | 2023-11-29T09:35:50Z | https://github.com/langchain-ai/langchain/issues/9099 | 1,846,323,394 | 9,099 |
[
"langchain-ai",
"langchain"
] | I was just wondering: can I use streaming with Bedrock models like Claude V2 in LangChain?
I ask because I don't see streaming support in the Bedrock code.
If it is possible, can you give me an example of using streaming with Bedrock? | Issue: Bedrock Doesn't support streaming | https://api.github.com/repos/langchain-ai/langchain/issues/9094/comments | 8 | 2023-08-11T05:34:22Z | 2024-05-14T16:06:10Z | https://github.com/langchain-ai/langchain/issues/9094 | 1,846,221,991 | 9,094 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.261
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When running the OpenAI Multi Functions Agent, **enums** in custom structured tools are not being used. Using the enums for data validation works fine, but the LLM does not use enums to produce enumerated parameter values. For reference, OpenAI's Function Calling takes the JSON schema for a function, which can pass the JSON schema enums to the LLM to be used when producing parameter values.
1. Create a custom structured tool with an enum input
2. Give tool to agent
3. Run a query that uses the tool
4. Agent may hallucinate an invalid enum
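For reference, what the function spec sent to the OpenAI API needs to contain for the model to respect the constraint is a JSON-schema `enum` on the parameter. The names below are illustrative, not actual LangChain output:

```python
# Illustrative function spec in the shape OpenAI function calling expects.
function_spec = {
    "name": "set_color",
    "description": "Set the color to use",
    "parameters": {
        "type": "object",
        "properties": {
            "color": {
                "type": "string",
                # This is the part that currently never reaches the model:
                "enum": ["red", "green", "blue"],
                "description": "Which color to use",
            }
        },
        "required": ["color"],
    },
}
```

Pydantic already emits these allowed values in the schema it generates for an `Enum` field, so the fix amounts to forwarding them.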
### Expected behavior
The expected behavior is two-fold:
1. Provide Python type annotations in BaseModel arg_schemas to the LLM instead of just the Field() object
2. Correct the error similar to the `StructuredChatOutputParserWithRetries` | Enums not used with OpenAI Multi Function Agent | https://api.github.com/repos/langchain-ai/langchain/issues/9092/comments | 2 | 2023-08-11T05:07:36Z | 2023-11-17T16:05:25Z | https://github.com/langchain-ai/langchain/issues/9092 | 1,846,203,093 | 9,092 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Can the atomicity of multiple write operations be guaranteed through the Agent?
### Motivation
If two actions need to be executed within an agent and each action modifies the state of external tools or services, the first write may succeed while the second write fails. How can an agent then revoke the first write so that the pair stays atomic: "all or nothing"?
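LangChain agents provide no transactional guarantees out of the box, so this would have to be built at the application level. One classic pattern is compensating ("saga-style") actions: record an undo for each completed write and roll back in reverse order on failure. A framework-agnostic sketch:

```python
def run_atomically(actions):
    # actions: iterable of (do, undo) pairs. If any `do` fails, every
    # previously completed step is compensated in reverse order, giving
    # "all or nothing" semantics at the application level.
    completed = []
    try:
        for do, undo in actions:
            do()
            completed.append(undo)
    except Exception:
        for undo in reversed(completed):
            undo()
        raise
```

True atomicity still depends on each undo actually reversing its write; external services that cannot be compensated need a different design.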
### Your contribution
Does the default agent runtime of langchain support this operation? If not, can we support it through the following ideas:
1: using langchain callback ?
2: inspired by BabyAGI feature: generate and pretend to execute tasks based on a given objective. I think the pretend operation can meet this requirement. Is this correct? | Can the atomicity of multiple write operations be guaranteed through the Agent | https://api.github.com/repos/langchain-ai/langchain/issues/9090/comments | 2 | 2023-08-11T03:35:52Z | 2023-11-17T16:05:30Z | https://github.com/langchain-ai/langchain/issues/9090 | 1,846,147,524 | 9,090 |
[
"langchain-ai",
"langchain"
] | ### System Info
```py
In [21]: langchain.__version__
Out[21]: '0.0.223'
In [24]: sys.platform
Out[24]: 'win32'
In [25]: !python -V
Python 3.10.10
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [x] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```py
In [18]: from langchain.embeddings.huggingface import HuggingFaceEmbeddings
In [19]: model = HuggingFaceEmbeddings(model_name='d:/src/chatglm2-6b-int4/')
No sentence-transformers model found with name d:/src/chatglm2-6b-int4/. Creating a new one with MEAN pooling.
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[19], line 1
----> 1 model = HuggingFaceEmbeddings(model_name='d:/src/chatglm2-6b-int4/')
File D:\Python310\lib\site-packages\langchain\embeddings\huggingface.py:59, in HuggingFaceEmbeddings.__init__(self, **kwargs)
53 except ImportError as exc:
54 raise ImportError(
55 "Could not import sentence_transformers python package. "
56 "Please install it with `pip install sentence_transformers`."
57 ) from exc
---> 59 self.client = sentence_transformers.SentenceTransformer(
60 self.model_name, cache_folder=self.cache_folder, **self.model_kwargs
61 )
File D:\Python310\lib\site-packages\sentence_transformers\SentenceTransformer.py:97, in SentenceTransformer.__init__(self, model_name_or_path, modules, device, cache_folder, use_auth_token)
95 modules = self._load_sbert_model(model_path)
96 else: #Load with AutoModel
---> 97 modules = self._load_auto_model(model_path)
99 if modules is not None and not isinstance(modules, OrderedDict):
100 modules = OrderedDict([(str(idx), module) for idx, module in enumerate(modules)])
File D:\Python310\lib\site-packages\sentence_transformers\SentenceTransformer.py:806, in SentenceTransformer._load_auto_model(self, model_name_or_path)
802 """
803 Creates a simple Transformer + Mean Pooling model and returns the modules
804 """
805 logger.warning("No sentence-transformers model found with name {}. Creating a new one with MEAN pooling.".format(model_name_or_path))
--> 806 transformer_model = Transformer(model_name_or_path)
807 pooling_model = Pooling(transformer_model.get_word_embedding_dimension(), 'mean')
808 return [transformer_model, pooling_model]
File D:\Python310\lib\site-packages\sentence_transformers\models\Transformer.py:28, in Transformer.__init__(self, model_name_or_path, max_seq_length, model_args, cache_dir, tokenizer_args, do_lower_case, tokenizer_name_or_path)
25 self.config_keys = ['max_seq_length', 'do_lower_case']
26 self.do_lower_case = do_lower_case
---> 28 config = AutoConfig.from_pretrained(model_name_or_path, **model_args, cache_dir=cache_dir)
29 self._load_model(model_name_or_path, config, cache_dir)
31 self.tokenizer = AutoTokenizer.from_pretrained(tokenizer_name_or_path if tokenizer_name_or_path is not None else model_name_or_path, cache_dir=cache_dir, **tokenizer_args)
File D:\Python310\lib\site-packages\transformers\models\auto\configuration_auto.py:986, in AutoConfig.from_pretrained(cls, pretrained_model_name_or_path, **kwargs)
984 has_remote_code = "auto_map" in config_dict and "AutoConfig" in config_dict["auto_map"]
985 has_local_code = "model_type" in config_dict and config_dict["model_type"] in CONFIG_MAPPING
--> 986 trust_remote_code = resolve_trust_remote_code(
987 trust_remote_code, pretrained_model_name_or_path, has_local_code, has_remote_code
988 )
990 if has_remote_code and trust_remote_code:
991 class_ref = config_dict["auto_map"]["AutoConfig"]
File D:\Python310\lib\site-packages\transformers\dynamic_module_utils.py:535, in resolve_trust_remote_code(trust_remote_code, model_name, has_local_code, has_remote_code)
533 trust_remote_code = False
534 elif has_remote_code and TIME_OUT_REMOTE_CODE > 0:
--> 535 signal.signal(signal.SIGALRM, _raise_timeout_error)
536 signal.alarm(TIME_OUT_REMOTE_CODE)
537 while trust_remote_code is None:
AttributeError: module 'signal' has no attribute 'SIGALRM'
```
### Expected behavior
load successfully
This signal is not supported on Windows, so it shouldn't be depended on. | [BUG] module 'signal' has no attribute 'SIGALRM' | https://api.github.com/repos/langchain-ai/langchain/issues/9089/comments | 2 | 2023-08-11T02:54:44Z | 2023-08-11T16:04:49Z | https://github.com/langchain-ai/langchain/issues/9089 | 1,846,125,383 | 9,089 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Sometimes, GPT might not accurately portray characters or tools, so before I ask GPT a question, I need to come up with a set of prompts.
However, in most cases, these prompts may not capture the characteristics of the characters or tools effectively. To address this, I've developed a pip package that collects various prompts. How can I make Chain automatically use my package when it needs to embody characters (as a tool)? Or how can I optimize the pip package to better align with LangChain's requirements?
This is the address of my repository: [https://github.com/limaoyi1/GPT-prompt](https://github.com/limaoyi1/GPT-prompt)
### Motivation
Sometimes, GPT might not accurately portray characters or tools, so before I ask GPT a question, I need to come up with a set of prompts.
### Your contribution
Modify my pip package or submit pr | I would like GPT to quickly and accurately embody any characters or tools. | https://api.github.com/repos/langchain-ai/langchain/issues/9088/comments | 1 | 2023-08-11T02:39:07Z | 2023-11-17T16:05:35Z | https://github.com/langchain-ai/langchain/issues/9088 | 1,846,116,995 | 9,088 |
[
"langchain-ai",
"langchain"
] | ### System Info
"langchain": "^0.0.125"
Windows 11 x64
### Who can help?
@nfcampos
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
When trying to retrieve token usage from LangChain, it returns an empty object. There's already an issue open about that.
However, it gets slightly worse: apparently, this breaks other components too, such as the `ConversationSummaryBufferMemory`, which relies on token counts.
Longer explanation and code on this other issue comment: https://github.com/langchain-ai/langchain/issues/2359#issuecomment-1673926888
### Expected behavior
Completion tokens among others are expected to be returned. Maybe fall back to tiktoken ([py](https://pypi.org/project/tiktoken/)/[js](https://www.npmjs.com/package/@dqbd/tiktoken)) or similar when token count is not available? Just an idea. | ConversationSummaryBufferMemory broken due to missing token usage | https://api.github.com/repos/langchain-ai/langchain/issues/9083/comments | 2 | 2023-08-10T21:57:40Z | 2023-08-11T13:18:49Z | https://github.com/langchain-ai/langchain/issues/9083 | 1,845,949,455 | 9,083 |
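On the fallback idea: the real fix is an actual tokenizer (tiktoken in Python, @dqbd/tiktoken in JS, as linked above), but even a dependency-free estimate is better than an empty usage object when a memory class needs *some* count. A crude sketch of such a heuristic:

```python
def rough_token_count(text, chars_per_token=4):
    """Crude fallback when the API response carries no token usage.

    ~4 characters per token is a common rule of thumb for English text;
    prefer a real tokenizer such as tiktoken when it is available."""
    return max(1, len(text) // chars_per_token)

estimate = rough_token_count("ConversationSummaryBufferMemory needs a token count")
```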
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I think the `ResponseSchema` and `StructuredOutputParser` functionality is really convenient, but unfortunately it often fails in my use case due to the model outputting a slightly wrongly formatted JSON. It's usually due to extra or missing quotes.
Any suggestions on how to solve this?
### Suggestion:
Thinking about writing some extra logic to fix the missing/extra quotes issue, but not sure if anyone has a better solution | Issue: StructuredOutputParser failures due to invalid JSON | https://api.github.com/repos/langchain-ai/langchain/issues/9082/comments | 2 | 2023-08-10T21:38:22Z | 2023-11-17T16:05:39Z | https://github.com/langchain-ai/langchain/issues/9082 | 1,845,932,723 | 9,082 |
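For the fence-and-trailing-comma class of failures, a small pre-parse cleanup goes a long way. This is only a sketch; genuinely missing quotes need heavier repair (e.g. a lenient third-party parser such as `json5`):

```python
import json
import re

def loads_lenient(text):
    """Best-effort parse of LLM output that is almost-valid JSON."""
    # Keep only the outermost {...} span, dropping ```json fences and chatter.
    start, end = text.find("{"), text.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    candidate = text[start:end + 1]
    # Remove trailing commas before a closing brace or bracket.
    candidate = re.sub(r",\s*([}\]])", r"\1", candidate)
    return json.loads(candidate)

parsed = loads_lenient('```json\n{"answer": "42",}\n```')  # → {"answer": "42"}
```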
[
"langchain-ai",
"langchain"
] | ### System Info
I am using langchain in a different language, but I cannot change the following sentence as it is hard-coded into the Agent class
```
thoughts += (
"\n\nI now need to return a final answer based on the previous steps:"
)
```
https://github.com/langchain-ai/langchain/blob/a5a4c53280b4dae8ea2e09430fed88e0cd4e03d2/libs/langchain/langchain/agents/agent.py#L588
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
-
### Expected behavior
- | Agents final thought as parameter | https://api.github.com/repos/langchain-ai/langchain/issues/9072/comments | 2 | 2023-08-10T20:06:59Z | 2023-11-11T21:20:04Z | https://github.com/langchain-ai/langchain/issues/9072 | 1,845,823,677 | 9,072 |
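A sketch of the requested change — making the forced-stop sentence a field instead of a string literal. The class and field names here are illustrative, not langchain's actual API:

```python
DEFAULT_FINAL_ANSWER_TEXT = (
    "\n\nI now need to return a final answer based on the previous steps:"
)

class LocalizableAgent:
    """Toy agent that lets callers override the hard-coded English sentence."""

    def __init__(self, final_answer_text=DEFAULT_FINAL_ANSWER_TEXT):
        self.final_answer_text = final_answer_text

    def return_stopped_response(self, thoughts):
        # Same shape as the hard-coded `thoughts += ...` in agent.py.
        return thoughts + self.final_answer_text

agent = LocalizableAgent(final_answer_text="\n\nAhora debo dar una respuesta final:")
stopped = agent.return_stopped_response("pasos previos")
```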
[
"langchain-ai",
"langchain"
] | https://github.com/langchain-ai/langchain/blob/a5a4c53280b4dae8ea2e09430fed88e0cd4e03d2/libs/langchain/langchain/chains/loading.py#L376
Work was done earlier this year to move all experimental and vulnerable code over to the experimental project. SQLDatabaseChain had (and still has) an open CVE against it so it was moved to the experimental project. However the experimental version of SQLDatabaseChain is being referenced in the production project on the line above. Our organization would like to start using LangChain but we cannot due to the open High severity CVE. Is it possible to either patch the vulnerability in code or complete the move of SQLDatabaseChain from the production project to the experimental? | Referencing experimental version of SQLDatabaseChain from the production project | https://api.github.com/repos/langchain-ai/langchain/issues/9071/comments | 2 | 2023-08-10T20:04:27Z | 2023-10-27T19:19:07Z | https://github.com/langchain-ai/langchain/issues/9071 | 1,845,820,486 | 9,071 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am creating a chatbot that records the conversation history as vectors in my vector store, per user of the chatbot; to do this, I identify the owner of the vectors through metadata.
How can I add this metadata to the vectors if I'm using the vector store as a retriever for the memory in my ConversationChain?
This is my code:
```python
import os
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import VectorStoreRetrieverMemory
from langchain.vectorstores.pgvector import PGVector
from langchain.embeddings.openai import OpenAIEmbeddings
exit_conditions = ("q", "quit", "exit")
metadata={"id": "John Doe", "key": 123}
llm = OpenAI(openai_api_key=os.getenv("OPENAI_API_KEY"), temperature=0)
store = PGVector(
collection_name="chatbot_embeddings",
connection_string=os.getenv("POSTGRES_CONNECTION_STRING"),
embedding_function=OpenAIEmbeddings(),
collection_metadata=metadata
)
while True:
query = input("> ")
if query in exit_conditions:
break
conversation_with_summary = ConversationChain(
llm=llm,
memory=VectorStoreRetrieverMemory(retriever=store.as_retriever()),
verbose=True,
metadata=metadata,
)
print(conversation_with_summary.predict(input=query))
```
### Suggestion:
_No response_ | Issue: I want to attach metadata in my PGVector vector store used as retriever for my ConversationChain memory | https://api.github.com/repos/langchain-ai/langchain/issues/9067/comments | 7 | 2023-08-10T18:33:16Z | 2024-07-18T19:33:13Z | https://github.com/langchain-ai/langchain/issues/9067 | 1,845,693,256 | 9,067 |
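For reference, PGVector's `add_texts` accepts a per-document `metadatas` list, and a `filter` can usually be passed via `as_retriever(search_kwargs={"filter": ...})`. The scoping idea itself, shown with an in-memory stand-in (all names hypothetical):

```python
class ToyUserScopedStore:
    """In-memory stand-in for a vector store whose entries carry metadata,
    so retrieval can be scoped to one chatbot user."""

    def __init__(self):
        self._entries = []

    def add_text(self, text, metadata):
        self._entries.append({"text": text, "metadata": dict(metadata)})

    def retrieve(self, owner_id):
        # Equivalent of a metadata filter like {"id": owner_id}.
        return [e["text"] for e in self._entries
                if e["metadata"].get("id") == owner_id]

store = ToyUserScopedStore()
store.add_text("Hi, I like cats.", {"id": "John Doe", "key": 123})
store.add_text("Bonjour !", {"id": "Jane Roe", "key": 456})
johns_history = store.retrieve("John Doe")  # → ["Hi, I like cats."]
```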
[
"langchain-ai",
"langchain"
] | When I tried to connect Azure OpenAI embeddings with Milvus, the exception below occurred:
Traceback (most recent call last):
File "D:\Corent\AI\LangChain\azure\azure_connection.py", line 29, in <module>
VectorStore = Milvus.from_texts(
^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\vectorstores\milvus.py", line 822, in from_texts
vector_db.add_texts(texts=texts, metadatas=metadatas)
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\vectorstores\milvus.py", line 422, in add_texts
embeddings = self.embedding_func.embed_documents(texts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\embeddings\openai.py", line 478, in embed_documents
return self._get_len_safe_embeddings(texts, engine=self.deployment)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\embeddings\openai.py", line 341, in _get_len_safe_embeddings
token = encoding.encode(
^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\tiktoken\core.py", line 116, in encode
if match := _special_token_regex(disallowed_special).search(text):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: expected string or buffer | TypeError: expected string or buffer | https://api.github.com/repos/langchain-ai/langchain/issues/9057/comments | 3 | 2023-08-10T17:04:41Z | 2024-02-20T02:21:35Z | https://github.com/langchain-ai/langchain/issues/9057 | 1,845,568,042 | 9,057 |
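The traceback bottoms out in tiktoken's `encode`, which only accepts strings, so one of the items in `texts` was presumably not a `str` (e.g. `None` or a number). A defensive sketch to run before embedding (helper name is hypothetical):

```python
def sanitize_texts(texts):
    """Drop Nones and coerce remaining items to str, since tokenizers
    such as tiktoken raise on non-string input."""
    cleaned = []
    for item in texts:
        if item is None:
            continue
        cleaned.append(item if isinstance(item, str) else str(item))
    return cleaned

safe = sanitize_texts(["hello", None, 42, "world"])  # → ["hello", "42", "world"]
```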
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.247
Python 3.10.11
### Who can help?
@chase
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I wanted to adapt the original combine_docs_chain with some customization where a user input (e.g. 'you act like an HR chatbot') would be added to the original prompt. The workaround I found was to rewrite the original prompt files to take in a user input, which works. However, I'm having trouble making the ConversationRetrievalChain work.
1. Adapt the original templates from the source files and load them in:
I wanted to add the possibility to add some user input to the pre-made prompts from langchain.
So I duplicated the original files and adapted the text of the prompts somewhat
```python
from docs.stuff_prompt import create_stuff_prompt_selector
STUFF_PROMPT_SELECTOR = create_stuff_prompt_selector(ui_input= ui_input) #adds the ui_input to the prompt
stuff_prompt = STUFF_PROMPT_SELECTOR.get_prompt(llm) #based on LLM model, it will return either the regular PromptTemplate version or the chat (ChatPromptTemplate). Same working principle as in the source files
combine_docs_chain = load_qa_chain(llm = llm,
chain_type = 'stuff',
prompt = stuff_prompt
) #create a custom combine_docs_chain
```
2. Create the ConversationalRetrievalChain.from_llm() object with the custom combine_docs_chain
Simply create the object. First time might work, but second won't work
```python
from langchain.chains import ConversationalRetrievalChain
qa = ConversationalRetrievalChain.from_llm(
llm,
retriever=vectordb.as_retriever(),
return_source_documents = True,
memory=memory,
verbose = True,
combine_docs_chain = combine_docs_chain
)
```
3. This yields an error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[25], line 2
1 from langchain.chains import ConversationalRetrievalChain
----> 2 qa = ConversationalRetrievalChain.from_llm(
3 llm,
4 retriever=vectordb.as_retriever(),
5 return_source_documents = True, # when return_source_documents is True, a workaround needs to be found https://github.com/langchain-ai/langchain/issues/2303
6 memory=memory,
7 #, get_chat_history=lambda h : h
8 verbose = True,
9 combine_docs_chain = combine_docs_chain
10 #question_generator
11 )
File [c:\Users\XXX\Anaconda3\envs\langchain-env\lib\site-packages\langchain\chains\conversational_retrieval\base.py:357](file:///C:/Users/XXX/Anaconda3/envs/langchain-env/lib/site-packages/langchain/chains/conversational_retrieval/base.py:357), in ConversationalRetrievalChain.from_llm(cls, llm, retriever, condense_question_prompt, chain_type, verbose, condense_question_llm, combine_docs_chain_kwargs, callbacks, **kwargs)
350 _llm = condense_question_llm or llm
351 condense_question_chain = LLMChain(
352 llm=_llm,
353 prompt=condense_question_prompt,
354 verbose=verbose,
355 callbacks=callbacks,
356 )
--> 357 return cls(
358 retriever=retriever,
359 combine_docs_chain=doc_chain,
360 question_generator=condense_question_chain,
361 callbacks=callbacks,
362 **kwargs,
363 )
TypeError: langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain() got multiple values for keyword argument 'combine_docs_chain'
```
### Expected behavior
It is expected to take in the combine_docs_chain I made and initiate the ConversationalRetrievalChain. Is this a cache error? Where the ConversationalRetrievalChain cannot be changed after it has been initiated before? | ConversationalRetrievalChain having trouble with custom 'combine_docs_chain' | https://api.github.com/repos/langchain-ai/langchain/issues/9052/comments | 2 | 2023-08-10T15:45:03Z | 2023-11-16T16:05:16Z | https://github.com/langchain-ai/langchain/issues/9052 | 1,845,452,324 | 9,052 |
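The collision is not a cache issue: `from_llm` builds its own `combine_docs_chain` and then forwards `**kwargs` to the constructor, so passing `combine_docs_chain=` yourself supplies that argument twice. The supported route (visible in the `from_llm` signature shown in the traceback) is `combine_docs_chain_kwargs={"prompt": stuff_prompt}`. A minimal stdlib reproduction of the mechanism, with stand-in functions:

```python
def make_chain(combine_docs_chain, retriever=None):
    """Stand-in for the ConversationalRetrievalChain constructor."""
    return {"combine_docs_chain": combine_docs_chain, "retriever": retriever}

def from_llm(llm, combine_docs_chain_kwargs=None, **kwargs):
    """Stand-in for from_llm: it always builds the doc chain itself."""
    combine_docs_chain_kwargs = combine_docs_chain_kwargs or {}
    doc_chain = {"prompt": combine_docs_chain_kwargs.get("prompt", "default")}
    return make_chain(combine_docs_chain=doc_chain, **kwargs)

ok = from_llm("llm", combine_docs_chain_kwargs={"prompt": "custom"})

try:
    from_llm("llm", combine_docs_chain={"prompt": "mine"})
    collided = False
except TypeError:  # got multiple values for keyword argument 'combine_docs_chain'
    collided = True
```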
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain: v0.0.260
Python: v3.11
### Who can help?
@hwchase17 @eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Sometimes the `Metaphor` tool returns results without publishedDate, which makes the Tool wrapper break due to not guarding against missing values:
```
langchain/utilities/metaphor_search.py", line 169, in _clean_results
"published_date": result["publishedDate"],
KeyError: 'publishedDate'
```
### Expected behavior
Preferably the Tool should guard against missing keys in the API response. | Metaphor Search throws key error when `publishedDate` is missing | https://api.github.com/repos/langchain-ai/langchain/issues/9048/comments | 5 | 2023-08-10T15:22:58Z | 2023-11-19T16:05:16Z | https://github.com/langchain-ai/langchain/issues/9048 | 1,845,415,438 | 9,048
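A defensive rewrite of the mapping in `_clean_results` — `.get` with a default instead of indexing. Only `publishedDate` is confirmed by the traceback; the other field names here are illustrative:

```python
def clean_result(result):
    """Map one raw API result without assuming every key is present."""
    return {
        "title": result.get("title", ""),
        "url": result.get("url", ""),
        "published_date": result.get("publishedDate"),  # None when absent
    }

cleaned = clean_result({"title": "A post", "url": "https://example.com"})
```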
[
"langchain-ai",
"langchain"
] | ### Feature request
If I know exactly what paper(s) I'd like to load, it would be nice to be able to retrieve them by their arxiv IDs. The arxiv library seems to support this:
```python
import arxiv
paper = next(arxiv.Search(id_list=["1605.08386v1"]).results())
```
### Motivation
Sometimes searching by exact paper title still returns the incorrect paper. Searching by ID should be more fool-proof.
### Your contribution
🤷🏻♂️ | ArxivLoader support searching by arxiv id_list | https://api.github.com/repos/langchain-ai/langchain/issues/9047/comments | 3 | 2023-08-10T15:21:01Z | 2023-11-22T16:06:59Z | https://github.com/langchain-ai/langchain/issues/9047 | 1,845,411,321 | 9,047 |
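Under the hood this maps to the arXiv export API's `id_list` parameter. A sketch that only builds the query URL (no network call):

```python
from urllib.parse import urlencode

def arxiv_id_query_url(id_list, max_results=10):
    """Build the arXiv API URL for fetching specific papers by id."""
    params = {"id_list": ",".join(id_list), "max_results": max_results}
    return "http://export.arxiv.org/api/query?" + urlencode(params)

url = arxiv_id_query_url(["1605.08386v1"])
```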
[
"langchain-ai",
"langchain"
] | ### System Info
Latest pip versions
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I tried searching by exact title in the following way:
```python
docs = ArxivLoader(query="MetaGPT: Meta Programming for Multi-Agent Collaborative Framework", load_max_docs=1).load()
```
But the result is incorrect. The search works properly on the arxiv website.
### Expected behavior
Correct paper returned | ArxivLoader incorrect results | https://api.github.com/repos/langchain-ai/langchain/issues/9046/comments | 4 | 2023-08-10T15:18:24Z | 2023-08-10T18:59:41Z | https://github.com/langchain-ai/langchain/issues/9046 | 1,845,405,467 | 9,046 |
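One likely cause is that the query goes through arXiv's default all-fields search; restricting it to the title field (`ti:`) with the phrase quoted is stricter. A sketch of building such a query — no network call, and the `ti:` prefix is part of the arXiv API's query syntax:

```python
from urllib.parse import urlencode

def arxiv_title_query_url(title, max_results=1):
    """Search the `ti:` (title) field with the exact phrase quoted."""
    params = {"search_query": f'ti:"{title}"', "max_results": max_results}
    return "http://export.arxiv.org/api/query?" + urlencode(params)

url = arxiv_title_query_url(
    "MetaGPT: Meta Programming for Multi-Agent Collaborative Framework"
)
```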
[
"langchain-ai",
"langchain"
] | ### System Info
I have checked the issue tracker for similar issues but did not find any. Therefore, I am creating a new one.
I was trying to load a sample document using *UnstructuredFileLoader* from *langchain.document_loaders*. I was using T4 GPU on Google Colab.
I think the issue might be related to some recent upgrades and maybe the solution is trivial. However, I am in a bit of a hurry as I have to submit my project report to my professor :disappointed:
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
loader = UnstructuredFileLoader("/content/FAQ-ShelfLifeItem.docx")
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=50)
docs = text_splitter.split_documents(documents)
hf_embedding = HuggingFaceInstructEmbeddings()
```
### Expected behavior
I would expect no errors like:
```
---------------------------------------------------------------------------
NameError Traceback (most recent call last)
[<ipython-input-14-79f2c22bc34d>](https://localhost:8080/#) in <cell line: 2>()
1 loader = UnstructuredFileLoader("/content/FAQ-ShelfLifeItem.docx")
----> 2 documents = loader.load()
3 text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=50)
4 docs = text_splitter.split_documents(documents)
5
2 frames
[/usr/local/lib/python3.10/dist-packages/unstructured/partition/auto.py](https://localhost:8080/#) in partition(filename, content_type, file, file_filename, url, include_page_breaks, strategy, encoding, paragraph_grouper, headers, skip_infer_table_types, ssl_verify, ocr_languages, pdf_infer_table_structure, xml_keep_tags, data_source_metadata, **kwargs)
170 **kwargs,
171 )
--> 172 elif filetype == FileType.MD:
173 elements = partition_md(
174 filename=filename,
NameError: name 'partition_docx' is not defined
``` | NameError: name 'partition_docx' is not defined | https://api.github.com/repos/langchain-ai/langchain/issues/9039/comments | 2 | 2023-08-10T13:19:48Z | 2023-08-10T15:19:12Z | https://github.com/langchain-ai/langchain/issues/9039 | 1,845,178,321 | 9,039 |
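`partition_docx` not being defined usually means an optional dependency of `unstructured` for .docx files (python-docx, imported as `docx`) isn't importable, so the name was never bound. The usual fix is installing it (e.g. `pip install python-docx`); a small preflight check, with a hypothetical helper name:

```python
from importlib.util import find_spec

def missing_optional_modules(modules):
    """Return the optional modules that are not importable, so they can be
    installed before loading documents."""
    return [name for name in modules if find_spec(name) is None]

missing = missing_optional_modules(["json", "surely_not_installed_xyz"])
```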
[
"langchain-ai",
"langchain"
] | Traceback (most recent call last):
File "D:\Corent\AI\LangChain\azure\azure_connection.py", line 14, in <module>
print(llm("Tell me joke"))
^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\llms\base.py", line 802, in __call__
self.generate(
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\llms\base.py", line 598, in generate
output = self._generate_helper(
^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\llms\base.py", line 504, in _generate_helper
raise e
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\llms\base.py", line 491, in _generate_helper
self._generate(
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\llms\openai.py", line 384, in _generate
response = completion_with_retry(
^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\llms\openai.py", line 116, in completion_with_retry
return _completion_with_retry(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\tenacity\__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\tenacity\__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\tenacity\__init__.py", line 314, in iter
return fut.result()
^^^^^^^^^^^^
File "C:\Users\donbosco\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\donbosco\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\tenacity\__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\langchain\llms\openai.py", line 114, in _completion_with_retry
return llm.client.create(**kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\openai\api_resources\completion.py", line 25, in create
return super().create(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\openai\api_resources\abstract\engine_api_resource.py", line 153, in create
response, _, api_key = requestor.request(
^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\openai\api_requestor.py", line 298, in request
resp, got_stream = self._interpret_response(result, stream)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\openai\api_requestor.py", line 700, in _interpret_response
self._interpret_response_line(
File "D:\Corent\AI\LangChain\azure\venv\Lib\site-packages\openai\api_requestor.py", line 763, in _interpret_response_line
raise self.handle_error_response(
openai.error.InvalidRequestError: The completion operation does not work with the specified model, gpt-35-turbo-16k. Please choose different model and try again. You can learn more about which models can be used with each operation here: https://go.microsoft.com/fwlink/?linkid=2197993. | openai.error.InvalidRequestError: The completion operation does not work with the specified model, gpt-35-turbo-16k. Please choose different model and try again. You can learn more about which models can be used with each operation here: https://go.microsoft.com/fwlink/?linkid=2197993. | https://api.github.com/repos/langchain-ai/langchain/issues/9038/comments | 4 | 2023-08-10T12:42:46Z | 2024-03-11T13:09:44Z | https://github.com/langchain-ai/langchain/issues/9038 | 1,845,112,993 | 9,038 |
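The error is Azure refusing to serve a chat-only model through the legacy completions operation; in langchain terms that means using a chat wrapper (e.g. `AzureChatOpenAI` from `langchain.chat_models`) rather than the completion-style LLM class. A toy routing check — the prefix list is a heuristic, not exhaustive:

```python
CHAT_ONLY_PREFIXES = ("gpt-35-turbo", "gpt-3.5-turbo", "gpt-4")

def needs_chat_endpoint(model_name):
    """True when the model must be called via the chat-completions
    operation instead of the plain completions operation."""
    return model_name.startswith(CHAT_ONLY_PREFIXES)

is_chat = needs_chat_endpoint("gpt-35-turbo-16k")  # → True
```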
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise
### Suggestion:
_No response_ | Streaming ConversationalRetrievalQaChain | https://api.github.com/repos/langchain-ai/langchain/issues/9036/comments | 0 | 2023-08-10T10:51:38Z | 2023-08-10T10:52:42Z | https://github.com/langchain-ai/langchain/issues/9036 | 1,844,935,577 | 9,036 |
[
"langchain-ai",
"langchain"
] | ### Feature request
What is the exact difference between the two graph based chains in langchain 1 )GraphCypherQAChain and 2)GraphQAChain. What are the pros and cons of each, and when to use one over another?
### Motivation
What is the exact difference between the two graph based chains in langchain 1 )GraphCypherQAChain and 2)GraphQAChain. What are the pros and cons of each, and when to use one over another?
### Your contribution
What is the exact difference between the two graph based chains in langchain 1 )GraphCypherQAChain and 2)GraphQAChain. What are the pros and cons of each, and when to use one over another? | Functional difference between GraphCypherQAChain and GraphQAChain | https://api.github.com/repos/langchain-ai/langchain/issues/9035/comments | 5 | 2023-08-10T09:50:43Z | 2023-11-16T16:05:29Z | https://github.com/langchain-ai/langchain/issues/9035 | 1,844,832,758 | 9,035 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I've been trying to install langchain using `pip install langchain[all]` in a conda environment that already had the packages to run the Hugging Face text-generation-inference module.
Let me know how to handle this issue. While running the make command I'm facing this error.
```python
Traceback (most recent call last):
File "/home/sri/miniconda3/envs/text-generation-inference/bin/make", line 5, in <module>
from scripts.proto import main
ModuleNotFoundError: No module named 'scripts'
```
### Suggestion:
_No response_ | Issue: make command is not working after installing langchain | https://api.github.com/repos/langchain-ai/langchain/issues/9033/comments | 6 | 2023-08-10T08:39:47Z | 2023-12-01T16:08:23Z | https://github.com/langchain-ai/langchain/issues/9033 | 1,844,699,377 | 9,033 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello guys !
I would like to build a prompt which includes a system message, context data from a vector store, and the final question. What is the best way to do that?
Right now, I'm doing this:
```python
def get_prompt(system_prompt):
prompt = SystemMessage(content=system_prompt)
new_prompt = (
prompt
+ "--- \n\n" + "le contexte:" + "\n"
+ "{context}"
+ '\n\n --- \n\n ' + "la question: \n\n"
+ "{question}"
)
return PromptTemplate(
template=new_prompt, input_variables=["context", "question"]
)
```
But I get this error: **KeyError: 'template'**
Is there a way to do what I want? I'm searching for a solution where I don't need to add the context myself, because it should already be managed by the vector store retriever.
### Suggestion:
_No response_ | Issue: How to build a prompt that include SystemMessage, vector store context and final question | https://api.github.com/repos/langchain-ai/langchain/issues/9031/comments | 4 | 2023-08-10T08:31:32Z | 2023-11-16T16:05:50Z | https://github.com/langchain-ai/langchain/issues/9031 | 1,844,686,751 | 9,031 |
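The `KeyError: 'template'` comes from handing `PromptTemplate` a composed prompt object instead of a string — the `prompt + ...` pipeline on a `SystemMessage` yields a chat-prompt object, not a `str`. One way out is to assemble a plain template string first (a sketch; alternatively `ChatPromptTemplate.from_messages` can combine a system message with a human message template):

```python
def build_template(system_prompt):
    """Assemble a single template *string*; PromptTemplate expects a str
    with {context} and {question} placeholders left unfilled."""
    return (
        system_prompt
        + "\n\n---\n\nle contexte:\n{context}"
        + "\n\n---\n\nla question:\n\n{question}"
    )

template = build_template("Tu es un assistant RH.")
filled = template.format(context="documents ici", question="Qui es-tu ?")
```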
[
"langchain-ai",
"langchain"
] | ## Issue with current `dark theme`:
Recently the documentation has been much improved aesthetically and functionally. But when we switch to dark mode, the search bar **on the top right** and the **mendable chat screen** remain in the light theme.

I think making them match the dark theme will even out the overall experience ✌🏻 | The "search bar" and "mendable chat" should also follow the dark theme. | https://api.github.com/repos/langchain-ai/langchain/issues/9028/comments | 4 | 2023-08-10T06:38:18Z | 2023-12-05T04:41:49Z | https://github.com/langchain-ai/langchain/issues/9028 | 1,844,513,797 | 9,028
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain 0.0.260, Mac M2, Miniforge3, Python 3.9
### Who can help?
@ves
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.vectorstores.elastic_vector_search import ElasticKnnSearch
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain import ElasticVectorSearch
from elasticsearch import Elasticsearch
model_name = "sentence-transformers/all-mpnet-base-v2"
embedding = HuggingFaceEmbeddings(model_name=model_name)
es_url = "http://localhost:9221"
es = Elasticsearch(es_url)
# prepare db
texts = ["This is a test document", "This is another test document"]
metadatas = [{}, {}]
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.create_documents(texts, metadatas=metadatas)
ElasticVectorSearch.from_documents(documents, embedding, index_name="bug_demo", elasticsearch_url=es_url)
# chat with db
memory = ConversationBufferMemory(memory_key="chat_history", output_key="answer", return_messages=True)
db = ElasticKnnSearch(embedding=embedding, es_connection=es, index_name="bug_demo")
llm_model="gpt-3.5-turbo"
llm = ChatOpenAI(model_name=llm_model, temperature=0)
chain = ConversationalRetrievalChain.from_llm(
llm,
db.as_retriever(),
memory=memory,
return_source_documents=True
)
user_input = "What is love?"
output = chain({"question": user_input})
```
### Expected behavior
The previous code used to work correctly with langchain 0.0.245.
After merge https://github.com/langchain-ai/langchain/pull/8180, it produces this error:
```
>>> output = chain({"question": user_input})
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/miguel/miniforge3/envs/fenonto_qa3/lib/python3.9/site-packages/langchain/chains/base.py", line 258, in __call__
raise e
File "/Users/miguel/miniforge3/envs/fenonto_qa3/lib/python3.9/site-packages/langchain/chains/base.py", line 252, in __call__
self._call(inputs, run_manager=run_manager)
File "/Users/miguel/miniforge3/envs/fenonto_qa3/lib/python3.9/site-packages/langchain/chains/conversational_retrieval/base.py", line 135, in _call
docs = self._get_docs(new_question, inputs, run_manager=_run_manager)
File "/Users/miguel/miniforge3/envs/fenonto_qa3/lib/python3.9/site-packages/langchain/chains/conversational_retrieval/base.py", line 287, in _get_docs
docs = self.retriever.get_relevant_documents(
File "/Users/miguel/miniforge3/envs/fenonto_qa3/lib/python3.9/site-packages/langchain/schema/retriever.py", line 193, in get_relevant_documents
raise e
File "/Users/miguel/miniforge3/envs/fenonto_qa3/lib/python3.9/site-packages/langchain/schema/retriever.py", line 186, in get_relevant_documents
result = self._get_relevant_documents(
File "/Users/miguel/miniforge3/envs/fenonto_qa3/lib/python3.9/site-packages/langchain/vectorstores/base.py", line 504, in _get_relevant_documents
docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
File "/Users/miguel/miniforge3/envs/fenonto_qa3/lib/python3.9/site-packages/langchain/vectorstores/elastic_vector_search.py", line 472, in similarity_search
results = self.knn_search(query=query, k=k, **kwargs)
File "/Users/miguel/miniforge3/envs/fenonto_qa3/lib/python3.9/site-packages/langchain/vectorstores/elastic_vector_search.py", line 520, in knn_search
knn_query_body = self._default_knn_query(
File "/Users/miguel/miniforge3/envs/fenonto_qa3/lib/python3.9/site-packages/langchain/vectorstores/elastic_vector_search.py", line 460, in _default_knn_query
raise ValueError(
ValueError: Either `query_vector` or `model_id` must be provided, but not both.
```
My guess is that, after the refactoring, knn_search no longer takes the embedding parameter into account. If it did, it would use it to create the query_vector. @jeffvestal any clue?
BTW, I have also tried preparing the db using the new ElasticKnnSearch#from_texts method.
```python
from langchain.vectorstores.elastic_vector_search import ElasticKnnSearch
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain import ElasticVectorSearch
from elasticsearch import Elasticsearch
model_name = "sentence-transformers/all-mpnet-base-v2"
embedding = HuggingFaceEmbeddings(model_name=model_name)
es_url = "http://localhost:9221"
es = Elasticsearch(es_url)
index_name = "test_bug3"
# prepare db
texts = ["This is a test document", "This is another test document"]
metadatas = [{}, {}]
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
documents = text_splitter.create_documents(texts, metadatas=metadatas)
knnvectorsearch = ElasticKnnSearch.from_texts(
texts=texts,
embedding=embedding,
index_name= index_name,
vector_query_field='vector',
query_field='text',
dims=768,
es_connection=es
)
# Test `add_texts` method
texts2 = ["Hello, world!", "Machine learning is fun.", "I love Python."]
knnvectorsearch.add_texts(texts2)
# chat with db
memory = ConversationBufferMemory(memory_key="chat_history", output_key="answer", return_messages=True)
db = ElasticKnnSearch(embedding=embedding, es_connection=es, index_name="index_name")
llm_model="gpt-3.5-turbo"
llm = ChatOpenAI(model_name=llm_model, temperature=0)
chain = ConversationalRetrievalChain.from_llm(
llm,
db.as_retriever(),
memory=memory,
return_source_documents=True
)
user_input = "Who is Janine?"
output = chain({"question": user_input})
```
But I get the same error. | ElasticKnnSearch: ValueError: Either `query_vector` or `model_id` must be provided, but not both. | https://api.github.com/repos/langchain-ai/langchain/issues/9022/comments | 4 | 2023-08-10T04:31:24Z | 2023-09-19T15:36:38Z | https://github.com/langchain-ai/langchain/issues/9022 | 1,844,395,059 | 9,022 |
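The guard that raises sits in `_default_knn_query`. A sketch of the condition and of an Elasticsearch 8-style kNN body it could build (field names abbreviated; the error message is the one confirmed by the traceback). The report above suggests the chain ends up here with *neither* argument, because `similarity_search` forwards the raw query text without embedding it:

```python
def default_knn_query(query_vector=None, model_id=None,
                      field="vector", k=4, num_candidates=50):
    """Exactly one of query_vector / model_id must be supplied."""
    if (query_vector is None) == (model_id is None):
        raise ValueError(
            "Either `query_vector` or `model_id` must be provided, but not both."
        )
    knn = {"field": field, "k": k, "num_candidates": num_candidates}
    if query_vector is not None:
        knn["query_vector"] = query_vector
    else:
        # Server-side embedding via a deployed model.
        knn["query_vector_builder"] = {
            "text_embedding": {"model_id": model_id, "model_text": ""}
        }
    return {"knn": knn}

body = default_knn_query(query_vector=[0.1, 0.2, 0.3])
```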
[
"langchain-ai",
"langchain"
] | ### System Info
Python type checking complains about a parameter `client` with a simple setup like this:
<img width="170" alt="Screenshot 2023-08-10 at 12 20 53 PM" src="https://github.com/langchain-ai/langchain/assets/1828968/355100b6-11dc-4f73-beec-8059aba004f9">
<img width="534" alt="Screenshot 2023-08-10 at 12 03 14 PM" src="https://github.com/langchain-ai/langchain/assets/1828968/9eb22e4e-d78e-4b5b-ae98-6e649bffc14f">
According to documentation, no fields are required to initialize the model:
https://python.langchain.com/docs/modules/model_io/models/chat/
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Version: langchain = "^0.0.250".
Python 3.11.4
Install Pylance. Use VSCode to run typecheck.
### Expected behavior
It should pass type checking based on documentation. | Typing issue: Undocumented parameter `client` | https://api.github.com/repos/langchain-ai/langchain/issues/9021/comments | 2 | 2023-08-10T04:24:33Z | 2023-11-16T16:05:23Z | https://github.com/langchain-ai/langchain/issues/9021 | 1,844,389,967 | 9,021 |
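The underlying friction: a field declared without a default looks required to a strict type checker, even when the library fills `client` in at runtime. The type-checker-friendly shape, shown with a stdlib dataclass (hypothetical class, not langchain's):

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class ToyChatModel:
    """Give `client` a default so `ToyChatModel()` passes strict checking."""
    client: Any = None
    temperature: float = 0.7

model = ToyChatModel(temperature=0)
```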
[
"langchain-ai",
"langchain"
] | ### System Info
windows 10
python 3.10.12
### Who can help?
@eyurtsev There is a bug in the `__add` function in `langchain\vectorstores\faiss.py`: when the parameter `ids` is provided and contains duplicates, adding to `index_to_docstore_id` raises an error and the program terminates. By that point, however, the text embeddings have already been added to the index, so the ids in the index no longer line up with the ids in `index_to_docstore_id` and the texts can no longer be matched correctly.
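The failure mode described above can be sketched in plain Python (a simplified stand-in for the real `__add` logic, with a hypothetical duplicate check — not the actual FAISS wrapper code):

```python
index = []                 # stands in for the FAISS index of embeddings
index_to_docstore_id = {}  # index position -> document id

def add(texts, ids):
    # Step 1: embeddings are appended to the index first...
    index.extend(texts)
    starting = len(index_to_docstore_id)
    # Step 2: ...and only then are the ids registered, so an error raised
    # here leaves the two structures out of sync.
    for offset, _id in enumerate(ids):
        if _id in index_to_docstore_id.values():
            raise ValueError(f"duplicate id: {_id}")
        index_to_docstore_id[starting + offset] = _id

try:
    add(["doc a", "doc b"], ["same-id", "same-id"])
except ValueError:
    pass

misaligned = len(index) != len(index_to_docstore_id)  # True after the failure
```

Deduplicating (or validating) `ids` before any vectors are added to the index would avoid this partial-write state.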
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [x] Document Loaders
- [x] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
None
### Expected behavior
None | Duplicate ids passed into langchain\vectorstores\fais.__add will result in a mismatch between faiss and document's index | https://api.github.com/repos/langchain-ai/langchain/issues/9019/comments | 4 | 2023-08-10T03:52:27Z | 2023-11-17T16:05:59Z | https://github.com/langchain-ai/langchain/issues/9019 | 1,844,365,609 | 9,019 |
[
"langchain-ai",
"langchain"
] | ### System Info
# System Info:
Langchain==0.0.260
Python==3.10.10
### Who can help?
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
When using the following script from official documentation:
```python
from langchain.chat_models import ChatOpenAI
from langchain.retrievers.multi_query import MultiQueryRetriever
question = "What are the approaches to Task Decomposition?"
llm = ChatOpenAI(temperature=0)
retriever_from_llm = MultiQueryRetriever.from_llm(
retriever=vectordb.as_retriever(), llm=llm
)
```
ImportError arise:
```bash
File /opt/conda/envs/gpt-env/lib/python3.10/site-packages/langchain/retrievers/multi_query.py:6
2 from typing import List
4 from pydantic import BaseModel, Field
----> 6 from langchain.callbacks.manager import CallbackManagerForRetrieverRun
7 from langchain.chains.llm import LLMChain
8 from langchain.llms.base import BaseLLM
ImportError: cannot import name 'CallbackManagerForRetrieverRun' from 'langchain.callbacks.manager' (/opt/conda/envs/gpt-env/lib/python3.10/site-packages/langchain/callbacks/manager.py)
```
### Expected behavior
There should be no issue when importing MultiQueryRetriever and using the script from the official documentation. | ImportError when importing MultiQueryRetriever | https://api.github.com/repos/langchain-ai/langchain/issues/9018/comments | 1 | 2023-08-10T03:04:59Z | 2023-08-11T06:13:41Z | https://github.com/langchain-ai/langchain/issues/9018 | 1,844,332,543 | 9,018 |
[
"langchain-ai",
"langchain"
] | ### Feature request
It would be good if create_pandas_dataframe_agent could return a table with a scroll bar, instead of us having to add HTML code on top of the generated code.
### Motivation
Being able to capture table data is a key element of visualizations; currently create_pandas_dataframe_agent doesn't generate a table visualization with a scroll bar.
### Your contribution
I can try, but might need help | For the Visual currently it doesn't display or show tabular format with scroll bar | https://api.github.com/repos/langchain-ai/langchain/issues/9017/comments | 4 | 2023-08-10T01:55:13Z | 2023-11-17T16:06:07Z | https://github.com/langchain-ai/langchain/issues/9017 | 1,844,278,830 | 9,017 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version 0.0.260, Python
### Who can help?
@agola11
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Both chunks of code gave me the error.
Chunk 1
```python
from langchain.chains import RetrievalQAWithSourcesChain

user_input = "How do LLM Powered Autonomous Agents work?"
qa_chain = RetrievalQAWithSourcesChain.from_chain_type(llm, retriever=web_research_retriever)
result = qa_chain({"question": user_input})
result
```
Chunk 2
```python
import logging

logging.basicConfig()
logging.getLogger("langchain.retrievers.web_research").setLevel(logging.INFO)

user_input = "How do LLM Powered Autonomous Agents work?"
docs = web_research_retriever.get_relevant_documents(user_input)
docs
```
Both gave me this error
```
[/usr/lib/python3.10/asyncio/runners.py](https://localhost:8080/#) in run(main, debug)
     31     """
     32     if events._get_running_loop() is not None:
---> 33         raise RuntimeError(
     34             "asyncio.run() cannot be called from a running event loop")
     35 

RuntimeError: asyncio.run() cannot be called from a running event loop
```
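The root cause can be reproduced outside langchain: Jupyter/Colab already run an event loop, and `asyncio.run()` refuses to be called from inside one. A minimal sketch:

```python
import asyncio

async def outer():
    # Simulates code that calls asyncio.run() while a loop (e.g. the
    # notebook's own loop) is already running.
    try:
        asyncio.run(asyncio.sleep(0))
    except RuntimeError as exc:
        return str(exc)
    return None

message = asyncio.run(outer())  # outer() itself executes inside a running loop
```

In a notebook, a commonly suggested workaround is `import nest_asyncio; nest_asyncio.apply()` before invoking the retriever (nest_asyncio patches the loop to allow re-entry); whether that is appropriate here depends on the retriever's internals.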
### Expected behavior
In both cases I just wanted it to produce website text with a source. | WebResearchRetriever - RuntimeError: asyncio.run() cannot be called from a running event loop | https://api.github.com/repos/langchain-ai/langchain/issues/9014/comments | 6 | 2023-08-10T00:04:39Z | 2024-01-17T19:07:42Z | https://github.com/langchain-ai/langchain/issues/9014 | 1,844,201,292 | 9,014 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
DOC: API Reference: Classes are not sorted in the root page. It is really hard to search for unsorted items.
For example:
```
...
agents.agent_iterator.AgentExecutorIterator(...)
agents.agent.ExceptionTool
agents.agent.LLMSingleActionAgent
agents.tools.InvalidTool
agents.schema.AgentScratchPadChatPromptTemplate
agents.agent_types.AgentType(value[, names, ...])
agents.xml.base.XMLAgent
agents.xml.base.XMLAgentOutputParser
...
```
### Idea or request for content:
_No response_ | DOC: API Reference: Classes are not sorted | https://api.github.com/repos/langchain-ai/langchain/issues/9012/comments | 7 | 2023-08-09T23:14:10Z | 2023-11-14T19:03:46Z | https://github.com/langchain-ai/langchain/issues/9012 | 1,844,166,468 | 9,012 |
[
"langchain-ai",
"langchain"
] | ### System Info
After I upgraded my Langchain to version 0.0.260, the ConversationalChatAgent returns the action plan as the chat response.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce: request something that will make a ConversationalChatAgent call a tool.
### Expected behavior
The agent should pass the action and action_input to the tool and not return it as a final response to the user. | Conversational agent returning action plan as response after 0.0.260 release | https://api.github.com/repos/langchain-ai/langchain/issues/9011/comments | 3 | 2023-08-09T22:54:46Z | 2023-11-19T16:05:37Z | https://github.com/langchain-ai/langchain/issues/9011 | 1,844,150,816 | 9,011 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello - I receive a UnicodeDecodeError when running the below code:
```python
from dotenv import load_dotenv
load_dotenv()

from langchain.llms import OpenAI
from langchain.vectorstores import Chroma
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.chains import RetrievalQA
from langchain.document_loaders import TextLoader

llm = OpenAI(temperature=0.1)

loader = TextLoader("./Training/test2.txt")
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_documents(texts, embeddings)
qa = RetrievalQA.from_chain_type(llm=OpenAI(), chain_type="stuff", retriever=docsearch.as_retriever(search_kwargs={"k": 1}))
query = "How much did the judge fine Twitter?"
qa.run(query)
```
The test2.txt is a 1,600-word UTF-8 encoded file. Here is the error text that I receive:
```
Traceback (most recent call last):
  File "C:\Users\Admin\PycharmProjects\pythonProject\Misc\Testing.py", line 17, in <module>
    documents = loader.load()
                ^^^^^^^^^^^^^
  File "C:\Users\Admin\PycharmProjects\pythonProject\Misc\venv\Lib\site-packages\langchain\document_loaders\text.py", line 18, in load
    text = f.read()
           ^^^^^^^^
  File "C:\Users\Admin\AppData\Local\Programs\Python\Python311\Lib\encodings\cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 1079: character maps to <undefined>
```
Any advice would be greatly appreciated.
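The traceback shows the file being decoded with Windows' default cp1252 codec, and byte 0x9d (part of a UTF-8 curly quote, for example) has no cp1252 mapping. A minimal sketch of the mismatch:

```python
# A right curly quote encodes to bytes E2 80 9D in UTF-8; 0x9D is
# undefined in cp1252, which is exactly the failing byte in the traceback.
data = "a \u201cquoted\u201d word".encode("utf-8")

try:
    data.decode("cp1252")
    decoded_with_cp1252 = True
except UnicodeDecodeError:
    decoded_with_cp1252 = False

text = data.decode("utf-8")  # decoding as UTF-8 succeeds
```

`TextLoader` accepts an `encoding` argument, so `TextLoader("./Training/test2.txt", encoding="utf-8")` (or `autodetect_encoding=True`, if your version supports it) should avoid the platform-default decode.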
### Suggestion:
_No response_ | Issue: UnicodeDecodeError when running .txt files using TextLoader | https://api.github.com/repos/langchain-ai/langchain/issues/9005/comments | 4 | 2023-08-09T20:48:00Z | 2024-02-04T06:16:34Z | https://github.com/langchain-ai/langchain/issues/9005 | 1,843,990,236 | 9,005 |
[
"langchain-ai",
"langchain"
] | ### System Info
python==3.11
langchain==0.0.246
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
# 1.
db_chain = SQLDatabaseChain.from_llm(chat_llm, db, use_query_checker=True, verbose=True)  # , query_prompt=PROMPT

# 2.
chat_llm = AzureChatOpenAI(
    deployment_name="gpt-35-turbo-16k",
    model_name="gpt-35-turbo-16k",
)

# 3.
tools = [
    Tool(
        func=db_chain.run,
        name="Database Search",
        description="useful for when you need to lookup specific queries on the ATLASIQ Schema",
    )
]

# 4.
prefix = """You are an expert Snowflake SQL data analyst, who writes queries with perfect syntax,
and performs necessary computations on that data in the AtlasIQ Schema. Your goal is to answer the following questions as best you can.
When there are multiple results for the same quantity, return all of them. DO NOT hallucinate an answer if there is no result."""
suffix = """Begin!"
{chat_history}
Question: {input}
{agent_scratchpad}"""

prompt = ZeroShotAgent.create_prompt(
    tools,
    prefix=prefix,
    suffix=suffix,
    input_variables=["input", "chat_history", "agent_scratchpad"],
)
memory = ConversationBufferMemory(memory_key="chat_history")
llm_chain = LLMChain(llm=chat_llm, prompt=prompt)  # , callbacks=[custom_handler.short_chain(
new_zero_agent = ZeroShotAgent(llm_chain=llm_chain, tools=tools, verbose=True)
agent_chain = AgentExecutor.from_agent_and_tools(
    agent=new_zero_agent, tools=tools, verbose=True, memory=memory, handle_parsing_errors=True)
agent_chain.run(query)
```
### Expected behavior
I expect the query to be synthesized and executed, but instead I get an unnecessary explanation of why the query is correct:

> The original query does not contain any of the mentioned mistakes. Therefore, here is the reproduced original query:
>
> SELECT SUM(total_operating_expenses)
> FROM expense_income_table
> WHERE market = 'Seattle'
> AND period = '202305'
>
> The original query provided does not contain any of the mentioned mistakes. Hence, the original query is reproduced below:
>
> ```sql
> SELECT SUM(total_operating_expenses)
> FROM expense_income_table
> WHERE market = 'Seattle
> ```

<- which causes the output parsing error | Chat LLM adds an extra sentence in front of SQL queries, produces Output Parsing Error | https://api.github.com/repos/langchain-ai/langchain/issues/9001/comments | 4 | 2023-08-09T20:26:30Z | 2024-02-11T16:17:16Z | https://github.com/langchain-ai/langchain/issues/9001 | 1,843,961,304 | 9,001
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I am using the default configuration for RecursiveCharacterTextSplitter:

```python
text_splitter = RecursiveCharacterTextSplitter()
chunks = text_splitter.create_documents([full_text])
```
However, I would like to know what values are being used in the defaults. I would like to explicitly pass values, because for certain documents it seems to be not splitting and downstream I am getting from OpenAI's embeddings endpoint `This model's maximum context length is 8191 tokens, however you requested 11451 tokens (11451 in your prompt; 0 for the completion). Please reduce your prompt; or completion length.`
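For reference, in langchain versions around this release the base `TextSplitter` defaults appear to be `chunk_size=4000` and `chunk_overlap=200`, measured in characters via `len`, with `RecursiveCharacterTextSplitter` trying separators `["\n\n", "\n", " ", ""]` in order — so passing them explicitly would look like `RecursiveCharacterTextSplitter(chunk_size=4000, chunk_overlap=200)`. The chunk-size/overlap semantics can be sketched without the library (simplified: the real splitter cuts at separators rather than fixed offsets):

```python
def split_with_overlap(text: str, chunk_size: int = 4000, chunk_overlap: int = 200):
    # Fixed-stride approximation of chunk_size/chunk_overlap: each chunk is
    # at most chunk_size characters and shares chunk_overlap characters
    # with its neighbour.
    step = chunk_size - chunk_overlap
    return [text[i : i + chunk_size] for i in range(0, max(len(text) - chunk_overlap, 1), step)]

text = "".join(chr(97 + i % 26) for i in range(10_000))
chunks = split_with_overlap(text)
```

Even so, an embedding request of 11451 tokens suggests the text is not being split as expected; passing explicit, smaller values and checking `len(chunk.page_content)` for each chunk is a quick way to verify.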
### Idea or request for content:
I would like to know what values are being used as default for RecursiveCharacterTextSplitter | DOC: What are the default values to RecursiveCharacterTextSplitter | https://api.github.com/repos/langchain-ai/langchain/issues/8999/comments | 0 | 2023-08-09T20:03:15Z | 2023-08-09T20:06:51Z | https://github.com/langchain-ai/langchain/issues/8999 | 1,843,925,953 | 8,999 |
[
"langchain-ai",
"langchain"
] | ### System Info
When importing the `AirbyteStripeLoader` introduced in v0.0.259 it throws an `module not found error`.
```
from libs.langchain.langchain.utils.utils import guard_import
ModuleNotFoundError: No module named 'libs'
```
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.document_loaders.airbyte import AirbyteStripeLoader
config = {
# your stripe configuration
}
loader = AirbyteStripeLoader(config=config, stream_name="invoices") # check the documentation linked above for a list of all streams
```
### Expected behavior
Import the document should not throw an error. | `AirbyteStripeLoader` throws an error | https://api.github.com/repos/langchain-ai/langchain/issues/8996/comments | 5 | 2023-08-09T19:43:49Z | 2023-08-09T20:19:05Z | https://github.com/langchain-ai/langchain/issues/8996 | 1,843,898,676 | 8,996 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hey y'all, I have this code. I wrote it a month back, and when I run it now it gives me an error.

**CODE**:

```python
import os
from langchain.llms import OpenAI
from langchain import PromptTemplate
from langchain.chains import LLMChain
from constant import openai_key
import streamlit as st

os.environ['OPENAI_API_KEY'] = openai_key

llm = OpenAI(temperature=0.8)

multiple_inputs = PromptTemplate(
    input_variables=["name", "age", "location"],
    template="My name is {name}, I am {age} years old and live in {location}."
)

chain = LLMChain(llm=llm, prompt=multiple_inputs)

output = chain.run(
    name="John",
    age=30,
    location="New York"
)

print(output)
```

**ERROR**:

```
Traceback (most recent call last):
  File ".\questions.py", line 19, in <module>
    output = chain.run(
TypeError: run() got an unexpected keyword argument 'name'
```
### Suggestion:
_No response_ | run() got an unexpected keyword argument 'name' | https://api.github.com/repos/langchain-ai/langchain/issues/8990/comments | 7 | 2023-08-09T18:31:56Z | 2023-08-13T16:01:55Z | https://github.com/langchain-ai/langchain/issues/8990 | 1,843,798,272 | 8,990 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```from langchain.chains import ConversationRetrievalChain```
this is what I have and i am using the latest langchain version
### Suggestion:
_No response_ | Issue: ConversationRetrievalChain" is not accessedPylance | https://api.github.com/repos/langchain-ai/langchain/issues/8987/comments | 2 | 2023-08-09T18:21:55Z | 2023-11-16T16:06:44Z | https://github.com/langchain-ai/langchain/issues/8987 | 1,843,784,776 | 8,987 |
[
"langchain-ai",
"langchain"
] | ### System Info
Ubuntu 22
Langchain version
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Current behaviour:
I'm using the [RAG example](https://github.com/pinecone-io/examples/blob/master/docs/langchain-retrieval-augmentation.ipynb) and feeding my own database of [1 football article](https://www.bbc.co.uk/sport/football/65984561).
The Pinecone DB is a brand new database and only contains vectors from the football article.
When I do qa_with_sources(query="Who is Sachin Tendulkar") it provides me an answer and a link as a reference. This is not the expected behavior.
I have not fed any article about Sachin Tendulkar to the database. How and why/where from is it getting the answer and the link?
Now, If I add more articles only about football, push the vector count in the database to around 90. And then I ask the same question, query="Who is Sachin Tendulkar", it is not able to give the answer, which is the expected behavior.
I wonder if the fullness of the vector db makes it more accurate? Has anyone else seen this?
Repro:
Create a new Vector DB on pinecone. Use [this example](https://github.com/pinecone-io/examples/blob/master/docs/langchain-retrieval-augmentation.ipynb) to feed in a [football](https://www.bbc.com/sport/football/65984561) article.
Run query="Who is Sachin Tendulkar". Note the result contains a reference and an answer. (Unexpected)
Now, create a more full db, with more articles and ask the same query. Note that the results is empty as expected.
### Expected behavior
Since the database does not contain any article or mention of Sachin Tendulkar, it should not provide any answer, and instead say "This is not mentioned in the database".
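One plausible explanation (a sketch, not Pinecone's actual scoring code): top-k vector search always returns the k nearest vectors even when nothing is semantically close, and with only one article indexed, the retrieved (irrelevant) context plus the LLM's own world knowledge can still yield a confident-looking answer. Filtering by similarity score is the usual guard:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

# Toy 2-d "embeddings": everything indexed is about football.
docs = {"football article": [1.0, 0.0], "another football doc": [0.9, 0.1]}
query = [0.0, 1.0]  # an off-topic query (e.g. about cricket)

ranked = sorted(docs, key=lambda d: cosine(docs[d], query), reverse=True)
top1 = ranked[0]                        # something is always returned...
best_score = cosine(docs[top1], query)  # ...but with a low score
```

With langchain retrievers, `search_type="similarity_score_threshold"` together with a `score_threshold` in `search_kwargs` (where the vector store supports it) is the analogous guard.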
| Out of dataset answer and reference link provided for RAG example | https://api.github.com/repos/langchain-ai/langchain/issues/8986/comments | 3 | 2023-08-09T18:20:23Z | 2023-11-16T16:06:02Z | https://github.com/langchain-ai/langchain/issues/8986 | 1,843,782,612 | 8,986 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: 0.0.257
Python version: 3.11
Opensearch-py version: 2.3.0
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have this `Opensearch Vector DB `and I maintain multiple indices that start with "index-" (for example index-pdf, index-html). I have indexed sets of documents to each of the indices using Langchain's OpenSearchVectorSearch.from_documents() function.
Now, I want to run some queries across multiple indices. An example would be "What is the title of each document?". When I execute the code below, it either outputs an answer from only the first or last matching index, or says it cannot find the answer. Here is my current code:
```
from langchain.vectorstores import OpenSearchVectorSearch
from langchain.chains import RetrievalQA, ConversationalRetrievalChain
import os
from langchain.embeddings import OpenAIEmbeddings
from langchain.chat_models import ChatOpenAI
embeddings = OpenAIEmbeddings()
def get_llm(model):
llm = ChatOpenAI(model_name=model.lower(), verbose=False, temperature=0)
return llm
docsearch = OpenSearchVectorSearch(opensearch_url="http://localhost:9200",
index_name="index-*",
embedding_function=embeddings)
chain = ConversationalRetrievalChain.from_llm(
llm=get_llm("gpt-3.5-turbo"),
retriever=docsearch.as_retriever(),
)
result = chain({'question': 'What is the title of each document?', "chat_history": []})
response = result['answer']
print(response)
```
The response I get is either of the format "The document provided does not list different titles..."
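If the wildcard index pattern turns out to be the limiting factor, one workaround to consider is querying each `index-*` index separately and merging the hits by score before handing them to the chain. A plain-Python sketch of the merge step (the index names and scores are hypothetical, and this is not the OpenSearchVectorSearch API):

```python
# Hypothetical per-index hits: (text, similarity score).
hits_per_index = {
    "index-pdf": [("pdf title page", 0.91), ("pdf body", 0.40)],
    "index-html": [("html title tag", 0.88)],
}

merged = sorted(
    (hit for hits in hits_per_index.values() for hit in hits),
    key=lambda hit: hit[1],
    reverse=True,
)
top2 = [text for text, _score in merged[:2]]  # best hits across all indices
```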
### Expected behavior
The response should span multiple indices. | Issue in running ConversationalRetrievalChain query across multiple Opensearch indices with wildcard specification | https://api.github.com/repos/langchain-ai/langchain/issues/8985/comments | 5 | 2023-08-09T18:08:27Z | 2023-11-29T16:08:15Z | https://github.com/langchain-ai/langchain/issues/8985 | 1,843,767,365 | 8,985
[
"langchain-ai",
"langchain"
] | ### Feature request
Many packages use pydantic versions that are much more recent than the one in langchain.
### Motivation
It is very difficult to use langchain with other packages that use recent versions of pydantic
### Your contribution
Just wanting to signal this as a needed feature | pydantic upgrade | https://api.github.com/repos/langchain-ai/langchain/issues/8984/comments | 6 | 2023-08-09T18:01:33Z | 2023-11-24T16:07:25Z | https://github.com/langchain-ai/langchain/issues/8984 | 1,843,758,645 | 8,984 |
[
"langchain-ai",
"langchain"
] | ### System Info
Mac OS 14.0
M1 Max 64GB ram
VSCode 1.80.2
Jupyter Notebook
Python 3.11.4
Llama-cpp-python using `!CMAKE_ARGS="-DLLAMA_METAL=on" FORCE_CMAKE=1 pip install --upgrade llama-cpp-python`
### Who can help?
@hwchase17 @agol
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.llms import LlamaCpp
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

# These were referenced below but not defined in the snippet; values are
# illustrative.
n_gpu_layers = 1  # layers to offload to Metal
n_batch = 512
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])

llm = LlamaCpp(
model_path="./llama2_70b_chat_uncensored.ggmlv3.q5_K_S.bin",
n_gpu_layers=n_gpu_layers,
n_gqa=8,
n_batch=n_batch,
n_ctx=2048,
f16_kv=True,
callback_manager=callback_manager,
verbose=True,
)
prompt = """
Question: A rap battle between Stephen Colbert and John Oliver
"""
llm(prompt)
```
Error log:
```
10:30:36.498 [error] Disposing session as kernel process died ExitCode: undefined, Reason: 0.00s - Debugger warning: It seems that frozen modules are being used, which may
0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off
0.00s - to python to disable frozen modules.
0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.
llama.cpp: loading model from ./llama2_70b_chat_uncensored.ggmlv3.q5_K_S.bin
llama_model_load_internal: warning: assuming 70B model based on GQA == 8
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 8192
llama_model_load_internal: n_mult = 7168
llama_model_load_internal: n_head = 64
llama_model_load_internal: n_head_kv = 8
llama_model_load_internal: n_layer = 80
llama_model_load_internal: n_rot = 128
llama_model_load_internal: n_gqa = 8
llama_model_load_internal: rnorm_eps = 1.0e-06
llama_model_load_internal: n_ff = 28672
llama_model_load_internal: freq_base = 10000.0
llama_model_load_internal: freq_scale = 1
llama_model_load_internal: ftype = 16 (mostly Q5_K - Small)
llama_model_load_internal: model size = 70B
llama_model_load_internal: ggml ctx size = 0.21 MB
llama_model_load_internal: mem required = 46046.21 MB (+ 640.00 MB per state)
llama_new_context_with_model: kv self size = 640.00 MB
ggml_metal_init: allocating
ggml_metal_init: using MPS
ggml_metal_init: loading '~/anaconda3/envs/llama-cpp-venv/lib/python3.11/site-packages/llama_cpp/ggml-metal.metal'
ggml_metal_init: loaded kernel_add 0x126d41720
ggml_metal_init: loaded kernel_add_row 0x126d41130
ggml_metal_init: loaded kernel_mul 0x126d4e410
ggml_metal_init: loaded kernel_mul_row 0x126d4e710
ggml_metal_init: loaded kernel_scale 0x126d55390
ggml_metal_init: loaded kernel_silu 0x126d538c0
ggml_metal_init: loaded kernel_relu 0x126d56130
ggml_metal_init: loaded kernel_gelu 0x126d568e0
ggml_metal_init: loaded kernel_soft_max 0x126d56d00
ggml_metal_init: loaded kernel_diag_mask_inf 0x126d56f70
ggml_metal_init: loaded kernel_get_rows_f16 0x126d57af0
ggml_metal_init: loaded kernel_get_rows_q4_0 0x126d581f0
ggml_metal_init: loaded kernel_get_rows_q4_1 0x126d58780
ggml_metal_init: loaded kernel_get_rows_q2_K 0x111ce66e0
ggml_metal_init: loaded kernel_get_rows_q3_K 0x126d589f0
ggml_metal_init: loaded kernel_get_rows_q4_K 0x126d58c60
ggml_metal_init: loaded kernel_get_rows_q5_K 0x126d59830
ggml_metal_init: loaded kernel_get_rows_q6_K 0x126d59d80
ggml_metal_init: loaded kernel_rms_norm 0x130328d00
ggml_metal_init: loaded kernel_norm 0x1303284a0
ggml_metal_init: loaded kernel_mul_mat_f16_f32 0x13032a1c0
ggml_metal_init: loaded kernel_mul_mat_q4_0_f32 0x126d59ff0
ggml_metal_init: loaded kernel_mul_mat_q4_1_f32 0x126d5a260
ggml_metal_init: loaded kernel_mul_mat_q2_K_f32 0x126d5b0c0
ggml_metal_init: loaded kernel_mul_mat_q3_K_f32 0x126d5b640
ggml_metal_init: loaded kernel_mul_mat_q4_K_f32 0x126d5bc50
ggml_metal_init: loaded kernel_mul_mat_q5_K_f32 0x126d5c240
ggml_metal_init: loaded kernel_mul_mat_q6_K_f32 0x126d5c960
ggml_metal_init: loaded kernel_rope 0x13032af10
ggml_metal_init: loaded kernel_alibi_f32 0x13032b180
ggml_metal_init: loaded kernel_cpy_f32_f16 0x13032b3f0
ggml_metal_init: loaded kernel_cpy_f32_f32 0x13032b660
ggml_metal_init: loaded kernel_cpy_f16_f16 0x13032d380
ggml_metal_init: recommendedMaxWorkingSetSize = 49152.00 MB
ggml_metal_init: hasUnifiedMemory = true
ggml_metal_init: maxTransferRate = built-in GPU
llama_new_context_with_model: max tensor size = 205.08 MB
ggml_metal_add_buffer: allocated 'data ' buffer, size = 36864.00 MB, offs = 0
ggml_metal_add_buffer: allocated 'data ' buffer, size = 8603.55 MB, offs = 38439649280, (45468.00 / 49152.00)
ggml_metal_add_buffer: allocated 'eval ' buffer, size = 24.00 MB, (45492.00 / 49152.00)
ggml_metal_add_buffer: allocated 'kv ' buffer, size = 642.00 MB, (46134.00 / 49152.00)
ggml_metal_add_buffer: allocated 'scr0 ' buffer, size = 456.00 MB, (46590.00 / 49152.00)
ggml_metal_add_buffer: allocated 'scr1 ' buffer, size = 304.00 MB, (46894.00 / 49152.00)
10:30:36.498 [info] Dispose Kernel process 40138.
10:30:36.498 [error] Raw kernel process exited code: undefined
10:30:36.499 [error] Error in waiting for cell to complete [Error: Canceled future for execute_request message before replies were done
at t.KernelShellFutureHandler.dispose (~/.vscode/extensions/ms-toolsai.jupyter-2023.6.1101941928-darwin-arm64/out/extension.node.js:2:32375)
at ~/.vscode/extensions/ms-toolsai.jupyter-2023.6.1101941928-darwin-arm64/out/extension.node.js:2:51427
at Map.forEach (<anonymous>)
at v._clearKernelState (~/.vscode/extensions/ms-toolsai.jupyter-2023.6.1101941928-darwin-arm64/out/extension.node.js:2:51412)
at v.dispose (~/.vscode/extensions/ms-toolsai.jupyter-2023.6.1101941928-darwin-arm64/out/extension.node.js:2:44894)
at ~/.vscode/extensions/ms-toolsai.jupyter-2023.6.1101941928-darwin-arm64/out/extension.node.js:24:113024
at re (~/.vscode/extensions/ms-toolsai.jupyter-2023.6.1101941928-darwin-arm64/out/extension.node.js:2:1587343)
at Cv.dispose (~/.vscode/extensions/ms-toolsai.jupyter-2023.6.1101941928-darwin-arm64/out/extension.node.js:24:113000)
at Ev.dispose (~/.vscode/extensions/ms-toolsai.jupyter-2023.6.1101941928-darwin-arm64/out/extension.node.js:24:120283)
at process.processTicksAndRejections (node:internal/process/task_queues:96:5)]
10:30:36.499 [warn] Cell completed with errors {
message: 'Canceled future for execute_request message before replies were done'
}
10:30:36.499 [info] End cell 21 execution @ 1691602236499, started @ 1691602225152, elapsed time = 11.347s
```
Tested with llama.cpp sample and 70b model works directly without langchain. The problem only occurs when using langchain to prompt to llama.cpp & the 70b model.
### Expected behavior
Kernel should not crash. | Kernel crash when using llama2 70b on langchain with llama.cpp | https://api.github.com/repos/langchain-ai/langchain/issues/8983/comments | 3 | 2023-08-09T17:50:43Z | 2023-11-16T16:06:21Z | https://github.com/langchain-ai/langchain/issues/8983 | 1,843,744,154 | 8,983 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
How do you force a langchain agent to use a tool and not use any information outside the tool? Currently, if the question is not related to the tool, it uses its own data to generate the answer, which I don't want.
Use Case: I have an index which has a lot of documents. I want to make sure the agent will ALWAYS use the index to get information. If the answer is not in the index, it should return "I don't know" or similar.
P.S. I am using the OPENAI_FUNCTIONS agent.
### Suggestion:
_No response_ | Force LangChain Agent to Use a Tool | https://api.github.com/repos/langchain-ai/langchain/issues/8979/comments | 4 | 2023-08-09T16:48:11Z | 2024-05-08T19:44:43Z | https://github.com/langchain-ai/langchain/issues/8979 | 1,843,655,453 | 8,979 |
[
"langchain-ai",
"langchain"
] | ### System Info
colab
```
!pip install -q langchain tiktoken openai chromadb
```
### Who can help?
@eyurtsev @aga
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
%%writefile adjective_joke_prompt.json
{
# what type of prompt. Currently supports "prompt" and "few_shot"
"_type": "prompt",
# the input variables used in the template
"input_variables": ["adjective", "content"],
# the template text of the prompt, including variable placeholders
"template": "Tell me a {{ adjective }} joke about {{ content }}",
# alternatively the template text can be loaded from a file
"template_path": "adjective_joke_prompt_template.txt"
# NOTE: both "template" and "template_path" cannot be used at the same time!
# the format of the template
"template_format": "jinja2",
# currently only the "RegexParser" is supported for "output_parser"
# this is example of a date parser
"output_parser": {
"_type": "regex_parser",
"regex": "(\\d{4})-(\\d{2})-(\\d{2})",
"output_keys": ["year", "month", "day"]
}
}
```
```
# load the prompt using a file
prompt_template = load_prompt("adjective_joke_prompt.json")
# create a prompt using the variables
prompt_template.format(adjective="funny", content="chickens")
```
gives error:
```
---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
[<ipython-input-41-7755c281a03b>](https://localhost:8080/#) in <cell line: 2>()
1 # load the prompt using a file
----> 2 prompt_template = load_prompt("adjective_joke_prompt.json")
3
4 # create a prompt using the variables
5 prompt_template.format(adjective="funny", content="chickens")
5 frames
[/usr/lib/python3.10/json/decoder.py](https://localhost:8080/#) in raw_decode(self, s, idx)
351 """
352 try:
--> 353 obj, end = self.scan_once(s, idx)
354 except StopIteration as err:
355 raise JSONDecodeError("Expecting value", s, err.value) from None
JSONDecodeError: Expecting property name enclosed in double quotes: line 3 column 2 (char 4)
```
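The root cause appears to be the prompt file itself rather than `load_prompt` (an assumption, since the traceback shows the stdlib JSON decoder failing): standard JSON forbids `#` comments, which is exactly what the decoder rejects at the first commented line. A minimal stdlib demonstration:

```python
import json

commented = """{
    # what type of prompt
    "_type": "prompt"
}"""

try:
    json.loads(commented)
    error = None
except json.JSONDecodeError as exc:
    error = exc.msg  # same message as in the traceback above

# Stripping the comment lines turns the same payload into valid JSON.
clean = "\n".join(
    line for line in commented.splitlines()
    if not line.lstrip().startswith("#")
)
spec = json.loads(clean)
```

Removing the `#` lines from the prompt file above (and keeping only one of `template` / `template_path`, as the comments themselves warn) should let `load_prompt` succeed.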
### Expected behavior
gives response based on template | issues loading the prompt with load_prompt | https://api.github.com/repos/langchain-ai/langchain/issues/8978/comments | 1 | 2023-08-09T16:45:57Z | 2023-08-09T16:47:49Z | https://github.com/langchain-ai/langchain/issues/8978 | 1,843,652,268 | 8,978 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am trying to have a conversation with my documents like this
```typescript
const index = client.Index(indexName); //pinecode index
const queryEmbedding = await new OpenAIEmbeddings().embedQuery(question);
const queryResponse = await index.query({
queryRequest: { topK: 10, vector: queryEmbedding, includeMetadata: true },
});
const llm = new OpenAI();
const chatHistory = new ChatMessageHistory();
const memory = new BufferMemory({ chatHistory, inputKey: "my_chat_history", memoryKey: "chat_history" });
const chain = new ConversationChain({ llm, memory, verbose: true });
const concatenatedPageContent = queryResponse.matches.map(match => (<any>match?.metadata)?.pageContent).join(" ");
const result = await chain.call({
input_documents: [new Document({ pageContent: concatenatedPageContent })],
input: question,
});
```
But I get:
```
Error: Missing value for input history
```
It seems to be impossible to marry embedded docs with chatting | Issue: Can't pass embedded documents with chat | https://api.github.com/repos/langchain-ai/langchain/issues/8975/comments | 0 | 2023-08-09T16:38:34Z | 2023-08-09T17:38:23Z | https://github.com/langchain-ai/langchain/issues/8975 | 1,843,641,608 | 8,975 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Can we use Human as a tool in production?
```python
def get_input() -> str:
    print("Insert your text. Enter 'q' or press Ctrl-D (or Ctrl-Z on Windows) to end.")
    contents = []
    while True:
        try:
            line = input()
        except EOFError:
            break
        if line == "q":
            break
        contents.append(line)
    return "\n".join(contents)

# You can modify the tool when loading
tools = load_tools(["human", "ddg-search"], llm=math_llm, input_func=get_input)
```
What should replace the blocking `input()` call in `get_input()` if we need to use this tool in production?
reference:-https://python.langchain.com/docs/integrations/tools/human_tools
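One hedged sketch of a production-friendly `input_func` (the names here are hypothetical, not a LangChain API): instead of blocking on stdin, pull the human's reply from a queue that a web or chat handler fills, with a timeout fallback:

```python
import queue

# Hypothetical replacement for input(): a web/chat handler puts the
# human's reply on this queue; the tool waits for it with a timeout.
replies: "queue.Queue[str]" = queue.Queue()

def get_input_from_queue(timeout: float = 30.0) -> str:
    try:
        return replies.get(timeout=timeout)
    except queue.Empty:
        return "No human response received in time."

# Simulate the handler answering before the tool asks.
replies.put("yes, proceed")
answer = get_input_from_queue(timeout=1.0)
```

The same callable could then be passed as `input_func=get_input_from_queue` when loading the tool.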
### Suggestion:
_No response_ | Human as a Tool in Production | https://api.github.com/repos/langchain-ai/langchain/issues/8973/comments | 2 | 2023-08-09T16:15:19Z | 2023-11-15T16:05:23Z | https://github.com/langchain-ai/langchain/issues/8973 | 1,843,608,827 | 8,973 |
[
"langchain-ai",
"langchain"
] | ### Feature request
### First
The aim of qa_with_sources is to find only the documents used in the answer.
The `return_source_documents` attribute of the `qa_with_sources` chain returns all retrieved documents, not just the documents used to produce the answer.
I don't think it is necessary, because `retriever.get_relevant_documents(question)` already returns the same list of documents.
It adds no value, and it is not in the spirit of the chain.
I propose adding a `return_used_documents` attribute, or changing the semantics of `return_source_documents`, to limit the result to the documents used to produce the answer.
### Second
With a long list of documents, each carrying a long URL (as documents from Google Drive or Microsoft Azure do), the number of tokens used explodes.
Recursive map-reduce must then be activated. At each reduction step, **some URLs disappear**, and the space left for documents shrinks.
### Motivation
When we use qa_with_sources, we want to be able to justify the response.
Currently, we can return a correct list of URLs, but not the list of associated documents.
Sometimes, when the original document is split into multiple chunks, all parts share the same URL, so it is not possible to recover the corresponding documents from a list of URLs.
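To illustrate the proposed semantics, here is a hedged stdlib sketch (the `SOURCES:` answer format is an assumption matching the chain's usual output) of keeping only the documents whose source is actually cited in the answer:

```python
docs = [
    {"source": "https://drive.example/doc-1", "text": "Paris facts"},
    {"source": "https://drive.example/doc-2", "text": "Unrelated"},
]
answer = "Paris is the capital of France.\nSOURCES: https://drive.example/doc-1"

# Parse the cited URLs out of the SOURCES line, then filter the docs.
cited = {part.strip() for part in answer.split("SOURCES:")[-1].split(",")}
used_documents = [d for d in docs if d["source"] in cited]
```

Note that the limitation described above remains: when several chunks share one URL, this sketch returns every chunk with that URL, not only the one actually used.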
### Your contribution
I propose a new chain [qa_with_reference](https://github.com/langchain-ai/langchain/pull/7278) without these problems, but for the moment, nothing is moving. | qa_with_source it returns all documents, not just the ones used. | https://api.github.com/repos/langchain-ai/langchain/issues/8972/comments | 1 | 2023-08-09T16:09:42Z | 2023-11-15T16:05:34Z | https://github.com/langchain-ai/langchain/issues/8972 | 1,843,600,490 | 8,972 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version 0.0.259
### Who can help?
@hwchase17 @baskaryan @eyurtsev
## Issue with `GoogleDriveLoader.file_loader_cls` accepting only classes with constructors that receive a `file` parameter.
The current behavior of `GoogleDriveLoader.file_loader_cls` is that it only accepts a class with a constructor that
takes a `file` parameter. For instance, it can accept the class `UnstructuredFileIOLoader`, and
in principle other classes like `NotebookLoader` as well. However, when attempting to use
the `NotebookLoader` class with the following code:
```python
from langchain.document_loaders import NotebookLoader
file_id = "1Hrrf3b4cgjwuKEt1wQUgRtipxqyprKaU"
loader = GoogleDriveLoader(
file_ids=[file_id],
file_loader_cls=NotebookLoader,
file_loader_kwargs={"mode": "elements"},
)
loader.load()
```
An exception is thrown: `TypeError: NotebookLoader.__init__() got an unexpected keyword argument 'file'`.
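The failure mode can be reproduced without Google Drive at all; a minimal sketch with stand-in classes (hypothetical names, not the real loaders) shows why only `file`-accepting constructors work:

```python
class FileIOLoader:  # stands in for UnstructuredFileIOLoader
    def __init__(self, file, mode="single"):
        self.file = file

class PathLoader:  # stands in for NotebookLoader, whose ctor takes a path
    def __init__(self, path):
        self.path = path

def load_like_drive_loader(loader_cls, downloaded, **kwargs):
    # GoogleDriveLoader always forwards the downloaded bytes as file=...
    return loader_cls(file=downloaded, **kwargs)

ok = load_like_drive_loader(FileIOLoader, b"bytes", mode="elements")
try:
    load_like_drive_loader(PathLoader, b"bytes")
    raised = False
except TypeError:  # unexpected keyword argument 'file'
    raised = True
```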
## Issue with `GoogleDriveLoader` and `import PyPDF2`
If the `file_loader_cls` is not set explicitly, the code attempts to execute `import PyPDF2`.
However, the code is only designed to handle PDF files. Additionally, the dependencies
of `PyPDF2` are not declared in the `pyproject.toml` file. Currently, only `pypdf` is
declared as a dependency. To address this, it is recommended to update the code to utilize `pypdf`
instead of `PyPDF2`. Otherwise, an exception will be raised.
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.document_loaders import NotebookLoader
file_id = "1Hrrf3b4cgjwuKEt1wQUgRtipxqyprKaU" # Link to colab file
loader = GoogleDriveLoader(
file_ids=[file_id],
file_loader_cls=NotebookLoader,
file_loader_kwargs={"mode": "elements"},
)
loader.load()
### Expected behavior
The code should be modified to accept the provided code without any exceptions.
I propose a solution with [a pull request](https://github.com/langchain-ai/langchain/pull/5135),
with a reimplementation of this class. | Bugs in `GoogleDriveLoader.file_loader_cls` | https://api.github.com/repos/langchain-ai/langchain/issues/8969/comments | 2 | 2023-08-09T14:06:18Z | 2023-09-19T08:31:39Z | https://github.com/langchain-ai/langchain/issues/8969 | 1,843,326,275 | 8,969 |
[
"langchain-ai",
"langchain"
] | ### Feature request
- The Google Drive loader can manage only GDoc and GSheet.
- The code can load other file types, but only through a single extra loader (`file_loader_cls`).
- It's not possible to query Google Drive for files matching criteria.
- A Google Drive tool does not exist yet.
- It cannot handle Google shortcuts.
- It cannot use the description metadata of a Google file.
- It cannot load only the description as a snippet of the document.
- It's not possible to set advanced parameters such as
  - corpora, driveId, fields, includeItemsFromAllDrives, includeLabels, includePermissionsForView, orderBy, pageSize, spaces, supportsAllDrives,
  - whether the returned document URL is for viewing or for download
- It's not possible to return
  - for GSheet: mode `single` or `elements`
  - for GSlide: mode `single`, `elements` or `slide`
- It's not possible to apply a fine-grained filter (to reject some documents during the load)
- It's not possible to lazy-load, to save memory when importing a long list of documents
- It's not possible to use a standardized environment variable to manage authentication (as all other technologies do)
[LINK](https://github.com/langchain-ai/langchain/pull/5135)
### Motivation
All my company's documents are on GDrive, with more or less complex organizations and formats
(in the organization of CVs, customer documents, etc.).
The actual implementation of `GoogleDriveLoader` is very limited.
We tag all CVs with #cv. This allows us to quickly search for skills using Google Drive search.
Since the directory structure is complex, this approach requires only a single query
(instead of one query per subdirectory).
We utilize Google Drive searches for various purposes.
### Your contribution
@hwchase17 @baskaryan
For the last 10 weeks, I've had pull request
[5135](https://github.com/langchain-ai/langchain/pull/5135) open, and nobody is taking it up.
Various commitments were made to me, but they've never been kept.
My [proposition](https://github.com/langchain-ai/langchain/pull/5135) resolves all these issues and maintains compatibility with the current version
(with many deprecation warnings).
| Extend google drive loader | https://api.github.com/repos/langchain-ai/langchain/issues/8968/comments | 1 | 2023-08-09T14:06:01Z | 2023-09-05T14:42:04Z | https://github.com/langchain-ai/langchain/issues/8968 | 1,843,325,640 | 8,968 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
@hwchase17
The code utilizes certain pivot files responsible for the comprehensive integration of classes (__init__.py, load_tools.py, etc.).
Currently, maintaining an up-to-date pull request for these files poses challenges, as they undergo updates with each new version. This necessitates frequent rebasing until the code is approved.
In my opinion, it would be more advantageous to establish a mechanism that prevents alterations to these files (via naming conventions, abstract classes, etc.). This way, incorporating a new feature would involve solely the addition of new files without requiring modifications to existing ones. Such an approach could serve as the foundation for implementing plugins.
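A hedged sketch of what such a mechanism could look like (the names are hypothetical, not an existing LangChain API): a registry that each new module populates itself, so adding a tool never requires editing a central `load_tools.py`-style file:

```python
TOOL_REGISTRY = {}

def register_tool(name):
    """Decorator a new module uses to announce its tool factory."""
    def wrap(factory):
        TOOL_REGISTRY[name] = factory
        return factory
    return wrap

# A hypothetical new tool lives entirely in its own file:
@register_tool("echo")
def make_echo_tool():
    return lambda text: text

tool = TOOL_REGISTRY["echo"]()
```

Entry-point-based discovery could extend the same idea across packages, which is one possible foundation for plugins.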
What are your thoughts on this idea?
### Suggestion:
_No response_ | Issue: re-implement pivot files to facilitate the integration of new functions | https://api.github.com/repos/langchain-ai/langchain/issues/8967/comments | 1 | 2023-08-09T14:05:53Z | 2023-11-15T16:07:03Z | https://github.com/langchain-ai/langchain/issues/8967 | 1,843,325,349 | 8,967 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain: 0.0.257
Python: 3.11.4
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am trying to create an object for `OpenAIEmbeddings` as described [here](https://python.langchain.com/docs/integrations/text_embedding/azureopenai), but the constructor expects a `client` parameter that is documented nowhere.
<img width="591" alt="image" src="https://github.com/langchain-ai/langchain/assets/33937977/6d78c87b-bdcb-4c84-82d8-2c171739d47d">
### Expected behavior
Initialize the object without passing a client | OpenAIEmbeddings expects client parameter | https://api.github.com/repos/langchain-ai/langchain/issues/8966/comments | 1 | 2023-08-09T13:38:54Z | 2023-11-15T16:05:26Z | https://github.com/langchain-ai/langchain/issues/8966 | 1,843,265,073 | 8,966 |
[
"langchain-ai",
"langchain"
] | In the docs it seems the ReAct reference references to the wrong paper.
The actual ReAct paper is [this](https://arxiv.org/pdf/2210.03629.pdf)
https://github.com/langchain-ai/langchain/blob/b8df15cd647ca645ef16b2d66be271dc1f5187c1/docs/docs_skeleton/docs/modules/agents/agent_types/index.mdx#L15 | Wrong paper reference | https://api.github.com/repos/langchain-ai/langchain/issues/8964/comments | 2 | 2023-08-09T12:49:03Z | 2023-08-20T05:26:14Z | https://github.com/langchain-ai/langchain/issues/8964 | 1,843,168,836 | 8,964 |
[
"langchain-ai",
"langchain"
] | Now I am using Pinecone and langchain in my Project.
And I am using RecursiveCharacterTextSplitter when I embedding my bot based on my specific data.
And also I am using ConversationalRetrievalQAChain in chain.
These are my code.
```
const text_splitter = new RecursiveCharacterTextSplitter({
chunkSize: 1000,
chunkOverlap: 200,
});
const docs = await text_splitter.splitDocuments(rowDocs);
```
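For intuition about what those parameters do, a hedged sketch of the window/overlap arithmetic (this is not RecursiveCharacterTextSplitter's actual algorithm, which prefers splitting at separators; it only illustrates 1000-char chunks sharing 200 chars with their neighbors):

```python
def split_with_overlap(text, chunk_size=1000, chunk_overlap=200):
    # Advance by chunk_size - chunk_overlap so consecutive chunks share text.
    step = chunk_size - chunk_overlap
    return [
        text[i:i + chunk_size]
        for i in range(0, max(len(text) - chunk_overlap, 1), step)
    ]

text = "".join(str(i % 10) for i in range(2500))
chunks = split_with_overlap(text)
# Each chunk repeats the last 200 chars of its predecessor.
```

The overlap exists so that a sentence cut at a chunk boundary still appears whole in one of the two neighboring chunks.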
```
const chain = ConversationalRetrievalQAChain.fromLLM(
llm,
vectorStore.asRetriever(),
{
memory: new BufferMemory({
memoryKey: 'chat_history', // Must be set to "chat_history"
inputKey: 'question',
returnMessages: true,
}),
},
);
```
But my embedding quality is not good.
Sometimes my Bot gives strange answers to my questions. 😢
For example
I have already trained ChatBot using one docx(about Abrantes)
I have asked like that
Me: Please tell me about Abrantes
AI: Abrantes is a Project Controls Specialist ....
Me: What Project he managed?
AI: Ed has managed some .....
Who is Ed in this answer? 😣
There is no Ed in that docx either.
So how can I fix it? | How to improve my embedding quality? | https://api.github.com/repos/langchain-ai/langchain/issues/8962/comments | 2 | 2023-08-09T12:31:35Z | 2023-11-24T19:36:17Z | https://github.com/langchain-ai/langchain/issues/8962 | 1,843,138,505 | 8,962 |
[
"langchain-ai",
"langchain"
] | I'm implementing a vectorstore agent on my custom data. Can I implement this with a local LLM like GPT4All (GPT4All-J v1.3-groovy)?
Can agents give better, expected answers in this setup, or should I go with a stronger model (GPT-4, Llama 2)? | Agents on a local llm with custom data | https://api.github.com/repos/langchain-ai/langchain/issues/8961/comments | 3 | 2023-08-09T11:49:11Z | 2023-11-28T16:08:55Z | https://github.com/langchain-ai/langchain/issues/8961 | 1,843,070,745 | 8,961
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
How can I retrieve the action for an LLM agent?
### Suggestion:
_No response_ | Issue: How can I retrieve the action for a LLM agent? | https://api.github.com/repos/langchain-ai/langchain/issues/8959/comments | 1 | 2023-08-09T10:52:59Z | 2023-11-15T16:05:29Z | https://github.com/langchain-ai/langchain/issues/8959 | 1,842,978,093 | 8,959 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
FAISS is taking around 12 hours to create embeddings and add them to the index for a 100,000-row CSV file. Is there a bulk-load strategy for embedding CSV files?
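One common mitigation (a suggestion, not a confirmed LangChain feature for CSVs): embed the rows in batches and add each batch to the index in one call, rather than row by row. The batching arithmetic is simple:

```python
def batched(items, size):
    """Yield fixed-size slices; the last one may be shorter."""
    for i in range(0, len(items), size):
        yield items[i:i + size]

rows = [f"row {i}" for i in range(10)]
batches = list(batched(rows, 4))
# Each batch would be sent to the embedding API and to the index together.
```

Since embedding-API latency usually dominates, fewer, larger requests (plus parallel workers) is where most of the 12 hours would likely be recovered.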
### Suggestion:
_No response_ | Issue: FAISS taking long time to add to index for 30MB csv file | https://api.github.com/repos/langchain-ai/langchain/issues/8958/comments | 1 | 2023-08-09T10:39:17Z | 2023-11-15T16:10:55Z | https://github.com/langchain-ai/langchain/issues/8958 | 1,842,957,431 | 8,958 |
[
"langchain-ai",
"langchain"
] | ### System Info
macOS Ventura Version `13.4.1`
Python `3.11.4`
```
langchain==0.0.251
chromadb==0.4.5
```
### Who can help?
@hwchase17 @eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create a vector database and insert some items with relevant embeddings
2. Update vector database collection with some new docs
Python snippet to create/update the vector DB collection and its embeddings:
```
documents = []
for row in get_data_rows():
documents.append(
Document(
page_content=row['full_text'],
metadata={
'id': int(row['id']),
'created_at': parser.parse(row['created_at']).timestamp(),
}
)
)
Chroma.from_documents(
documents,
embedding=OpenAIEmbeddings(),
collection_name='my_collection',
persist_directory=f'embeddings/my_collection'
)
```
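A hedged sketch of the bookkeeping that would avoid re-embedding (as far as I can tell, Chroma/LangChain do not do this for you in this code path): hash each document's content and only send new or changed documents to the embedding function:

```python
import hashlib

def needs_embedding(doc_id, text, seen):
    """Return True iff this doc is new or its content changed."""
    digest = hashlib.sha256(text.encode()).hexdigest()
    changed = seen.get(doc_id) != digest
    seen[doc_id] = digest
    return changed

seen = {}
first = needs_embedding(1, "hello", seen)        # new -> embed
second = needs_embedding(1, "hello", seen)       # unchanged -> skip
third = needs_embedding(1, "hello world", seen)  # changed -> embed
```

The `seen` map would need to be persisted alongside the `persist_directory` between runs for this to help across script invocations.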
### Expected behavior
Running the above snippet in both scenarios should only call the embedding function for newly added or changed docs, not for everything.
| Embeddings are regenerated for entire vector db on updating collections | https://api.github.com/repos/langchain-ai/langchain/issues/8957/comments | 1 | 2023-08-09T10:24:17Z | 2023-08-31T14:33:51Z | https://github.com/langchain-ai/langchain/issues/8957 | 1,842,933,508 | 8,957 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Can you please help me connect my LangChain agent to a MongoDB database?
I connected my relational database like this:
`db = SQLDatabase.from_uri("mysql_db_url")`
The approach above does not work for MongoDB because it is a NoSQL database.
Can anyone help me do this?
Thank you.
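Since `SQLDatabase.from_uri` has no NoSQL equivalent, one approach (an assumption, sketched here with a stand-in instead of a real `pymongo` collection) is to wrap each Mongo query in a plain function and expose it to the agent as a custom tool:

```python
# Stand-in for a pymongo collection's documents.
FAKE_USERS = [
    {"name": "Alice", "role": "admin"},
    {"name": "Bob", "role": "user"},
]

def find_users_by_role(role: str) -> str:
    """Tool body an agent could call; with pymongo this would be a
    collection.find({"role": role}) over a real connection."""
    names = [doc["name"] for doc in FAKE_USERS if doc["role"] == role]
    return ", ".join(names) or "no matches"

result = find_users_by_role("admin")
```

A function like this can be registered as an agent tool with a clear description, so the LLM decides when to call it instead of generating SQL.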
### Suggestion:
_No response_ | lang chain connection with mongo DB | https://api.github.com/repos/langchain-ai/langchain/issues/8956/comments | 11 | 2023-08-09T09:53:54Z | 2024-02-27T16:08:10Z | https://github.com/langchain-ai/langchain/issues/8956 | 1,842,884,084 | 8,956 |
[
"langchain-ai",
"langchain"
] | ### System Info
Name: langchain
Version: 0.0.251
Name: faiss-cpu
Version: 1.7.1
Name: llama-cpp-python
Version: 0.1.77
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.llms import LlamaCpp
from langchain import PromptTemplate, LLMChain
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
import gradio as gr
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.vectorstores import FAISS
from langchain.llms import HuggingFacePipeline
from langchain.document_loaders import DirectoryLoader
from langchain.document_loaders import UnstructuredWordDocumentLoader
from torch import cuda, bfloat16
from transformers import StoppingCriteria, StoppingCriteriaList
from langchain.chains import ConversationalRetrievalChain
from langchain.embeddings import LlamaCppEmbeddings
template = """Question: {question}
Answer:"""
prompt = PromptTemplate(template=template, input_variables=["question"])
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
n_gpu_layers = 42 # Change this value based on your model and your GPU VRAM pool.
n_batch = 1024 # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.
embeddings = LlamaCppEmbeddings(model_path="llama-2-7b-chat/7B/ggml-model-q4_0.bin",
                                n_gpu_layers=n_gpu_layers,
                                n_batch=n_batch)
llm = LlamaCpp(
model_path="llama-2-7b-chat/7B/ggml-model-q4_0.bin",
n_gpu_layers=n_gpu_layers,
n_batch=n_batch,
callback_manager=callback_manager,
verbose=True,
)
llm_chain = LLMChain(prompt=prompt, llm=llm)
txt_loader = DirectoryLoader("doc", glob="./*.docx", loader_cls=UnstructuredWordDocumentLoader)
documents = txt_loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=20)
all_splits = text_splitter.split_documents(documents)
vectorstore = FAISS.from_documents(all_splits, embeddings)
query = "How is it going?"
search = vectorstore.similarity_search(query, k=5)
template = '''Context: {context}
Based on Context provide me answer for following question
Question: {question}
Tell me the information about the fact. The answer should be from context only
do not use general knowledge to answer the query'''
prompt = PromptTemplate(input_variables=["context", "question"], template= template)
final_prompt = prompt.format(question=query, context=search)
result = llm_chain.run(final_prompt)
print(result)
```
I get error:
```
llama_tokenize_with_model: too many tokens
Traceback (most recent call last):
File "test_ggml.py", line 57, in <module>
vectorstore = FAISS.from_documents(all_splits, embeddings)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/langchain/vectorstores/base.py", line 420, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/langchain/vectorstores/faiss.py", line 577, in from_texts
embeddings = embedding.embed_documents(texts)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/langchain/embeddings/llamacpp.py", line 110, in embed_documents
embeddings = [self.client.embed(text) for text in texts]
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/langchain/embeddings/llamacpp.py", line 110, in <listcomp>
embeddings = [self.client.embed(text) for text in texts]
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/llama_cpp/llama.py", line 812, in embed
return list(map(float, self.create_embedding(input)["data"][0]["embedding"]))
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/llama_cpp/llama.py", line 776, in create_embedding
self.eval(tokens)
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/llama_cpp/llama.py", line 471, in eval
self.input_ids[self.n_tokens : self.n_tokens + n_tokens] = batch
ValueError: could not broadcast input array from shape (179,) into shape (0,)
Exception ignored in: <function Llama.__del__ at 0x7f68eedc9af0>
Traceback (most recent call last):
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/llama_cpp/llama.py", line 1508, in __del__
TypeError: 'NoneType' object is not callable
Exception ignored in: <function Llama.__del__ at 0x7f68eedc9af0>
Traceback (most recent call last):
File "/home/tupk/anaconda3/envs/nlp/lib/python3.8/site-packages/llama_cpp/llama.py", line 1508, in __del__
TypeError: 'NoneType' object is not callable
```
### Expected behavior
LlamaCppEmbeddings should work with FAISS.
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I believe one of the most popular directions for LangChain-based applications will be to parse a query in natural language and use some external API to generate the response. However, as long as that external API has no OpenAPI-schema-based specification, the easiest way to do it will be chain parsing + request.
I tried to implement it via a sequential chain; an example is provided below:
```
EXCHANGE_API_KEY = SOME_KEY
from langchain.chains import LLMRequestsChain, LLMChain, SimpleSequentialChain
from langchain.chains.openai_functions.openapi import get_openapi_chain
from langchain.prompts import PromptTemplate, SystemMessagePromptTemplate, ChatPromptTemplate, HumanMessagePromptTemplate
template = """You help to convert currencies.
You need to convert the input phrase into the query like from=EUR&to=GBP&amount=100"
You need to extract these parameters yourself from the query in natural language.
Please, return the string like https://api.currencylayer.com/convert?from=EUR&to=GBP&amount=100, where you need to fill in "from", "to", and "amount" based on previous instructions"
"""
system_prompt = SystemMessagePromptTemplate.from_template(template)
human_message_prompt = HumanMessagePromptTemplate.from_template("{question}")
first_prompt = ChatPromptTemplate(
messages=[
system_prompt,
human_message_prompt
]
)
## this one above may be done nicer, via PromptTemplate, but I don't think it's necessary here
template_2 = """
Query api with {synopsis} and return api response
"""
llm = OpenAI(temperature=0)
convert_chain = LLMChain(llm=llm, prompt=first_prompt, output_key="synopsis")
system_prompt_2 = PromptTemplate(input_variables=["synopsis"], template=template_2)
chain = LLMRequestsChain(llm_chain=LLMChain(llm=OpenAI(temperature=0), prompt=system_prompt_2))
overall_chain = SimpleSequentialChain(chains=[convert_chain, chain], verbose=True)
question = "Convert 1000 American Dollars to euros please"
inputs = {"input": question,
#"url": "https://api.currencylayer.com/convert",
"headers": {"access_key": EXCHANGE_API_KEY}
}
overall_chain(inputs)
```
The code above fails with a _requests_ library stack trace: `InvalidSchema: No connection adapters were found for 'System: https://api.currencylayer.com/convert?from=USD&to=EUR&amount=1000'`
It happens because the string "System: " is inevitably prepended to the output of the first chain.
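A hedged workaround for the prefix problem: post-process the first chain's output and hand only the extracted URL to the requests chain, e.g. with a stdlib regex:

```python
import re

raw = "System: https://api.currencylayer.com/convert?from=USD&to=EUR&amount=1000"
# Grab the first URL-looking token; fall back to the raw text if none found.
match = re.search(r"https?://\S+", raw)
url = match.group(0) if match else raw.strip()
```

This stripping could run in a small transform step between the two chains, so the requests chain always receives a bare URL.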
So, my questions are as follows:
1. Is the approach shown above currently the best option for interacting with APIs that are not yet integrated into LangChain and have no OpenAPI-schema-based docs?
2. If so, how do I deal with this and strip the "System: " part from the query? Is there a natural way to parse the natural-language query into an API request via this chain?
### Suggestion:
I think there should be a very clear path, with documentation, for dealing with external APIs after the natural-language parsing done by the generative model. | Issue: unclear processing of natural language query parsing + external api querying afterwards | https://api.github.com/repos/langchain-ai/langchain/issues/8953/comments | 3 | 2023-08-09T08:51:52Z | 2023-11-16T16:06:36Z | https://github.com/langchain-ai/langchain/issues/8953 | 1,842,781,388 | 8,953
[
"langchain-ai",
"langchain"
] | ### System Info
I am working with hugging face opensource models for SQL generation with langchain
Our model works without a connection db, but when I connect the db, the chain generates inaccurate answers and takes unknown columns.
I am using below models
wizard, alpaca,vicuna,falcon
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
db_chain = SQLDatabaseChain.from_llm(llm, db=db, verbose=True, prompt=PROMPT)
db_chain.run(input('enter the question:'))
```
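One mitigation worth trying (an assumption about what `PROMPT` should contain, with hypothetical table names): spell out the exact tables and columns in the prompt so the model has no room to invent columns:

```python
schema = {
    "employees": ["id", "name", "department_id"],
    "departments": ["id", "name"],
}

def build_sql_prompt(question: str) -> str:
    # Render the schema as plain text the LLM must stay within.
    tables = "\n".join(
        f"Table {t} has columns: {', '.join(cols)}." for t, cols in schema.items()
    )
    return (
        "Only use the tables and columns listed below; never invent columns.\n"
        f"{tables}\nQuestion: {question}\nSQLQuery:"
    )

prompt_text = build_sql_prompt("How many employees per department?")
```

With real table info (e.g. from the database's own metadata), this text would feed the `template` behind `PROMPT` in the snippet above.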
### Expected behavior
I need to connect the database to my open-source model using LangChain,
so that when I ask my chain a question, it generates a correct SQL query that respects my database schema. | schema consideration while generating query | https://api.github.com/repos/langchain-ai/langchain/issues/8950/comments | 5 | 2023-08-09T06:00:14Z | 2023-12-08T16:05:40Z | https://github.com/langchain-ai/langchain/issues/8950 | 1,842,545,419 | 8,950
[
"langchain-ai",
"langchain"
] | ### Feature request
Dear [langchain](https://github.com/langchain-ai/langchain) developer,
Greetings! I am Jimmy, a community developer and volunteer at InternLM. Your work has been immensely beneficial to me, and I believe it can be effectively utilized in InternLM as well. Welcome to add Discord https://discord.gg/gF9ezcmtM3 . I hope to get in touch with you.
Best regards,
Jimmy
### Motivation
Hope to get in touch
### Your contribution
Hope to get in touch | Hope to get in touch | https://api.github.com/repos/langchain-ai/langchain/issues/8949/comments | 4 | 2023-08-09T05:30:05Z | 2023-08-20T19:42:46Z | https://github.com/langchain-ai/langchain/issues/8949 | 1,842,513,770 | 8,949 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
got 404 error while looking for tabular documentation:
From link:
https://python.langchain.com/docs/use_cases/question_answering/
To link:
https://python.langchain.com/docs/use_cases/tabular
got 404:

### Idea or request for content:
_No response_ | DOC: Tabular doc not found | https://api.github.com/repos/langchain-ai/langchain/issues/8946/comments | 2 | 2023-08-09T02:03:58Z | 2023-11-21T02:13:34Z | https://github.com/langchain-ai/langchain/issues/8946 | 1,842,336,871 | 8,946 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Try https://api.python.langchain.com/en/latest/search.html?q=PubMed (a search on the "PubMed" string)
It returns too many lines.
It seems it converts every found "PubMed" string into a link. That creates duplicate links in big quantities.
### Idea or request for content:
Fix a script that generates the "Search results" page. | DOC: API Reference: bug in the Search | https://api.github.com/repos/langchain-ai/langchain/issues/8936/comments | 1 | 2023-08-08T21:28:15Z | 2023-11-14T16:05:11Z | https://github.com/langchain-ai/langchain/issues/8936 | 1,842,127,127 | 8,936 |