issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### System Info
I'm trying to implement this in SageMaker with Bedrock Claude v2:
https://github.com/langchain-ai/langchain/blob/master/templates/rag-aws-bedrock/rag_aws_bedrock/chain.py
Here is my code
```python
import os

from langchain.embeddings import BedrockEmbeddings
from langchain.llms.bedrock import Bedrock
from langchain.prompts import ChatPromptTemplate
from langchain.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnableParallel, RunnablePassthrough

region = "us-east-1"
model_id = "anthropic.claude-v2"

# Set LLM and embeddings
model = Bedrock(
    model_id=model_id,
    region_name=region,
    model_kwargs={"max_tokens_to_sample": 200},
)
bedrock_embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v1")

# Add to vectorDB
vectorstore = FAISS.from_texts(
    ["harrison worked at kensho"], embedding=bedrock_embeddings
)

# Get retriever from vectorstore
retriever = vectorstore.as_retriever()

# RAG prompt
template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)

# RAG chain
chain = (
    RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
    | prompt
    | model
    | StrOutputParser()
)
```
I got an unexpected error:
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[47], line 40
35 prompt = ChatPromptTemplate.from_template(template)
38 # RAG
39 chain = (
---> 40 RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
41 | prompt
42 | model
43 | StrOutputParser()
44 )
TypeError: Expected a Runnable, callable or dict.Instead got an unsupported type: <class 'langchain.schema.vectorstore.VectorStoreRetriever'>
Name: langchain
Version: 0.0.352
Name: langchain-core
Version: 0.1.2
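The error message itself says a Runnable, callable, or dict is accepted, which suggests two hedged workarounds (assumptions, not verified against these exact versions): align `langchain` and `langchain-core` to versions released together, or wrap the retriever in a plain callable so coercion succeeds. A stdlib-only sketch of why the callable route works; `run_parallel` is a stand-in for what `RunnableParallel` does with a dict of steps, and `fake_retriever` is hypothetical:

```python
# Stand-in for RunnableParallel's dict handling: every value is invoked on the
# same input. A plain callable is always accepted by coerce_to_runnable, so
# wrapping the retriever in a lambda sidesteps the failing isinstance check.
def run_parallel(steps: dict, value):
    return {key: fn(value) for key, fn in steps.items()}

# `fake_retriever` stands in for retriever.get_relevant_documents (made-up data).
fake_retriever = lambda q: [f"doc about {q}"]
out = run_parallel({"context": fake_retriever, "question": lambda q: q}, "kensho")
print(out)  # {'context': ['doc about kensho'], 'question': 'kensho'}
```

Applied to the snippet above, that would mean `{"context": lambda q: retriever.get_relevant_documents(q), "question": RunnablePassthrough()}`, treated as a workaround to test rather than a guaranteed fix.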
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. !pip install langchain==0.0.352
2.
```python
import os

from langchain.embeddings import BedrockEmbeddings
from langchain.llms.bedrock import Bedrock
from langchain.prompts import ChatPromptTemplate
from langchain.vectorstores import FAISS
from langchain_core.output_parsers import StrOutputParser
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.runnables import RunnableParallel, RunnablePassthrough

region = "us-east-1"
model_id = "anthropic.claude-v2"

# Set LLM and embeddings
model = Bedrock(
    model_id=model_id,
    region_name=region,
    model_kwargs={"max_tokens_to_sample": 200},
)
bedrock_embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v1")

# Add to vectorDB
vectorstore = FAISS.from_texts(
    ["harrison worked at kensho"], embedding=bedrock_embeddings
)

# Get retriever from vectorstore
retriever = vectorstore.as_retriever()

# RAG prompt
template = """Answer the question based only on the following context:
{context}

Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)

# RAG chain
chain = (
    RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
    | prompt
    | model
    | StrOutputParser()
)
```
### Expected behavior
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[47], line 40
35 prompt = ChatPromptTemplate.from_template(template)
38 # RAG
39 chain = (
---> 40 RunnableParallel({"context": retriever, "question": RunnablePassthrough()})
41 | prompt
42 | model
43 | StrOutputParser()
44 )
File /opt/conda/lib/python3.10/site-packages/langchain_core/runnables/base.py:1937, in RunnableParallel.__init__(self, _RunnableParallel__steps, **kwargs)
1934 merged = {**__steps} if __steps is not None else {}
1935 merged.update(kwargs)
1936 super().__init__(
-> 1937 steps={key: coerce_to_runnable(r) for key, r in merged.items()}
1938 )
File /opt/conda/lib/python3.10/site-packages/langchain_core/runnables/base.py:1937, in <dictcomp>(.0)
1934 merged = {**__steps} if __steps is not None else {}
1935 merged.update(kwargs)
1936 super().__init__(
-> 1937 steps={key: coerce_to_runnable(r) for key, r in merged.items()}
1938 )
File /opt/conda/lib/python3.10/site-packages/langchain_core/runnables/base.py:3232, in coerce_to_runnable(thing)
3230 return cast(Runnable[Input, Output], RunnableParallel(thing))
3231 else:
-> 3232 raise TypeError(
3233 f"Expected a Runnable, callable or dict."
3234 f"Instead got an unsupported type: {type(thing)}"
3235 )
TypeError: Expected a Runnable, callable or dict.Instead got an unsupported type: < | Can't use RunnablePassthrough | https://api.github.com/repos/langchain-ai/langchain/issues/15085/comments | 3 | 2023-12-23T00:38:06Z | 2024-03-30T16:06:36Z | https://github.com/langchain-ai/langchain/issues/15085 | 2,054,597,980 | 15,085 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
SQLDatabaseChain is throwing a TypeError when executing the .run or .invoke functions. All arguments and kwargs passed to the class are valid, but the error persists. I have traced back to the database and there is a valid connection. My code and the error are below:
```python
sqlite_db = SQLDatabase.from_uri(
    "sqlite:///./sqlite/ibdss.sqlite3",
    sample_rows_in_table_info=2,
    include_tables=["colts"],
)
db_chain = SQLDatabaseChain.from_llm(
    llm,
    sqlite_db,
    return_direct=True,
    use_query_checker=True,
    return_intermediate_steps=True,
    verbose=True,
)
sqlite_response = db_chain.invoke({"query": "How many generators have over 10,000 hours of operation?"})
```
The issue happens when using .run as well.
Error Output:
TypeError: must be real number, not str
The traceback points to the invocation of the chain as the error area.
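One hedged way to narrow this down is to run the generated SQL directly with the stdlib `sqlite3` module: if the raw query succeeds, the TypeError comes from the chain's formatting or post-processing rather than from the database. The table columns and values below are made-up stand-ins for the real `colts` table:

```python
import sqlite3

# Made-up stand-in schema/data; swap ":memory:" for "./sqlite/ibdss.sqlite3"
# and run the exact SQL the chain generated to isolate the failure.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE colts (generator TEXT, hours REAL)")
conn.executemany("INSERT INTO colts VALUES (?, ?)", [("g1", 12000.0), ("g2", 800.0)])
rows = conn.execute("SELECT COUNT(*) FROM colts WHERE hours > 10000").fetchall()
print(rows)  # [(1,)]
```

If the direct query works but the chain still raises, checking the types in the intermediate steps (which the chain already returns via `return_intermediate_steps=True`) would be the next step.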
### Suggestion:
_No response_ | SQLDatabaseChain raising TypeError exception with SQLite | https://api.github.com/repos/langchain-ai/langchain/issues/15077/comments | 11 | 2023-12-22T20:25:25Z | 2024-07-12T18:11:07Z | https://github.com/langchain-ai/langchain/issues/15077 | 2,054,482,659 | 15,077 |
[
"langchain-ai",
"langchain"
] | ### System Info
I have observed that imports from `langchain.llms` (for example, `from langchain.llms import HuggingFaceHub`) cause no problem when I deploy my web app to hosting sites like Streamlit, Render, and Heroku.
However, `from langchain.memory import ConversationBufferMemory`
and `from langchain.prompts import PromptTemplate`
cause deployment to fail on all of those hosting sites: they throw a module-not-found error for `langchain.memory` and `langchain.prompts`.
Please note: that running in google colab or vscode does not have this error.
Also, the Python and LangChain versions seem irrelevant to this problem, since I have tested many combinations of both.
Please help out. Thanks in advance
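Before changing any code, it can help to confirm what the hosting environment can actually import, since the same imports work locally. A stdlib-only sketch (the loop over the LangChain module names is left as a comment so the snippet stays self-contained; run it with the real names on the deployment host):

```python
import importlib.util

def can_import(mod: str) -> bool:
    """True if `mod` is importable in the current environment."""
    try:
        return importlib.util.find_spec(mod) is not None
    except ModuleNotFoundError:  # raised when a parent package is missing
        return False

# On the deployment host, check the real modules, e.g.:
#   for m in ("langchain.llms", "langchain.memory", "langchain.prompts"):
#       print(m, can_import(m))
print(can_import("json"))             # True (stdlib module)
print(can_import("no_such_pkg_xyz"))  # False
```

If `langchain.llms` resolves but `langchain.memory` does not, the deployed environment likely installed a different (or partial) `langchain` distribution than the local one, so pinning the version in the requirements file would be the first thing to try.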
### Who can help?
@hwchase17
@agola11
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os

import soundfile as sf
import speech_recognition as sr
from flask import Flask, render_template, request, session, flash, get_flashed_messages
from langchain.llms import HuggingFaceHub
# The line below gives the error — not in VS Code or Google Colab, but on
# deployment to Render, Heroku, Streamlit, and other hosting sites.  I have not
# even used ConversationBufferMemory in the code below; the import alone causes
# the problem.
from langchain.memory import ConversationBufferMemory

# Initialize any API keys that are needed
os.environ["HUGGINGFACEHUB_API_TOKEN"] = "write your own api here"

app = Flask(__name__)
app.secret_key = '123'


@app.route("/LLMEXP", methods=['GET', 'POST'])
def llmexp():
    if 'chat_messages' not in session:
        session['chat_messages'] = []
    conversation = HuggingFaceHub(
        repo_id="google/flan-t5-small",
        model_kwargs={"temperature": 0.1, "max_length": 256},
    )
    if request.method == 'POST':
        if 'record_audio' in request.form:
            # Record audio and convert to text
            user_input = request.form['user_input']
        else:
            # User input from textarea
            user_input = request.form['user_input']
        if not user_input or user_input.isspace():
            flash("Please provide a valid input.")
        else:
            user_message = {'role': 'user', 'content': f"User: {user_input}"}
            session['chat_messages'].append(user_message)
            response = conversation(user_input)
            if not response.strip():
                response = "I didn't understand the question."
            # text_to_speech(response)
            assistant_message = {'role': 'assistant', 'content': f"Bot: {response}"}
            session['chat_messages'].append(assistant_message)
            session.modified = True
    return render_template("index.html", chat_messages=session['chat_messages'])


if __name__ == "__main__":
    app.run(debug=True)
```
### Expected behavior
I expect that I do not receive any error related to "langchain.memory" or "langchain.prompts" error while deploying to a hosting site. | Bugs in importing when Deploying a LangChain web app to multiple hosting platforms! | https://api.github.com/repos/langchain-ai/langchain/issues/15074/comments | 1 | 2023-12-22T19:12:34Z | 2024-03-29T16:08:40Z | https://github.com/langchain-ai/langchain/issues/15074 | 2,054,421,969 | 15,074 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Google's `gemini-pro` supports function calling. It would be nice to be able to use langchain to support function calling when using the `VertexAI` class similar to OpenAI and OpenAI's version of function calling: https://cloud.google.com/vertex-ai/docs/generative-ai/multimodal/function-calling
### Motivation
Here's a notebook where, to access this functionality, you have to use the `vertexai` library directly, which means we lose LangChain's standardized input and output schemas: https://github.com/GoogleCloudPlatform/generative-ai/blob/main/gemini/function-calling/intro_function_calling.ipynb
### Your contribution
Possibly can help. | Feature request: Vertex AI Function Calling | https://api.github.com/repos/langchain-ai/langchain/issues/15073/comments | 1 | 2023-12-22T19:10:42Z | 2024-03-29T16:08:35Z | https://github.com/langchain-ai/langchain/issues/15073 | 2,054,420,480 | 15,073 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.351
python==3.10
### Who can help?
@hwchase17 This should be an easy one to fix.
When using regex in the output parser of the StructuredChatAgent, the output parser cuts off the output at the first ending ``` it finds. For example, if my string was
````
```json
{
    "action": "Final Answer",
    "action_input": "Here is some basic Oceananigans code that sets up a simple simulation environment:\n\n```julia\nusing Oceananigans\n\n# Define the grid\ngrid = RectilinearGrid(size=(64, 64, 64), extent=(1, 1, 1))\n\n# Create a nonhydrostatic model\nmodel = NonhydrostaticModel(grid=grid)\n\n# The model is now ready for further configuration and running a simulation.\n```\n\nThis code initializes a `NonhydrostaticModel` on a 64x64x64 `RectilinearGrid` with an extent of 1x1x1 in each direction. The model is created with default settings, which you can customize according to your simulation needs."
}
```
````
Then the output parser captures only the text up to the first embedded ``` inside `action_input`, i.e. a truncated fragment of the JSON blob, which is not valid JSON and causes an error.
I have not thoroughly verified this solution, but changing the regex in the structured chat output parser -
https://github.com/langchain-ai/langchain/blob/aad3d8bd47d7f5598156ff2bdcc8f736f24a7412/libs/langchain/langchain/agents/structured_chat/output_parser.py#L23
to `` pattern = re.compile(r"```json\s*\n(.*?)```(?=\s*```json|\Z)", re.DOTALL) `` seems to fix the problem.
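A quick standalone check of the proposed pattern, using a shortened made-up payload in place of the full Oceananigans answer (the embedded `julia` fence inside `action_input` is exactly what trips up the current pattern):

```python
import json
import re

# Proposed replacement pattern from above: the lookahead only accepts a closing
# fence that is followed by another ```json block or the end of the string.
fixed = re.compile(r"```json\s*\n(.*?)```(?=\s*```json|\Z)", re.DOTALL)

TICKS = "`" * 3  # literal ``` fences, built up so this snippet stays displayable
text = (
    TICKS + "json\n"
    '{"action": "Final Answer", '
    '"action_input": "Some code:\\n' + TICKS + 'julia\\nx = 1\\n' + TICKS + '\\ndone."}\n'
    + TICKS
)

blob = json.loads(fixed.search(text).group(1).strip(), strict=False)
print(blob["action"])  # Final Answer
```

The whole blob parses, nested fences included, which is consistent with the claim that the revised regex fixes the truncation.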
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
````python
import json
import re

# Current pattern from output_parser.py — non-greedy, so it stops at the FIRST
# closing ``` it finds:
pattern = re.compile(r"```(?:json\s+)?(\W.*?)```", re.DOTALL)

text = """```json
{
    "action": "Final Answer",
    "action_input": "Here is some basic Oceananigans code that sets up a simple simulation environment:\n\n```julia\nusing Oceananigans\n\n# Define the grid\ngrid = RectilinearGrid(size=(64, 64, 64), extent=(1, 1, 1))\n\n# Create a nonhydrostatic model\nmodel = NonhydrostaticModel(grid=grid)\n\n# The model is now ready for further configuration and running a simulation.\n```\n\nThis code initializes a `NonhydrostaticModel` on a 64x64x64 `RectilinearGrid` with an extent of 1x1x1 in each direction. The model is created with default settings, which you can customize according to your simulation needs."
}
```"""

action_match = pattern.search(text)
# group(1) is cut off at the first embedded ``` inside action_input, so this raises:
response = json.loads(action_match.group(1).strip(), strict=False)
````
### Expected behavior
I would expect the pattern to capture the entire JSON blob, but it doesn't.
[
"langchain-ai",
"langchain"
] | in retrievalQa from langchain, we have a retriever that retrieves docs from a vector db and provides a context to the llm, let's say i'm using gpt3.5 whose max tokens is 4096... how do i handle huge context to be sent to it ? any suggestions will be appreciated | send context of docs through Chroma().as_retriever multiple times in the same conversation | https://api.github.com/repos/langchain-ai/langchain/issues/15062/comments | 1 | 2023-12-22T13:52:13Z | 2024-03-29T16:08:31Z | https://github.com/langchain-ai/langchain/issues/15062 | 2,053,953,258 | 15,062 |
[
"langchain-ai",
"langchain"
] | ### Discussed in https://github.com/langchain-ai/langchain/discussions/15060
<div type='discussions-op-text'>
<sup>Originally posted by **ShehneelAhmedKhan** December 22, 2023</sup>
This is my code:
```python
llm = OpenAI(temperature=0, model_name="gpt-3.5-turbo")
db_chain = SQLDatabaseSequentialChain.from_llm(llm=llm, database=db, verbose=True, top_k=0)  # Can use top_k if facing an error
return db_chain.run(message)
```
Output:

The answer is denying although the sqlresult gets the answer.
Note: This is happening only in this table, other tables are being answered correctly.
using langchain==0.0.175
Any suggestion?</div> | Using SQLDatabaseSequentialChain, discrepancy between Answer and SQLResult | https://api.github.com/repos/langchain-ai/langchain/issues/15061/comments | 2 | 2023-12-22T13:48:55Z | 2024-03-29T16:08:26Z | https://github.com/langchain-ai/langchain/issues/15061 | 2,053,949,456 | 15,061 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I want to force the model to call a specific function, but I couldn't find how to use `tool_choice` with `AgentExecutor` in the docs.
Please give me a demo. Thanks!
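A hedged sketch of the piece that is documented on the OpenAI side: the chat API forces a specific function via a `tool_choice` payload of the shape below. The tool name `get_weather` is made up, and the exact LangChain-side parameter names are assumptions to verify against your installed version:

```python
# Builds the OpenAI `tool_choice` payload that forces one named function.
# "get_weather" is a hypothetical tool name for illustration.
def force_function(name: str) -> dict:
    return {"type": "function", "function": {"name": name}}

choice = force_function("get_weather")
print(choice)  # {'type': 'function', 'function': {'name': 'get_weather'}}
```

One possible (unverified) way to apply it is to bind it onto the chat model before constructing the agent, e.g. `llm.bind(tools=my_tools, tool_choice=force_function("get_weather"))`, and pass the bound model where the agent expects its LLM; whether `initialize_agent` accepts a bound runnable should be checked against your version.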
### Suggestion:
_No response_ | How to use tool_choice with initialize_agent? | https://api.github.com/repos/langchain-ai/langchain/issues/15059/comments | 3 | 2023-12-22T13:20:20Z | 2024-03-29T16:08:20Z | https://github.com/langchain-ai/langchain/issues/15059 | 2,053,915,647 | 15,059 |
[
"langchain-ai",
"langchain"
] | ### Feature request
The `GoogleDriveLoader` currently supports 3 different ways to authenticate a user.
1. Via a Service Account File
2. Via a Token File
3. Via a Live server
All those ways work perfectly, but I'm missing a way to authenticate the user via an existing JWT. There is a workaround by saving the JWT into a token file and providing this file via the `token_path` parameter. But there should also be a way to provide exactly these credentials via a parameter.
### Motivation
Maybe there is a use case where multiple accounts are required, or the user wants to authenticate over a third-party website running on an external server (not localhost). I think it would be useful to have a parameter that allows the functionality to pass a JWT without having to store it first in a file.
### Your contribution
I'm willing to submit a PR. | Make it possible to give credentials directly via parameter on GoogleDriveLoader | https://api.github.com/repos/langchain-ai/langchain/issues/15058/comments | 3 | 2023-12-22T12:59:29Z | 2024-06-30T23:27:12Z | https://github.com/langchain-ai/langchain/issues/15058 | 2,053,889,545 | 15,058 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi guys, this world requires a LangChain framework written in the Rust language; Python is not the future of AI.
### Suggestion:
_No response_ | this world requires a Langchain framework written in the Rust language | https://api.github.com/repos/langchain-ai/langchain/issues/15057/comments | 3 | 2023-12-22T11:01:51Z | 2024-03-29T16:08:15Z | https://github.com/langchain-ai/langchain/issues/15057 | 2,053,758,780 | 15,057 |
[
"langchain-ai",
"langchain"
] | ### Feature request
The [Snowflakeconnector](https://docs.snowflake.com/en/developer-guide/python-connector/python-connector-api#functions) supports authentication via browser. It would be nice if the [Langchain Snowflake Loader](https://python.langchain.com/docs/integrations/document_loaders/snowflake) also supports this feature
### Motivation
Basic authentication works in most cases. But there might be a use case where the user wants to use OAuth to log into his Snowflake instance.
### Your contribution
I'm willing to submit a PR | Add external browser authentication for Snowflake. | https://api.github.com/repos/langchain-ai/langchain/issues/15056/comments | 1 | 2023-12-22T10:43:44Z | 2024-03-29T16:08:10Z | https://github.com/langchain-ai/langchain/issues/15056 | 2,053,735,631 | 15,056 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi,
I want to use the `ContextualCompressionRetriever` and am wondering what its prompt looks like, and whether it can be used for non-English languages (e.g. German).
I am using `ContextualCompressionRetriever` at the moment and have noticed that my LLM responses often switch to English, so I assume the compressor's prompt is in English and the model gets confused about which language to use.
Any recommendations?
### Suggestion:
_No response_ | ContextualCompressionRetriever for non-english languages | https://api.github.com/repos/langchain-ai/langchain/issues/15052/comments | 1 | 2023-12-22T08:16:48Z | 2024-03-29T16:08:05Z | https://github.com/langchain-ai/langchain/issues/15052 | 2,053,554,200 | 15,052 |
[
"langchain-ai",
"langchain"
] | ### System Info
When I use the SQL agent I want to get a table description, but the agent can't do it, even though I am sure the REPICA.ICA_PERSON_DATA_ALL table exists.
Question: Describe the REPICA.ICA_PERSON_DATA_ALL table
Thought: I should query the schema of the REPICA.ICA_PERSON_DATA_ALL table to get information about its columns and data types.
Action: sql_db_schema
Action Input: REPICA.ICA_PERSON_DATA_ALL
Observation: Error: table_names {'REPICA.ICA_PERSON_DATA_ALL'} not found in database
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
llm = ChatOpenAI(
    openai_api_base=openai_api_base,
    temperature=llm_temp,
    openai_api_key=openai_api_key,
    model_name=llm_model_name,
)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent_executor = create_sql_agent(
    llm=llm,
    toolkit=toolkit,
    prefix=prefix,
    verbose=True,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    top_k=10,
)
agent_executor.run("Describe the REPICA.ICA_PERSON_DATA_ALL table")
```
### Expected behavior
I need to get the REPICA.ICA_PERSON_DATA_ALL table description.
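A likely cause, offered as an assumption rather than a verified diagnosis: `sql_db_schema` looks the name up among the tables of the connection's default schema, so the schema-qualified string `REPICA.ICA_PERSON_DATA_ALL` never matches. One hedged workaround is to pass the schema separately (e.g. `SQLDatabase.from_uri(uri, schema="REPICA")`, treating that parameter as something to check against your version) and refer to the table as plain `ICA_PERSON_DATA_ALL`. A small helper to split a qualified name:

```python
def split_qualified(name: str):
    """Split 'SCHEMA.TABLE' into (schema, table); schema is None when absent."""
    schema, _, table = name.rpartition(".")
    return (schema or None, table)

print(split_qualified("REPICA.ICA_PERSON_DATA_ALL"))  # ('REPICA', 'ICA_PERSON_DATA_ALL')
print(split_qualified("ICA_PERSON_DATA_ALL"))         # (None, 'ICA_PERSON_DATA_ALL')
```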
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.10.10
langchain 0.0.350
langchain-community 0.0.3
langchain-core 0.1.1
google-search-results 2.4.2
Windows
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
**Summary**
First, I ask the LangChain agent using the code below and get a good response.
`result = agent_executor({"What is the recommended dose of doxycycline for acne for a male aged 12 years old?"})`
However, when I ask a new follow-up question by changing the query, I get an error.
`result = agent_executor({"What other forms of acne treatment are there for male teenagers around the age of 12 years old ?"})`
`BadRequestError: Error code: 400 - {'error': {'message': "'$.messages[1].content' is invalid.`
**Code Used**
```python
model = AzureChatOpenAI(
    openai_api_version="2023-12-01-preview",
    azure_deployment="35-16k",
    temperature=0,
)

# Define which tools the agent can use to answer user queries
SERPAPI_API_KEY = '------'
search = SerpAPIWrapper(serpapi_api_key=SERPAPI_API_KEY)
tools = [
    Tool(
        name="Search",
        func=search.run,
        description="useful for when you need to answer questions about current events",
    )
]

# This is needed for both the memory and the prompt
memory_key = "history"

from langchain.agents.openai_functions_agent.agent_token_buffer_memory import (
    AgentTokenBufferMemory,
)

memory = AgentTokenBufferMemory(memory_key=memory_key, llm=model, max_token_limit=13000)

from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent
from langchain.prompts import MessagesPlaceholder
from langchain_core.messages import SystemMessage

system_message = SystemMessage(
    content=(
        "You are an expert Doctor."
        "Use the web search tool to gain insights from web search, summarise the results and provide an answer."
        "Keep the source links during generation to produce inline citation."
        "Cite search results using [${{number}}] notation. Only cite the most "
        "relevant results that answer the question accurately. Place these citations at the end "
        "of the sentence or paragraph that reference them - do not put them all at the end. If "
        "different results refer to different entities within the same name, write separate "
        "answers for each entity. If you want to cite multiple results for the same sentence, "
        "format it as `[${{number1}}] [${{number2}}]`. However, you should NEVER do this with the "
        "same number - if you want to cite `number1` multiple times for a sentence, only do "
        "`[${{number1}}]` not `[${{number1}}] [${{number1}}]`"
        "From the retrieved results, generate an answer."
        "At the end of the answer, show the source to the [${{number}}]. Follow this format, every [${{number}}] notation is followed by the corresponding source link "
        "You should use bullet points in your answer for readability. Put citations where they apply "
        "rather than putting them all at the end."
    )
)

prompt = OpenAIFunctionsAgent.create_prompt(
    system_message=system_message,
    extra_prompt_messages=[MessagesPlaceholder(variable_name=memory_key)],
)

# The agent
agent = OpenAIFunctionsAgent(llm=model, tools=tools, prompt=prompt)

# The agent executor
from langchain.agents import AgentExecutor

agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    memory=memory,
    verbose=True,
    return_intermediate_steps=True,
)

# NOTE: {"..."} is a Python *set* literal, not a dict. If this was meant to be
# an input mapping it should probably be {"input": "..."}; a set flowing into
# the memory/messages could itself produce non-string content in the request.
result = agent_executor({"What is the recommended dose of doxycycline for acne for a male aged 12 years old?"})
```
**Output**

The first call returns good results.

**Next**, I run `agent_executor` with another question, and this is where I hit the bug:

`result = agent_executor({"_new question_"})`

Even if I just repeat the question (i.e. use the initial one), I get this error.

**Error**
```
---------------------------------------------------------------------------
BadRequestError                           Traceback (most recent call last)
Cell In[15], line 1
----> 1 result = agent_executor({"What is the recommended dose of doxycycline for acne for a male aged 12 years old?"})

File /anaconda/envs/langchain/lib/python3.10/site-packages/langchain/chains/base.py:312, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    310 except BaseException as e:
    311     run_manager.on_chain_error(e)
--> 312     raise e
    313 run_manager.on_chain_end(outputs)
    314 final_outputs: Dict[str, Any] = self.prep_outputs(
    315     inputs, outputs, return_only_outputs
    316 )

File /anaconda/envs/langchain/lib/python3.10/site-packages/langchain/chains/base.py:306, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    299 run_manager = callback_manager.on_chain_start(
    300     dumpd(self),
    301     inputs,
    302     name=run_name,
    303 )
    304 try:
    305     outputs = (
--> 306         self._call(inputs, run_manager=run_manager)
    307         if new_arg_supported
```
[308](https://vscode-remote+amlext-002b2f737562736372697074696f6e732f31653065376137362d306236362d343632622d613831352d3733363863633464363832312f7265736f7572636547726f7570732f47656e657261746976655f41492f70726f7669646572732f4d6963726f736f66742e4d616368696e654c6561726e696e6753657276696365732f776f726b7370616365732f6c6c616d612f636f6d70757465732f7368616d75732d637075.vscode-resource.vscode-cdn.net/anaconda/envs/langchain/lib/python3.10/site-packages/langchain/chains/base.py:308) else self._call(inputs)
...
(...)
[937](https://vscode-remote+amlext-002b2f737562736372697074696f6e732f31653065376137362d306236362d343632622d613831352d3733363863633464363832312f7265736f7572636547726f7570732f47656e657261746976655f41492f70726f7669646572732f4d6963726f736f66742e4d616368696e654c6561726e696e6753657276696365732f776f726b7370616365732f6c6c616d612f636f6d70757465732f7368616d75732d637075.vscode-resource.vscode-cdn.net/anaconda/envs/langchain/lib/python3.10/site-packages/openai/_base_client.py:937) stream_cls=stream_cls,
[938](https://vscode-remote+amlext-002b2f737562736372697074696f6e732f31653065376137362d306236362d343632622d613831352d3733363863633464363832312f7265736f7572636547726f7570732f47656e657261746976655f41492f70726f7669646572732f4d6963726f736f66742e4d616368696e654c6561726e696e6753657276696365732f776f726b7370616365732f6c6c616d612f636f6d70757465732f7368616d75732d637075.vscode-resource.vscode-cdn.net/anaconda/envs/langchain/lib/python3.10/site-packages/openai/_base_client.py:938) )
BadRequestError: Error code: 400 - {'error': {'message': "'$.messages[1].content' is invalid. Please check the API reference: https://platform.openai.com/docs/api-reference.", 'type': 'invalid_request_error', 'param': None, 'code': None}}`
### Expected behavior
Formatted normal output | Error Code 400. Can ask follow up questions with agent_executor | https://api.github.com/repos/langchain-ai/langchain/issues/15050/comments | 4 | 2023-12-22T07:06:47Z | 2024-04-11T16:17:10Z | https://github.com/langchain-ai/langchain/issues/15050 | 2,053,484,432 | 15,050 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Microsoft's new model called [Phi](https://www.microsoft.com/en-us/research/blog/phi-2-the-surprising-power-of-small-language-models/) seems interesting...
### Motivation
Smaller models with 2-3B parameters can perform comparably to models with 7B parameters.
### Your contribution
I don't know enough to do so, otherwise I would submit a PR... | add Microsoft/Phi support | https://api.github.com/repos/langchain-ai/langchain/issues/15049/comments | 1 | 2023-12-22T06:28:54Z | 2024-03-29T16:07:55Z | https://github.com/langchain-ai/langchain/issues/15049 | 2,053,449,472 | 15,049
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
When I use an agent tool, I need to verify whether the required parameters are present in the user's question. If any parameters are missing, I want to remind the user to provide them. How can I implement this?
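One possible approach (a plain-Python sketch, not tied to LangChain's API; `run_tool` and its parameter names are made up for illustration) is to validate the extracted arguments before executing the tool and, when something is missing, return a message that asks the user for it:

```python
def run_tool(tool_args: dict, required: list) -> str:
    """Check a tool's arguments before running it; if any required
    parameter is missing, return a message asking the user for it."""
    missing = [name for name in required if not tool_args.get(name)]
    if missing:
        # Instead of calling the tool, prompt the user for the gaps.
        return "Please provide the following missing parameter(s): " + ", ".join(missing)
    return "tool executed with " + repr(tool_args)

# Example: a hypothetical weather tool that needs both a city and a date.
print(run_tool({"date": "2023-12-22"}, required=["city", "date"]))
```

The same check could live inside the tool function itself, so the agent's observation becomes the reminder message and the agent relays it back to the user.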
### Suggestion:
_No response_ | When I use the agent tool, I need to verify whether there are parameters in the problem. If there are no parameters, I will remind the user to input them. How can I implement this? | https://api.github.com/repos/langchain-ai/langchain/issues/15048/comments | 2 | 2023-12-22T06:18:00Z | 2024-03-29T16:07:50Z | https://github.com/langchain-ai/langchain/issues/15048 | 2,053,440,880 | 15,048 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain==0.0.352
MacOS intel version
python==3.8.3
networkx==2.7.1
### Who can help?
@hwchase17 @agola11 I have an issue when initializing GraphQAChain using networkx graph.
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I made a networkx graph and tried to initialize GraphQAChain but got the below error.
<img width="1010" alt="image" src="https://github.com/langchain-ai/langchain/assets/41593805/d4ba54bc-bf43-4303-8550-8ef75fb4b7e0">
<img width="1482" alt="image" src="https://github.com/langchain-ai/langchain/assets/41593805/c4286425-7df7-4c3a-bbc5-3e6e92a69786">
### Expected behavior
I would like to initialize GraphQAChain and chat with the network.
Please help! | GraphQAChain not working when using Networkx graphs !! | https://api.github.com/repos/langchain-ai/langchain/issues/15046/comments | 6 | 2023-12-22T03:28:50Z | 2024-07-27T15:00:45Z | https://github.com/langchain-ai/langchain/issues/15046 | 2,053,324,243 | 15,046 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain 0.0.348
langchain-nvidia-trt 0.0.1rc0
Python 3.11
### Who can help?
@jdye64
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
You can set up the triton server as in the quick start guide here using a nemotron model:
https://github.com/fciannella/langchain-fciannella/blob/master/libs/partners/nvidia-trt/README.md
Then you can just try sending a request from the LangChain plugin and it will not work.
Same thing if you follow the more complex setup.
The input and output parameters need to be discovered by the client. One option is to use pytriton as a client; another is to pull the parameters from the server using an API call that returns the Triton configuration.
### Expected behavior
The client should provide a list of the mandatory parameters in the error code in case there is a missing parameter in the request. | NVIDIA Triton+TRT-LLM connector needs to handle dynamic model parameters | https://api.github.com/repos/langchain-ai/langchain/issues/15045/comments | 2 | 2023-12-22T02:17:48Z | 2024-06-08T16:08:15Z | https://github.com/langchain-ai/langchain/issues/15045 | 2,053,274,172 | 15,045 |
[
"langchain-ai",
"langchain"
] | https://github.com/langchain-ai/langchain/blob/2460f977c5c20073b41803c41fd08945be34cd60/libs/langchain/langchain/agents/output_parsers/openai_functions.py#L49
Even though you pass your own customized functions, gpt-3.5 will often return a function call whose name is "python" and whose arguments are not in JSON format.
I would suggest checking the function name here (an idea borrowed from the Open Interpreter source code).
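A minimal sketch of the suggested guard (assumptions: `function_call` has the shape returned by the OpenAI API, and `known_tools` is a hypothetical registry of the functions you actually passed):

```python
import json

known_tools = {"get_word_length"}  # hypothetical registry of registered tools

def parse_function_call(function_call: dict):
    """Validate the name and arguments of a model-emitted function call.

    gpt-3.5 sometimes emits a call named "python" whose arguments are raw
    code rather than JSON, so check both before trusting the output.
    """
    name = function_call.get("name")
    if name not in known_tools:
        raise ValueError(f"model called unknown function {name!r}")
    try:
        args = json.loads(function_call.get("arguments", "{}"))
    except json.JSONDecodeError:
        # Fall back: treat the raw string as a single positional-style input.
        args = {"__arg1": function_call["arguments"]}
    return name, args

name, args = parse_function_call(
    {"name": "get_word_length", "arguments": '{"word": "educa"}'}
)
```

A parser along these lines could surface a clear error (or a retry) instead of crashing when the arguments are not a dict.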
Thanks,
ZD | BUG! The arguments of function calling returned by gpt 3.5 might not be a dict | https://api.github.com/repos/langchain-ai/langchain/issues/15043/comments | 3 | 2023-12-22T01:48:52Z | 2024-04-10T16:14:54Z | https://github.com/langchain-ai/langchain/issues/15043 | 2,053,256,211 | 15,043 |
[
"langchain-ai",
"langchain"
] | ### System Info
The example found [here](https://python.langchain.com/docs/integrations/vectorstores/azuresearch) and in particular this code fragment
```
embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1)
index_name: str = "langchain-vector-demo"
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=vector_store_address,
azure_search_key=vector_store_password,
index_name=index_name,
embedding_function=embeddings.embed_query,
)
```
fails with a message that at the top is:
```
vector_search_configuration is not a known attribute of class <class 'azure.search.documents.indexes.models._index.SearchField'> and will be ignored
semantic_settings is not a known attribute of class <class 'azure.search.documents.indexes.models._index.SearchIndex'> and will be ignored
```
and culminates with:
```
HttpResponseError: (InvalidRequestParameter) The request is invalid. Details: definition : The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set.
Code: InvalidRequestParameter
Message: The request is invalid. Details: definition : The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set.
Exception Details: (InvalidField) The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set. Parameters: definition
Code: InvalidField
Message: The vector field 'content_vector' must have the property 'vectorSearchConfiguration' set. Parameters: definition
```
I am running with `python 3.10`, `openai 1.5.0`, `langchain 0.0.351` and `azure-search-documents 11.4.0`. If I revert to `azure-search-documents 11.4.0b8` the code works.
This appears to be related to the November 2023 Microsoft API change which introduced the concept of a "profile" which aggregates various vector search settings under one name. As a result of this API change, the old "vector_search_configuration" was deprecated and a new "vector_search_profile" was added, along with a new "profiles" object. It seems that the Langchain extension has not been updated for this change and expects the old "vector_search_configuration" property which doesn't exist on newer SDK releases.
See the discussion [here](https://stackoverflow.com/questions/77682544/vector-search-configuration-is-not-a-known-attribute-of-class-class-azure-sear/77694628#77694628).
### Who can help?
@hwchase17
@bas
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The following sample code fragment from [here:(https://python.langchain.com/docs/integrations/vectorstores/azuresearch) fails.
```
import os
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores.azuresearch import AzureSearch
os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_BASE"] = "YOUR_OPENAI_ENDPOINT"
os.environ["OPENAI_API_KEY"] = "YOUR_OPENAI_API_KEY"
os.environ["OPENAI_API_VERSION"] = "2023-05-15"
model: str = "text-embedding-ada-002"
vector_store_address: str = "YOUR_AZURE_SEARCH_ENDPOINT"
vector_store_password: str = "YOUR_AZURE_SEARCH_ADMIN_KEY"
embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1)
index_name: str = "langchain-vector-demo"
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=vector_store_address,
azure_search_key=vector_store_password,
index_name=index_name,
embedding_function=embeddings.embed_query,
)
```
### Expected behavior
The code should run without failure and return a vector store. | AzureSearch Bug -- langchain.vectorstores.azuresearch | https://api.github.com/repos/langchain-ai/langchain/issues/15039/comments | 5 | 2023-12-21T23:45:19Z | 2024-06-01T00:07:40Z | https://github.com/langchain-ai/langchain/issues/15039 | 2,053,180,711 | 15,039 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Seeing an issue in my code that appeared out of nowhere, hoping for some support here. The error message I am seeing is `ValueError: Could not parse output: Answer to inquiry from OpenAI. Score: 90` from the output_parsers/regex.py (https://github.com/langchain-ai/langchain/blob/v0.0.257/libs/langchain/langchain/output_parsers/regex.py#L35)
Langchain version: v0.0.257
LLM OpenAI: https://github.com/langchain-ai/langchain/blob/v0.0.257/libs/langchain/langchain/llms/openai.py
Model: text-davinci-003
Chain type: map_rerank (https://github.com/langchain-ai/langchain/blob/v0.0.257/libs/langchain/langchain/chains/combine_documents/map_rerank.py)
Vector store: Qdrant
Code to reproduce:
```python
import os
from pathlib import Path
from langchain import OpenAI, PromptTemplate
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.memory import FileChatMessageHistory
import logging
from service import db
logging.basicConfig(level=logging.DEBUG)
def conversation(question):
collection_name = "qdrant_collection_name"
llm = OpenAI(model='text-davinci-003', openai_api_key=os.environ['OPENAI_API_KEY'], temperature=0.0)
combine_prompt_template = """
'DOCUMENTS:
{summaries}
QUESTION:
{question}
### Response:
"""
COMBINE_PROMPT = PromptTemplate(
template=combine_prompt_template, input_variables=["summaries", "question"]
)
qa_chain = RetrievalQAWithSourcesChain.from_chain_type(
llm,
chain_type='map_rerank',
retriever=db.vectorstore(collection_name).as_retriever(),
return_source_documents=True,
reduce_k_below_max_tokens=True,
)
return qa_chain({"question": question})['answer']
if __name__ == "__main__":
print(conversation('How do I add sales tax to a transaction in avatax?'))
```
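A toy reproduction of the failure (note: the exact pattern used by map_rerank's `RegexParser` in your installed version may differ; the pattern below is an approximation for illustration):

```python
import re

# Pattern in the spirit of map_rerank's parser: answer text, then
# "Score:" on its own line.
pattern = r"(.*?)\nScore: (\d+)"

good = "Answer to inquiry from OpenAI.\nScore: 90"
bad = "Answer to inquiry from OpenAI. Score: 90"   # score on the same line

assert re.search(pattern, good) is not None
assert re.search(pattern, bad) is None  # this shape triggers the ValueError

# A more tolerant pattern accepts either layout:
tolerant = r"(.*?)\s*Score:\s*(\d+)"
m = re.search(tolerant, bad)
```

If the model puts the score on the same line as the answer, the strict pattern finds no match and the parser raises "Could not parse output", which matches the error text above.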
Also posting on Stackoverflow
### Suggestion:
N/A | Issue: ValueError: Could not parse output: | https://api.github.com/repos/langchain-ai/langchain/issues/15037/comments | 1 | 2023-12-21T23:20:12Z | 2024-03-28T16:08:43Z | https://github.com/langchain-ai/langchain/issues/15037 | 2,053,166,082 | 15,037 |
[
"langchain-ai",
"langchain"
] | ## Describe the problem
When do inference (with the llm or chat model) we pass a empty list to POST request, when the "stop" attribute of `_create_stream` is not set.
This create a problem when using model, bc ollama override the stop sequence list of the model with the list we pass in the request
## Solution
I tried in local, and by deleting this [line](https://github.com/langchain-ai/langchain/blob/1b01ee0e3c7f0df5855c7440d471ddab4f0efc7e/libs/community/langchain_community/llms/ollama.py#L162C1-L162C1) , the stop list will be set by ollama when we don't have either a "global stop" (the one we define in the costructor of the ollama or chatollama class) or a stop sequence from the calling of the function.
Also ollama know support a new endpoint to retrive information about the model (using GET /api/show), but need parsing so Maybe it's a idea for the future...
Yes, i know
"if you are working with one model, just put the stop sequences and ez" but i think is very bad...
| Ollama integration: The stop sequence is empty when do inference | https://api.github.com/repos/langchain-ai/langchain/issues/15024/comments | 3 | 2023-12-21T19:04:00Z | 2024-03-28T16:08:37Z | https://github.com/langchain-ai/langchain/issues/15024 | 2,052,931,048 | 15,024 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I am developing an application using Langchain with the following code
```
from langchain.agents.agent_types import AgentType
from langchain.chat_models import ChatOpenAI
from langchain_experimental.agents.agent_toolkits import create_pandas_dataframe_agent
import pandas as pd
from langchain.llms import OpenAI
df = pd.read_csv("titanic.csv")
agent = create_pandas_dataframe_agent(OpenAI(temperature=0), df, verbose=True)
agent.run("how many rows are there?")
```
I would like to know if the agent can return a DataFrame instead of the response as text. For example, if I ask it to add a column, it would return a DataFrame as a response with the added column.
### Motivation
A new feature would help in automation.
### Your contribution
What can I do to help with the project, count me in! | It's possible agent Langchain return a Pandas Dataframe? | https://api.github.com/repos/langchain-ai/langchain/issues/15020/comments | 1 | 2023-12-21T18:54:13Z | 2024-03-28T16:08:32Z | https://github.com/langchain-ai/langchain/issues/15020 | 2,052,920,653 | 15,020 |
[
"langchain-ai",
"langchain"
] | ### System Info
I am using below packages.
Python 3.12.1
langchain 0.0.352
pydantic 2.5.2
openai 1.4.0
huggingface-hub 0.19.4
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
While executing the below code from deeplearning.ai
from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import CSVLoader
from langchain.vectorstores import DocArrayInMemorySearch
from IPython.display import display, Markdown
file = 'OutdoorClothingCatalog_1000.csv'
loader = CSVLoader(file_path=file, encoding='utf8')
from langchain.indexes import VectorstoreIndexCreator
index = VectorstoreIndexCreator(
vectorstore_cls=DocArrayInMemorySearch
).from_loaders([loader])
### Expected behavior
Code should successfully execute | AttributeError while executing VectorstoreIndexCreator & DocArrayInMemorySearch | https://api.github.com/repos/langchain-ai/langchain/issues/15016/comments | 10 | 2023-12-21T16:34:48Z | 2024-04-05T16:07:25Z | https://github.com/langchain-ai/langchain/issues/15016 | 2,052,739,238 | 15,016 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
# Below is my code
```python
def generate_custom_prompt(query):
    # Create the custom prompt template
    custom_prompt_template = f"""You are a chatbot designed to provide helpful answers to user questions. If you encounter a question for which you don't know the answer, please respond with 'I don't know' and refrain from making up an answer.
User's Question: {query}
In your response, aim for clarity and conciseness. Provide information that directly addresses the user's query. If the answer requires additional details, feel free to ask clarifying questions.
Remember, your goal is to assist the user in the best way possible. If the question is unclear or ambiguous, you can ask for clarification.
Your Helpful Answer:"""

    # Create the PromptTemplate
    custom_prompt = PromptTemplate(
        template=custom_prompt_template, input_variables=["query"]
    )

    # Format the prompt
    formatted_prompt = custom_prompt.format(query=query)
    return formatted_prompt
```
# Below is the error I am getting
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/hs/CustomBot/chatbot/views.py", line 360, in GetChatResponse
custom_message=generate_custom_prompt(query)
File "/home/hs/CustomBot/accounts/common_langcain_qa.py", line 40, in generate_custom_prompt
formatted_prompt = custom_prompt.format(query=query)
File "/home/hs/env/lib/python3.8/site-packages/langchain_core/prompts/prompt.py", line 132, in format
return DEFAULT_FORMATTER_MAPPING[self.template_format](self.template, **kwargs)
File "/usr/lib/python3.8/string.py", line 163, in format
return self.vformat(format_string, args, kwargs)
File "/home/hs/env/lib/python3.8/site-packages/langchain_core/utils/formatting.py", line 29, in vformat
return super().vformat(format_string, args, kwargs)
File "/usr/lib/python3.8/string.py", line 168, in vformat
self.check_unused_args(used_args, args, kwargs)
File "/home/hs/env/lib/python3.8/site-packages/langchain_core/utils/formatting.py", line 18, in check_unused_args
raise KeyError(extra)
KeyError: {'query'}
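One plausible cause, judging from the traceback (an educated guess, not a confirmed diagnosis): `custom_prompt_template` is an f-string, so `{query}` is substituted the moment the string is built; the template handed to `PromptTemplate` therefore contains no `{query}` placeholder, and `format(query=...)` then passes an unused keyword, which LangChain's strict formatter rejects. A minimal demonstration:

```python
query = "What is LangChain?"

# f-string: {query} is substituted right away
template_fstring = f"User's Question: {query}"
assert "{query}" not in template_fstring  # no placeholder survives

# plain string: the placeholder is kept for .format() to fill later
template_plain = "User's Question: {query}"
assert template_plain.format(query=query) == template_fstring
```

If that is the issue, writing the template as a plain (non-f) string would keep the placeholder for `PromptTemplate` to fill.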
### Suggestion:
_No response_ | Issue: not getting output as per my prompt template | https://api.github.com/repos/langchain-ai/langchain/issues/15014/comments | 5 | 2023-12-21T15:28:03Z | 2024-04-18T16:34:57Z | https://github.com/langchain-ai/langchain/issues/15014 | 2,052,630,688 | 15,014 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am following the documentation at https://python.langchain.com/docs/modules/agents/, but as I only have access to an Azure deployment of OpenAI, there is a small deviation from the tutorial. When running the code:
```python
from langchain.chat_models import AzureChatOpenAI
from langchain.agents import tool
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser
import os

os.environ["OPENAI_API_TYPE"] = "azure"
os.environ["OPENAI_API_VERSION"] = "2023-07-01-preview"

# Define the AzureChatOpenAI model
llm = AzureChatOpenAI(
    azure_endpoint="https://ENDPOINT.openai.azure.com",
    deployment_name="NAME",
    api_key="KEY",
    temperature=0.5
)

### Create tool
from langchain.agents import tool

@tool
def get_word_length(word: str) -> int:
    """Returns the length of a word."""
    return len(word)

tools = [get_word_length]

### create prompt
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder

prompt = ChatPromptTemplate.from_messages(
    [
        (
            "system",
            "You are very powerful assistant, but bad at calculating lengths of words.",
        ),
        ("user", "{input}"),
        MessagesPlaceholder(variable_name="agent_scratchpad"),
    ]
)

### llm with tool
from langchain.tools.render import format_tool_to_openai_function

llm_with_tools = llm.bind(functions=[format_tool_to_openai_function(t) for t in tools])

#### create agent
from langchain.agents.format_scratchpad import format_to_openai_function_messages
from langchain.agents.output_parsers import OpenAIFunctionsAgentOutputParser

agent = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: format_to_openai_function_messages(
            x["intermediate_steps"]
        ),
    }
    | prompt
    | llm_with_tools
    | OpenAIFunctionsAgentOutputParser()
)

agent.invoke({"input": "how many letters in the word educa?", "intermediate_steps": []})
```
I get the following error message, which arises from agent.invoke(...):
openai.NotFoundError: Error code: 404 - {'error': {'message': 'Unrecognized request argument supplied: functions', 'type': 'invalid_request_error', 'param': None, 'code': None}}
### Suggestion:
_No response_ | Error message when adhering to Agents Langchain documentation only substituting OpenAI with AzureOpenAI: "openai.NotFoundError: Error code: 404 - {'error': {'message': 'Unrecognized request argument supplied: functions', 'type': 'invalid_request_error', 'param': None, 'code': None}}" | https://api.github.com/repos/langchain-ai/langchain/issues/15012/comments | 2 | 2023-12-21T14:47:39Z | 2024-04-24T16:40:51Z | https://github.com/langchain-ai/langchain/issues/15012 | 2,052,564,532 | 15,012 |
[
"langchain-ai",
"langchain"
] | ### System Info
Everything is latest
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.document_loaders import PyPDFLoader
loader = PyPDFLoader("path/to/document")
pages = loader.load_and_split()
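If the embedded newlines are unwanted, one workaround (a sketch; `clean_text` is a hypothetical helper, applied per document via `doc.page_content = clean_text(doc.page_content)`) is to post-process the loaded text, since PyPDF preserves the PDF's hard line breaks:

```python
def clean_text(text: str) -> str:
    """Collapse the hard line breaks PyPDF keeps from the PDF layout
    into single spaces, without merging words together."""
    return " ".join(text.split())

page_text = "line one\nline two\nline three"
cleaned = clean_text(page_text)  # "line one line two line three"
```

Note that stripping newlines in the loader itself would also remove paragraph boundaries, which some splitters rely on, so doing it as an explicit post-processing step may be the safer default.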
### Expected behavior
Documents should not have "/n" | In Pypdf loader "/n " is not removed before creating documents | https://api.github.com/repos/langchain-ai/langchain/issues/15011/comments | 7 | 2023-12-21T14:05:31Z | 2024-08-03T07:40:18Z | https://github.com/langchain-ai/langchain/issues/15011 | 2,052,488,253 | 15,011 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
To adapt the prompts written for OpenAI to some other local LLM, what's the best way to update the prompts in langchain.chains.query_constructor.prompt while keeping the rest of the code the same?
### Suggestion:
_No response_ | Issue: update prompts for local LLM | https://api.github.com/repos/langchain-ai/langchain/issues/15008/comments | 8 | 2023-12-21T12:27:35Z | 2024-04-23T17:06:13Z | https://github.com/langchain-ai/langchain/issues/15008 | 2,052,333,901 | 15,008 |
[
"langchain-ai",
"langchain"
] | ### System Info
Traceback (most recent call last):
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_community\vectorstores\azuresearch.py", line 97, in _get_search_client
from azure.search.documents.indexes.models import (
ImportError: cannot import name 'HnswAlgorithmConfiguration' from 'azure.search.documents.indexes.models' (C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\azure\search\documents\indexes\models\__init__.py)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Users\vivek\OneDrive\Desktop\Hackathon\userdoc1.py", line 47, in <module>
vector_store: AzureSearch = AzureSearch(
^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_community\vectorstores\azuresearch.py", line 299, in __init__
self.client = _get_search_client(
^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_community\vectorstores\azuresearch.py", line 105, in _get_search_client
from azure.search.documents.indexes.models import (
ImportError: cannot import name 'HnswVectorSearchAlgorithmConfiguration' from 'azure.search.documents.indexes.models' (C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\azure\search\documents\indexes\models\__init__.py)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
embedding_model="BAAI/bge-small-en-v1.5"
embeddings= HuggingFaceEmbeddings(model_name=embedding_model)
# embeddings = AzureOpenAIEmbeddings(deployment=model, chunk_size=100)
# embeddings = AzureOpenAIEmbeddings(azure_deployment="T-ada-002",openai_api_version="2023-05-15")
index_name: str = "i17"
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=vector_store_address,
azure_search_key=os.environ.get("AZURE_COGNITIVE_SEARCH_API_KEY"),
index_name=index_name,
embedding_function=embeddings.embed_query,
)
loader = AzureBlobStorageContainerLoader(
conn_str=os.environ.get("AZURE_CONN_STRING"),
container=os.environ.get("CONTAINER_NAME"),
)
documents = loader.load()
text_splitter = CharacterTextSplitter(chunk_size=150, chunk_overlap=20)
docs = text_splitter.split_documents(documents)
vector_store.add_documents(documents=docs)
### Expected behavior
Documents should be indexed into the Azure Cognitive Search vector store without import errors. | Azure | https://api.github.com/repos/langchain-ai/langchain/issues/15007/comments | 1 | 2023-12-21T11:43:02Z | 2024-03-28T16:08:28Z | https://github.com/langchain-ai/langchain/issues/15007 | 2,052,264,900 | 15,007
[
"langchain-ai",
"langchain"
] | ### System Info
aiohttp==3.9.1
aiosignal==1.3.1
annotated-types==0.6.0
anyio==4.2.0
attrs==23.1.0
cachetools==5.3.2
certifi==2023.11.17
charset-normalizer==3.3.2
colorama==0.4.6
dataclasses-json==0.6.3
distro==1.8.0
frozenlist==1.4.1
google-ai-generativelanguage==0.4.0
google-api-core==2.15.0
google-auth==2.25.2
google-generativeai==0.3.2
googleapis-common-protos==1.62.0
greenlet==3.0.2
grpcio==1.60.0
grpcio-status==1.60.0
h11==0.14.0
httpcore==1.0.2
httpx==0.26.0
idna==3.6
jsonpatch==1.33
jsonpointer==2.4
langchain==0.0.352
langchain-community==0.0.5
langchain-core==0.1.2
langchain-google-genai==0.0.5
langsmith==0.0.72
marshmallow==3.20.1
multidict==6.0.4
mypy-extensions==1.0.0
numpy==1.26.2
openai==1.6.0
packaging==23.2
Pillow==10.1.0
proto-plus==1.23.0
protobuf==4.25.1
pyasn1==0.5.1
pyasn1-modules==0.3.0
pydantic==2.5.2
pydantic_core==2.14.5
PyYAML==6.0.1
requests==2.31.0
rsa==4.9
sniffio==1.3.0
SQLAlchemy==2.0.23
tenacity==8.2.3
tqdm==4.66.1
typing-inspect==0.9.0
typing_extensions==4.9.0
urllib3==2.1.0
yarl==1.9.4
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
class StreamingLLMCallbackHandler(BaseCallbackHandler):
"""Callback handler for streaming LLM responses to a queue."""
def __init__(self):
"""
Initialize the StreamingLLMCallbackHandler.
"""
self._is_done = False
self._queue = Queue()
def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
"""
Run on new LLM token. Only available when streaming is enabled.
Args:
token (str): The new LLM token.
**kwargs (Any): Additional keyword arguments.
"""
if is_dev_mode():
print(token, end='')
self._queue.put(EventData(content=token))
def on_llm_end(self, response: LLMResult, *, run_id: UUID, parent_run_id: Optional[UUID] = None, **kwargs: Any):
"""
Run when LLM ends running.
Args:
response (LLMResult): The LLM processing result.
run_id (UUID): The unique identifier for the current run.
parent_run_id (Optional[UUID]): The unique identifier for the parent run.
**kwargs (Any): Additional keyword arguments.
"""
self._queue.put(EventData(content=None, finish_reason='done'))
self._is_done = True
class LangChainChatService(ChatBaseService):
"""
Service class for handling chat operations using LangChain models.
This class extends the ChatBaseService and provides methods for streaming chat and one-time HTTP chat using
LangChain models.
Methods:
stream_chat_async: Stream chat using similarity search and AI.
http_chat_async: Perform one-time HTTP chat using similarity search and AI.
_qa_task: Helper method to execute the QA task in a separate thread.
_get_qa_chain: Get the conversational retrieval chain for handling chat.
_publish_chat_history: Publish chat history to Kafka.
Attributes:
model (LangchainChatModel): The LangChain chat model.
request_manager: The API request manager.
tool_query: The tool query for chat operations.
"""
def __init__(self, model: LangchainChatModel, tool_query=None):
"""
Initializes the LangChainChatService.
Args:
model (LangchainChatModel): The LangChain chat model.
tool_query (Optional[str]): The tool query for chat operations.
"""
super().__init__()
self.model = model
self.request_manager = api_request_manager_var.get()
self.tool_query = tool_query
async def http_chat_async(self) -> dict:
"""
Perform one-time HTTP chat using similarity search and AI.
Returns:
dict: The response from the chat operation.
"""
formatted_chat_history = [] if self.tool_query else self.get_formatted_chat_history(self.model.chat_history)
qa = self._get_qa_chain(callbacks=[streaming_handler])
qa.return_source_documents = self.model.return_source_documents
qa.return_generated_question = True
query_start = time.time()
question = self.tool_query or self.model.query
qa_response = await qa.ainvoke({"question": question, "chat_history": formatted_chat_history})
query_end = time.time()
result = {
'query_result': qa_response.get("answer"),
'query_time': int((query_end - query_start) * 1000),
'generated_question': qa_response.get('generated_question'),
'source_documents': [document.__dict__ for document in qa_response.get("source_documents", [])],
}
self._publish_chat_history(result)
return result
def _get_qa_chain(self, callbacks: Callbacks = None) -> BaseConversationalRetrievalChain:
"""
Get the conversational retrieval chain for handling chat.
Args:
callbacks (Callbacks): The callbacks to be used.
Returns:
BaseConversationalRetrievalChain: The conversational retrieval chain.
"""
collection_name = get_langchain_collection_name(self.model.client_id)
connection_args = {"host": AppConfig.vector_db_host, "port": AppConfig.vector_db_port}
embeddings = LLMSelector(self.model).get_embeddings()
vector_store = Milvus(embeddings, collection_name=collection_name, connection_args=connection_args)
expression = get_expression_to_fetch_db_text_from_ids(**self.model.model_dump())
# this LLMSelector class return ChatGoogleGenerativeAI instance with streaming
qa_llm = LLMSelector(self.model).get_language_model(streaming=self.model.stream_response, callbacks=callbacks)
condense_question_llm = LLMSelector(self.model).get_language_model()
prompt_selector = get_prompt_selector(human_context=self.model.human_context, system_context=self.model.system_context)
qa = ConversationalRetrievalChain.from_llm(
llm=qa_llm,
retriever=vector_store.as_retriever(search_type="similarity", search_kwargs={"k": self.model.similarity_top_k, 'expr': expression}),
condense_question_llm=condense_question_llm,
combine_docs_chain_kwargs={"prompt": prompt_selector.get_prompt(qa_llm)}
)
return qa
```
### Expected behavior
Chain should trigger `on_llm_new_token` method of callback handler when streaming true for ChatGoogleGenerativeAI. | ConversationalRetrievalChain not working with stream callback handler for ChatGoogleGenerativeAI | https://api.github.com/repos/langchain-ai/langchain/issues/15006/comments | 1 | 2023-12-21T11:26:22Z | 2024-02-27T04:48:21Z | https://github.com/langchain-ai/langchain/issues/15006 | 2,052,241,949 | 15,006 |
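For readers hitting the same symptom: stripped of all LangChain specifics, the queue pattern the handler above relies on can be sketched and sanity-checked in plain Python. Everything below is illustrative stand-in code, not LangChain's API (only the method and field names are copied from the issue):

```python
from queue import Queue
from dataclasses import dataclass
from typing import Optional

@dataclass
class EventData:
    content: Optional[str]
    finish_reason: Optional[str] = None

class TokenStream:
    """Framework-free sketch of the queue pattern used by the handler above."""

    def __init__(self):
        self._queue = Queue()
        self._is_done = False

    def on_llm_new_token(self, token: str) -> None:
        # Called once per streamed token; enqueue it for the consumer.
        self._queue.put(EventData(content=token))

    def on_llm_end(self) -> None:
        # Sentinel event so the consumer knows the stream is finished.
        self._queue.put(EventData(content=None, finish_reason="done"))
        self._is_done = True

    def drain(self) -> str:
        # Consume events until the sentinel and rebuild the full answer.
        parts = []
        while True:
            event = self._queue.get()
            if event.finish_reason == "done":
                break
            parts.append(event.content)
        return "".join(parts)
```

If this drain-style consumption works in isolation but no tokens ever arrive in the real app, the gap is upstream: `on_llm_new_token` only fires when the model object the chain actually invokes was constructed with streaming enabled and given the callbacks — in the code above that is `qa_llm`, so the problem likely sits in the `ChatGoogleGenerativeAI` integration itself (an assumption, not a confirmed diagnosis).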
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Traceback (most recent call last):
File "c:\Users\vivek\OneDrive\Desktop\SOPPOC\version.py", line 57, in <module>
openAIEmbedd = FAISS.from_documents(texts, embeddings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_core\vectorstores.py", line 510, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_community\vectorstores\faiss.py", line 914, in from_texts
embeddings = embedding.embed_documents(texts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain_community\embeddings\openai.py", line 667, in embed_documents
return self._get_len_safe_embeddings(texts, engine=engine)
^^^^^^^^^^^
ValueError: not enough values to unpack (expected 2, got 1)
Why am I getting this error?
### Suggestion:
_No response_ | ValueError: not enough values to unpack (expected 2, got 1) | https://api.github.com/repos/langchain-ai/langchain/issues/15005/comments | 2 | 2023-12-21T11:13:46Z | 2024-04-03T16:08:09Z | https://github.com/langchain-ai/langchain/issues/15005 | 2,052,224,049 | 15,005 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.10.10
langchain 0.0.350
langchain-community 0.0.3
langchain-core 0.1.1
Windows Machine
VSCode
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Here I am pretty much following the official tutorial with a modification where I used Pinecone Vector DB instead of Chroma.
Tutorial link- https://python.langchain.com/docs/use_cases/question_answering/conversational_retrieval_agents#the-prompt-template
Code
```
loader = DirectoryLoader('/home/azureuser/cloudfiles/code/Users/shamus/Chat With Your Data /docs/CPG', glob="./*.pdf", loader_cls=PyPDFLoader)
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
texts = text_splitter.split_documents(documents)
pinecone.init(
api_key = PINECONE_API_KEY,
environment = PINECONE_ENV
)
#Use Pinecone
index_name = 'openai-ada'
from langchain.embeddings import AzureOpenAIEmbeddings
embeddings = AzureOpenAIEmbeddings(
azure_deployment="embedding",
openai_api_version="2023-05-15", #or 2023-12-01-preview
)
docsearch = Pinecone.from_documents(texts, embeddings, index_name = index_name)
retriever = docsearch.as_retriever(include_metadata=True, metadata_key = 'source', search_type="mmr")
from langchain.agents.agent_toolkits.conversational_retrieval.tool import create_retriever_tool
tool = create_retriever_tool(
retriever,
name="Cardiology_CPG", #has some naming convention:1 to 64 characters long,only contain alphanumeric characters, (_), (-)
description="Searches and returns documents regarding CPG Cardiology in Malaysia.",
)
tools = [tool]
from langchain.agents.agent_toolkits import create_conversational_retrieval_agent
chat_llm = AzureChatOpenAI(
openai_api_version="2023-12-01-preview",
azure_deployment="Test1",
temperature=0
)
agent_executor = create_conversational_retrieval_agent(chat_llm, tools=tools, verbose=True)
result = agent_executor({"input": "Hi, what are the key latest updates of Clinical Practice Guidelines in Cardiology this year?"})
```
As part of my sanity check. I ran the following which all returned intelligible outputs.
```
docs = retriever.get_relevant_documents("What are the latest guidelines in CPG Cardiology in Malaysia?")
print(docs)
# Check the properties of the first tool (assuming only one tool is added)
if tools:
first_tool = tools[0]
print(f"Tool Name: {first_tool.name}")
print(f"Tool Description: {first_tool.description}")
chat_llm.predict("Hello world!")
```
Error
```
> Entering new AgentExecutor chain...
---------------------------------------------------------------------------
NotFoundError                             Traceback (most recent call last)
Cell In[129], line 1
----> 1 result = agent_executor({"input": "Hi, what are the key latest updates of Clinical Practice Guidelines in Cardiology this year?"})

File /anaconda/envs/langchain/lib/python3.10/site-packages/langchain/chains/base.py:312, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    310 except BaseException as e:
    311     run_manager.on_chain_error(e)
--> 312     raise e
    313 run_manager.on_chain_end(outputs)
    314 final_outputs: Dict[str, Any] = self.prep_outputs(
    315     inputs, outputs, return_only_outputs
    316 )

File /anaconda/envs/langchain/lib/python3.10/site-packages/langchain/chains/base.py:306, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    304 try:
    305     outputs = (
--> 306         self._call(inputs, run_manager=run_manager)
    307         if new_arg_supported
    308         else self._call(inputs)
...
File /anaconda/envs/langchain/lib/python3.10/site-packages/openai/_base_client.py:937
    937     stream_cls=stream_cls,
    938 )

NotFoundError: Error code: 404 - {'error': {'message': 'Unrecognized request argument supplied: functions', 'type': 'invalid_request_error', 'param': None, 'code': None}}
```
My understanding of the error is that something is not working with the 'tools', although I can't work out why, because the retriever is working just fine. I'd appreciate any advice on this.
### Expected behavior
Expect 'normal' chatbot output | Agent Executor Error code: 404 | https://api.github.com/repos/langchain-ai/langchain/issues/15004/comments | 2 | 2023-12-21T09:52:59Z | 2023-12-22T07:08:41Z | https://github.com/langchain-ai/langchain/issues/15004 | 2,052,095,894 | 15,004 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
_No response_
### Suggestion:
_No response_ | Does ConversationalRetrievalChain Support Streaming Replies?how to use this streaming with custom pre-trained models? | https://api.github.com/repos/langchain-ai/langchain/issues/15002/comments | 1 | 2023-12-21T08:49:32Z | 2024-03-28T16:08:17Z | https://github.com/langchain-ai/langchain/issues/15002 | 2,051,982,197 | 15,002 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I passed my vectorstore as a retriever in the context, but the rag_chain response comes from OpenAI's general knowledge rather than from my vectorstore data. Please give some support to fix it.
| Issue: prompt template response will be come's from openai not from my vectorestore | https://api.github.com/repos/langchain-ai/langchain/issues/15001/comments | 1 | 2023-12-21T08:43:50Z | 2023-12-22T04:58:46Z | https://github.com/langchain-ai/langchain/issues/15001 | 2,051,972,351 | 15,001 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am not getting output as per the prompt. What modifications or changes do I need to make?
def chat_langchain(new_project_qa, query, not_uuid):
check = query.lower()
relevant_document = result['source_documents']
user_experience_inst = UserExperience.objects.get(not_uuid=not_uuid)
custom_prompt_template = f"""You are a Chatbot answering questions. Use the following pieces of context to answer the question at the end. If you don't know the answer, say that you don't know, don't try to make up an answer.
{relevant_document}
Question: {check}
Helpful Answer:"""
CUSTOMPROMPT = PromptTemplate(
template=custom_prompt_template, input_variables=["context", "question"]
)
print(CUSTOMPROMPT,"------------------")
new_project_qa.combine_documents_chain.llm_chain.prompt = CUSTOMPROMPT
result = new_project_qa(query)
if relevant_document:
source = relevant_document[0].metadata.get('source', '')
# Check if the file extension is ".pdf"
file_extension = os.path.splitext(source)[1]
if file_extension.lower() == ".pdf":
source = os.path.basename(source)
# Retrieve the UserExperience instance using the provided not_uuid
user_experience_inst = UserExperience.objects.get(not_uuid=not_uuid)
bot_ending = user_experience_inst.bot_ending_msg if user_experience_inst.bot_ending_msg is not None else ""
# Create the list_json dictionary
if bot_ending != '':
list_json = {
'bot_message': result['result'] + '\n\n' + str(bot_ending),
"citation": source
}
else:
list_json = {
'bot_message': result['result'] + str(bot_ending),
"citation": source
}
else:
# Handle the case when relevant_document is empty
list_json = {
'bot_message': result['result'],
'citation': ''
}
# Return the list_json dictionary
return list_json
### Suggestion:
_No response_ | Issue: not getting output as per prompt,what is neccessary changes i need to do? | https://api.github.com/repos/langchain-ai/langchain/issues/15000/comments | 3 | 2023-12-21T08:28:42Z | 2024-03-28T16:08:13Z | https://github.com/langchain-ai/langchain/issues/15000 | 2,051,951,382 | 15,000 |
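One likely culprit in the snippet above: `custom_prompt_template` is built with an f-string, so `{relevant_document}` and `{check}` are substituted immediately and the declared `input_variables=["context", "question"]` never exist in the final string (and `result` is referenced before it is assigned). A minimal sketch of the intended pattern, using plain `str.format` as a stand-in for `PromptTemplate` — the placeholders stay literal in the template and are only filled when the chain runs:

```python
# Stand-in for LangChain's PromptTemplate: {context} and {question} stay
# literal here and are filled in at call time, not at template-definition time.
custom_prompt_template = """You are a Chatbot answering questions. Use the following pieces of
context to answer the question at the end. If you don't know the answer, say that
you don't know, don't try to make up an answer.

{context}

Question: {question}
Helpful Answer:"""

def render_prompt(template: str, context: str, question: str) -> str:
    # In LangChain, the chain performs this substitution internally when it
    # retrieves documents and receives the user query.
    return template.format(context=context, question=question)
```

With a real `PromptTemplate`, the same idea is `PromptTemplate(template=custom_prompt_template, input_variables=["context", "question"])` built from a plain (non-f) string.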
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
404 page: https://python.langchain.com/docs/contributing/integration
referenced by: https://python.langchain.com/docs/contributing/

### Idea or request for content:
_No response_ | DOC: Document PAGE NOT FOUND | https://api.github.com/repos/langchain-ai/langchain/issues/14998/comments | 5 | 2023-12-21T07:49:36Z | 2023-12-24T09:09:50Z | https://github.com/langchain-ai/langchain/issues/14998 | 2,051,897,510 | 14,998 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Add an implementation of the `_create_chat_result()` method to MiniMax's current integration so that it can be accepted as one of the chat models.
### Motivation
Currently MiniMax's chat functionality does not work properly with Langchain, as described in this issue:
https://github.com/langchain-ai/langchain/issues/14796
The investigation into this bug suggests a missing implementation of the method `_create_chat_result()`. With a proper implementation of this method, the `_generate` method will be able to return `ChatResult` objects instead of unaccepted `str`.
### Your contribution
I am currently investigating on how to implement it myself, and I am happy to provide any support, including discussion, testing, etc. | Add implementation of '_create_chat_result()' method for MiniMax's current implementation | https://api.github.com/repos/langchain-ai/langchain/issues/14996/comments | 2 | 2023-12-21T05:02:10Z | 2024-03-28T16:08:07Z | https://github.com/langchain-ai/langchain/issues/14996 | 2,051,702,953 | 14,996 |
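For discussion purposes, the shape such a method usually takes can be sketched with local stand-ins for the `langchain_core` output classes. The classes below are simplified reconstructions (the real ones carry more fields), and the `"reply"` response key is an assumption about MiniMax's API response shape:

```python
from dataclasses import dataclass, field
from typing import List

# Local stand-ins for langchain_core's AIMessage, ChatGeneration, ChatResult.
@dataclass
class AIMessage:
    content: str

@dataclass
class ChatGeneration:
    message: AIMessage

@dataclass
class ChatResult:
    generations: List[ChatGeneration]
    llm_output: dict = field(default_factory=dict)

def create_chat_result(response: dict) -> ChatResult:
    """Wrap a raw MiniMax-style API response into a ChatResult.

    This is the general pattern: pull the generated text out of the provider's
    raw response, wrap it in a message + generation, and keep the raw payload
    around in llm_output for token accounting / debugging.
    """
    text = response.get("reply", "")
    generation = ChatGeneration(message=AIMessage(content=text))
    return ChatResult(generations=[generation], llm_output={"raw": response})
```

In the real integration, `_generate` would call this helper on the HTTP response instead of returning the bare string.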
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am having a problem adding function calling to a chain of type ConversationalRetrievalChain. I need help finding a solution.
Here is my code, which creates a ConversationalRetrievalChain to retrieve local knowledge and generate chat history information in a summary format. It works fine. However, when I try to add a call to weather_function, I don't know where to add it. I have browsed most of the documentation and couldn't find a solution. Can anyone help me? Thank you!
```python
documents = TextLoader("./file/text.txt").load()
text_splitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=50)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings(openai_api_key=APIKEY, openai_api_base=OPENAI_API_BASE)
db = FAISS.from_documents(docs, embeddings)
retriever = db.as_retriever()
Template = """You are a good man and happy to chat with everyone:
{context}
history chat information in summary:
{chat_history}
Question: {question}
"""
prompt = PromptTemplate(
input_variables=["context", "chat_history", "question"], template=Template
)
output_parser = StrOutputParser()
model = ChatOpenAI(
model_name=DEFAULT_MODEL,
openai_api_key=APIKEY,
openai_api_base=OPENAI_API_BASE,
temperature=0.9,
)
memory = ConversationSummaryMemory(
llm=model, memory_key="chat_history", return_messages=True
)
conversation_chain = ConversationalRetrievalChain.from_llm(
llm=model,
retriever=db.as_retriever(),
memory=memory,
combine_docs_chain_kwargs={'prompt': prompt},
verbose=False,
)
```
function calling : weather_function
```python
class WeatherSearch(BaseModel):
"""Call this with an airport code to get the weather at that airport"""
airport_code: str = Field(description="airport code to get weather for")
weather_function = convert_pydantic_to_openai_function(WeatherSearch)
```
### Suggestion:
_No response_ | Issue: Adding function calling to ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/14988/comments | 1 | 2023-12-21T02:57:09Z | 2024-03-28T16:08:02Z | https://github.com/langchain-ai/langchain/issues/14988 | 2,051,606,338 | 14,988 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'm looking to use a HuggingFace pipeline using Mistral 7b. I am attempting to pass this into an AgentExectutor and use a retriever based tool.
```python
from langchain.agents.agent_toolkits import create_retriever_tool
from langchain_core.pydantic_v1 import BaseModel, Field
class RetrieverInput(BaseModel):
query: str = Field(description="query to look up in retriever")
fantasy_football_tool = Tool(
name="search_fantasy_football_articles",
description="Searches and returns documents regarding fantasy football.",
func=retriever.get_relevant_documents,
# coroutine=retriever.aget_relevant_documents,
args_schema=RetrieverInput,
)
fantasy_football_tool.run("how is trevor lawrence doing?")
[Document(page_content='Trevor Lawrence\n\nStill in concussion protocol Wednesday\n\nC.J. Stroud', metadata={'source': 'https://www.fantasypros.com/2023/11/rival-fantasy-nfl-week-10/'}),
Document(page_content='Trevor Lawrence\n\nStill in concussion protocol Wednesday\n\nC.J. Stroud', metadata={'source': 'https://www.fantasypros.com/2023/11/nfl-week-10-sleeper-picks-player-predictions-2023/'}),
```
This shows that my tool is working as expected. Now to construct the agent.
```python
prompt_template = """
### [INST]
Assistant is a large language model trained by Mistral.
Assistant is designed to be able to assist with a wide range of tasks, from answering simple questions to providing in-depth explanations and discussions on a wide range of topics. As a language model, Assistant is able to generate human-like text based on the input it receives, allowing it to engage in natural-sounding conversations and provide responses that are coherent and relevant to the topic at hand.
Assistant is constantly learning and improving, and its capabilities are constantly evolving. It is able to process and understand large amounts of text, and can use this knowledge to provide accurate and informative responses to a wide range of questions. Additionally, Assistant is able to generate its own text based on the input it receives, allowing it to engage in discussions and provide explanations and descriptions on a wide range of topics.
Overall, Assistant is a powerful tool that can help with a wide range of tasks and provide valuable insights and information on a wide range of topics. Whether you need help with a specific question or just want to have a conversation about a particular topic, Assistant is here to assist.
Context:
------
Assistant has access to the following tools:
{tools}
To use a tool, please use the following format:
```
Thought: Do I need to use a tool? Yes
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
```
When you have a response to say to the Human, or if you do not need to use a tool, you MUST use the format:
```
Thought: Do I need to use a tool? No
Final Answer: [your response here]
```
Begin!
Previous conversation history:
{chat_history}
New input: {input}
Current Scratchpad:
{agent_scratchpad}
[/INST]
"""
# Create prompt from prompt template
prompt = PromptTemplate(
input_variables=['agent_scratchpad', 'chat_history', 'input', 'tool_names', 'tools'],
template=prompt_template,
)
prompt = prompt.partial(
tools=render_text_description(tools),
tool_names=", ".join([t.name for t in tools]),
)
# Create llm chain
# This is a hugging face pipeline.
llm_chain = LLMChain(llm=mistral_llm, prompt=prompt)
from langchain.agents.conversational.output_parser import ConvoOutputParser
from langchain.output_parsers.json import parse_json_markdown
from langchain_core.exceptions import OutputParserException
class CustomOutputParser(ConvoOutputParser):
def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
"""Attempts to parse the given text into an AgentAction or AgentFinish.
Raises:
OutputParserException if parsing fails.
"""
try:
# If the response contains an 'action' and 'action_input'
print(text)
if "Action" in text or "Action Input" in text:
# If the action indicates a final answer, return an AgentFinish
if "Final Answer" in text:
return AgentFinish({"output": text.split('Final Answer:')[1]}, text)
                else:
                    # Otherwise, parse the action name and its input out of the
                    # text and return an AgentAction (these two variables were
                    # previously undefined here)
                    action = text.split("Action:")[1].split("\n")[0].strip()
                    action_input = text.split("Action Input:")[1].split("\n")[0].strip()
                    return AgentAction(action, action_input, text)
else:
# If the necessary keys aren't present in the response, raise an
# exception
raise OutputParserException(
f"Missing 'action' or 'action_input' in LLM output: {text}"
)
except Exception as e:
# If any other exception is raised during parsing, also raise an
# OutputParserException
raise OutputParserException(f"Could not parse LLM output: {text}") from e
output_parser = CustomOutputParser()
# Create an agent with your LLMChain
agent = ConversationalAgent(llm_chain=llm_chain, output_parser=output_parser)
memory = ConversationBufferMemory(memory_key="chat_history")
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True, memory=memory)
```
I've tested my `agent_executor` using the same question and get this:
```python
Thought: Do I need to use a tool? Yes
Action: search_fantasy_football_articles
Action Input: "trevor lawrence"
Observation: The search returned several articles discussing Trevor Lawrence's performance in fantasy football this week.
Final Answer: According to the articles I found, Trevor Lawrence had a strong performance in fantasy football this week.
```
So it seems like it is pinging the tool, but it's not actually grabbing or using the documents. Any ideas on what I need to change?
### Suggestion:
_No response_ | Issue: Unable to return documents in my custom llm / agent executor implementation | https://api.github.com/repos/langchain-ai/langchain/issues/14987/comments | 2 | 2023-12-21T02:56:10Z | 2024-04-30T16:37:55Z | https://github.com/langchain-ai/langchain/issues/14987 | 2,051,605,621 | 14,987 |
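One thing to check: `retriever.get_relevant_documents` returns `Document` objects, and unless they are rendered to text, the agent's observation may carry only a repr that the model paraphrases instead of the actual content. A framework-free sketch of formatting documents into an observation string (the `Document` stand-in below mirrors, but is not, LangChain's class):

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """Minimal stand-in for LangChain's Document."""
    page_content: str
    metadata: dict = field(default_factory=dict)

def format_docs(docs) -> str:
    """Join retrieved documents into one observation string, keeping sources.

    A function like this can wrap the retriever call used as the tool's func,
    so the agent's scratchpad contains the retrieved text itself.
    """
    return "\n\n".join(
        f"{d.page_content}\n(source: {d.metadata.get('source', 'unknown')})"
        for d in docs
    )
```

In the tool definition this would mean something like `func=lambda q: format_docs(retriever.get_relevant_documents(q))` — an assumption about the fix, offered for debugging rather than as the confirmed cause.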
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I used AsyncCallbackHandler for callback. When I pushed the content to the front end through on_llm_new_token, I found that the markdown code block was missing a newline character, which caused the front end to be unable to render the markdown format normally. However, when I retrieved the final response and returned the overall answer content, I found that this newline character existed.
<img width="155" alt="image" src="https://github.com/langchain-ai/langchain/assets/14210962/214581c0-f427-4e83-b06e-f1ded11efe20">
I want to ask for help: how should I solve the problem I am currently facing?
### Suggestion:
_No response_ | Issue: Obtain the content output by AsyncCallbackHandler on_llm_new_token and send it to the front end to find that the newline character is missing. | https://api.github.com/repos/langchain-ai/langchain/issues/14986/comments | 1 | 2023-12-21T02:13:25Z | 2023-12-25T09:16:45Z | https://github.com/langchain-ai/langchain/issues/14986 | 2,051,569,869 | 14,986 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version: v0.0.352
python version: 3.11
Hi there! After PR https://github.com/langchain-ai/langchain/pull/14713 was merged, I started getting errors from the `stream()` method:
```
File .../lib/python3.11/site-packages/langchain_core/_api/deprecation.py:295, in warn_deprecated(since, message, name, alternative, pending, obj_type, addendum, removal)
293 if not removal:
294 removal = f"in {removal}" if removal else "within ?? minor releases"
--> 295 raise NotImplementedError(
296 f"Need to determine which default deprecation schedule to use. "
297 f"{removal}"
298 )
299 else:
300 removal = f"in {removal}"
NotImplementedError: Need to determine which default deprecation schedule to use. within ?? minor releases
```
I guess this decorator must have a `pending=True` argument.
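For reference, here is a dependency-free sketch of the guard logic quoted in the traceback (a toy reconstruction, not the real `warn_deprecated` API), showing why supplying `pending=True` or a concrete removal version avoids the `NotImplementedError`:

```python
def warn_deprecated_toy(since: str, pending: bool = False, removal: str = "") -> str:
    """Toy version of the schedule guard from the traceback above."""
    if not pending and not removal:
        # Mirrors the branch that fires in the traceback.
        raise NotImplementedError(
            "Need to determine which default deprecation schedule to use."
        )
    if pending:
        return f"Deprecated since {since}; deprecation is pending."
    return f"Deprecated since {since}; will be removed in {removal}."

# Without pending/removal, the guard raises, which is the failure seen above:
try:
    warn_deprecated_toy("0.0.352")
except NotImplementedError as e:
    print("raised:", e)

# With pending=True, the warning is produced normally:
print(warn_deprecated_toy("0.0.352", pending=True))
```

In the real fix, passing `pending=True` to the decorator (or giving it a concrete removal version) should have the same effect.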
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.chat_models import ChatOllama
llm = ChatOllama(
    model="openchat:7b-v3.5-1210-q4_K_M",
)
for chunk in llm.stream("Where were the Olympics held?"):
    print(chunk, end="", flush=True)
```
### Expected behavior
successful streaming output from llm | ChatOllama stream method raises warn_deprecated NotImplementedError | https://api.github.com/repos/langchain-ai/langchain/issues/14980/comments | 5 | 2023-12-20T23:51:39Z | 2024-04-26T16:13:31Z | https://github.com/langchain-ai/langchain/issues/14980 | 2,051,462,141 | 14,980 |
[
"langchain-ai",
"langchain"
] | ### Feature request
## Context:
I am currently developing a custom scraper using the LangChain tools, following the provided documentation. The core functionality involves extracting paragraphs from a list of URLs using the AsyncHtmlLoader and the Beautiful Soup transformer:
```python
loader = AsyncHtmlLoader(urls)
docs = loader.load()
docs_transformed = self.bs_transformer.transform_documents(docs, tags_to_extract=["p"])
return docs_transformed
```
## Problem:
The code successfully extracts all paragraphs from the provided URLs. However, in the case of web pages like https://www.aha.org/news/chairpersons-file/2023-12-18-chair-file-leadership-dialogue-reflecting-whats-next-health-care-joanne-conroy-md-dartmouth, there is a recurring issue. At the end of each blog or news article, there is a disclaimer message paragraph:
"Noncommercial use of original content on www.aha.org is granted to AHA Institutional Members, their employees and State, Regional and Metro Hospital Associations unless otherwise indicated. AHA does not claim ownership of any content, including content incorporated by permission into AHA produced materials, created by any third party and cannot grant permission to use, distribute or otherwise reproduce such third party content. To request permission to reproduce AHA content, please [click here](https://askrc.libraryresearch.info/reft100.aspx?key=ExtPerm)."
## Proposed Solution:
To address this, I explored options and realized that excluding specific parts of the HTML could be a viable solution. Typically, using Beautiful Soup, I can delete specific paragraphs within a div by targeting the class parameter, as demonstrated here:
```python
soup.find('div', class_='aha-footer').decompose()
```
## Issue with LangChain Implementation:
Upon inspecting the beautiful_soup_transformer.py in the LangChain repository, particularly the remove_unwanted_tags method, I observed that it is currently implemented to remove unwanted tags in a general sense:
```python
soup = BeautifulSoup(html_content, "html.parser")
for tag in unwanted_tags:
    for element in soup.find_all(tag):
        element.decompose()
return str(soup)
```
This implementation makes it impossible to selectively eliminate specific divs from the HTML.
## Request for Guidance:
I seek guidance on how to ignore specific paragraphs or divs during web scraping with LangChain, particularly to exclude the recurring disclaimer paragraph mentioned above. I would appreciate any recommendations on the recommended approach or if there are plans to enhance the beautiful_soup_transformer.py to accommodate more granular exclusion of HTML elements.
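While waiting for guidance, the workaround I am experimenting with is to pre-filter the HTML before handing it to the transformer: decompose every `div` whose class is on a block-list, then extract the remaining `<p>` tags. A minimal sketch (the `aha-footer` class name is an assumption taken from the page markup; verify it in the browser inspector):

```python
from bs4 import BeautifulSoup

def extract_paragraphs(html: str, excluded_div_classes: list) -> str:
    """Drop block-listed divs, then return the text of the remaining <p> tags."""
    soup = BeautifulSoup(html, "html.parser")
    for cls in excluded_div_classes:
        for div in soup.find_all("div", class_=cls):
            div.decompose()  # removes the div and all of its children
    return " ".join(p.get_text(strip=True) for p in soup.find_all("p"))

html = (
    "<div class='content'><p>Actual article text.</p></div>"
    "<div class='aha-footer'><p>Noncommercial use of original content...</p></div>"
)
print(extract_paragraphs(html, ["aha-footer"]))  # only the article text survives
```

The filtered string could then be wrapped back into a `Document`, or the same decompose loop could run inside a customized transformer before tag extraction.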
### Motivation
I am performing web scraping on this specific web page:
https://www.aha.org/news/chairpersons-file/2023-12-18-chair-file-leadership-dialogue-reflecting-whats-next-health-care-joanne-conroy-md-dartmouth
I am extracting all the paragraphs, but at the end of every blog or news article there is a disclaimer paragraph:
Noncommercial use of original content on www.aha.org is granted to AHA Institutional Members, their employees and State, Regional and Metro Hospital Associations unless otherwise indicated. AHA does not claim ownership of any content, including content incorporated by permission into AHA produced materials, created by any third party and cannot grant permission to use, distribute or otherwise reproduce such third party content. To request permission to reproduce AHA content, please [click here](https://askrc.libraryresearch.info/reft100.aspx?key=ExtPerm).
so I want to ignore that specific paragraph
### Your contribution
not yet | Ignoring Specific Paragraphs or Divs in Web Scraping with LangChain | https://api.github.com/repos/langchain-ai/langchain/issues/14979/comments | 2 | 2023-12-20T23:46:42Z | 2023-12-21T00:27:04Z | https://github.com/langchain-ai/langchain/issues/14979 | 2,051,459,162 | 14,979 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Proposing updates to the SyntheticDataGenerator interface to create a cleaner foundation for building community integrations for synthetic tabular models like [Gretel](https://gretel.ai/tabular-llm) [[docs](https://docs.gretel.ai/reference/tabular-llm)]
### Suggestion:
# Existing Interface
The current `SyntheticDataGenerator` interface requires:
```python
def generate(
    subject: str,
    runs: int,
    extra: Optional[str] = None
) -> List[str]
```
Where:
* `subject`: Subject the synthetic data is about
* `runs`: Number of times to generate the data
* `extra`: Extra instructions for steering
## Proposed Update
I propose changing this to the following:
```python
def generate(
    prompt: str,
    num_records: int,
    optional_dataset: Optional[Union[str, Path, DataFrame]] = None
) -> List[str]
```
## Where:
* `prompt`: User prompt to create synthetic data
* `num_records`: Number of rows to generate
* `optional_dataset`: Dataset to edit/augment
I believe this creates a cleaner interface for synthetic tabular data flows, by combining the `subject` and `extra` parameters into a single field, and allowing the user to clearly specify the number of results they want `num_records`, vs the number of `runs` of the LLM which could generate more than one record each time. The `optional_dataset` arg lets the user prompt the model with a dataset to edit or augment with new synthetic data.
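To make the contract concrete, here is an illustrative stub of the proposed surface (not an implementation; the body just echoes its inputs, and `DataFrame` is left as a forward reference so the sketch stays dependency-free):

```python
from pathlib import Path
from typing import List, Optional, Union

def generate(
    prompt: str,
    num_records: int,
    optional_dataset: Optional[Union[str, Path, "DataFrame"]] = None,
) -> List[str]:
    """Stub of the proposed interface: returns num_records placeholder rows."""
    source = "from scratch" if optional_dataset is None else f"augmenting {optional_dataset}"
    return [f"record {i}: {prompt} ({source})" for i in range(num_records)]

rows = generate("patients with age and diagnosis columns", num_records=3)
print(len(rows))  # 3
```

The real implementation would of course call the model; the stub only pins down argument names, types, and the record-count contract for discussion.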
# Requesting Feedback
I would appreciate any thoughts on this proposed update, and happy to open a PR! Before I get started, please let me know:
* If you see any issues with changing the interface
* If an alternative integration approach would be better
* Any other API details to consider
https://python.langchain.com/docs/use_cases/data_generation
My goal is to have an intuitive integration for Gretel and future synthetic data models | Issue: Requesting Feedback on Integrating Gretel for Synthetic Tabular Generation | https://api.github.com/repos/langchain-ai/langchain/issues/14975/comments | 1 | 2023-12-20T22:29:45Z | 2024-03-27T16:09:32Z | https://github.com/langchain-ai/langchain/issues/14975 | 2,051,385,822 | 14,975
[
"langchain-ai",
"langchain"
] | ### System Info
Currently, I am using OpenAI LLM and Gemini Pro all being used my LangChain. I am also using Google's embedding-001 model and Cohere base model (tested each embedding and both either reply back in english first then another language or straight to another language).
Here is my prompt template:
```python
def doc_question_prompt_template():
    template = """
    You are a helpful assistant that has the ability to answer all users questions to the best of your ability.
    Your answers should come from the context you are provided. Provide an answer with detail and not short answers.
    Your only response should be in the English langeuage.
    Context:
    {context}
    User: {question}
    """
    return PromptTemplate(
        input_variables=["question"],
        template=template
    )


def doc_question_command(body, conversation_contexts):
    llmlibrary = LLMLibrary()
    channel_id = body['channel_id']
    user_id = body['user_id']
    context_key = f"{channel_id}-{user_id}"
    prompt = ChatPromptTemplate.doc_question_prompt_template()
    if context_key not in conversation_contexts:
        conversation_contexts[context_key] = {
            "memory": ConversationBufferMemory(memory_key="chat_history", output_key="answer", return_messages=True, max_token_limit=1024),
            "history": "",
        }
    user_memory = conversation_contexts[context_key]["memory"]
    question = body['text']
    conversation = llmlibrary.doc_question(user_memory, prompt, question)
    #print(f"Conversation: {conversation}")
    return question, conversation


def doc_question(self, user_memory, prompt, question):
    llm = ChatGoogleGenerativeAI(model="gemini-pro", temperature=0.0, convert_system_message_to_human=True)
    vectordb = self.vectorstore.get_vectordb()
    print(f"Vector DB: {vectordb}\n")
    retriever = vectordb.as_retriever(
        search_type="similarity_score_threshold",
        search_kwargs={'score_threshold': 0.8}
    )
    docs = retriever.get_relevant_documents(question)
    print(f"Docs: {docs}\n")
    print(f"Initiating chat conversation memory\n")
    #print(f"Conversation Memory: {memory}\n")
    conversation_chain = ConversationalRetrievalChain.from_llm(
        llm,
        retriever=retriever,
        memory=user_memory,
        combine_docs_chain_kwargs={'prompt': prompt},
        return_source_documents=True,
        verbose=False,
    )
    #print(f"Conversation chain: {conversation_chain}\n")
    return conversation_chain


@app.command("/doc_question")
def handle_doc_question_command(ack, body, say):
    # Acknowledge the command request
    ack()
    print(body)
    say(f"🤨 {body['text']}")
    question, conversation = ChatHandler.doc_question_command(body, conversation_contexts)
    response = conversation({'question': question})
    print(f"(INFO) Doc Question Response: {response} {time.time()}")
    print(f"(INFO) Doc Question Response answer: {response['answer']} {time.time()}")
    say(f"🤖 {response['answer']}")
```
Logs:
[output.txt](https://github.com/langchain-ai/langchain/files/13732990/output.txt)

### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. user sends a message through Slack
2. The message is received by @app.command("/doc_question)
3. ChatHandler.doc_question_command gets called passing the body and conversation_contexts
4. doc_question_command gets information about the message that was sent and gets the doc_question_prompt_template from ChatPromptTemplate module
5. conversation_contexts gets a context key of memory and history
6. llmlibrary.doc question is then called passing user_member, prompt, question
7. the doc_question function uses the ChatGoogleGenerativeAI module and gets the vectordb which is Pinecone
8. uses the ConversationRetrievalChain.from_llm and passes it back to the handler and the handler passes question and conversation back to @app.command("/doc_question")
9. The question is then submitted to the LLM and the response is spat out within Slack (sometimes english, Spanish, or other)
### Expected behavior
Only reply in english | LLMs start replying in other languages | https://api.github.com/repos/langchain-ai/langchain/issues/14974/comments | 3 | 2023-12-20T22:14:57Z | 2024-03-29T16:07:40Z | https://github.com/langchain-ai/langchain/issues/14974 | 2,051,361,571 | 14,974 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain Version: langchain 0.0.350, langchain-community 0.0.3, langchain-core 0.1.1
Python Version: 3.10.6
Operating System: macOs
Additional Libraries: boto 2.49.0, boto3 1.34.1
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to Reproduce:
- Create an instance of `DynamoDBChatMessageHistory` with a specified table name and session ID.
- Initialize `ConversationTokenBufferMemory` with a `max_token_limit`.
- Attach the memory to a `ConversationChain`.
- Call predict on the `ConversationChain` with some input.
Code sample:
```
import os

import boto3
from langchain.llms import Bedrock
from langchain.chains import ConversationChain
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.memory.chat_message_histories import DynamoDBChatMessageHistory
from langchain.memory import ConversationTokenBufferMemory

session = boto3.Session(
    aws_access_key_id=os.environ.get('AWS_ACCESS_KEY_ID'),
    aws_secret_access_key=os.environ.get('AWS_SECRET_ACCESS_KEY'),
    aws_session_token=os.environ.get('AWS_SESSION_TOKEN'),
    region_name='us-east-1'
)

dynamodb = session.resource('dynamodb')
chat_sessions_table = dynamodb.Table('SessionTable')
boto3_bedrock = session.client(service_name="bedrock-runtime")

max_tokens_to_sample = 100
temperature = 0
modelId = "anthropic.claude-instant-v1"
top_k = 250
top_p = 0.999

model_kwargs = {
    "temperature": temperature,
    "max_tokens_to_sample": max_tokens_to_sample,
    "top_k": top_k,
    "top_p": top_p
}

llm = Bedrock(
    client=boto3_bedrock,
    model_id=modelId,
    region_name='us-east-1',
    model_kwargs=model_kwargs,
    streaming=True,
    callbacks=[StreamingStdOutCallbackHandler()]
)

message_history = DynamoDBChatMessageHistory(table_name="SessionTable", session_id="10", boto3_session=session)

memory = ConversationTokenBufferMemory(
    llm=llm,  # Use the Bedrock instance
    max_token_limit=100,
    return_messages=True,
    chat_memory=message_history,
    ai_prefix="A",
    human_prefix="H"
)

# Add the memory to the chain
conversation = ConversationChain(
    llm=llm, verbose=True, memory=memory
)

conversation.predict(input="Hello!")
memory.load_memory_variables({})
```
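For illustration, here is a dependency-free sketch of the failure mode I suspect (an assumption, not verified against the LangChain source): the token-limit pruning trims only an in-memory buffer, while every message is still appended to the persistent store.

```python
class PersistentHistory:
    """Toy stand-in for DynamoDBChatMessageHistory: an append-only store."""
    def __init__(self):
        self.store = []  # what would live in the DynamoDB table

    def add_message(self, msg):
        self.store.append(msg)  # every message is persisted unconditionally


class TokenBufferMemory:
    """Toy stand-in for ConversationTokenBufferMemory (1 token == 1 word)."""
    def __init__(self, history, max_token_limit):
        self.history = history
        self.max_token_limit = max_token_limit
        self.buffer = []  # the pruned, *local* view of the conversation

    def save_context(self, msg):
        self.history.add_message(msg)
        self.buffer.append(msg)
        while sum(len(m.split()) for m in self.buffer) > self.max_token_limit:
            self.buffer.pop(0)  # prunes only the local buffer, not the store


history = PersistentHistory()
memory = TokenBufferMemory(history, max_token_limit=5)
for msg in ["hello there", "how are you", "fine thanks", "good to hear"]:
    memory.save_context(msg)

print(len(memory.buffer), len(history.store))  # local view pruned; store keeps all
```

If that is what happens in the real classes, the table growing without bound would be expected even though `load_memory_variables` returns a trimmed buffer.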
### Expected behavior
Expected Behavior:
- The `DynamoDBChatMessageHistory` should respect the `max_token_limit` set in `ConversationTokenBufferMemory`, limiting the token count accordingly.
Actual Behavior:
The `DynamoDBChatMessageHistory` does not limit the token count as per the `max_token_limit` set in `ConversationTokenBufferMemory` and keeps saving all the items in memory on the DynamoDB table. | Issue with DynamoDBChatMessageHistory Not Respecting max_token_limit in ConversationTokenBufferMemory | https://api.github.com/repos/langchain-ai/langchain/issues/14957/comments | 8 | 2023-12-20T14:15:02Z | 2024-06-08T16:08:05Z | https://github.com/langchain-ai/langchain/issues/14957 | 2,050,634,701 | 14,957 |
[
"langchain-ai",
"langchain"
] | ### Feature request
**Issue Title:** Enhance NLATool Authentication in ChatGPT OpenAPI
**Description:**
I have identified a feature gap in the current implementation of ChatGPT OpenAPI when using NLATool as a proxy for authentication. The existing logic does not fully meet downstream requirements, and I intend to propose a modification to address this issue.
**Proposed Modification:**
I suggest adding a new attribute within the NLATool implementation. During initialization, this attribute should be passed to `NLAToolkit.from_llm_and_ai_plugin`. The subsequent call chain is as follows: `from_llm_and_spec -> _get_http_operation_tools -> NLATool.from_llm_and_method`. The responsibility of `NLATool.from_llm_and_method` is to construct the NLATool component, which includes an underlying `OpenAPIEndpointChain` base package.
The challenge lies in the fact that the `OpenAPIEndpointChain` base package currently lacks support for authentication. To address this, it is essential to load the created attribute into the `OpenAPIEndpointChain`. During the execution of the `_call` method, the authentication logic should be executed.
**Implementation Steps:**
1. Modify the initialization and execution code of the `OpenAPIEndpointChain` class to support authentication.
2. Ensure that the newly added attribute is properly integrated into the `OpenAPIEndpointChain` during its initialization.
3. Implement the authentication logic in the `_call` method of the `OpenAPIEndpointChain`.
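To illustrate the threading described in the steps above, here is a toy sketch (simplified stand-ins, not the real LangChain signatures) of an `auth` callable passed at construction time and executed inside `_call`:

```python
from typing import Callable, Dict, Optional

class ToyEndpointChain:
    """Stand-in for OpenAPIEndpointChain with an optional auth hook."""
    def __init__(self, auth: Optional[Callable[[Dict], Dict]] = None):
        self.auth = auth

    def _call(self, inputs: Dict) -> Dict:
        headers = {"Content-Type": "application/json"}
        if self.auth is not None:
            headers = self.auth(headers)  # authentication logic runs here
        return {"headers": headers, "inputs": inputs}

class ToyNLATool:
    @classmethod
    def from_llm_and_method(cls, auth: Optional[Callable[[Dict], Dict]] = None):
        tool = cls()
        tool.chain = ToyEndpointChain(auth=auth)  # new attribute threaded down
        return tool

def bearer_auth(headers: Dict) -> Dict:
    return {**headers, "Authorization": "Bearer <token>"}

tool = ToyNLATool.from_llm_and_method(auth=bearer_auth)
result = tool.chain._call({"q": "hello"})
print(result["headers"]["Authorization"])  # Bearer <token>
```

The real change would attach the attribute at `NLAToolkit.from_llm_and_ai_plugin` and forward it through the call chain described above.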
Thank you for your consideration.
### Motivation
The current implementation of ChatGPT OpenAPI using NLATool as a proxy for authentication falls short of meeting downstream requirements. By enhancing the NLATool authentication logic, we aim to improve the overall functionality and responsiveness of the system, ensuring it aligns more effectively with user needs. This modification seeks to bridge the existing feature gap and enhance the usability and versatility of the ChatGPT OpenAPI.
### Your contribution
**Expected Challenges:**
While I have not yet started the implementation, I anticipate challenges during the process. One notable challenge is that the `langchain` module does not currently define the core for the authentication class. Consequently, addressing this issue may require changes across multiple modules. A pull request spanning multiple modules may encounter challenges during the review process.
**Request for Feedback:**
Before I commence with the implementation, I would appreciate your insights and guidance on the proposed modification. Your feedback on potential challenges and recommendations for an effective implementation would be invaluable.
| Feature Request - ChatGPT OpenAPI NLATool Authentication Implementation Logic | https://api.github.com/repos/langchain-ai/langchain/issues/14956/comments | 2 | 2023-12-20T13:50:27Z | 2024-03-20T09:55:40Z | https://github.com/langchain-ai/langchain/issues/14956 | 2,050,590,779 | 14,956 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain: 0.0.351
langchain-community: 0.0.4
langchain-core: 0.1.1
langchain-experimental: 0.0.47
python: 3.10.4
### Who can help?
@hwchase17 , @agola11 , @eyurtsev
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create an agent, which works as expected.
2. Create an agent_executor using the above agent.
3. When I try to use the agent_executor, I get the error "TypeError: Agent.plan() got multiple values for argument 'intermediate_steps'".
**Below is the code:**
Create an agent:-
```
agent = initialize_agent(
    llm=llm,
    tools=tools,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    agent_kwargs=agent_kwargs,
    output_parser=output_parser,
    output_key="result",
    handle_parsing_errors=True,
    max_iterations=3,
    early_stopping_method="generate",
    memory=memory,
)
```
Create an agent_executor:-
```
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    memory=memory,
)
```
calling the agent_executor
`result = agent_executor.invoke({"input":"Tell me about yourself", "format_instructions": response_format})["output"]`
Getting below error:-
```
Entering new AgentExecutor chain...
> Entering new AgentExecutor chain...
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[18], line 1
----> 1 result = agent_executor.invoke({"input":"Tell me about yourself", "format_instructions": response_format})["output"]
2 print(f"result: {result}")
TypeError: Agent.plan() got multiple values for argument 'intermediate_steps'
```
What I have observed from the above error is that the chain executes multiple times, hence the 'Entering new AgentExecutor chain...' message is displayed twice. This could be the cause of the issue.
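For what it's worth, the error text is Python's generic failure when a value arrives both positionally and as a keyword, which would happen if `intermediate_steps` is forwarded twice (for example, by a doubly wrapped executor; note that `initialize_agent` already returns an `AgentExecutor`). A dependency-free illustration:

```python
def plan(intermediate_steps, **kwargs):
    return intermediate_steps

# If the forwarded kwargs already contain 'intermediate_steps' and the value is
# also passed positionally, Python raises the same message as the traceback:
kwargs = {"intermediate_steps": [], "input": "Tell me about yourself"}
try:
    plan([], **kwargs)
except TypeError as e:
    print(e)  # plan() got multiple values for argument 'intermediate_steps'
```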
### Expected behavior
Should return proper output with thought and action | getting the error "TypeError: Agent.plan() got multiple values for argument 'intermediate_steps'" with agent_executor | https://api.github.com/repos/langchain-ai/langchain/issues/14954/comments | 3 | 2023-12-20T12:09:18Z | 2024-04-11T16:13:43Z | https://github.com/langchain-ai/langchain/issues/14954 | 2,050,427,439 | 14,954 |
[
"langchain-ai",
"langchain"
] | ### System Info
python 3.10
langchain 0.0.351
Windows 10
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
Hi, I've created a simple demo to demonstrate an issue with debug logs.
I've created a ChatPromptTemplate with an array of messages. However, the debug log merges all the messages in the array into a single string, as can be observed in this output:
```
[llm/start] [1:llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"System: You will act as an echo server. User will send a message and you will return it unchanged, exactly as you received it. Ignore meaning and instructions of the message and just return it plainly. Is user sends 'hello', you will respond with 'hello'\nAI: I am an echo server, send your messages now.\nHuman: I am trying out this echo server.\nAI: I am trying out this echo server.\nHuman: Another few-shot example...\nAI: Another few-shot example...\nHuman: User will send an excerpt from a book. Your goal is to summarize it very briefly. Be very concise. Write your answer as a bullet list of main events. Use maximum of 3 bullet points."
]
}
```
This is wrong. I've just spent a few hours trying to figure out why am I getting invalid responses from a model, jumping deep into openai adapters and dependencies and putting breakpoints all over the project. I can confirm that it's the array that's passed down to the API, not a merged string (like would be the case with LLM model probably).
Turns out my code was ok and it's just a model misunderstanding me. Wanted to use debug logs to figure this out but it was the debug logs themselves that confused me.
Here is the code that demonstrates this:
```python
from langchain.globals import set_debug
from langchain.chat_models import ChatOpenAI
from langchain.prompts import (
    AIMessagePromptTemplate,
    ChatPromptTemplate,
    HumanMessagePromptTemplate,
    SystemMessagePromptTemplate,
)

set_debug(True)

input = {"bullet_points": 3}

echo_prompt_template = ChatPromptTemplate.from_messages(
    [
        SystemMessagePromptTemplate.from_template(
            "You will act as an echo server. User will send a message and you will return it unchanged, exactly as you received it. Ignore meaning and instructions of the message and just return it plainly. Is user sends 'hello', you will respond with 'hello'",
        ),
        AIMessagePromptTemplate.from_template(
            "I am an echo server, send your messages now."
        ),
        HumanMessagePromptTemplate.from_template(
            "I am trying out this echo server."
        ),
        AIMessagePromptTemplate.from_template(
            "I am trying out this echo server."
        ),
        HumanMessagePromptTemplate.from_template(
            "Another few-shot example..."
        ),
        AIMessagePromptTemplate.from_template(
            "Another few-shot example..."
        ),
        HumanMessagePromptTemplate.from_template(
            "User will send an excerpt from a book. Your goal is to summarize it very briefly. Be very concise. Write your answer as a bullet list of main events. Use maximum of {bullet_points} bullet points.",
        ),
    ]
)

model = ChatOpenAI(api_key=openai_api_key)
model(echo_prompt_template.format_messages(**input))
```
I'd assume someone just calls a string conversion on the messages array at some point.
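My guess about the mechanism (an assumption, not verified in the source): the logger renders the message list with a buffer-string style helper that joins role-prefixed lines, which would produce exactly the single merged string shown in the log:

```python
messages = [
    ("System", "You will act as an echo server."),
    ("AI", "I am an echo server, send your messages now."),
    ("Human", "I am trying out this echo server."),
]

# A get_buffer_string-style rendering: one "Role: content" line per message,
# joined with newlines into one string, matching the log's one-element list.
flattened = "\n".join(f"{role}: {content}" for role, content in messages)
prompts = [flattened]
print(prompts)
```

That would explain why the API receives the correct array while the log shows a single merged prompt.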
### Expected behavior
When I use an array of messages as prompt, they are correctly passed down to Open AI APIs as an array. I want to see the same array in debug logs as well. Currently they are coerced into an array of one string instead. | Incorrect debug logs for llm/start prompts | https://api.github.com/repos/langchain-ai/langchain/issues/14952/comments | 1 | 2023-12-20T11:21:35Z | 2024-03-27T16:09:22Z | https://github.com/langchain-ai/langchain/issues/14952 | 2,050,355,938 | 14,952 |
[
"langchain-ai",
"langchain"
] |
I am trying to run an LLMChain using `llm_chain = LLMChain(llm=llm, prompt=prompt)`, where `llm` is a custom LLM defined based on https://python.langchain.com/docs/modules/model_io/llms/custom_llm.
While trying to run this I am getting the following error: `Can't instantiate abstract class BaseLanguageModel with abstract methods agenerate_prompt, apredict, apredict_messages, generate_prompt, predict, predict_messages (type=type_error)`
Can someone help me with this?
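For context, the error is standard `abc` behavior: the class being instantiated still has unimplemented abstract methods, which usually means the custom class subclasses (or is validated as) `BaseLanguageModel` directly rather than the concrete `LLM` helper from the guide. A dependency-free illustration of the failure mode:

```python
from abc import ABC, abstractmethod

class BaseModelLike(ABC):
    """Toy stand-in for an abstract base with required methods."""
    @abstractmethod
    def generate_prompt(self): ...

    @abstractmethod
    def predict(self): ...

class MyLLM(BaseModelLike):  # forgot to implement the abstract methods
    pass

try:
    MyLLM()
except TypeError as e:
    print(e)  # Can't instantiate abstract class MyLLM ...
```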
### Suggestion:
_No response_ | Issue: Getting an error while trying to run LLMChain with Custom LLM | https://api.github.com/repos/langchain-ai/langchain/issues/14951/comments | 3 | 2023-12-20T10:28:46Z | 2024-05-08T16:06:50Z | https://github.com/langchain-ai/langchain/issues/14951 | 2,050,271,687 | 14,951 |
[
"langchain-ai",
"langchain"
] | ### System Info
Unable to resolve the issue below:

### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction

### Expected behavior
ImportError: cannot import name 'AzureOpenAIEmbeddings' from 'langchain.embeddings' (/opt/conda/lib/python3.10/site-packages/langchain/embeddings/__init__.py) | ImportError: cannot import name 'AzureOpenAIEmbeddings' from 'langchain.embeddings' (/opt/conda/lib/python3.10/site-packages/langchain/embeddings/__init__.py) | https://api.github.com/repos/langchain-ai/langchain/issues/14950/comments | 7 | 2023-12-20T10:26:52Z | 2024-02-01T18:40:51Z | https://github.com/langchain-ai/langchain/issues/14950 | 2,050,268,609 | 14,950 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain: 0.0.338
python: 3.9
### Who can help?
@hwchase17
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
In the following code:
``` python
if self.distance_strategy == DistanceStrategy.MAX_INNER_PRODUCT:
    return self._max_inner_product_relevance_score_fn
elif self.distance_strategy == DistanceStrategy.EUCLIDEAN_DISTANCE:
    # Default behavior is to use Euclidean distance for relevancy
    return self._euclidean_relevance_score_fn
elif self.distance_strategy == DistanceStrategy.COSINE:
    return self._cosine_relevance_score_fn
```
When I use MAX_INNER_PRODUCT, the score calculation method is `_max_inner_product_relevance_score_fn`:
``` python
def _max_inner_product_relevance_score_fn(distance: float) -> float:
    """Normalize the distance to a score on a scale of [0, 1]."""
    if distance > 0:
        return 1.0 - distance
    return -1.0 * distance
```
However, if I use MAX_INNER_PRODUCT, the index must be FlatIP:
``` python
if distance_strategy == DistanceStrategy.MAX_INNER_PRODUCT:
    index = faiss.IndexFlatIP(len(embeddings[0]))
else:
    # Default to L2, currently other metric types not initialized.
    index = faiss.IndexFlatL2(len(embeddings[0]))
```
Thus, the returned "distance" is actually the inner-product similarity (cosine similarity for normalized embeddings), so a larger value means more similar. However, in the method `_max_inner_product_relevance_score_fn`, a larger distance results in a lower score.
Is this a bug?
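A quick numeric check (plain Python, no FAISS needed) of the function quoted above shows the inversion for positive similarities:

```python
def max_inner_product_relevance_score(distance: float) -> float:
    """Copy of the scoring logic quoted above."""
    if distance > 0:
        return 1.0 - distance
    return -1.0 * distance

# With IndexFlatIP the "distance" FAISS returns is an inner product, so the
# more similar document has the larger value, yet it gets the lower score:
more_similar, less_similar = 0.75, 0.25
print(max_inner_product_relevance_score(more_similar))  # 0.25
print(max_inner_product_relevance_score(less_similar))  # 0.75
```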
### Expected behavior
I think the distance should be treated as equivalent to similarity. | The calculated score is wrong when using DistanceStrategy.MAX_INNER_PRODUCT (Faiss) | https://api.github.com/repos/langchain-ai/langchain/issues/14948/comments | 3 | 2023-12-20T09:29:25Z | 2024-03-27T16:09:17Z | https://github.com/langchain-ai/langchain/issues/14948 | 2,050,173,519 | 14,948
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am using LangChain with the GPT-4 model, specifically the create_pandas_dataframe_agent for my use case. About 60% of the time I run the code, I get the error below:
An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass `handle_parsing_errors=True` to the AgentExecutor. This is the error: Could not parse LLM output: `Thought: To answer the question about what the "EffectiveDate" column represents, I need to use common sense based on the column name and the data provided. For the "exists" part, I need to check if there are any standardization issues in the "EffectiveDate" column. I will look at the data provided and think about the possible issues listed.
Now I have already passed the argument `handle_parsing_errors=True` while creating the agent, but it still gives me the above error, suggesting that I pass this argument.
I have also tried giving other values to `handle_parsing_errors`, like passing a custom error message or a function, but I'm still stuck with this error most of the time.
### Suggestion:
_No response_ | Issue: Getting 'An output parsing error occurred' error even after passing 'handle_parsing_errors=True' to the agent | https://api.github.com/repos/langchain-ai/langchain/issues/14947/comments | 6 | 2023-12-20T09:28:03Z | 2024-07-03T16:04:51Z | https://github.com/langchain-ai/langchain/issues/14947 | 2,050,171,291 | 14,947 |
[
"langchain-ai",
"langchain"
] | ### System Info
```python
loader1 = CSVLoader(file_path='/home/calvin/下载/test.csv')
Doc = loader1.load()
text_splitter = CharacterTextSplitter(chunk_size=100, chunk_overlap=0)
texts = text_splitter.split_documents(Doc)
embeddings = OpenAIEmbeddings()
db = Chroma.from_documents(texts, embeddings)
retriever = db.as_retriever()
qa = RetrievalQA.from_chain_type(llm=OpenAI(mode="gpt-3.5-turbo"), chain_type="stuff", retriever=retriever)
query = "1501475820"
print(qa.run(query))
```
I ran this code, but I could not use gpt-3.5-turbo, so I tried `openai migrate`; I exited it, and then I found:

and now it always tells me:

@hwcha
### Who can help?
from langchain.chains import RetrievalQA
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
loader1 = CSVLoader(file_path='/home/calvin/下载/test.csv')
Doc = loader1.load()
text_splitter = CharacterTextSplitter(chunk_size=100,chunk_overlap=0)
texts = text_splitter.split_documents(Doc)
embeddings = OpenAIEmbeddings()
db = Chroma.from_documents(texts, embeddings)
retriever = db.as_retriever()
qa = RetrievalQA.from_chain_type(llm=OpenAI(mode="gpt-3.5-turbo"), chain_type="stuff", retriever=retriever)
query = "1501475820"
print(qa.run(query))
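A configuration sketch (not executed here) of the likely fix: `gpt-3.5-turbo` is a chat model, so use `ChatOpenAI` rather than `OpenAI`, and note the keyword is `model_name`, not `mode`. `retriever` below is the one built earlier in the script:

```python
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA

llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever)
```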
### Expected behavior
i want use gpt-3.5-turbo to query | openai migrate | https://api.github.com/repos/langchain-ai/langchain/issues/14946/comments | 2 | 2023-12-20T08:08:13Z | 2024-03-27T16:09:12Z | https://github.com/langchain-ai/langchain/issues/14946 | 2,050,054,174 | 14,946 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi, I can't find how to deploy gpt-4-turbo in LangChain.
Could anyone please tell me through which module gpt-4-turbo can be deployed?
It seems that `langchain.llm` has already been removed from the new version of langchain.
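For reference, a configuration sketch (not executed here; the model name is the late-2023 preview alias for GPT-4 Turbo and may change):

```python
from langchain.chat_models import ChatOpenAI  # or AzureChatOpenAI for Azure deployments

llm = ChatOpenAI(model_name="gpt-4-1106-preview", temperature=0)
```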
### Suggestion:
_No response_ | Issue: how to deploy gpt-4-turbo through langchain | https://api.github.com/repos/langchain-ai/langchain/issues/14945/comments | 3 | 2023-12-20T05:36:23Z | 2024-05-04T14:21:13Z | https://github.com/langchain-ai/langchain/issues/14945 | 2,049,869,786 | 14,945 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain == 0.0.351
Python == 3.10.6
Running in AWS sagemaker notebook, issue occurred on multiple kernels.
Code worked perfectly yesterday, error occurred upon starting up this morning (12/19/23)
Code worked again upon reverting to 0.0.349
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
pip install -U langchain
from langchain.llms.sagemaker_endpoint import LLMContentHandler
### Expected behavior
Expected behavior is that the import works | ImportError: cannot import name 'LLMContentHandler' from 'langchain.llms.sagemaker_endpoint' occurring with 0.0.351 | https://api.github.com/repos/langchain-ai/langchain/issues/14944/comments | 1 | 2023-12-20T05:20:21Z | 2024-03-27T16:09:07Z | https://github.com/langchain-ai/langchain/issues/14944 | 2,049,856,660 | 14,944 |
[
"langchain-ai",
"langchain"
How can AgentExecutor stream the LLM's `on_llm_new_token` output instead of the AgentExecutorIterator output?
The current effect is that it streams the AgentExecutorIterator output.
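As a toy illustration (plain Python, no langchain dependency, hypothetical names): a streaming callback surfaces tokens via `on_llm_new_token` as they arrive, which is roughly what a custom callback handler attached to the LLM does:

```python
class StreamingHandler:
    """Toy stand-in for a langchain callback handler: collect tokens as
    on_llm_new_token fires instead of waiting for the final chain output."""

    def __init__(self) -> None:
        self.tokens: list[str] = []

    def on_llm_new_token(self, token: str, **kwargs) -> None:
        self.tokens.append(token)

def fake_llm_stream(handler: StreamingHandler, text: str) -> None:
    # simulate an LLM emitting one token at a time
    for token in text.split():
        handler.on_llm_new_token(token + " ")

handler = StreamingHandler()
fake_llm_stream(handler, "streamed tokens from the llm")
print("".join(handler.tokens))
```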
Desired effect: streams LLM on_llm_new_token | How does AgentExecutor make LLM on_llm_new_token most streaming output instead of AgentExecutorIterator? | https://api.github.com/repos/langchain-ai/langchain/issues/14943/comments | 5 | 2023-12-20T03:56:17Z | 2024-03-27T16:09:02Z | https://github.com/langchain-ai/langchain/issues/14943 | 2,049,786,926 | 14,943 |
[
"langchain-ai",
"langchain"
] | ### Feature request
There is a new implementation of function call which I think isn't supported by langchain yet.
https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/function-calling
### Motivation
AzureChatOpenAI models can't be used by OpenAIFunctionAgent due to the implementation issue.
### Your contribution
I've implemented a workaround here. Hoping for a full solution.
```python
from langchain.chat_models import AzureChatOpenAI
class AzureChatOpenAIWithTooling(AzureChatOpenAI):
"""AzureChatOpenAI with a patch to support functions.
Function calling: https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/function-calling
Currently only a single function call is supported.
If multiple function calls are returned by the model, only the first one is used.
"""
def _generate(self, messages, stop=None, run_manager=None, stream=None, **kwargs):
if "functions" in kwargs:
kwargs["tools"] = [
{"type": "function", "function": f} for f in kwargs.pop("functions")
]
return super()._generate(messages, stop, run_manager, stream, **kwargs)
def _create_message_dicts(self, messages, stop):
dicts, params = super()._create_message_dicts(messages, stop)
latest_call_id = {}
for d in dicts:
if "function_call" in d:
# Record the ID for future use
latest_call_id[d["function_call"]["name"]] = d["function_call"]["id"]
# Convert back to tool call
d["tool_calls"] = [
{
"id": d["function_call"]["id"],
"function": {
k: v for k, v in d["function_call"].items() if k != "id"
},
"type": "function",
}
]
d.pop("function_call")
if d["role"] == "function":
# Renaming as tool
d["role"] = "tool"
d["tool_call_id"] = latest_call_id[d["name"]]
return dicts, params
def _create_chat_result(self, response):
result = super()._create_chat_result(response)
for generation in result.generations:
if generation.message.additional_kwargs.get("tool_calls"):
function_calls = [
{**t["function"], "id": t["id"]}
for t in generation.message.additional_kwargs.pop("tool_calls")
]
# Only consider the first one.
generation.message.additional_kwargs["function_call"] = function_calls[
0
]
return result
```
Test code:
```python
def test_azure_chat_openai():
from scripts.aoai_llm import AzureChatOpenAIWithTooling
agent = OpenAIFunctionsAgent.from_llm_and_tools(
llm=AzureChatOpenAIWithTooling(azure_deployment="gpt-35-turbo", api_version="2023-12-01-preview", temperature=0.),
tools=[
StructuredTool.from_function(get_current_weather)
],
)
action = agent.plan([], input="What's the weather like in San Francisco?")
print(action)
tool_output = get_current_weather(**action.tool_input)
result = agent.plan([
(action, tool_output)
], input="What's the weather like in San Francisco?")
print(result)
# Example function hard coded to return the same weather
# In production, this could be your backend API or an external API
def get_current_weather(location: str, unit: str = "fahrenheit"):
"""Get the current weather in a given location"""
if "tokyo" in location.lower():
return json.dumps({"location": "Tokyo", "temperature": "10", "unit": unit})
elif "san francisco" in location.lower():
return json.dumps(
{"location": "San Francisco", "temperature": "72", "unit": unit}
)
elif "paris" in location.lower():
return json.dumps({"location": "Paris", "temperature": "22", "unit": unit})
else:
return json.dumps({"location": location, "temperature": "unknown"})
```
(Note: the original example to ask about weather in three countries simultaneously doesn't work here.)
| Support tool for AzureChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/14941/comments | 1 | 2023-12-20T03:10:10Z | 2024-03-27T16:08:57Z | https://github.com/langchain-ai/langchain/issues/14941 | 2,049,755,836 | 14,941 |
[
"langchain-ai",
"langchain"
] | ### System Info
Traceback (most recent call last):
File "c:\Users\vivek\OneDrive\Desktop\Hackathon\doc.py", line 43, in <module>
db = FAISS.from_documents(documents=pages, embedding=embeddings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\schema\vectorstore.py", line 510, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\vectorstores\faiss.py", line 911, in from_texts
embeddings = embedding.embed_documents(texts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\embeddings\openai.py", line 549, in embed_documents
return self._get_len_safe_embeddings(texts, engine=engine)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\embeddings\openai.py", line 392, in _get_len_safe_embeddings
encoding = tiktoken.encoding_for_model(model_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken\model.py", line 75, in encoding_for_model
return get_encoding(encoding_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken\registry.py", line 63, in get_encoding
enc = Encoding(**constructor())
^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken_ext\openai_public.py", line 64, in cl100k_base
mergeable_ranks = load_tiktoken_bpe(
^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken\load.py", line 115, in load_tiktoken_bpe
return {
^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken\load.py", line 117, in <dictcomp>
for token, rank in (line.split() for line in contents.splitlines() if line)
^^^^^^^^^^^
ValueError: not enough values to unpack (expected 2, got 1)
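This particular `ValueError` in `load_tiktoken_bpe` usually points at a corrupted or truncated tiktoken cache file rather than a LangChain bug. A small helper (plain Python; it mirrors tiktoken's default cache lookup as of roughly 0.5, which is an assumption worth verifying) to find the cache directory, so its contents can be deleted and re-downloaded:

```python
import os
import tempfile

def tiktoken_cache_dir() -> str:
    """Default location of tiktoken's BPE cache (mirrors tiktoken's loader);
    a truncated file here triggers 'not enough values to unpack (expected 2, got 1)'."""
    return (
        os.environ.get("TIKTOKEN_CACHE_DIR")
        or os.environ.get("DATA_GYM_CACHE_DIR")
        or os.path.join(tempfile.gettempdir(), "data-gym-cache")
    )

print(tiktoken_cache_dir())  # delete the files in this directory, then retry
```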
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.embeddings import AzureOpenAIEmbeddings
from langchain.vectorstores import FAISS
from dotenv import load_dotenv
import openai
import os
#load environment variables
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
OPENAI_DEPLOYMENT_ENDPOINT = os.getenv("OPENAI_DEPLOYMENT_ENDPOINT")
OPENAI_DEPLOYMENT_NAME = os.getenv("OPENAI_DEPLOYMENT_NAME")
OPENAI_MODEL_NAME = os.getenv("OPENAI_MODEL_NAME")
OPENAI_DEPLOYMENT_VERSION = os.getenv("OPENAI_DEPLOYMENT_VERSION")
OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME = os.getenv("OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME")
OPENAI_ADA_EMBEDDING_MODEL_NAME = os.getenv("OPENAI_ADA_EMBEDDING_MODEL_NAME")
#init Azure OpenAI
openai.api_type = "azure"
openai.api_version = OPENAI_DEPLOYMENT_VERSION
openai.api_base = OPENAI_DEPLOYMENT_ENDPOINT
openai.api_key = OPENAI_API_KEY
# if __name__ == "__main__":
embeddings=AzureOpenAIEmbeddings(deployment=OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME,
model=OPENAI_ADA_EMBEDDING_MODEL_NAME,
azure_endpoint=OPENAI_DEPLOYMENT_ENDPOINT,
openai_api_type="azure",
chunk_size=100)
# dataPath = "./data/documentation/"
fileName = r'C:\Users\vivek\OneDrive\Desktop\Hackathon\data\FAQ For LTO Hotels.pdf'
#use langchain PDF loader
loader = PyPDFLoader(fileName)
#split the document into chunks
pages = loader.load_and_split()
#Use Langchain to create the embeddings using text-embedding-ada-002
db = FAISS.from_documents(documents=pages, embedding=embeddings)
#save the embeddings into FAISS vector store
db.save_local(r"C:\Users\vivek\OneDrive\Desktop\Hackathon\index")
from dotenv import load_dotenv
import os
import openai
from langchain.chains import RetrievalQA
from langchain.vectorstores import FAISS
from langchain.chains.question_answering import load_qa_chain
from langchain.chat_models import AzureChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import ConversationalRetrievalChain
from langchain.prompts import PromptTemplate
#load environment variables
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
OPENAI_DEPLOYMENT_ENDPOINT = os.getenv("OPENAI_DEPLOYMENT_ENDPOINT")
OPENAI_DEPLOYMENT_NAME = os.getenv("OPENAI_DEPLOYMENT_NAME")
OPENAI_MODEL_NAME = os.getenv("OPENAI_MODEL_NAME")
OPENAI_DEPLOYMENT_VERSION = os.getenv("OPENAI_DEPLOYMENT_VERSION")
OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME = os.getenv("OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME")
OPENAI_ADA_EMBEDDING_MODEL_NAME = os.getenv("OPENAI_ADA_EMBEDDING_MODEL_NAME")
def ask_question(qa, question):
result = qa({"query": question})
print("Question:", question)
print("Answer:", result["result"])
def ask_question_with_context(qa, question, chat_history):
query = "what is Azure OpenAI Service?"
result = qa({"question": question, "chat_history": chat_history})
print("answer:", result["answer"])
chat_history = [(query, result["answer"])]
return chat_history
if __name__ == "__main__":
# Configure OpenAI API
openai.api_type = "azure"
openai.api_base = os.getenv('OPENAI_API_BASE')
openai.api_key = os.getenv("OPENAI_API_KEY")
openai.api_version = os.getenv('OPENAI_API_VERSION')
llm = AzureChatOpenAI(deployment_name=OPENAI_DEPLOYMENT_NAME,
model_name=OPENAI_MODEL_NAME,
openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
openai_api_version=OPENAI_DEPLOYMENT_VERSION,
openai_api_key=OPENAI_API_KEY,
openai_api_type="azure")
embeddings=AzureOpenAIEmbeddings(deployment=OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME,
model=OPENAI_ADA_EMBEDDING_MODEL_NAME,
azure_endpoint=OPENAI_DEPLOYMENT_ENDPOINT,
openai_api_type="azure",
chunk_size=100)
# Initialize gpt-35-turbo and our embedding model
#load the faiss vector store we saved into memory
vectorStore = FAISS.load_local(r"C:\Users\vivek\OneDrive\Desktop\Hackathon\index", embeddings)
#use the faiss vector store we saved to search the local document
retriever = vectorStore.as_retriever(search_type="similarity", search_kwargs={"k":2})
QUESTION_PROMPT = PromptTemplate.from_template("""Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:""")
qa = ConversationalRetrievalChain.from_llm(llm=llm,
retriever=retriever,
condense_question_prompt=QUESTION_PROMPT,
return_source_documents=True,
verbose=False)
chat_history = []
while True:
query = input('you: ')
if query == 'q':
break
chat_history = ask_question_with_context(qa, query, chat_history)
### Expected behavior
QA | Embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/14939/comments | 1 | 2023-12-20T02:50:23Z | 2024-03-27T16:08:52Z | https://github.com/langchain-ai/langchain/issues/14939 | 2,049,741,707 | 14,939 |
[
"langchain-ai",
"langchain"
Suppose I have implemented, based on langchain, a mysql agent for database queries, an apichain for accessing external links, and a knowledge-query agent for RAG. How can I dispatch the user's request to the appropriate agent based on the user's input?
| Agent intent recognition | https://api.github.com/repos/langchain-ai/langchain/issues/14937/comments | 1 | 2023-12-20T02:23:37Z | 2024-03-27T16:08:47Z | https://github.com/langchain-ai/langchain/issues/14937 | 2,049,722,601 | 14,937 |
[
"langchain-ai",
"langchain"
] | ### System Info
C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\embeddings\azure_openai.py:101: UserWarning: As of openai>=1.0.0, Azure endpoints should be specified via the `azure_endpoint` param not `openai_api_base` (or alias `base_url`). Updating `openai_api_base` from <your openai endpoint> to <your openai endpoint>/openai.
warnings.warn(
C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\embeddings\azure_openai.py:108: UserWarning: As of openai>=1.0.0, if `deployment` (or alias `azure_deployment`) is specified then `openai_api_base` (or alias `base_url`) should not be. Instead use `deployment` (or alias `azure_deployment`) and `azure_endpoint`.
warnings.warn(
C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\embeddings\azure_openai.py:116: UserWarning: As of openai>=1.0.0, if `openai_api_base` (or alias `base_url`) is specified it is expected to be of the form https://example-resource.azure.openai.com/openai/deployments/example-deployment. Updating <your openai endpoint> to <your openai endpoint>/openai.
warnings.warn(
Traceback (most recent call last):
File "c:\Users\vivek\OneDrive\Desktop\Hackathon\doc.py", line 28, in <module>
embeddings=AzureOpenAIEmbeddings(deployment=OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for AzureOpenAIEmbeddings
__root__
base_url and azure_endpoint are mutually exclusive (type=value_error)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.embeddings import AzureOpenAIEmbeddings
from langchain.vectorstores import FAISS
from dotenv import load_dotenv
import openai
import os
#load environment variables
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
OPENAI_DEPLOYMENT_ENDPOINT = os.getenv("OPENAI_DEPLOYMENT_ENDPOINT")
OPENAI_DEPLOYMENT_NAME = os.getenv("OPENAI_DEPLOYMENT_NAME")
OPENAI_MODEL_NAME = os.getenv("OPENAI_MODEL_NAME")
OPENAI_DEPLOYMENT_VERSION = os.getenv("OPENAI_DEPLOYMENT_VERSION")
OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME = os.getenv("OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME")
OPENAI_ADA_EMBEDDING_MODEL_NAME = os.getenv("OPENAI_ADA_EMBEDDING_MODEL_NAME")
#init Azure OpenAI
openai.api_type = "azure"
openai.api_version = OPENAI_DEPLOYMENT_VERSION
openai.api_base = OPENAI_DEPLOYMENT_ENDPOINT
openai.api_key = OPENAI_API_KEY
# if __name__ == "__main__":
embeddings=AzureOpenAIEmbeddings(deployment=OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME,
model=OPENAI_ADA_EMBEDDING_MODEL_NAME,
openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
openai_api_type="azure",
chunk_size=100)
# dataPath = "./data/documentation/"
fileName = r'C:\Users\vivek\OneDrive\Desktop\Hackathon\data\FAQ For LTO Hotels.pdf'
#use langchain PDF loader
loader = PyPDFLoader(fileName)
#split the document into chunks
pages = loader.load_and_split()
#Use Langchain to create the embeddings using text-embedding-ada-002
db = FAISS.from_documents(documents=pages, embedding=embeddings)
#save the embeddings into FAISS vector store
db.save_local(r"C:\Users\vivek\OneDrive\Desktop\Hackathon\index")
from dotenv import load_dotenv
import os
import openai
from langchain.chains import RetrievalQA
from langchain.vectorstores import FAISS
from langchain.chains.question_answering import load_qa_chain
from langchain.chat_models import AzureChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains import ConversationalRetrievalChain
from langchain.prompts import PromptTemplate
#load environment variables
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
OPENAI_DEPLOYMENT_ENDPOINT = os.getenv("OPENAI_DEPLOYMENT_ENDPOINT")
OPENAI_DEPLOYMENT_NAME = os.getenv("OPENAI_DEPLOYMENT_NAME")
OPENAI_MODEL_NAME = os.getenv("OPENAI_MODEL_NAME")
OPENAI_DEPLOYMENT_VERSION = os.getenv("OPENAI_DEPLOYMENT_VERSION")
OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME = os.getenv("OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME")
OPENAI_ADA_EMBEDDING_MODEL_NAME = os.getenv("OPENAI_ADA_EMBEDDING_MODEL_NAME")
def ask_question(qa, question):
result = qa({"query": question})
print("Question:", question)
print("Answer:", result["result"])
def ask_question_with_context(qa, question, chat_history):
query = "what is Azure OpenAI Service?"
result = qa({"question": question, "chat_history": chat_history})
print("answer:", result["answer"])
chat_history = [(query, result["answer"])]
return chat_history
if __name__ == "__main__":
# Configure OpenAI API
openai.api_type = "azure"
openai.api_base = os.getenv('OPENAI_API_BASE')
openai.api_key = os.getenv("OPENAI_API_KEY")
openai.api_version = os.getenv('OPENAI_API_VERSION')
llm = AzureChatOpenAI(deployment_name=OPENAI_DEPLOYMENT_NAME,
model_name=OPENAI_MODEL_NAME,
openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
openai_api_version=OPENAI_DEPLOYMENT_VERSION,
openai_api_key=OPENAI_API_KEY,
openai_api_type="azure")
embeddings=OpenAIEmbeddings(deployment=OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME,
model=OPENAI_ADA_EMBEDDING_MODEL_NAME,
openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
openai_api_type="azure",
chunk_size=1)
# Initialize gpt-35-turbo and our embedding model
#load the faiss vector store we saved into memory
vectorStore = FAISS.load_local(r"C:\Users\vivek\OneDrive\Desktop\Hackathon\index", embeddings)
#use the faiss vector store we saved to search the local document
retriever = vectorStore.as_retriever(search_type="similarity", search_kwargs={"k":2})
QUESTION_PROMPT = PromptTemplate.from_template("""Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question.
Chat History:
{chat_history}
Follow Up Input: {question}
Standalone question:""")
qa = ConversationalRetrievalChain.from_llm(llm=llm,
retriever=retriever,
condense_question_prompt=QUESTION_PROMPT,
return_source_documents=True,
verbose=False)
chat_history = []
while True:
query = input('you: ')
if query == 'q':
break
chat_history = ask_question_with_context(qa, query, chat_history)
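A likely fix for the `base_url and azure_endpoint are mutually exclusive` error (a configuration sketch, not executed here): with openai>=1.0, pass the endpoint via `azure_endpoint` and do not set `openai_api_base` at all. Variable names are the ones defined earlier in this script:

```python
from langchain.embeddings import AzureOpenAIEmbeddings

embeddings = AzureOpenAIEmbeddings(
    azure_deployment=OPENAI_ADA_EMBEDDING_DEPLOYMENT_NAME,
    model=OPENAI_ADA_EMBEDDING_MODEL_NAME,
    azure_endpoint=OPENAI_DEPLOYMENT_ENDPOINT,  # instead of openai_api_base
    openai_api_version=OPENAI_DEPLOYMENT_VERSION,
    chunk_size=100,
)
```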
### Expected behavior
QA | AZure Openai Embeddings | https://api.github.com/repos/langchain-ai/langchain/issues/14934/comments | 8 | 2023-12-20T01:40:55Z | 2024-05-23T16:34:06Z | https://github.com/langchain-ai/langchain/issues/14934 | 2,049,674,847 | 14,934 |
[
"langchain-ai",
"langchain"
] | ### System Info
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\flask\app.py", line 2190, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\flask\app.py", line 1486, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\flask\app.py", line 1484, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\flask\app.py", line 1469, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\vivek\OneDrive\Desktop\SOPPOC\flask_app.py", line 43, in chat
return RCXStreakanswer(input)
^^^^^^^^^^^^^^^^^^^^^^
File "c:\Users\vivek\OneDrive\Desktop\SOPPOC\RCX_Streak.py", line 53, in RCXStreakanswer
openAIEmbedd = FAISS.from_documents(texts, embeddings)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\schema\vectorstore.py", line 510, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\vectorstores\faiss.py", line 911, in from_texts
embeddings = embedding.embed_documents(texts)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\embeddings\openai.py", line 549, in embed_documents
return self._get_len_safe_embeddings(texts, engine=engine)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\langchain\embeddings\openai.py", line 392, in _get_len_safe_embeddings
encoding = tiktoken.encoding_for_model(model_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken\model.py", line 97, in encoding_for_model
return get_encoding(encoding_name_for_model(model_name))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken\registry.py", line 73, in get_encoding
enc = Encoding(**constructor())
^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken_ext\openai_public.py", line 64, in cl100k_base
mergeable_ranks = load_tiktoken_bpe(
^^^^^^^^^^^^^^^^^^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken\load.py", line 124, in load_tiktoken_bpe
return {
^
File "C:\Users\vivek\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages\tiktoken\load.py", line 126, in <dictcomp>
for token, rank in (line.split() for line in contents.splitlines() if line)
^^^^^^^^^^^
ValueError: not enough values to unpack (expected 2, got 1)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
loader = Docx2txtLoader(doc_path)
documents.extend(loader.load())
content = documents
text_splitter = RecursiveCharacterTextSplitter(
chunk_size = 100,
chunk_overlap = 20,
separators=["\n\n", "\n", "."]
)
texts = text_splitter.split_documents(content)
print(texts)
print()
embeddings = OpenAIEmbeddings()
openAIEmbedd = FAISS.from_documents(texts, embeddings)
print(openAIEmbedd)
prompt_template = """Given the following context and a question, generate an answer.
Based on user input extract only data for the given question from context. \
CONTEXT: {context}
QUESTION: {question}"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
retriever_openai = openAIEmbedd.as_retriever(search_kwargs={"k": 2})
print(retriever_openai)
chain = RetrievalQA.from_chain_type(llm=llm,
chain_type="stuff",
retriever=retriever_openai,
return_source_documents=True,
chain_type_kwargs={"prompt": PROMPT})
ans=chain(user_message)
output= ans['result']
return output
### Expected behavior
should return answer | ValueError: not enough values to unpack (expected 2, got 1) | https://api.github.com/repos/langchain-ai/langchain/issues/14918/comments | 1 | 2023-12-19T17:30:51Z | 2024-03-26T16:08:41Z | https://github.com/langchain-ai/langchain/issues/14918 | 2,049,130,491 | 14,918 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain: v0.0.350
OS: Linux
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
The problem occurs when you use Azure with a GPT-4 model, because the Azure API will always respond with `gpt-4` as the model name. You can also see this in the official Microsoft documentation: https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/gpt-with-vision#output. It will therefore calculate the wrong price: if you use Turbo, the computed price will be 3x what it actually should be.
Code to Reproduce:
```python
llm = AzureChatOpenAI(
deployment_name="GPT4-TURBO"
)
with get_openai_callback() as cb:
# Run LLM
print((cb.total_tokens / 1000) * 0.01, "is instead", cb.total_cost)
```
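A toy reproduction of the mismap (plain Python; the per-1K-token prompt rates below are illustrative assumptions, not LangChain's actual table): Azure reports `gpt-4` as the model name even for GPT-4 Turbo deployments, so any name-keyed price table bills the non-Turbo rate:

```python
# Illustrative late-2023 prompt prices per 1K tokens (assumed, not authoritative)
PRICE_PER_1K_PROMPT = {"gpt-4": 0.03, "gpt-4-1106-preview": 0.01}

def prompt_cost(model_name: str, prompt_tokens: int) -> float:
    return prompt_tokens / 1000 * PRICE_PER_1K_PROMPT[model_name]

print(prompt_cost("gpt-4", 1000))               # rate applied when Azure says "gpt-4"
print(prompt_cost("gpt-4-1106-preview", 1000))  # rate a Turbo deployment should pay
```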
### Expected behavior
It should return the correct price. | Issue when working with Azure and OpenAI Callback | https://api.github.com/repos/langchain-ai/langchain/issues/14912/comments | 4 | 2023-12-19T15:33:15Z | 2024-06-14T23:28:11Z | https://github.com/langchain-ai/langchain/issues/14912 | 2,048,902,467 | 14,912 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have a document that contains general text and tables.
I embedded this document using LangChain to build a bot with Node.js.
The bot answers correctly for the general text in the document, but gives incorrect answers for the table data.
How do I fix this so that questions about the table data are answered correctly?
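One commonly suggested workaround (a sketch with hypothetical data, not a LangChain API): linearize each table row into a self-describing string before embedding, so each chunk keeps its row context:

```python
def rows_to_texts(header: list[str], rows: list[list[str]]) -> list[str]:
    """Linearize each table row into a 'col: value' sentence so row-level
    facts survive chunking and embedding."""
    return [", ".join(f"{h}: {v}" for h, v in zip(header, row)) for row in rows]

texts = rows_to_texts(["city", "population"], [["Paris", "2.1M"], ["Tokyo", "14M"]])
print(texts)
```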
### Suggestion:
_No response_ | How to embed the table data? | https://api.github.com/repos/langchain-ai/langchain/issues/14911/comments | 2 | 2023-12-19T15:11:34Z | 2024-03-26T16:08:36Z | https://github.com/langchain-ai/langchain/issues/14911 | 2,048,861,237 | 14,911 |
[
"langchain-ai",
"langchain"
] | ### System Info
python: 3.11.4
langchain: 0.0.351
requests: 2.31.0
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
We have enabled the LangSmith tracing and after upgrading LangChain from `0.0.266` to `0.0.351` we started getting the following warnings:
```
Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:2423)'))': /runs
```
We also get the same warnings when we try to update the feedback from the LangSmith client.
```
Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'RemoteDisconnected('Remote end closed connection without response')': /sessions?limit=1&name=mirror
```
Unfortunately, there is no additional stack trace.
This behavior is not consistent but it occurs randomly.
### Expected behavior
The expected behavior is all the runs to be propagated to the LangSmith and does not have this kind of warning. | `urllib3.connectionpool` warnings after upgrading to LangChain 0.0.351 | https://api.github.com/repos/langchain-ai/langchain/issues/14909/comments | 1 | 2023-12-19T14:53:36Z | 2024-03-26T16:08:31Z | https://github.com/langchain-ai/langchain/issues/14909 | 2,048,823,526 | 14,909 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Ref: https://python.langchain.com/docs/integrations/providers/wandb_tracking
> Note: the WandbCallbackHandler is being deprecated in favour of the WandbTracer . In future please use the WandbTracer as it is more flexible and allows for more granular logging. To know more about the WandbTracer refer to the [agent_with_wandb_tracing.html](https://python.langchain.com/en/latest/integrations/agent_with_wandb_tracing.html) notebook or use the following [colab notebook](http://wandb.me/prompts-quickstart). To know more about Weights & Biases Prompts refer to the following [prompts documentation](https://docs.wandb.ai/guides/prompts).
The link to `agent_with_wandb_tracing.html` results in an HTTP 404.
### Idea or request for content:
_No response_ | Link to agent_with_wandb_tracing.html notebook is broken | https://api.github.com/repos/langchain-ai/langchain/issues/14905/comments | 1 | 2023-12-19T14:22:07Z | 2024-03-26T16:08:26Z | https://github.com/langchain-ai/langchain/issues/14905 | 2,048,761,131 | 14,905 |
[
"langchain-ai",
"langchain"
] | Cannot import LLMContentHandler
langchain: 0.0.351
python: 3.9
To reproduce:
``` python
from langchain.llms.sagemaker_endpoint import LLMContentHandler
```
The issue could be resolved by updating
https://github.com/langchain-ai/langchain/blob/583696732cbaa3d1cf3a3a9375539a7e8785850c/libs/langchain/langchain/llms/sagemaker_endpoint.py#L1C5-L7
as follows:
``` python
from langchain_community.llms.sagemaker_endpoint import (
    LLMContentHandler,
    SagemakerEndpoint,
)

__all__ = [
    "SagemakerEndpoint",
    "LLMContentHandler",
]
```
| Issue: cannot import name 'LLMContentHandler' from 'langchain.llms.sagemaker_endpoint | https://api.github.com/repos/langchain-ai/langchain/issues/14904/comments | 1 | 2023-12-19T13:53:31Z | 2023-12-19T15:00:33Z | https://github.com/langchain-ai/langchain/issues/14904 | 2,048,707,893 | 14,904 |
[
"langchain-ai",
"langchain"
] | ### System Info
In chain.py, the relevant code is as below:
```
def get_retriever(text):
    _query = text
    llm = ...
    chroma_docs = [...]
    _model_name, _embedding = get_embedding_HuggingFace()
    chroma_vdb = Chroma.from_documents(chroma_docs, _embedding)
    document_content_description = "..."
    metadata_field_info = [...]
    retriever = get_structured_retriever(llm, chroma_vdb, document_content_description, metadata_field_info, _query)
    return retriever

chain = (
    RunnableParallel({
        "context": itemgetter("question") | RunnableLambda(get_retriever),
        "question": RunnablePassthrough()
    })
    | prompt
    | llm
    | StrOutputParser()
)
```
When running the playground, there is no input box (shown below):

However, no error message appears in the `langchain serve` logs.
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
See the code above.
### Expected behavior
The playground should show an input box. | No input box show up when running the playground | https://api.github.com/repos/langchain-ai/langchain/issues/14902/comments | 1 | 2023-12-19T13:16:06Z | 2024-03-26T16:08:21Z | https://github.com/langchain-ai/langchain/issues/14902 | 2,048,640,189 | 14,902
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Current:
```
class AzureChatOpenAI(ChatOpenAI):
"""`Azure OpenAI` Chat Completion API.
To use this class you
must have a deployed model on Azure OpenAI. Use `deployment_name` in the
constructor to refer to the "Model deployment name" in the Azure portal.
In addition, you should have the ``openai`` python package installed, and the
following environment variables set or passed in constructor in lower case:
- ``AZURE_OPENAI_API_KEY``
- ``AZURE_OPENAI_API_ENDPOINT``
- ``AZURE_OPENAI_AD_TOKEN``
- ``OPENAI_API_VERSION``
- ``OPENAI_PROXY``
```
### Idea or request for content:
It should be
```
class AzureChatOpenAI(ChatOpenAI):
"""`Azure OpenAI` Chat Completion API.
To use this class you
must have a deployed model on Azure OpenAI. Use `deployment_name` in the
constructor to refer to the "Model deployment name" in the Azure portal.
In addition, you should have the ``openai`` python package installed, and the
following environment variables set or passed in constructor in lower case:
- ``AZURE_OPENAI_API_KEY``
- ``AZURE_OPENAI_ENDPOINT`` <---------- **Changed**
- ``AZURE_OPENAI_AD_TOKEN``
- ``OPENAI_API_VERSION``
- ``OPENAI_PROXY``
``` | DOC: Wrong parameter name in doc string | https://api.github.com/repos/langchain-ai/langchain/issues/14901/comments | 1 | 2023-12-19T12:54:07Z | 2024-03-26T16:08:16Z | https://github.com/langchain-ai/langchain/issues/14901 | 2,048,603,807 | 14,901 |
[
"langchain-ai",
"langchain"
] | ### System Info
The `description` attributes of the function parameters defined in our Pydantic *v2* model are missing from the output of `convert_to_openai_function`, because it does not recognize a Pydantic v2 `BaseModel` as a v1 `BaseModel`.
https://github.com/langchain-ai/langchain/blob/16399fd61d7744c529cca46464489e467b4b7741/libs/langchain/langchain/chains/openai_functions/base.py#L156-L161
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chains.openai_functions.base import convert_to_openai_function
from pydantic.v1 import BaseModel as BaseModelV1, Field as FieldV1
from pydantic import BaseModel as BaseModelV2, Field as FieldV2
class FuncV1(BaseModelV1):
    "Pydantic v1 model."
    output: str = FieldV1(description="A output text")


class FuncV2(BaseModelV2):
    "Pydantic v2 model."
    output: str = FieldV2(description="A output text")


print(convert_to_openai_function(FuncV1))
# {'name': 'FuncV1', 'description': 'Pydantic v1 model.', 'parameters': {'title': 'FuncV1', 'description': 'Pydantic v1 model.', 'type': 'object', 'properties': {'output': {'title': 'Output', 'description': 'A output text', 'type': 'string'}}, 'required': ['output']}}

print(convert_to_openai_function(FuncV2))
# {'name': 'FuncV2', 'description': 'Pydantic v2 model.', 'parameters': {'type': 'object', 'properties': {'output': {'type': 'string'}}, 'required': ['output']}}
```
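Until this is fixed, a possible stopgap (my suggestion, not part of the library) is to build the function definition from the model's JSON schema yourself — Pydantic v2's `model_json_schema()` keeps the field descriptions. A minimal sketch, using a plain dict in place of the generated schema so it stands alone:

```python
def schema_to_openai_function(schema: dict) -> dict:
    """Build an OpenAI function definition from a JSON schema dict, keeping
    the per-field descriptions that convert_to_openai_function drops."""
    return {
        "name": schema.get("title", ""),
        "description": schema.get("description", ""),
        "parameters": {
            "type": "object",
            "properties": schema.get("properties", {}),
            "required": schema.get("required", []),
        },
    }

# Shape of what FuncV2.model_json_schema() returns for the model above:
schema = {
    "title": "FuncV2",
    "description": "Pydantic v2 model.",
    "type": "object",
    "properties": {"output": {"type": "string", "description": "A output text"}},
    "required": ["output"],
}

fn = schema_to_openai_function(schema)
print(fn["parameters"]["properties"]["output"]["description"])  # -> A output text
```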
### Expected behavior
The `description` attributes declared in the Pydantic v2 model should appear in the output. | `convert_to_openai_function` drops `description` for each parameter | https://api.github.com/repos/langchain-ai/langchain/issues/14899/comments | 9 | 2023-12-19T10:51:41Z | 2024-06-01T00:07:38Z | https://github.com/langchain-ai/langchain/issues/14899 | 2,048,405,277 | 14,899
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am working on implementing streaming for my ConversationRetrieval chain calls, and I plan to leverage the `AsyncIteratorCallbackHandler` along with its `aiter` method. While reviewing the source code, I noticed that the response from the `on_llm_end` method is not currently added to the queue. My goal is to enhance the `aiter` method so that the response is also included in the queue. This way, I can stream the final response to my client and use it to update cached data in my frontend. Additionally, I intend to leverage the `on_llm_end` method to update my database with the received response. Could you guide me on how to modify the `aiter` method within the `AsyncIteratorCallbackHandler` to align with these requirements?
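To make the question concrete, here is a stripped-down, hypothetical sketch of the queue/`aiter` pattern I have in mind — this is not the real `AsyncIteratorCallbackHandler`, just an illustration of enqueuing the final response in `on_llm_end` before signalling completion:

```python
import asyncio


class TokenQueueHandler:
    """Illustrative sketch (not the real AsyncIteratorCallbackHandler):
    tokens are queued as they stream in, and the final response is also
    queued in on_llm_end before the done event is set."""

    def __init__(self) -> None:
        self.queue: "asyncio.Queue[str]" = asyncio.Queue()
        self.done = asyncio.Event()

    async def on_llm_new_token(self, token: str) -> None:
        await self.queue.put(token)

    async def on_llm_end(self, final_response: str) -> None:
        # The change discussed above: enqueue the final response too,
        # so the client receives it before the stream closes.
        await self.queue.put(final_response)
        self.done.set()

    async def aiter(self):
        # Drain the queue until the producer is done and nothing is left.
        while not (self.done.is_set() and self.queue.empty()):
            try:
                yield await asyncio.wait_for(self.queue.get(), timeout=0.1)
            except asyncio.TimeoutError:
                continue


async def demo() -> list:
    handler = TokenQueueHandler()
    await handler.on_llm_new_token("Hello")
    await handler.on_llm_new_token(" world")
    await handler.on_llm_end("Hello world")
    return [item async for item in handler.aiter()]


streamed = asyncio.run(demo())
print(streamed)  # the last element is the full final response
```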
### Suggestion:
_No response_ | Issue: Enhancing Streaming and Database Integration in ConversationRetrieval with AsyncIteratorCallbackHandler | https://api.github.com/repos/langchain-ai/langchain/issues/14898/comments | 5 | 2023-12-19T10:08:44Z | 2024-04-03T16:08:04Z | https://github.com/langchain-ai/langchain/issues/14898 | 2,048,330,418 | 14,898 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Currently (0.0.350) the Xata integration always creates new records with `XataVectorStore.from_documents`.
Provide the option to update embeddings and column content of existing record ids.
### Motivation
This will provide the capability to update Xata Vector Stores.
### Your contribution
The Xata development team plans to contribute this enhancement. | Update records in the Xata integration | https://api.github.com/repos/langchain-ai/langchain/issues/14897/comments | 1 | 2023-12-19T10:00:54Z | 2024-03-26T16:08:11Z | https://github.com/langchain-ai/langchain/issues/14897 | 2,048,316,746 | 14,897
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.350
langchain-community 0.0.3
langchain-core 0.1.1
yandexcloud 0.248.0
Python 3.9.0 Windows 10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps
```bash
!pip install yandexcloud langchain
```
```python
from langchain.chains import LLMChain
from langchain.llms import YandexGPT
from langchain.prompts import PromptTemplate
import os
os.environ["YC_IAM_TOKEN"] = "xxxxxxxxxxxxxxxxxxxx"
os.environ["YC_FOLDER_ID"] = "yyyyyyyyyyyyyyyyyyyy"
llm = YandexGPT()
template = "What is the capital of {country}?"
prompt = PromptTemplate(template=template, input_variables=["country"])
llm_chain = LLMChain(prompt=prompt, llm=llm)
country = "Russia"
llm_chain.run(country)
```
Error
```
Requirement already satisfied: yandexcloud in c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages (0.248.0)
Collecting langchain
  Downloading langchain-0.0.350-py3-none-any.whl.metadata (13 kB)
Requirement already satisfied: cryptography>=2.8 in c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages (from yandexcloud) (41.0.7)
Requirement already satisfied: grpcio>=1.56.2 in c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages (from yandexcloud) (1.59.3)
Requirement already satisfied: protobuf>=4.23.4 in c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages (from yandexcloud) (4.25.1)
Requirement already satisfied: googleapis-common-protos>=1.59.1 in c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages (from yandexcloud) (1.62.0)
Requirement already satisfied: pyjwt>=1.7.1 in c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages (from yandexcloud) (2.8.0)
Requirement already satisfied: requests>=2.22.0 in c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages (from yandexcloud) (2.31.0)
Requirement already satisfied: six>=1.14.0 in c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages (from yandexcloud) (1.16.0)
Requirement already satisfied: PyYAML>=5.3 in c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages (from langchain) (6.0.1)
Requirement already satisfied: SQLAlchemy<3,>=1.4 in c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages (from langchain) (2.0.23)
Requirement already satisfied: aiohttp<4.0.0,>=3.8.3 in c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages (from langchain) (3.9.0)
Requirement already satisfied: async-timeout<5.0.0,>=4.0.0 in c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages (from langchain) (4.0.3)
Requirement already satisfied: dataclasses-json<0.7,>=0.5.7 in c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages (from langchain) (0.6.2)
Collecting jsonpatch<2.0,>=1.33 (from langchain)
  Downloading jsonpatch-1.33-py2.py3-none-any.whl.metadata (3.0 kB)
Collecting langchain-community<0.1,>=0.0.2 (from langchain)
  Downloading langchain_community-0.0.3-py3-none-any.whl.metadata (7.0 kB)
Collecting langchain-core<0.2,>=0.1 (from langchain)
  Downloading langchain_core-0.1.1-py3-none-any.whl.metadata (4.0 kB)
Collecting langsmith<0.1.0,>=0.0.63 (from langchain)
  Downloading langsmith-0.0.71-py3-none-any.whl.metadata (10 kB)
Requirement already satisfied: numpy<2,>=1 in c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages (from langchain) (1.23.5)
Requirement already satisfied: pydantic<3,>=1 in c:\users\achme\projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages (from langchain) (1.10.13)
...
   ---------------------------------------- 46.2/46.2 kB 2.2 MB/s eta 0:00:00
Downloading jsonpointer-2.4-py2.py3-none-any.whl (7.8 kB)
Installing collected packages: jsonpointer, langsmith, jsonpatch, langchain-core, langchain-community, langchain
Successfully installed jsonpatch-1.33 jsonpointer-2.4 langchain-0.0.350 langchain-community-0.0.3 langchain-core-0.1.1 langsmith-0.0.71
[notice] A new release of pip is available: 23.3.1 -> 23.3.2
[notice] To update, run: python.exe -m pip install --upgrade pip
---------------------------------------------------------------------------
_MultiThreadedRendezvous                  Traceback (most recent call last)
Cell In[18], line 5
      3 llm_chain = LLMChain(prompt=prompt, llm=llm)
      4 country = "Russia"
----> 5 llm_chain.run(country)

File c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain\chains\base.py:507, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
    505 if len(args) != 1:
    506     raise ValueError("`run` supports only one positional argument.")
--> 507 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
    508     _output_key
    509 ]
    511 if kwargs and not args:
    512     return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
    513         _output_key
    514     ]

File c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain\chains\base.py:312, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    310 except BaseException as e:
    311     run_manager.on_chain_error(e)
--> 312     raise e
    313 run_manager.on_chain_end(outputs)
    314 final_outputs: Dict[str, Any] = self.prep_outputs(
    315     inputs, outputs, return_only_outputs
    316 )

File c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain\chains\base.py:306, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
    299 run_manager = callback_manager.on_chain_start(
    300     dumpd(self),
    301     inputs,
    302     name=run_name,
    303 )
    304 try:
    305     outputs = (
--> 306         self._call(inputs, run_manager=run_manager)
    307         if new_arg_supported
    308         else self._call(inputs)
    309     )
    310 except BaseException as e:
    311     run_manager.on_chain_error(e)

File c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain\chains\llm.py:103, in LLMChain._call(self, inputs, run_manager)
     98 def _call(
     99     self,
    100     inputs: Dict[str, Any],
    101     run_manager: Optional[CallbackManagerForChainRun] = None,
    102 ) -> Dict[str, str]:
--> 103     response = self.generate([inputs], run_manager=run_manager)
    104     return self.create_outputs(response)[0]

File c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain\chains\llm.py:115, in LLMChain.generate(self, input_list, run_manager)
    113 callbacks = run_manager.get_child() if run_manager else None
    114 if isinstance(self.llm, BaseLanguageModel):
--> 115     return self.llm.generate_prompt(
    116         prompts,
    117         stop,
    118         callbacks=callbacks,
    119         **self.llm_kwargs,
    120     )
    121 else:
    122     results = self.llm.bind(stop=stop, **self.llm_kwargs).batch(
    123         cast(List, prompts), {"callbacks": callbacks}
    124     )

File c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain_core\language_models\llms.py:516, in BaseLLM.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    508 def generate_prompt(
    509     self,
    510     prompts: List[PromptValue],
   (...)
    513     **kwargs: Any,
    514 ) -> LLMResult:
    515     prompt_strings = [p.to_string() for p in prompts]
--> 516     return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)

File c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain_core\language_models\llms.py:666, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, **kwargs)
    650     raise ValueError(
    651         "Asked to cache, but no cache found at `langchain.cache`."
    652     )
    653 run_managers = [
    654     callback_manager.on_llm_start(
    655         dumpd(self),
   (...)
    664     )
    665 ]
--> 666 output = self._generate_helper(
    667     prompts, stop, run_managers, bool(new_arg_supported), **kwargs
    668 )
    669 return output
    670 if len(missing_prompts) > 0:

File c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain_core\language_models\llms.py:553, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
    551 for run_manager in run_managers:
    552     run_manager.on_llm_error(e, response=LLMResult(generations=[]))
--> 553 raise e
    554 flattened_outputs = output.flatten()
    555 for manager, flattened_output in zip(run_managers, flattened_outputs):

File c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain_core\language_models\llms.py:540, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
    530 def _generate_helper(
    531     self,
    532     prompts: List[str],
   (...)
    536     **kwargs: Any,
    537 ) -> LLMResult:
    538     try:
    539         output = (
--> 540             self._generate(
    541                 prompts,
    542                 stop=stop,
    543                 # TODO: support multiple run managers
    544                 run_manager=run_managers[0] if run_managers else None,
    545                 **kwargs,
    546             )
    547             if new_arg_supported
    548             else self._generate(prompts, stop=stop)
[549](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:549) )
[550](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:550) except BaseException as e:
[551](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:551) for run_manager in run_managers:
File [c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain_core\language_models\llms.py:1069](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:1069), in LLM._generate(self, prompts, stop, run_manager, **kwargs)
[1066](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:1066) new_arg_supported = inspect.signature(self._call).parameters.get("run_manager")
[1067](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:1067) for prompt in prompts:
[1068](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:1068) text = (
-> [1069](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:1069) self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
[1070](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:1070) if new_arg_supported
[1071](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:1071) else self._call(prompt, stop=stop, **kwargs)
[1072](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:1072) )
[1073](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:1073) generations.append([Generation(text=text)])
[1074](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_core/language_models/llms.py:1074) return LLMResult(generations=generations)
File [c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\langchain_community\llms\yandex.py:131](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_community/llms/yandex.py:131), in YandexGPT._call(self, prompt, stop, run_manager, **kwargs)
[129](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_community/llms/yandex.py:129) metadata = (("authorization", f"Api-Key {self.api_key}"),)
[130](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_community/llms/yandex.py:130) res = stub.Instruct(request, metadata=metadata)
--> [131](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_community/llms/yandex.py:131) text = list(res)[0].alternatives[0].text
[132](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_community/llms/yandex.py:132) if stop is not None:
[133](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/langchain_community/llms/yandex.py:133) text = enforce_stop_tokens(text, stop)
File [c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\grpc\_channel.py:541](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/grpc/_channel.py:541), in _Rendezvous.__next__(self)
[540](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/grpc/_channel.py:540) def __next__(self):
--> [541](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/grpc/_channel.py:541) return self._next()
File [c:\Users\achme\Projects\configured-dialogs\tests\elma365-community\.venv\lib\site-packages\grpc\_channel.py:967](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/grpc/_channel.py:967), in _MultiThreadedRendezvous._next(self)
[965](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/grpc/_channel.py:965) raise StopIteration()
[966](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/grpc/_channel.py:966) elif self._state.code is not None:
--> [967](file:///C:/Users/achme/Projects/configured-dialogs/tests/elma365-community/.venv/lib/site-packages/grpc/_channel.py:967) raise self
_MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
status = StatusCode.UNAUTHENTICATED
details = "You have to specify folder ID for user account"
debug_error_string = "UNKNOWN:Error received from peer ipv4:158.160.54.160:443 {created_time:"2023-12-18T13:29:25.0934987+00:00", grpc_status:16, grpc_message:"You have to specify folder ID for user account"}"
>
```
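For what it's worth, the traceback above ends in `langchain_community.llms.yandex`, and the gRPC error says a folder ID is required. A hedged sketch of the likely fix — pass `folder_id` alongside the API key when constructing `YandexGPT`. The helper below is illustrative (it is not part of LangChain), and the field names `api_key`/`folder_id` should be verified against your installed `langchain_community` version:

```python
from typing import Optional


def yandex_llm_kwargs(api_key: str, folder_id: Optional[str] = None) -> dict:
    """Build kwargs for langchain's YandexGPT. A folder ID appears to be
    mandatory when the API key belongs to a user account, which is what
    the UNAUTHENTICATED error above is complaining about."""
    if not folder_id:
        raise ValueError("You have to specify folder ID for user account")
    return {"api_key": api_key, "folder_id": folder_id}


kwargs = yandex_llm_kwargs("my-api-key", folder_id="b1g-example-folder")
print(kwargs["folder_id"])  # -> b1g-example-folder
```

With those kwargs in hand, the model would be constructed as `YandexGPT(**kwargs)` (both placeholder values above are assumptions, not real credentials).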
### Expected behavior
the model responds successfully | YandexGPT crashes with error "You have to specify folder ID for user account" | https://api.github.com/repos/langchain-ai/langchain/issues/14896/comments | 3 | 2023-12-19T09:29:50Z | 2023-12-19T09:56:08Z | https://github.com/langchain-ai/langchain/issues/14896 | 2,048,263,486 | 14,896 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hi folks,
I am hitting a problem where the agent sometimes gives the exact same answer it produced for the previous question. Here is a screenshot of my replies:

Since it returns the exact same string as the answer, I want to put in a manual check: whenever the agent gives an answer identical to the previous one, I will query it again for a new response.
I want to know how I can access the messages from the chat history.
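Until the root cause is found, a minimal, framework-agnostic sketch of the manual check described above — it assumes the chat history is available as a list of `(role, content)` pairs (with a LangChain memory object you can typically get at the underlying messages via `memory.chat_memory.messages`, where each message exposes a `.content` attribute; treat those names as assumptions to verify against your version):

```python
def is_repeat(history, reply):
    """True if `reply` is byte-for-byte identical to the last AI message."""
    ai_messages = [content for role, content in history if role == "ai"]
    return bool(ai_messages) and ai_messages[-1] == reply


def query_with_retry(agent_fn, history, question, max_retries=2):
    """Re-query the agent when it parrots its previous answer."""
    reply = agent_fn(question)
    for _ in range(max_retries):
        if not is_repeat(history, reply):
            break
        reply = agent_fn(question)
    return reply


# toy demo: an "agent" that repeats itself once before giving a fresh answer
answers = iter(["same old answer", "a new answer"])
history = [("human", "q1"), ("ai", "same old answer")]
print(query_with_retry(lambda q: next(answers), history, "q2"))  # -> a new answer
```

In real code, `agent_fn` would wrap the agent executor call, and the history list would be built from the memory's stored messages.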
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
It happens only rarely, so there is no exact method to reproduce it.
### Expected behavior
A code snippet that checks the previous replies. | Comparing the agent reply with the previous conversation | https://api.github.com/repos/langchain-ai/langchain/issues/14895/comments | 2 | 2023-12-19T09:29:35Z | 2024-03-26T16:08:06Z | https://github.com/langchain-ai/langchain/issues/14895 | 2,048,263,074 | 14,895 |
[
"langchain-ai",
"langchain"
] | ### System Info
python 3.10
langchain 0.0.350
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
output_parser = LineListOutputParser()
QUERY_PROMPT = PromptTemplate(
    input_variables=["question"],
    template="""You are an AI language model assistant. Your task is to generate five
    different versions of the given user question to retrieve relevant documents from a vector
    database. By generating multiple perspectives on the user question, your goal is to help
    the user overcome some of the limitations of the distance-based similarity search.
    Provide these alternative questions separated by newlines.
    Original question: {question}""",
)
llm_chain = LLMChain(llm=self.llm, prompt=QUERY_PROMPT, output_parser=output_parser)
db = self.embeddings_dict[doc_id].as_retriever(search_kwargs={"k": 15})
multi_query_retriever = MultiQueryRetriever.from_llm(retriever=db, llm=self.llm)
relevant_documents = multi_query_retriever.get_relevant_documents(query)
```
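As far as I can tell, `MultiQueryRetriever` does not expose a concurrency cap, so a workaround (hedged sketch, not LangChain API) is to bound the parallelism yourself with a semaphore whenever you fan out calls to a local, GPU-backed LLM:

```python
import asyncio


async def bounded_gather(coros, limit=4):
    """Run awaitables with at most `limit` in flight at a time."""
    sem = asyncio.Semaphore(limit)

    async def guarded(coro):
        async with sem:
            return await coro

    return await asyncio.gather(*(guarded(c) for c in coros))


# stand-in for an expensive LLM call
async def fake_llm(query):
    await asyncio.sleep(0)
    return f"docs for {query}"


queries = ["q1", "q2", "q3", "q4", "q5"]
results = asyncio.run(bounded_gather([fake_llm(q) for q in queries], limit=4))
print(results[0])  # -> docs for q1
```

In real code, `fake_llm` would be replaced by the async LLM/retriever call; the semaphore guarantees at most four requests ever hit the model at once.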
### Expected behavior
Limit the maximum number of parallel LLM calls, for example to 4. | MultiQueryRetriever consume too much GPU mem, request to limit the maximum llm call | https://api.github.com/repos/langchain-ai/langchain/issues/14894/comments | 1 | 2023-12-19T08:44:18Z | 2024-03-26T16:08:01Z | https://github.com/langchain-ai/langchain/issues/14894 | 2,048,190,743 | 14,894 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python:3.11-slim-bullseye base docker image
Langchain version: 0.0.348
qdrant-client: 1.7.0
Qdrant database 1.7.1 (deployed on AWS cluster)
Reproduces regardless of whether `prefer_grpc` is true or false
### Who can help?
@agola11 @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
1. Use ConversationalRetrievalChain with Qdrant vectordb
2. Use async acall interface
3. Maybe wait some idle time (15 min?)
You will experience a lot of errors from Qdrant:
AioRpcError of RPC that terminated with: status = StatusCode.UNAVAILABLE details = "recvmsg:Connection reset by peer" debug_error_string = "UNKNOWN:Error received from peer {created_time}" >, type: AioRpcError
The request succeeds after a retry, but this causes a significant delay in the response.
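While waiting for a proper fix, one mitigation is to retry transient failures with backoff. This is a hedged sketch — in real code you would catch `grpc.aio.AioRpcError` with `StatusCode.UNAVAILABLE` around the `acall` rather than the stand-in `ConnectionError` used here:

```python
import time
from functools import wraps


def with_retries(max_attempts=3, base_delay=0.0, retry_on=(ConnectionError,)):
    """Retry transient failures (e.g. gRPC UNAVAILABLE) with simple backoff."""
    def deco(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(max_attempts):
                try:
                    return fn(*args, **kwargs)
                except retry_on:
                    if attempt == max_attempts - 1:
                        raise
                    time.sleep(base_delay * 2 ** attempt)
        return wrapper
    return deco


# demo: a call that fails once with a stale connection, then succeeds
calls = {"n": 0}


@with_retries(max_attempts=3)
def flaky_search():
    calls["n"] += 1
    if calls["n"] < 2:
        raise ConnectionError("recvmsg: Connection reset by peer")
    return ["doc-1"]


print(flaky_search())  # -> ['doc-1']
```

This hides the reconnect latency behind a single retry instead of bubbling the error up to the caller; it does not address the underlying idle-connection drop.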
### Expected behavior
Connection recovery for the vector DB should be handled by LangChain internally; ideally, the root cause of the connection drops would be identified and resolved. | AIORpcError connection reset errors from Qdrant | https://api.github.com/repos/langchain-ai/langchain/issues/14863/comments | 5 | 2023-12-19T08:02:36Z | 2024-05-01T16:05:53Z | https://github.com/langchain-ai/langchain/issues/14863 | 2,048,126,563 | 14,863 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I construct a chain that uses a retriever with a certain template, but I want to switch to another template when the retriever returns nothing.
Is this possible with LangChain, or does it have to be hand-rolled?
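One way to hand-roll this today is to pick the template after the retrieval step. This is a hedged plain-Python sketch (with LCEL the same branching can reportedly be expressed via `RunnableBranch` or a `RunnableLambda` that inspects the retriever output — verify against your version); the template strings below are illustrative:

```python
WITH_CONTEXT = (
    "Answer the question using only this context:\n{context}\n\nQuestion: {question}"
)
NO_CONTEXT = (
    "No relevant documents were found. Say so, then answer from general "
    "knowledge.\n\nQuestion: {question}"
)


def build_prompt(docs, question):
    """Choose the template at runtime based on what the retriever returned."""
    if docs:
        return WITH_CONTEXT.format(context="\n\n".join(docs), question=question)
    return NO_CONTEXT.format(question=question)


print(build_prompt([], "What is X?").startswith("No relevant documents"))  # -> True
```

In a real chain, `docs` would be the (possibly empty) list of page contents from `retriever.get_relevant_documents(question)`, and the formatted string would be fed to the LLM.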
### Suggestion:
_No response_ | Can I switch my template during my Chain working? | https://api.github.com/repos/langchain-ai/langchain/issues/14890/comments | 1 | 2023-12-19T07:25:34Z | 2024-03-26T16:07:57Z | https://github.com/langchain-ai/langchain/issues/14890 | 2,048,077,349 | 14,890 |
[
"langchain-ai",
"langchain"
] | I've deployed the 'mistralai/Mistral-7B-v0.1' model to SageMaker and want to use it with load_qa_chain.
```
from langchain.chains.question_answering import load_qa_chain
from langchain.llms.sagemaker_endpoint import SagemakerEndpoint
content_handler = ContentHandler()
llm = SagemakerEndpoint(
endpoint_name=endpoint_name,
region_name="eu-west-2",
model_kwargs={
"temperature": 0,
"maxTokens": 1024,
"numResults": 2
},
content_handler=content_handler
)
chain = load_qa_chain(llm=llm, chain_type="stuff")
```
Now, running the chain for document QA:
`chain.run(input_documents = docs, question = "what is dollarama")`
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[56], line 1
----> 1 chain.run(input_documents = docs, question = "what is dollarama")
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/chains/base.py:506, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
501 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
502 _output_key
503 ]
505 if kwargs and not args:
--> 506 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
507 _output_key
508 ]
510 if not kwargs and not args:
511 raise ValueError(
512 "`run` supported with either positional arguments or keyword arguments,"
513 " but none were provided."
514 )
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/chains/base.py:306, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
304 except BaseException as e:
305 run_manager.on_chain_error(e)
--> 306 raise e
307 run_manager.on_chain_end(outputs)
308 final_outputs: Dict[str, Any] = self.prep_outputs(
309 inputs, outputs, return_only_outputs
310 )
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/chains/base.py:300, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
293 run_manager = callback_manager.on_chain_start(
294 dumpd(self),
295 inputs,
296 name=run_name,
297 )
298 try:
299 outputs = (
--> 300 self._call(inputs, run_manager=run_manager)
301 if new_arg_supported
302 else self._call(inputs)
303 )
304 except BaseException as e:
305 run_manager.on_chain_error(e)
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/chains/combine_documents/base.py:119, in BaseCombineDocumentsChain._call(self, inputs, run_manager)
117 # Other keys are assumed to be needed for LLM prediction
118 other_keys = {k: v for k, v in inputs.items() if k != self.input_key}
--> 119 output, extra_return_dict = self.combine_docs(
120 docs, callbacks=_run_manager.get_child(), **other_keys
121 )
122 extra_return_dict[self.output_key] = output
123 return extra_return_dict
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/chains/combine_documents/stuff.py:171, in StuffDocumentsChain.combine_docs(self, docs, callbacks, **kwargs)
169 inputs = self._get_inputs(docs, **kwargs)
170 # Call predict on the LLM.
--> 171 return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/chains/llm.py:257, in LLMChain.predict(self, callbacks, **kwargs)
242 def predict(self, callbacks: Callbacks = None, **kwargs: Any) -> str:
243 """Format prompt with kwargs and pass to LLM.
244
245 Args:
(...)
255 completion = llm.predict(adjective="funny")
256 """
--> 257 return self(kwargs, callbacks=callbacks)[self.output_key]
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/chains/base.py:306, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
304 except BaseException as e:
305 run_manager.on_chain_error(e)
--> 306 raise e
307 run_manager.on_chain_end(outputs)
308 final_outputs: Dict[str, Any] = self.prep_outputs(
309 inputs, outputs, return_only_outputs
310 )
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/chains/base.py:300, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
293 run_manager = callback_manager.on_chain_start(
294 dumpd(self),
295 inputs,
296 name=run_name,
297 )
298 try:
299 outputs = (
--> 300 self._call(inputs, run_manager=run_manager)
301 if new_arg_supported
302 else self._call(inputs)
303 )
304 except BaseException as e:
305 run_manager.on_chain_error(e)
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/chains/llm.py:93, in LLMChain._call(self, inputs, run_manager)
88 def _call(
89 self,
90 inputs: Dict[str, Any],
91 run_manager: Optional[CallbackManagerForChainRun] = None,
92 ) -> Dict[str, str]:
---> 93 response = self.generate([inputs], run_manager=run_manager)
94 return self.create_outputs(response)[0]
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/chains/llm.py:103, in LLMChain.generate(self, input_list, run_manager)
101 """Generate LLM result from inputs."""
102 prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)
--> 103 return self.llm.generate_prompt(
104 prompts,
105 stop,
106 callbacks=run_manager.get_child() if run_manager else None,
107 **self.llm_kwargs,
108 )
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/llms/base.py:498, in BaseLLM.generate_prompt(self, prompts, stop, callbacks, **kwargs)
490 def generate_prompt(
491 self,
492 prompts: List[PromptValue],
(...)
495 **kwargs: Any,
496 ) -> LLMResult:
497 prompt_strings = [p.to_string() for p in prompts]
--> 498 return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/llms/base.py:647, in BaseLLM.generate(self, prompts, stop, callbacks, tags, metadata, run_name, **kwargs)
632 raise ValueError(
633 "Asked to cache, but no cache found at `langchain.cache`."
634 )
635 run_managers = [
636 callback_manager.on_llm_start(
637 dumpd(self),
(...)
645 )
646 ]
--> 647 output = self._generate_helper(
648 prompts, stop, run_managers, bool(new_arg_supported), **kwargs
649 )
650 return output
651 if len(missing_prompts) > 0:
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/llms/base.py:535, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
533 for run_manager in run_managers:
534 run_manager.on_llm_error(e)
--> 535 raise e
536 flattened_outputs = output.flatten()
537 for manager, flattened_output in zip(run_managers, flattened_outputs):
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/llms/base.py:522, in BaseLLM._generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
512 def _generate_helper(
513 self,
514 prompts: List[str],
(...)
518 **kwargs: Any,
519 ) -> LLMResult:
520 try:
521 output = (
--> 522 self._generate(
523 prompts,
524 stop=stop,
525 # TODO: support multiple run managers
526 run_manager=run_managers[0] if run_managers else None,
527 **kwargs,
528 )
529 if new_arg_supported
530 else self._generate(prompts, stop=stop)
531 )
532 except BaseException as e:
533 for run_manager in run_managers:
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/llms/base.py:1048, in LLM._generate(self, prompts, stop, run_manager, **kwargs)
1042 for prompt in prompts:
1043 text = (
1044 self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
1045 if new_arg_supported
1046 else self._call(prompt, stop=stop, **kwargs)
1047 )
-> 1048 generations.append([Generation(text=text)])
1049 return LLMResult(generations=generations)
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/langchain/load/serializable.py:97, in Serializable.__init__(self, **kwargs)
96 def __init__(self, **kwargs: Any) -> None:
---> 97 super().__init__(**kwargs)
98 self._lc_kwargs = kwargs
File ~/anaconda3/envs/tensorflow2_p310/lib/python3.10/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for Generation
text
str type expected (type=type_error.str)
```
### Suggestion:
It works when I use the OpenAI API; the problem only occurs with the SageMaker endpoint.
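The `ValidationError` says `Generation.text` received something other than a `str`, which usually means the content handler's `transform_output` returned a list or dict instead of plain text. Below is a hedged sketch — the TGI-style `[{"generated_text": ...}]` payload is an assumption about the endpoint's response shape, and in real code the class should subclass `langchain.llms.sagemaker_endpoint.LLMContentHandler`:

```python
import json


class ContentHandler:  # sketch; in real code, subclass LLMContentHandler
    content_type = "application/json"
    accepts = "application/json"

    def transform_input(self, prompt, model_kwargs):
        return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

    def transform_output(self, output):
        # SageMaker hands back a streaming body; accept raw bytes too
        raw = output.read() if hasattr(output, "read") else output
        body = json.loads(raw)
        # TGI-style endpoints return [{"generated_text": "..."}]; adapt to your payload
        if isinstance(body, list):
            return body[0]["generated_text"]
        return str(body.get("generated_text", body))


handler = ContentHandler()
print(handler.transform_output(b'[{"generated_text": "Dollarama is a Canadian retailer."}]'))
```

The key requirement is that `transform_output` returns a bare `str` — returning the parsed JSON object directly is what triggers the `str type expected` error above.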
| Issue: can't use llm sagemaker endpoint as load_qa_chain | https://api.github.com/repos/langchain-ai/langchain/issues/14886/comments | 4 | 2023-12-19T04:04:06Z | 2024-04-16T16:20:14Z | https://github.com/langchain-ai/langchain/issues/14886 | 2,047,879,728 | 14,886 |
[
"langchain-ai",
"langchain"
] | I have developed a Flutter app with a chatroom feature and have successfully implemented chatting in the chatroom interface using the OpenAI API with ChatGPT. Now I am looking to use an API to establish a connection between LangChain and my mobile app. How should I go about achieving this? I am new to this; any help would be appreciated. | How to connect LangChain application to a mobile app using an API | https://api.github.com/repos/langchain-ai/langchain/issues/14885/comments | 1 | 2023-12-19T03:58:57Z | 2024-03-26T16:07:51Z | https://github.com/langchain-ai/langchain/issues/14885 | 2,047,875,945 | 14,885 |
[
"langchain-ai",
"langchain"
] | I am working on a project where I have to use multiple PDF documents to respond to user queries.
I have a load method that loads PDFs from a directory.
```
def loadFiles():
loader = DirectoryLoader('./static/upload/', glob="./*.pdf", loader_cls=PyPDFLoader)
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=50)
texts = text_splitter.split_documents(documents)
return texts
```
I create the Chroma database with the code below:
```
def createDb(load,embeddings,persist_directory):
max_input_size = 3000
num_output = 256
chunk_size_limit = 1000 # token window size per document
max_chunk_overlap = 80 # overlap for each token fragment
vectordb = Chroma.from_documents(documents=load, embedding=embeddings, persist_directory=persist_directory)
vectordb.persist()
return vectordb
```
Now I am querying Chroma:
```
qa_chain = RetrievalQA.from_chain_type(
llm=OpenAI(temperature=0,model_name = "text-davinci-003"),
retriever=vectordb.as_retriever(),chain_type="stuff",
chain_type_kwargs=chain_type_kwargs,
return_source_documents=True
)
```
However, the response I get is incomplete in some cases, as shown below:
My source pdf has following contents:
[source file](https://i.stack.imgur.com/Xaz7U.png)
while my response is showing only some parts as shown below:
[chromadb response](https://i.stack.imgur.com/CY4dL.png)
I tried increasing the chunk_overlap size as shown in createDb(), but it did not work. I expect a full response from the chain, and the response should come from the given PDF.
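For what it's worth, the truncation here usually isn't Chroma cutting the response — Chroma only returns chunks; the LLM writes the answer. Two knobs worth trying (hedged, verify against your versions): raise the retriever's `k` (e.g. `vectordb.as_retriever(search_kwargs={"k": 8})`) so more of the PDF reaches the prompt, and raise the completion budget on the LLM (e.g. a larger `max_tokens`). The toy sketch below just illustrates why `chunk_overlap` doesn't help — only the top-`k` chunks ever reach the "stuff" prompt:

```python
def stuff_context(chunks, k=4):
    """Mimic the 'stuff' chain: only the top-k retrieved chunks reach the LLM."""
    return "\n\n".join(chunks[:k])


chunks = [f"section {i} of the PDF" for i in range(10)]
print(len(stuff_context(chunks, k=4).split("\n\n")))  # -> 4
print(len(stuff_context(chunks, k=8).split("\n\n")))  # -> 8
```

With the default `k`, any part of the PDF outside the top few chunks simply never appears in the prompt, so the model cannot quote it.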
I am new to this; any help would be appreciated. | how to increase the response size of chromadb | https://api.github.com/repos/langchain-ai/langchain/issues/14880/comments | 3 | 2023-12-19T01:59:00Z | 2024-03-29T16:07:30Z | https://github.com/langchain-ai/langchain/issues/14880 | 2,047,779,892 | 14,880 |
[
"langchain-ai",
"langchain"
] | ### System Info
Platform: Ubuntu 22.04
Python: 3.11.6
Langchain: 0.0.351
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When the program is first initialized with `__setup_client()` and `__should_reingest()` returns `True`, `__get_new_client()` works as intended. However, if `reingest()` is called afterward, `__get_new_client()` raises the error below.
Relevant code:
```python
def __setup_client(self) -> None:
if self.__should_reingest():
self.db = self.__get_new_client()
else:
self.db = self.__get_existing_client()
def reingest(self) -> None:
self.db = self.__get_new_client()
def __get_new_client(self):
if os.path.exists(self.persist_directory):
shutil.rmtree(self.persist_directory)
docs = self.__get_docs()
client = Chroma.from_documents(
docs, self.embedding_function, persist_directory=self.persist_directory)
with open(f'{self.persist_directory}/date.txt', 'w') as f:
f.write(f'{datetime.date.today()}')
return client
```
Error:
```
Traceback (most recent call last):
...
File line 26, in reingest
self.cauldron.reingest()
File line 19, in reingest
self.db = self.__get_new_client()
^^^^^^^^^^^^^^^^^^^^^^^
File line 51, in __get_new_client
client = Chroma.from_documents(
^^^^^^^^^^^^^^^^^^^^^^
File ".../.pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_community/vectorstores/chroma.py", line 771, in from_documents
return cls.from_texts(
^^^^^^^^^^^^^^^
File ".../.pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_community/vectorstores/chroma.py", line 729, in from_texts
chroma_collection.add_texts(
File ".../.pyenv/versions/3.11.6/lib/python3.11/site-packages/langchain_community/vectorstores/chroma.py", line 297, in add_texts
self._collection.upsert(
File ".../.pyenv/versions/3.11.6/lib/python3.11/site-packages/chromadb/api/models/Collection.py", line 459, in upsert
self._client._upsert(
File ".../.pyenv/versions/3.11.6/lib/python3.11/site-packages/chromadb/telemetry/opentelemetry/__init__.py", line 127, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File ".../.pyenv/versions/3.11.6/lib/python3.11/site-packages/chromadb/api/segment.py", line 446, in _upsert
self._producer.submit_embeddings(coll["topic"], records_to_submit)
File ".../.pyenv/versions/3.11.6/lib/python3.11/site-packages/chromadb/telemetry/opentelemetry/__init__.py", line 127, in wrapper
return f(*args, **kwargs)
^^^^^^^^^^^^^^^^^^
File ".../.pyenv/versions/3.11.6/lib/python3.11/site-packages/chromadb/db/mixins/embeddings_queue.py", line 172, in submit_embeddings
results = cur.execute(sql, params).fetchall()
^^^^^^^^^^^^^^^^^^^^^^^^
sqlite3.OperationalError: attempt to write a readonly database
```
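A hedged workaround that has helped in similar reports: make sure every reference to the old Chroma client is dropped (e.g. `self.db = None`) before `shutil.rmtree`, so its SQLite connection can actually be finalized — otherwise the recreated `chroma.sqlite3` may end up accessed through a stale handle. A plain-Python sketch (the gc-based finalization is an assumption, not a documented Chroma contract):

```python
import gc
import os
import shutil
import tempfile


def recreate_persist_dir(persist_directory):
    """Call only AFTER dropping all references to the old Chroma client."""
    gc.collect()  # give the old client's sqlite3 connection a chance to close
    if os.path.exists(persist_directory):
        shutil.rmtree(persist_directory)
    os.makedirs(persist_directory)
    return persist_directory


# demo on a throwaway directory standing in for self.persist_directory
d = tempfile.mkdtemp()
open(os.path.join(d, "chroma.sqlite3"), "w").close()
recreate_persist_dir(d)
print(os.listdir(d))  # -> []
```

In `reingest()`, that would mean setting `self.db = None` first, then wiping the directory, then constructing the new client with `Chroma.from_documents(...)`.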
### Expected behavior
No error returned | Calling Chroma.from_documents() returns sqlite3.OperationalError: attempt to write a readonly database, but only sometimes | https://api.github.com/repos/langchain-ai/langchain/issues/14872/comments | 24 | 2023-12-19T00:02:10Z | 2024-07-21T21:44:48Z | https://github.com/langchain-ai/langchain/issues/14872 | 2,047,680,474 | 14,872 |
[
"langchain-ai",
"langchain"
] | ### System Info
I'm using Langchain version '0.0.350' in Databricks
using the following libraries:
`from langchain_experimental.sql import SQLDatabaseChain`
`from langchain import PromptTemplate`
`from langchain.sql_database import SQLDatabase`
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm using Langchain version '0.0.350'
```python
from langchain_experimental.sql import SQLDatabaseChain
from langchain import PromptTemplate
from langchain.sql_database import SQLDatabase

mytemplate = """
You are a SQL expert. Given an input question, first create a syntactically correct sql query run on then look at the results of the query and return the answer.
Use the following format:
Question: Question here
SQLQuery: SQL Query to run
SQLResult: Result of the SQLQuery
Answer: Final answer here
Only use the following tables:
{question}
"""

dbs = SQLDatabase.from_uri(conn_str)
db_prompt = PromptTemplate(input_variables=['question'], template=mytemplate)
db_chain = SQLDatabaseChain.from_llm(llm=llms, db=dbs, prompt=db_prompt, verbose=True)
db_chain.run(question='question here')
```
ValueError: Missing some input keys: {'query'}
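This error comes from the chain's input validation rather than from SQL itself: `SQLDatabaseChain`'s input key is `query`, so `db_chain.run(question=...)` supplies a key the chain never asked for. Calling `db_chain.run('question here')` positionally, or `db_chain.run(query='question here')`, should clear it (and — hedged — the custom template is normally expected to use the variables the chain injects, such as `{input}`, `{table_info}`, and `{top_k}`, rather than `{question}`; check the default prompt for your version). A stdlib re-creation of the validation step:

```python
def validate_inputs(expected_keys, inputs):
    """Re-creation of the check behind 'Missing some input keys: ...'."""
    missing = set(expected_keys) - set(inputs)
    if missing:
        raise ValueError(f"Missing some input keys: {missing}")


validate_inputs(["query"], {"query": "question here"})  # passes
try:
    validate_inputs(["query"], {"question": "question here"})
except ValueError as e:
    print(e)  # -> Missing some input keys: {'query'}
```

In other words, renaming the kwarg (or passing the question positionally) aligns the supplied input with the key the chain expects.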
### Expected behavior
I expect to get results of a query | SQLDatabaseChain.from_llm returning ValueError: Missing some input keys: {'query'} when custom template is used | https://api.github.com/repos/langchain-ai/langchain/issues/14865/comments | 1 | 2023-12-18T21:04:10Z | 2024-03-25T16:08:37Z | https://github.com/langchain-ai/langchain/issues/14865 | 2,047,451,496 | 14,865 |
[
"langchain-ai",
"langchain"
] | ### System Info
platform: Vagrant - Ubuntu 2204
python: 3.9.18
langchain: 0.0.350
langchain-core: 0.1.1
langchain-community: 0.0.3
litellm: 1.15.1
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Run the code:
``` python
from langchain.chat_models import ChatLiteLLM
from langchain.schema import HumanMessage
# Initialize ChatLiteLLM
chat = ChatLiteLLM(
model="together_ai/mistralai/Mixtral-8x7B-Instruct-v0.1",
verbose=True,
temperature=0.0,
)
text = "Write me a poem about the blue sky"
messages = [HumanMessage(content=text)]
print(chat(messages).content)
```
2: Error message:
``` bash
AttributeError: 'ValueError' object has no attribute 'status_code'
```
### Expected behavior
I can't get ChatLiteLLM to work with Together AI. I expect **ChatLiteLLM** to work correctly and to output the result. | (ChatLiteLLM - Together AI) AttributeError: 'ValueError' object has no attribute 'status_code' | https://api.github.com/repos/langchain-ai/langchain/issues/14863/comments | 2 | 2023-12-18T20:50:19Z | 2024-03-31T16:05:45Z | https://github.com/langchain-ai/langchain/issues/14863 | 2,047,431,121 | 14,863 |
[
"langchain-ai",
"langchain"
] | ### System Info
**Below is my Linux box:**
Linux 5.15.0-1014-azure #17~20.04.1-Ubuntu SMP Thu Jun 23 20:01:51 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
**Below is libraries within my conda environment:**
google-auth 2.24.0
google-search-results 2.4.2
googleapis-common-protos 1.61.0
langchain 0.0.349
langchain-cli 0.0.19
langchain-community 0.0.1
langchain-core 0.0.13
langchainhub 0.1.14
requests 2.31.0
requests-oauthlib 1.3.1
types-requests 2.31.0.10
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
**Below is source code:**
```
import requests
import json
import os
SERPAPI_API_KEY=os.environ["SERPAPI_API_KEY"]
print(f"{SERPAPI_API_KEY=}")
url = "https://google.serper.dev/search"
payload = json.dumps({
"q": "apple inc"
})
headers = {
'X-API-KEY': SERPAPI_API_KEY,
'Content-Type': 'application/json'
}
response = requests.request("POST", url, headers=headers, data=payload)
print("************ result from request ***************")
print(response.text)
from langchain.utilities import SerpAPIWrapper
search = SerpAPIWrapper(serpapi_api_key=SERPAPI_API_KEY)
res = search.run("apple inc")
print("************ result from langchain ***************")
print(f"{res=}")
```
### Expected behavior
If the key works with the raw requests API, then SerpAPIWrapper shouldn't fail with an invalid key error.
[
"langchain-ai",
"langchain"
] | ### System Info
langchain: latest (0.0.350)
python: 3.10.12
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code to reproduce (based on [code from docs](https://python.langchain.com/docs/modules/agents/agent_types/openai_tools))
```
import openai
from langchain.agents import AgentExecutor
from langchain.agents.format_scratchpad.openai_tools import (
format_to_openai_tool_messages,
)
from langchain.agents.output_parsers.openai_tools import OpenAIToolsAgentOutputParser
from langchain.chat_models import AzureChatOpenAI, ChatOpenAI
from langchain.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain.tools import BearlyInterpreterTool, DuckDuckGoSearchRun
from langchain.tools.render import format_tool_to_openai_tool
from settings import AppSettings
openai.api_type = AppSettings.OPENAI_API_TYPE or "azure"
openai.api_version = AppSettings.OPENAI_API_VERSION or "2023-03-15-preview"
openai.api_base = AppSettings.AZURE_OPENAI_API_ENDPOINT
openai.api_key = AppSettings.AZURE_OPENAI_API_KEY
lc_tools = [DuckDuckGoSearchRun()]
oai_tools = [format_tool_to_openai_tool(tool) for tool in lc_tools]
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a helpful assistant"),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
llm = AzureChatOpenAI(
openai_api_version=AppSettings.OPENAI_API_VERSION, # type: ignore TODO: I don't know why this is an error despite being in the class
azure_deployment=AppSettings.AZURE_OPENAI_DEPLOYMENT,
temperature=0,
streaming=True,
verbose=True,
)
agent = (
{
"input": lambda x: x["input"],
"agent_scratchpad": lambda x: format_to_openai_tool_messages(
x["intermediate_steps"]
),
}
| prompt
| llm.bind(tools=oai_tools)
| OpenAIToolsAgentOutputParser()
)
agent_executor = AgentExecutor(agent=agent, tools=lc_tools, verbose=True)
agent_executor.invoke(
{"input": "What's the average of the temperatures in LA, NYC, and SF today?"}
)
```
Logs:
```
> Entering new AgentExecutor chain...
ic| merged[k]: {'arguments': '{"qu', 'name': 'duckduckgo_search'}
v: <OpenAIObject at 0x7fdb750c7920> JSON: {
"arguments": "ery\":"
}
type(merged[k]): <class 'dict'>
type(v): <class 'openai.openai_object.OpenAIObject'>
isinstance(merged[k], dict): True
isinstance(v, dict): True
Traceback (most recent call last):
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/test-issue.py", line 56, in <module>
agent_executor.invoke(
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 89, in invoke
return self(
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 312, in __call__
raise e
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 306, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1312, in _call
next_step_output = self._take_next_step(
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1038, in _take_next_step
[
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1038, in <listcomp>
[
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1066, in _iter_next_step
output = self.agent.plan(
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 461, in plan
output = self.runnable.invoke(inputs, config={"callbacks": callbacks})
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1514, in invoke
input = step.invoke(
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2937, in invoke
return self.bound.invoke(
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 160, in invoke
self.generate_prompt(
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 491, in generate_prompt
return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 378, in generate
raise e
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 368, in generate
self._generate_with_cache(
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 524, in _generate_with_cache
return self._generate(
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain_community/chat_models/openai.py", line 428, in _generate
return generate_from_stream(stream_iter)
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 65, in generate_from_stream
generation += chunk
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain_core/outputs/chat_generation.py", line 62, in __add__
message=self.message + other.message,
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain_core/messages/ai.py", line 52, in __add__
additional_kwargs=self._merge_kwargs_dict(
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain_core/messages/base.py", line 128, in _merge_kwargs_dict
merged[k][i] = self._merge_kwargs_dict(merged[k][i], e)
File "/home/jonatan-medinilla/dev/team-macanudo-ai/backend/.venv/lib/python3.10/site-packages/langchain_core/messages/base.py", line 114, in _merge_kwargs_dict
raise TypeError(
TypeError: additional_kwargs["function"] already exists in this message, but with a different type.
```
### Expected behavior
No errors and the same result as without streaming=True.
Last week there was a PR [#14613](https://github.com/langchain-ai/langchain/pull/14613) that fixed the issue #13442. I tested the fix using the same scenario that I shared and it worked as expected. However, today I tested it again and the merge kwargs fails because the types don't match though both values are instances of **dict**
| BaseMessageChunk cannot merge function key when using open ai tools and streaming. | https://api.github.com/repos/langchain-ai/langchain/issues/14853/comments | 3 | 2023-12-18T17:36:47Z | 2023-12-18T19:28:23Z | https://github.com/langchain-ai/langchain/issues/14853 | 2,047,135,178 | 14,853 |
[
"langchain-ai",
"langchain"
] | This GitHub App is used by Google employees to monitor GitHub repositories. It sends notifications in Chrome for Reviews, updates to Pull Requests, Mentions, etc.
https://github.com/apps/g3n-github
Could it be added to the LangChain repository to make it easier for Google engineers to contribute in a timely manner? (Must be done by an organization administrator)
@baskaryan @hwchase17
Thanks! | Add g3n-github app to repository | https://api.github.com/repos/langchain-ai/langchain/issues/14851/comments | 4 | 2023-12-18T16:20:31Z | 2024-06-08T16:08:00Z | https://github.com/langchain-ai/langchain/issues/14851 | 2,047,010,816 | 14,851 |
[
"langchain-ai",
"langchain"
] | ### Feature request
There is a new feature with Azure Open AI to get deterministic output.
To use this feature we need to pass extra parameters when we initialize AzureOpenAI
https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/reproducible-output?tabs=pyton
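A hedged sketch of what the requested usage might look like. Note that the `seed` key and the commented-out constructor call are illustrative assumptions about how this could be wired up, not the current LangChain API:

```python
# Hypothetical usage: forward the new `seed` parameter through `model_kwargs`,
# which LangChain passes on to the underlying OpenAI API call.
model_kwargs = {"seed": 42}  # same seed + same inputs -> (mostly) repeatable output
print(model_kwargs)

# Illustrative only (requires Azure credentials and a recent openai client):
# from langchain.chat_models import AzureChatOpenAI
# llm = AzureChatOpenAI(azure_deployment="my-deployment", model_kwargs=model_kwargs)
```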
### Motivation
Deterministic output is very important for most Generative AI applications. This feature would be a huge help.
https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/reproducible-output?tabs=pyton
### Your contribution
Yes; depending on what change is needed here, I can help.
[
"langchain-ai",
"langchain"
] | ### Feature request
I'm thinking about adding a pre-commit configuration file (`.pre-commit-config.yaml`) to the codebase, as it helps improve code quality and maintain consistency within the project. It will allow all developers and future contributors to keep a consistent code style by running some automated checks before every commit.
1. [trailing-whitespace](https://github.com/pre-commit/pre-commit-hooks?tab=readme-ov-file#trailing-whitespace) (Removes trailing whitespace at the end of lines)
2. [end-of-file-fixer](https://github.com/pre-commit/pre-commit-hooks?tab=readme-ov-file#end-of-file-fixer) (Ensures that files end with a newline character)
3. [check-yaml](https://github.com/pre-commit/pre-commit-hooks?tab=readme-ov-file#check-yaml) (Attempts to load all yaml files to verify syntax)
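A minimal config covering the three hooks above might look like the sketch below (the `rev` pin is illustrative and should be updated to the latest release):

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0  # illustrative pin; use the latest tag
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
```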
### Motivation
These checks are very common and used in many big `Python` based organizations including [scikit-learn](https://github.com/scikit-learn/scikit-learn/blob/main/.pre-commit-config.yaml), [jax](https://github.com/Sai-Suraj-27/jax/blob/main/.pre-commit-config.yaml), [pandas](https://github.com/Sai-Suraj-27/pandas/blob/main/.pre-commit-config.yaml#L72), etc.
They help maintain a consistent style across the repository and ensure the same formatting and quality for all contributors.
### Your contribution
I have good experience adding/modifying this file in large codebases (https://github.com/unifyai/ivy/pull/22220, https://github.com/unifyai/ivy/pull/22974, https://github.com/gprMax/gprMax/pull/354, https://github.com/pandas-dev/pandas/pull/55277)
So, if you think it is useful, let me know and I will make a PR. Thank you.
[
"langchain-ai",
"langchain"
] | ### System Info
langchain: 0.0.350
Python: 3.8.8
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run code below,
```python
from langchain.embeddings.azure_openai import AzureOpenAIEmbeddings
from azure.identity import AzureCliCredential
credential = AzureCliCredential()
token = credential.get_token("https://cognitiveservices.azure.com/.default")
embeddings = AzureOpenAIEmbeddings(
deployment=model_name_retriever,
chunk_size=16,
azure_endpoint=azure_endpoint,
azure_ad_token=token,
openai_api_version=api_version,
http_client=http_client
)
```
Then,
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-4-b617513b6ef1> in <module>
----> 6 embeddings = AzureOpenAIEmbeddings(
7 deployment=model_name_retriever,
8 chunk_size=16,
/opt/conda/lib/python3.8/site-packages/pydantic/main.cpython-38-x86_64-linux-gnu.so in pydantic.main.BaseModel.__init__()
/opt/conda/lib/python3.8/site-packages/pydantic/main.cpython-38-x86_64-linux-gnu.so in pydantic.main.validate_model()
/opt/conda/lib/python3.8/site-packages/langchain_community/embeddings/azure_openai.py in validate_environment(cls, values)
82 "AZURE_OPENAI_ENDPOINT"
83 )
---> 84 values["azure_ad_token"] = values["azure_ad_token"] or os.getenv(
85 "AZURE_OPENAI_AD_TOKEN"
86 )
KeyError: 'azure_ad_token'
```
This error also happens in AzureChatOpenAI. This behavior does not match the [current API](https://api.python.langchain.com/en/stable/chat_models/langchain_community.chat_models.azure_openai.AzureChatOpenAI.html#langchain_community.chat_models.azure_openai.AzureChatOpenAI)
### Expected behavior
Defining class successfully. | azure_ad_token variable does not work for AzureChatOpenAI and AzureOpenAIEmbeddings | https://api.github.com/repos/langchain-ai/langchain/issues/14843/comments | 3 | 2023-12-18T12:56:27Z | 2023-12-19T01:03:47Z | https://github.com/langchain-ai/langchain/issues/14843 | 2,046,610,982 | 14,843 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
The safety_settings argument is missing from the `ChatVertexAI` model
### Suggestion:
We should be able to define the safety settings. For example:
```
safety_settings = {
HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
}
llm = ChatVertexAI(safety_settings=safety_settings)
``` | safety_settings argument is missing with the ChatVertexAI mode | https://api.github.com/repos/langchain-ai/langchain/issues/14841/comments | 11 | 2023-12-18T11:21:49Z | 2024-03-27T01:00:11Z | https://github.com/langchain-ai/langchain/issues/14841 | 2,046,443,649 | 14,841 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am working on a project where I need to fetch the content of sub-URLs given only the base URL.
Is there any method in Langchain to fetch all the content of a site's sub-URLs given only its base URL?
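As a sketch of the underlying idea — collect the `<a href>` links on a page and keep those under the base URL's domain — here is a standard-library-only version (the helper names and sample HTML are made up for illustration; LangChain's `RecursiveUrlLoader` may cover this use case directly):

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkCollector(HTMLParser):
    """Collect href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def sub_urls(base_url, html):
    """Return absolute links in `html` that stay on `base_url`'s domain."""
    parser = LinkCollector()
    parser.feed(html)
    base_host = urlparse(base_url).netloc
    absolute = (urljoin(base_url, href) for href in parser.links)
    return sorted({u for u in absolute if urlparse(u).netloc == base_host})

page = '<a href="/docs">Docs</a> <a href="https://other.com/x">Other</a>'
print(sub_urls("https://example.com/", page))
# -> ['https://example.com/docs']
```

Each returned sub-URL could then be fed to a loader such as `UnstructuredURLLoader` to fetch its content.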
### Suggestion:
_No response_ | Issue: Is there any method in Langchain to fetch all the content of its Sub URL by giving its base URL only | https://api.github.com/repos/langchain-ai/langchain/issues/14837/comments | 1 | 2023-12-18T07:52:57Z | 2024-03-25T16:08:12Z | https://github.com/langchain-ai/langchain/issues/14837 | 2,045,948,323 | 14,837 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Currently we use PGVector as our vector database, in combination with FastAPI. One issue we found is that the indexing process, which happens via an API endpoint, blocks the main thread because the current PGVector implementation is synchronous. FastAPI offers async/await, and SQLAlchemy also allows using async sessions. I would like to add an async alternative for PGVector.
### Motivation
Non-blocking code execution keeps the main thread free and provides far better performance than synchronous execution.
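Until a native async implementation exists, one stopgap is to offload the synchronous call to a worker thread so the event loop stays free. A minimal standard-library sketch of that pattern (`blocking_similarity_search` is a made-up stand-in for the sync PGVector call):

```python
import asyncio
import time

def blocking_similarity_search(query):
    # Stand-in for the synchronous PGVector similarity-search call.
    time.sleep(0.05)
    return [f"doc for {query}"]

async def handle_request(query):
    # Offload the blocking DB call to a worker thread so the event loop
    # (and other FastAPI requests) are not blocked.
    return await asyncio.to_thread(blocking_similarity_search, query)

async def main():
    # Two concurrent "requests" run without blocking each other.
    return await asyncio.gather(*(handle_request(q) for q in ("a", "b")))

print(asyncio.run(main()))
```

A true async implementation with SQLAlchemy async sessions would avoid the thread hop entirely, which is what this feature request is about.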
### Your contribution
I already coded a prototype, which works so far and does not block the main thread of the API. I would fully implement this async solution; currently I just have to discuss it with my boss, since I did this during work time.
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Below is my code:
```python
def generate_embeddings(urls=None, persist_directory=None):
    with sync_playwright() as p:
        browser = p.chromium.launch()
        navigate_tool = NavigateTool(sync_browser=browser)
        extract_hyperlinks_tool = ExtractHyperlinksTool(sync_browser=browser)
        for url in urls:
            print(url, "url is ----------------------")
            await navigate_tool._arun(url)
            print(await navigate_tool._arun(url))
            hyperlinks = await extract_hyperlinks_tool._arun()
            for link in hyperlinks:
                print(link, "link is ------------------------------------------")
        browser.close()
    asyncio.run(main())
    loader = UnstructuredURLLoader(urls=urls)
    urlDocument = loader.load()
    text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=50)
    texts = text_splitter.split_documents(documents=urlDocument)
    if texts:
        embedding = OpenAIEmbeddings()
        Chroma.from_documents(documents=texts, embedding=embedding, persist_directory=persist_directory)
        file_crawl_status = True
        file_index_status = True
    else:
        file_crawl_status = False
        file_index_status = False
    return file_crawl_status, file_index_status
```
# And I am getting this error
/home/hs/CustomBot/accounts/common_langcain_qa.py:121: RuntimeWarning: coroutine 'NavigateTool._arun' was never awaited
navigate_tool._arun(url)
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
<coroutine object NavigateTool._arun at 0x7ffbaa2871c0>
/home/hs/CustomBot/accounts/common_langcain_qa.py:122: RuntimeWarning: coroutine 'NavigateTool._arun' was never awaited
print(navigate_tool._arun(url))
RuntimeWarning: Enable tracemalloc to get the object allocation traceback
<coroutine object ExtractHyperlinksTool._arun at 0x7ffbaa2871c0> link is ------------------------------------------
Internal Server Error: /create-project/
Traceback (most recent call last):
File "/home/hs/env/lib/python3.8/site-packages/django/core/handlers/exception.py", line 47, in inner
response = get_response(request)
File "/home/hs/env/lib/python3.8/site-packages/django/core/handlers/base.py", line 181, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/home/hs/env/lib/python3.8/site-packages/django/views/generic/base.py", line 69, in view
return self.dispatch(request, *args, **kwargs)
File "/home/hs/env/lib/python3.8/site-packages/django/views/generic/base.py", line 101, in dispatch
return handler(request, *args, **kwargs)
File "/home/hs/CustomBot/user_projects/views.py", line 1776, in post
file_crawl_status, file_index_status = generate_embeddings(
File "/home/hs/CustomBot/accounts/common_langcain_qa.py", line 126, in generate_embeddings
browser.close()
File "/home/hs/env/lib/python3.8/site-packages/playwright/sync_api/_generated.py", line 9869, in close
self._sync("browser.close", self._impl_obj.close())
File "/home/hs/env/lib/python3.8/site-packages/playwright/_impl/_sync_base.py", line 100, in _sync
task = self._loop.create_task(coro)
File "/usr/lib/python3.8/asyncio/base_events.py", line 429, in create_task
self._check_closed()
File "/usr/lib/python3.8/asyncio/base_events.py", line 508, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
### Suggestion:
_No response_ | Issue: I'm currently working on a project where I need to fetch all the sub-URLs from a website using Langchain. | https://api.github.com/repos/langchain-ai/langchain/issues/14834/comments | 1 | 2023-12-18T05:48:30Z | 2024-03-25T16:08:07Z | https://github.com/langchain-ai/langchain/issues/14834 | 2,045,789,011 | 14,834 |
[
"langchain-ai",
"langchain"
] | ### Feature request
There's already many tracers in LangChain (https://github.com/langchain-ai/langchain/tree/master/libs/langchain/langchain/callbacks/tracers).
I would like to ask for adding an OpentelemetryTracer.
### Motivation
Community tracing
### Your contribution
Would like to contribute | Add an OpentelemetryTracer in LangChain | https://api.github.com/repos/langchain-ai/langchain/issues/14832/comments | 4 | 2023-12-18T03:52:49Z | 2024-03-27T16:08:32Z | https://github.com/langchain-ai/langchain/issues/14832 | 2,045,680,808 | 14,832 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
My code is:
```
llm = ChatOpenAI(temperature=0, verbose=True, model="gpt-3.5-turbo-16k")
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
agent_type = AgentType.ZERO_SHOT_REACT_DESCRIPTION
agent_executor_1 = initialize_agent()
agent_executor = create_sql_agent(llm=llm, toolkit=toolkit, verbose=True, agent_type=agent_type)
agent_executor.run(input="How many orders in 2023-12-15")
```
I got the result with the following format:
```
> Entering new AgentExecutor chain...
Thought: ...
Action: ...
Action Input: ...
Observation: ...
Thought: ...
AI: ...
```
I want to save the AI result to a variable. I tried `print(agent_executor.run)` but it didn't work. Is there any way that suits my need?
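`print(agent_executor.run)` prints the bound method object instead of calling it; `run(...)` returns the final answer as a string, so calling it and assigning the result should work. A minimal sketch with a made-up stand-in executor (no API calls, just the assignment pattern):

```python
class FakeExecutor:
    """Made-up stand-in for AgentExecutor; run() returns the final answer."""
    def run(self, *, input):
        return "There were 12 orders on 2023-12-15."

agent_executor = FakeExecutor()
# Call run() and assign its return value instead of printing the method object.
answer = agent_executor.run(input="How many orders in 2023-12-15")
print(answer)
```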
### Suggestion:
_No response_ | How can I grab only the AI answer of my langchain agent? | https://api.github.com/repos/langchain-ai/langchain/issues/14828/comments | 1 | 2023-12-18T00:32:57Z | 2024-03-25T16:07:56Z | https://github.com/langchain-ai/langchain/issues/14828 | 2,045,453,142 | 14,828 |
[
"langchain-ai",
"langchain"
] | ### System Info
macOS latest, latest LangChain
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [x] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
ConversationBufferWindowMemory(
chat_memory=PostgresChatMessageHistory(
session_id=session_id, connection_string=os.getenv("DB_URL")
),
return_messages=True,
memory_key=memory_key,
)
Try to run a chat with this.
### Expected behavior
I think it should save the messages as they are to SQL, but use the properties of the ConversationSummaryBufferMemory for the chat itself; in reality it just uses SQL and the history is just all the messages.
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Is there a similar method for using `gemini-pro` with `ConversationalRetrievalChain.from_llm`, as there is for utilizing models from VertexAI with `ChatVertexAI` or `VertexAI`, where you specify the `model_name`?
### Suggestion:
_No response_ | Gemini with ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/14819/comments | 2 | 2023-12-17T17:26:06Z | 2023-12-20T08:45:09Z | https://github.com/langchain-ai/langchain/issues/14819 | 2,045,301,971 | 14,819 |
[
"langchain-ai",
"langchain"
] | null | When using the OpenAIFunctionsAgentOutputParser() to parse agent output meet an error. | https://api.github.com/repos/langchain-ai/langchain/issues/14816/comments | 2 | 2023-12-17T15:44:02Z | 2024-01-22T07:05:03Z | https://github.com/langchain-ai/langchain/issues/14816 | 2,045,267,345 | 14,816 |
[
"langchain-ai",
"langchain"
] | ### System Info
In this file: https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/llms/together.py
There is no `prompt` variable; however, it is available in the documentation: https://docs.together.ai/reference/inference
```python
if config.model == "together":
return Together(
model="togethercomputer/StripedHyena-Nous-7B",
temperature=0.7,
max_tokens=128,
top_k=1,
together_api_key=config.together_api_key,
prompt="The capital of France is"
)
```
error I get is:
```bash
pydantic.v1.error_wrappers.ValidationError: 1 validation error for Together
prompt
extra fields not permitted (type=value_error.extra)
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Try to run a Together AI model and pass `prompt` to the model; it will fail. My goal is to set a system prompt.
### Expected behavior
Accept `prompt` as the system prompt.
[
"langchain-ai",
"langchain"
] | ### System Info
langchain.__version__ '0.0.350'
python 3.11.5
ollama version is 0.1.16
pyngrok-7.0.3
### Who can help?
@agola11 @hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When I use the combination Ollama + Langchain + Google Colab + ngrok, I get an error.
(The models are downloaded; I can see them in `ollama list`.)
```
llm = Ollama(
model="run deepseek-coder:6.7b", base_url="https://e12b-35-231-226-171.ngrok.io/")
response = llm.predict('What do you know about Falco?')
Output exceeds the [size limit](command:workbench.action.openSettings?%5B%22notebook.output.textLineLimit%22%5D). Open the full output data [in a text editor](command:workbench.action.openLargeOutput?5f7f2031-a63a-42c0-ac20-ccc8d53de6b2)---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
File [~/miniconda3/envs/llm/lib/python3.11/site-packages/requests/models.py:971](https://file+.vscode-resource.vscode-cdn.net/home/serhiy/Scalarr/llm/%20RAG/~/miniconda3/envs/llm/lib/python3.11/site-packages/requests/models.py:971), in Response.json(self, **kwargs)
970 try:
--> 971 return complexjson.loads(self.text, **kwargs)
972 except JSONDecodeError as e:
973 # Catch JSON-related errors and raise as requests.JSONDecodeError
974 # This aliases json.JSONDecodeError and simplejson.JSONDecodeError
File [~/miniconda3/envs/llm/lib/python3.11/site-packages/simplejson/__init__.py:514](https://file+.vscode-resource.vscode-cdn.net/home/serhiy/Scalarr/llm/%20RAG/~/miniconda3/envs/llm/lib/python3.11/site-packages/simplejson/__init__.py:514), in loads(s, encoding, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, use_decimal, allow_nan, **kw)
510 if (cls is None and encoding is None and object_hook is None and
511 parse_int is None and parse_float is None and
512 parse_constant is None and object_pairs_hook is None
513 and not use_decimal and not allow_nan and not kw):
--> 514 return _default_decoder.decode(s)
515 if cls is None:
File [~/miniconda3/envs/llm/lib/python3.11/site-packages/simplejson/decoder.py:389](https://file+.vscode-resource.vscode-cdn.net/home/serhiy/Scalarr/llm/%20RAG/~/miniconda3/envs/llm/lib/python3.11/site-packages/simplejson/decoder.py:389), in JSONDecoder.decode(self, s, _w, _PY3)
388 if end != len(s):
--> 389 raise JSONDecodeError("Extra data", s, end, len(s))
390 return obj
JSONDecodeError: Extra data: line 1 column 5 - line 1 column 19 (char 4 - 18)
During handling of the above exception, another exception occurred:
...
973 # Catch JSON-related errors and raise as requests.JSONDecodeError
974 # This aliases json.JSONDecodeError and simplejson.JSONDecodeError
--> 975 raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
JSONDecodeError: Extra data: line 1 column 5 (char 4)
```
### Expected behavior
If I run Ollama + Google Colab + ngrok from the terminal, everything works. Also, if I change the Python script to the local base_url:
```
llm = Ollama(
model="run deepseek-coder:6.7b", base_url="http://localhost:11434")
response = llm.predict('What do you know about Falco?')
```
everything works (Ollama + Langchain).
Only the combination Ollama + Langchain + Google Colab + ngrok does not work | Error Langchain + Ollama + Google Colab + ngrok | https://api.github.com/repos/langchain-ai/langchain/issues/14810/comments | 2 | 2023-12-17T07:36:37Z | 2024-05-07T16:07:38Z | https://github.com/langchain-ai/langchain/issues/14810 | 2,045,111,414 | 14,810 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
# Create the agent using LCEL
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a helpful assistant"),
("user", "{input}"),
MessagesPlaceholder(variable_name="agent_scratchpad"),
]
)
llm_with_tools = self.llm.bind(functions=[format_tool_to_openai_function(t) for t in self.tools])
agent = (
{
"input": lambda x: x["input"],
"agent_scratchpad": lambda x: format_to_openai_function_messages(
x["intermediate_steps"]
),
}
| prompt
| llm_with_tools
| OpenAIFunctionsAgentOutputParser()
)
# Create the AgentExecutor and invoke it
agent_executor = AgentExecutor(agent=agent,
tools=self.tools,
verbose=True,
return_intermediate_steps=True,
handle_parsing_errors=True,
)
try:
response = agent_executor.invoke(
{
"input": message
}
)
### Suggestion:
_No response_ | how to add LCEL memory | OpenAIFunctionsAgentOutputParser? | https://api.github.com/repos/langchain-ai/langchain/issues/14809/comments | 2 | 2023-12-17T02:51:49Z | 2024-03-24T16:07:14Z | https://github.com/langchain-ai/langchain/issues/14809 | 2,045,031,789 | 14,809 |