issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 261k β | issue_title stringlengths 1 925 | issue_comments_url stringlengths 56 81 | issue_comments_count int64 0 2.5k | issue_created_at stringlengths 20 20 | issue_updated_at stringlengths 20 20 | issue_html_url stringlengths 37 62 | issue_github_id int64 387k 2.46B | issue_number int64 1 127k |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version: 0.0.276
Python version: 3.11.2
Platform: x86_64 Debian 12.2.0-14
Weaviate as vectorstore
SQLite for document index
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Followed the provided [Indexing directions](https://python.langchain.com/docs/modules/data_connection/indexing) of LangChain's documentation.
```
import os, time, json, weaviate, openai
from langchain.vectorstores import Weaviate
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.indexes import SQLRecordManager, index
from langchain.text_splitter import CharacterTextSplitter
from langchain.schema import Document
from datetime import datetime
VECTORS_INDEX_NAME = 'LaborIA_vectors_TEST'
COLLECTION_NAME = 'LaborIA_docs_TEST'
NAMESPACE = f"weaviate/{COLLECTION_NAME}"
record_manager = SQLRecordManager(NAMESPACE, db_url="sqlite:///record_manager_cache.sql")
record_manager.create_schema()
def _clear():
"""Hacky helper method to clear content. See the `full` mode section to to understand why it works."""
index([], record_manager, weaviate_vectorstore, cleanup="full", source_id_key="source")
_clear()
```
Results in the following error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[60], line 1
----> 1 _clear()
Cell In[59], line 3, in _clear()
1 def _clear():
2 """Hacky helper method to clear content. See the `full` mode section to to understand why it works."""
----> 3 index([], record_manager, weaviate_vectorstore, cleanup="full", source_id_key="source")
TypeError: index() got an unexpected keyword argument 'cleanup'
```
Declaring the `_clear()` function with either `cleanup="incremental"` or `cleanup=None` deletion modes results in the same `TypeError`. It shows no errors if the `cleanup` parameter is completely removed.
Will `index` execute in deletion mode `None` (as specified [here](https://python.langchain.com/docs/modules/data_connection/indexing#none-deletion-mode) if the `cleanup` parameter is not present?
### Expected behavior
No errors when the `cleanup` mode parameter is set when `index` is called. | Indexing deletion modes not working | https://api.github.com/repos/langchain-ai/langchain/issues/10118/comments | 3 | 2023-09-02T02:31:04Z | 2023-12-09T16:04:21Z | https://github.com/langchain-ai/langchain/issues/10118 | 1,878,277,419 | 10,118 |
[
"langchain-ai",
"langchain"
] | ### System Info
```sh
pydantic==2.3.0
langchain==0.0.279
```
### Who can help?
@agola11 , @hwchase17: Having trouble learning how to have LLMs as attributes in pydantic2. I keep getting this confusing error which I cannot troubleshoot.
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```py
import os
os.environ['OPENAI_API_KEY'] = 'foo'
from pydantic import BaseModel
from langchain.base_language import BaseLanguageModel
from langchain.llms import OpenAI
class Foo(BaseModel):
llm: BaseLanguageModel = OpenAI()
llm = OpenAI()
# Works
Foo()
# Fails
Foo(llm=llm)
```
Error:
```sh
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[1], line 14
12 Foo()
13 # Fails
---> 14 Foo(llm=llm)
File ~/miniconda3/envs/chain/lib/python3.11/site-packages/pydantic/main.py:165, in BaseModel.__init__(__pydantic_self__, **data)
163 # `__tracebackhide__` tells pytest and some other tools to omit this function from tracebacks
164 __tracebackhide__ = True
--> 165 __pydantic_self__.__pydantic_validator__.validate_python(data, self_instance=__pydantic_self__)
TypeError: BaseModel.validate() takes 2 positional arguments but 3 were given
```
### Expected behavior
Be able to set the field without validation errors | Cannot have models with BaseLanguageModel in pydantic 2: TypeError: BaseModel.validate() takes 2 positional arguments but 3 were given | https://api.github.com/repos/langchain-ai/langchain/issues/10112/comments | 3 | 2023-09-01T22:33:12Z | 2023-11-16T14:53:22Z | https://github.com/langchain-ai/langchain/issues/10112 | 1,878,116,654 | 10,112 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
File Python\Python39\site-packages\langchain\vectorstores\pinecone.py:301, in Pinecone.max_marginal_relevance_search(self, query, k, fetch_k, lambda_mult, filter,
namespace, **kwargs)
284 """Return docs selected using the maximal marginal relevance.
285
286 Maximal marginal relevance optimizes for similarity to query AND diversity
(...)
298 List of Documents selected by maximal marginal relevance.
299 """
300 embedding = self._embed_query(query)
--> 301 return self.max_marginal_relevance_search_by_vector(
302 embedding, k, fetch_k, lambda_mult, filter, namespace
303 )
File Python\Python39\site-packages\langchain\vectorstores\pinecone.py:269, in Pinecone.max_marginal_relevance_search_by_vector(self, embedding, k, fetch_k, lambda_mult, filter, namespace, **kwargs)
262 mmr_selected = maximal_marginal_relevance(
263 np.array([embedding], dtype=np.float32),
264 [item["values"] for item in results["matches"]],
265 k=k,
266 lambda_mult=lambda_mult,
267 )
268 selected = [results["matches"][i]["metadata"] for i in mmr_selected]
--> 269 return [
270 Document(page_content=metadata.pop((self._text_key)), metadata=metadata)
271 for metadata in selected
272 ]
File Python\Python39\site-packages\langchain\vectorstores\pinecone.py:270, in <listcomp>(.0)
262 mmr_selected = maximal_marginal_relevance(
263 np.array([embedding], dtype=np.float32),
264 [item["values"] for item in results["matches"]],
265 k=k,
266 lambda_mult=lambda_mult,
267 )
268 selected = [results["matches"][i]["metadata"] for i in mmr_selected]
269 return [
--> 270 Document(page_content=metadata.pop((self._text_key)), metadata=metadata)
271 for metadata in selected
272 ]
File Python\Python39\site-packages\langchain\load\serializable.py:74, in Serializable.__init__(self, **kwargs)
73 def __init__(self, **kwargs: Any) -> None:
---> 74 super().__init__(**kwargs)
75 self._lc_kwargs = kwargs
File Python\Python39\site-packages\pydantic\v1\main.py:341, in BaseModel.__init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for Document
page_content
str type expected (type=type_error.str)
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Connect to a Pinecone vector store
```
vectorstore = Pinecone(pinecone_index, embedding_model, "text")
```
2. Use MMR search to query the vector store. Here, my vectorstore **does not have any information about Bill Gates**.
```
query = "Who is Bill Gates?"
res = vectorstore.max_marginal_relevance_search(
query=query,
k=4,
fetch_k=20,
lambda_mult=0.5
)
```
3. Got the error
By the way, here is the [API doc](https://api.python.langchain.com/en/latest/vectorstores/langchain.vectorstores.pinecone.Pinecone.html#langchain.vectorstores.pinecone.Pinecone.max_marginal_relevance_search) for the `max_marginal_relevance_search`.
### Expected behavior
It should show it cannot find any context instead of raising an error. | Got ValidationError when searching a content does not exist in the Pinecone vector store using Langchain Pinecone connection | https://api.github.com/repos/langchain-ai/langchain/issues/10111/comments | 3 | 2023-09-01T22:05:26Z | 2023-12-18T23:48:02Z | https://github.com/langchain-ai/langchain/issues/10111 | 1,878,096,535 | 10,111 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.277
Python 3.9
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.agents import create_sql_agent, initialize_agent, create_spark_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit, SparkSQLToolkit
from langchain.sql_database import SQLDatabase
from langchain_experimental.sql.base import SQLDatabaseChain
from langchain.llms import HuggingFacePipeline
from langchain.agents import AgentExecutor
from langchain.agents.agent_types import AgentType
from langchain.chat_models import ChatOpenAI
from snowflake.sqlalchemy import URL
from sqlalchemy import create_engine
from transformers import AutoTokenizer, AutoModelForCausalLM, pipeline
model_id= "Photolens/llama-2-13b-langchain-chat"
user= "***"
password= "***"
account= "**-**"
database="**SNOWFLAKE_SAMPLE_DATA*"
schema="****"
warehouse="***"
def load_model():
model_id = "Photolens/llama-2-13b-langchain-chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id,
device_map='auto',
low_cpu_mem_usage=True,
trust_remote_code=True
)
pipe = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
max_length=1100,
repetition_penalty=1.15,
top_p=0.95,
temperature=0.2,
pad_token_id=tokenizer.eos_token_id,
max_new_tokens=300
)
local_llm = HuggingFacePipeline(pipeline=pipe)
return local_llm
LLM=load_model()
engine = create_engine(URL(
user= "***",
password= "****",
account= "**-**",
database="**SNOWFLAKE_SAMPLE_DATA,
schema="***",
warehouse="***")
)
db = SQLDatabase(engine)
#here comes the problem, SQLDatabase makes a wrong query on Snowflake
### Expected behavior
I'm expecting to generate a connection with SQLDatabase, but it doesn't in fact makes a wrong query that i dont get it. I'm new at this so i would apreciate some help.
This is the error a i get:
/home/zeusone/anaconda3/envs/snowflake_ai/bin/python /home/zeusone/Documents/ChatbotFalcon/SQLagent/snoflake_simple_agent.py
/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/snowflake/connector/options.py:96: UserWarning: You have an incompatible version of 'pyarrow' installed (13.0.0), please install a version that adheres to: 'pyarrow<8.1.0,>=8.0.0; extra == "pandas"'
warn_incompatible_dep(
Loading checkpoint shards: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 3/3 [00:15<00:00, 5.20s/it]
Traceback (most recent call last):
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1910, in _execute_context
self.dialect.do_execute(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
cursor.execute(statement, parameters)
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/snowflake/connector/cursor.py", line 827, in execute
Error.errorhandler_wrapper(self.connection, self, error_class, errvalue)
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/snowflake/connector/errors.py", line 275, in errorhandler_wrapper
handed_over = Error.hand_to_other_handler(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/snowflake/connector/errors.py", line 330, in hand_to_other_handler
cursor.errorhandler(connection, cursor, error_class, error_value)
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/snowflake/connector/errors.py", line 209, in default_errorhandler
raise error_class(
snowflake.connector.errors.ProgrammingError: 001059 (22023): SQL compilation error:
Must specify the full search path starting from database for SNOWFLAKE_SAMPLE_DATA
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/zeusone/Documents/ChatbotFalcon/SQLagent/snoflake_simple_agent.py", line 56, in <module>
db = SQLDatabase(engine)#"snowflake://ADRIANOCABRERA:Semilla_1@EKKFOPI-YK08475/SNOWFLAKE_SAMPLE_DATA/TPCH-SF1?warehouse=COMPUTE_WH")
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/langchain/utilities/sql_database.py", line 111, in __init__
self._metadata.reflect(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 4901, in reflect
Table(name, self, **reflect_opts)
File "<string>", line 2, in __new__
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/util/deprecations.py", line 375, in warned
return fn(*args, **kwargs)
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 619, in __new__
metadata._remove_table(name, schema)
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__
compat.raise_(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 614, in __new__
table._init(name, metadata, *args, **kw)
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 689, in _init
self._autoload(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/sql/schema.py", line 724, in _autoload
conn_insp.reflect_table(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py", line 774, in reflect_table
for col_d in self.get_columns(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py", line 497, in get_columns
col_defs = self.dialect.get_columns(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/snowflake/sqlalchemy/snowdialect.py", line 669, in get_columns
schema_columns = self._get_schema_columns(connection, schema, **kw)
File "<string>", line 2, in _get_schema_columns
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py", line 55, in cache
ret = fn(self, con, *args, **kw)
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/snowflake/sqlalchemy/snowdialect.py", line 479, in _get_schema_columns
schema_primary_keys = self._get_schema_primary_keys(
File "<string>", line 2, in _get_schema_primary_keys
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/engine/reflection.py", line 55, in cache
ret = fn(self, con, *args, **kw)
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/snowflake/sqlalchemy/snowdialect.py", line 323, in _get_schema_primary_keys
result = connection.execute(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1385, in execute
return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection
return connection._execute_clauseelement(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1577, in _execute_clauseelement
ret = self._execute_context(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1953, in _execute_context
self._handle_dbapi_exception(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 2134, in _handle_dbapi_exception
util.raise_(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_
raise exception
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/engine/base.py", line 1910, in _execute_context
self.dialect.do_execute(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute
cursor.execute(statement, parameters)
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/snowflake/connector/cursor.py", line 827, in execute
Error.errorhandler_wrapper(self.connection, self, error_class, errvalue)
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/snowflake/connector/errors.py", line 275, in errorhandler_wrapper
handed_over = Error.hand_to_other_handler(
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/snowflake/connector/errors.py", line 330, in hand_to_other_handler
cursor.errorhandler(connection, cursor, error_class, error_value)
File "/home/zeusone/anaconda3/envs/snowflake_ai/lib/python3.9/site-packages/snowflake/connector/errors.py", line 209, in default_errorhandler
raise error_class(
sqlalchemy.exc.ProgrammingError: (snowflake.connector.errors.ProgrammingError) 001059 (22023): SQL compilation error:
Must specify the full search path starting from database for SNOWFLAKE_SAMPLE_DATA
[SQL: SHOW /* sqlalchemy:_get_schema_primary_keys */PRIMARY KEYS IN SCHEMA snowflake_sample_data]
(Background on this error at: https://sqlalche.me/e/14/f405) | Using SQLDatabase with Llama 2 for snowflake connection, i get ProgramingError | https://api.github.com/repos/langchain-ai/langchain/issues/10106/comments | 2 | 2023-09-01T19:03:24Z | 2023-12-08T16:04:15Z | https://github.com/langchain-ai/langchain/issues/10106 | 1,877,924,878 | 10,106 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have created an agent using ConversationalChatAgent which uses custom tools (CoversationRetrievalQA based) to answer users questions. When determining which tool to use the agent is sometimes stripping the question to single word. For example question like `What is XYZ?` is reduced to
```
{
action: tool_name,
action_input: XYZ
}
```
How can I change the behavior to include full question in this scenario?
expected behavior
```
{
action: tool_name,
action_input: What is XYZ? or Define XYZ?
}
```
### Suggestion:
_No response_ | Issue: ConversationalChatAgent reduces the user question question to single word action_input when parsing | https://api.github.com/repos/langchain-ai/langchain/issues/10100/comments | 2 | 2023-09-01T17:01:58Z | 2023-12-08T16:04:20Z | https://github.com/langchain-ai/langchain/issues/10100 | 1,877,764,117 | 10,100 |
[
"langchain-ai",
"langchain"
] | ### System Info
python 3.9 langchain 0.0.250
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Follow the steps in [Access Intermediate Steps](https://github.com/langchain-ai/langchain/blob/master/docs/extras/modules/agents/how_to/intermediate_steps.ipynb) within the Agent "How To".
When converting the steps to json:
print(json.dumps(response["intermediate_steps"], indent=2))
This raises the error:
TypeError: Object of type AgentAction is not JSON serializable
### Expected behavior
This issue is similar to the one raised in #8815.
However the bot answer is not satisfying as using
```
from langchain.load.dump import dumps
print(dumps(response["intermediate_steps"], pretty=True))
```
will not serialize the `AgentAction`
I can propose a `__json__()` function to correct this the lib json-fix or either inherit the class from dict | Parsing intermediate steps: Object of type AgentAction is not JSON serializable | https://api.github.com/repos/langchain-ai/langchain/issues/10099/comments | 3 | 2023-09-01T16:30:39Z | 2023-12-18T23:48:07Z | https://github.com/langchain-ai/langchain/issues/10099 | 1,877,718,213 | 10,099 |
[
"langchain-ai",
"langchain"
] | ### System Info
I have 2000 document present at my opensearch index, i have filter out the 2 documents present from this opensearch index and added to newly created index. After that i am trying to use vector_db.similarity_search(request.query,k=2) on the newly created index, its returning me empty list.
below are the code
Code for index creation:
updated_mapping = {"mappings":{"properties": {"vector_field": {"type": "knn_vector","dimension": 1536,"method": {"engine": "nmslib","space_type": "l2","name": "hnsw","parameters": {"ef_construction": 512,"m": 16}}}}}}
opensearch_client.indices.create(index="temp_check878999", body=updated_mapping)
Code to update the index with filtered document:
sea=["1460210.pdf",'P-Reality-X Manuscript_Draft 1_17Feb22 (PAL1144).pdf']
for i in sea:
query={"query":{
"match":{
"metadata.filename":f"*{i}"
}
}}
print(query)
rest=opensearch_client.search(index="lang_demo",body=query)
create=rest['hits']['hits']
for hit in create:
sr=hit['_source']
doc_id=hit['_id']
opensearch_client.index(index="temp_check878999",id=doc_id,body=sr)
Langchain simillarity search code i am using on newly created index
from langchain.vectorstores import OpenSearchVectorSearch
vector_db = OpenSearchVectorSearch(
index_name="temp_check878999",
embedding_function=embed_wrapper(engine="text-embedding-ada-002"),
opensearch_url=*****,
http_auth=(******, *****),
is_aoss=False,
)
vector_db.similarity_search("star",k=2)
Quick reply will be very much helpful
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
No idea
### Expected behavior
It should return me the simmilarity search text with newly created index. Please note its working fine on the index where i had used vector_db.addtext(text). This issue is if i create the new open search index with opensearch client then on that index the simmilarity search one is not working. | vector_db.similarity_search(request.query,k=2) Not working with the opensearch index | https://api.github.com/repos/langchain-ai/langchain/issues/10089/comments | 6 | 2023-09-01T10:37:55Z | 2023-12-11T16:05:23Z | https://github.com/langchain-ai/langchain/issues/10089 | 1,877,181,409 | 10,089 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.278.
I try to publish a pull request in the project experimental. I must update the dependencies of langchain, but I receive a lot of errors.
See [here](https://github.com/langchain-ai/langchain/actions/runs/6047844508/job/16412056269?pr=7278), outside of my code.
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
In the libs/experimental/pyproject.toml, change:
langchain = ">=0.0.278"
then
poetry run mypy .
### Expected behavior
No error | autonomous_agents : poetry run mypy . with experimental fails with langchain version 0.0.278 | https://api.github.com/repos/langchain-ai/langchain/issues/10088/comments | 3 | 2023-09-01T09:51:26Z | 2023-09-19T08:28:37Z | https://github.com/langchain-ai/langchain/issues/10088 | 1,877,113,181 | 10,088 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain: 0.0.278
python: 3.10
windows10
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I wrote with reference to[ this link](https://python.langchain.com/docs/use_cases/web_scraping#asynchtmlloader), and the code is as follows:
```python
from langchain.document_loaders import `AsyncHtmlLoader`
urls = ['https://python.langchain.com/docs/use_cases/web_scraping#asynchtmlloader']
loader = AsyncHtmlLoader(urls)
doc = loader.load()
print(doc)
```
Return the following error after running:
```
Exception ignored in: <function _ProactorBasePipeTransport.__del__ at 0x0000023EFFD45900>
Traceback (most recent call last):
File "C:\Users\97994\AppData\Local\Programs\Python\Python310\lib\asyncio\proactor_events.py", line 116, in __del__
self.close()
File "C:\Users\97994\AppData\Local\Programs\Python\Python310\lib\asyncio\proactor_events.py", line 108, in close
self._loop.call_soon(self._call_connection_lost, None)
File "C:\Users\97994\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 750, in call_soon
self._check_closed()
File "C:\Users\97994\AppData\Local\Programs\Python\Python310\lib\asyncio\base_events.py", line 515, in _check_closed
raise RuntimeError('Event loop is closed')
RuntimeError: Event loop is closed
```
### Expected behavior
None | When I call the 'loader()' function of AsyncHtmlLoader, I receive an 'Event loop is closed' error after it completes execution. | https://api.github.com/repos/langchain-ai/langchain/issues/10086/comments | 2 | 2023-09-01T08:31:58Z | 2023-12-08T16:04:30Z | https://github.com/langchain-ai/langchain/issues/10086 | 1,876,993,301 | 10,086 |
[
"langchain-ai",
"langchain"
] | ```
text = "foo _bar_ baz_ 123"
separator = "_"
text_splitter = CharacterTextSplitter(
chunk_size=4,
chunk_overlap=0,
separator="_",
keep_separator=True,
)
print(text_splitter.split_text(text))
```
RETURNS:
`['foo', '_bar', '_ baz', '_ 123']`
EXPECTED:
`['foo ', '_bar', '_ baz', '_ 123']`
^see whitespace next to `foo`
https://github.com/langchain-ai/langchain/blame/324c86acd5be9bc9d5b6dd248d686bdbb2c11cdc/libs/langchain/langchain/text_splitter.py#L155 removes all whitespace from the text. I can't figure out the purpose of this line. | Text splitting with keep_separator is True still removes any whitespace, even if separator is whitespace | https://api.github.com/repos/langchain-ai/langchain/issues/10085/comments | 4 | 2023-09-01T08:14:34Z | 2023-09-08T02:01:40Z | https://github.com/langchain-ai/langchain/issues/10085 | 1,876,967,752 | 10,085 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am using chroma db as retriever in ConversationalRetrievalChain, but the parameter "where_document" does not work.
```python
search_kwargs = {
"k": k,
"filter": filter,
"where_document": {"$contains": "1000001"}
}
retriever = vectordb.as_retriever(
search_kwargs=search_kwargs
)
```
In chroma official site [chroma](https://docs.trychroma.com/usage-guide), it says:
Chroma supports filtering queries by metadata and document contents. The where filter is used to filter by metadata, and the where_document filter is used to filter by document contents.
### Suggestion:
can ConversationalRetrievalChain support where_document filter for chroma db? | Issue: chroma retriever where_document parameter passed in search_kwargs is invalid | https://api.github.com/repos/langchain-ai/langchain/issues/10082/comments | 3 | 2023-09-01T07:52:13Z | 2024-03-17T16:04:11Z | https://github.com/langchain-ai/langchain/issues/10082 | 1,876,932,057 | 10,082 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have uploaded my files to Vectara and would love to query the corpus now with langchain. However I can only find examples of how to upload documents and then directly query them. I would like to avoid uploading the documents all the time and just straight query the existing corpus . Is this possible ?
Thank you so much!
Regards
### Suggestion:
_No response_ | Vectara query a already uploaded Corpus | https://api.github.com/repos/langchain-ai/langchain/issues/10081/comments | 1 | 2023-09-01T07:50:13Z | 2023-12-08T16:04:35Z | https://github.com/langchain-ai/langchain/issues/10081 | 1,876,929,360 | 10,081 |
[
"langchain-ai",
"langchain"
] | [code pointer](https://github.com/langchain-ai/langchain/blob/74fcfed4e2bdd186c2869a07008175a9b66b1ed4/libs/langchain/langchain/tools/base.py#L588C16-L588C16)
In `langchain.tools.base`, change
```python
Class StructuredTool(BaseTool):
"""Tool that can operate on any number of inputs."""
description: str = ""
args_schema: Type[BaseModel] = Field(..., description="The tool schema.")
"""The input arguments' schema."""
func: Optional[Callable[..., Any]]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
# --- Runnable ---
async def ainvoke(
self,
input: Union[str, Dict],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
# If the tool does not implement async, fall back to default implementation
return await asyncio.get_running_loop().run_in_executor(
None, partial(self.invoke, input, config, **kwargs)
)
return super().ainvoke(input, config, **kwargs)
```
to
```python
Class StructuredTool(BaseTool):
"""Tool that can operate on any number of inputs."""
description: str = ""
args_schema: Type[BaseModel] = Field(..., description="The tool schema.")
"""The input arguments' schema."""
func: Optional[Callable[..., Any]]
"""The function to run when the tool is called."""
coroutine: Optional[Callable[..., Awaitable[Any]]] = None
"""The asynchronous version of the function."""
# --- Runnable ---
async def ainvoke(
self,
input: Union[str, Dict],
config: Optional[RunnableConfig] = None,
**kwargs: Any,
) -> Any:
if not self.coroutine:
# If the tool does not implement async, fall back to default implementation
return await asyncio.get_running_loop().run_in_executor(
None, partial(self.invoke, input, config, **kwargs)
)
return await super().ainvoke(input, config, **kwargs)
``` | StructuredTool ainvoke isn't await parent class ainvoke | https://api.github.com/repos/langchain-ai/langchain/issues/10080/comments | 0 | 2023-09-01T07:36:50Z | 2023-09-08T02:54:54Z | https://github.com/langchain-ai/langchain/issues/10080 | 1,876,911,576 | 10,080 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I need to use Python REPL tool to take data frame and user query and answer based on the data frame.
### Suggestion:
_No response_ | how to use PythonREPL tool to take dataframe and query | https://api.github.com/repos/langchain-ai/langchain/issues/10079/comments | 3 | 2023-09-01T05:51:51Z | 2023-12-08T16:04:40Z | https://github.com/langchain-ai/langchain/issues/10079 | 1,876,788,507 | 10,079 |
[
"langchain-ai",
"langchain"
] | ### System Info
the memories are pruned after saving using .pop(0). However, db-backed histories read messages and copies into list each turn. This makes it so that the actual db does not change at every turn, and so max_token_limit parameter gets ignored and the memory prints out the entire conversation for history.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Initialize redis chat history
use redis chat history as chat_history for ConversationSummaryBufferMemory
set max_token_limit to 1.
Print history at every turn.
Still prints the entire history
### Expected behavior
Initialize redis chat history
use redis chat history as chat_history for ConversationSummaryBufferMemory
set max_token_limit to 1.
Print history at every turn.
Print nothing since max_token_limit = 0 | ConversationTokenBufferMemory and ConversationSummaryBufferMemory does not work with db-backed histories | https://api.github.com/repos/langchain-ai/langchain/issues/10078/comments | 3 | 2023-09-01T05:28:56Z | 2024-02-12T16:14:29Z | https://github.com/langchain-ai/langchain/issues/10078 | 1,876,763,580 | 10,078 |
[
"langchain-ai",
"langchain"
] | ### System Info
On the elasticsearch authentication, you have implemented it this way on the elasticsearch.py file found in the vectorstores folder
```py
if api_key:
connection_params["api_key"] = api_key
elif username and password:
connection_params["basic_auth"] = (username, password)
```
but i think it should be this way
```py
if api_key:
connection_params["api_key"] = api_key
elif username and password:
connection_params["http_auth"] = (username, password)
```
with that change the authentication succeeeds. What i have just changed is this connection_params["basic_auth"] = (username, password) to this connection_params["http_auth"] = (username, password)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create an elasticsearch instance on an EC2 instance using docker that has SSL and also uses username and password and then try to authenticate to that elasticsearch instance using langchain you will see the error
### Expected behavior
The expected behavior is successfull authentication | ElasticSearch authentication | https://api.github.com/repos/langchain-ai/langchain/issues/10077/comments | 2 | 2023-09-01T05:07:53Z | 2023-12-08T16:04:45Z | https://github.com/langchain-ai/langchain/issues/10077 | 1,876,736,736 | 10,077 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: `0.0278`
Python: `3.10`
Runpod version: `1.2.0`
I'm experiencing some issues when running Runpod (TGI gpu cloud) and langchain, primarily when I try to run the chain.
For reference, I'm using TheBloke/VicUnlocked-30B-LoRA-GPTQ model in TGI on Runpod A4500 GPU cloud.
I initialize the pod from the UI and connect to it with the runpod-python library (version 1.2.0) in my python 3.10 environment.
My prompt template is as follows:
```
prompt = PromptTemplate(
input_variables=['instruction','summary'],
template="""### Instruction:
{instruction}
### Input:
{summary}
### Response:
""")
```
The instruction is a simple instruction to extract relevant insights from a summary. My LLM is instantiated as such:
inference_server_url = f'https://{pod["id"]}-{port}.proxy.runpod.net' ### note: the pod and port is defined previously.
llm = HuggingFaceTextGenInference(inference_server_url=inference_server_url)
And I am trying to run the model as such:
```
summary = ... # summary here
instruction = ... #instruction here
chain.run({"instruction": instruction, "summary": summary}) #**_Note: Error occurs from this line!!_**
```
But I get this error:
```
File ~/anaconda3/envs/py310/lib/python3.10/site-packages/langchain/chains/base.py:282, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
280 except (KeyboardInterrupt, Exception) as e:
281 run_manager.on_chain_error(e)
--> 282 raise e
283 run_manager.on_chain_end(outputs)
284 final_outputs: Dict[str, Any] = self.prep_outputs(
285 inputs, outputs, return_only_outputs
...
---> 81 message = payload["error"]
82 if "error_type" in payload:
83 error_type = payload["error_type"]
KeyError: 'error'
```
Any ideas?
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
My prompt template is as follows:
prompt = PromptTemplate(
input_variables=['instruction','summary'],
template="""### Instruction:
{instruction}
### Input:
{summary}
### Response:
""")
The instruction is a simple instruction to extract relevant insights from a summary. My LLM is instantiated as such:
inference_server_url = f'https://{pod["id"]}-{port}.proxy.runpod.net' ### note: the pod and port is defined previously.
llm = HuggingFaceTextGenInference(inference_server_url=inference_server_url)
And I am trying to run the model as such:
summary = ... # summary here
instruction = ... #instruction here
chain.run({"instruction": instruction, "summary": summary})
And then running chain.run I get the error as mentioned above.
### Expected behavior
What's expected is that I should be receiving the output from the runpod GPU cloud that is hosting the model, as per this guide that I am following:
https://colab.research.google.com/drive/10BJcKRBtMlpm2hsS2antarSRgEQY3AQq#scrollTo=lyVYLW2thTMg | Receiving a unclear KeyError: 'error' when using Langchain HuggingFaceTextInference on Runpod GPU | https://api.github.com/repos/langchain-ai/langchain/issues/10072/comments | 2 | 2023-09-01T00:24:57Z | 2023-12-08T16:04:50Z | https://github.com/langchain-ai/langchain/issues/10072 | 1,876,474,014 | 10,072 |
[
"langchain-ai",
"langchain"
] | ### System Info
### Error
I am using **Supabase Vector Store**:
```python
embeddings = OpenAIEmbeddings()
vectorstore_public = SupabaseVectorStore(
client=supabase_client,
embedding=embeddings,
table_name="documents",
query_name="match_documents",
)
```
And crawling pages from internet using the `WebResearchRetriever`
But, in `WebResearchRetriever._get_relevant_documents`
```python
# Search for relevant splits
# TODO: make this async
logger.info("Grabbing most relevant splits from urls...")
docs = []
for query in questions:
docs.extend(self.vectorstore.similarity_search(query))
```
The `vectorstore.similarity_search` guides to an rpc call using the `SupabaseVectorStore`
```python
def similarity_search_by_vector_with_relevance_scores(
self, query: List[float], k: int, filter: Optional[Dict[str, Any]] = None
) -> List[Tuple[Document, float]]:
match_documents_params = self.match_args(query, k, filter)
print("match_documents_params", match_documents_params)
print("self.query_name", self.query_name)
res = self._client.rpc(self.query_name, match_documents_params).execute() # here is where the error is thrown
print("res", res)
match_result = [
(
Document(
metadata=search.get("metadata", {}), # type: ignore
page_content=search.get("content", ""),
),
search.get("similarity", 0.0),
)
for search in res.data
if search.get("content")
]
return match_result
```
Error thrown:
```python
File "/lib/python3.10/site-packages/postgrest/_sync/request_builder.py", line 68, in execute
raise APIError(r.json())
postgrest.exceptions.APIError: {'code': '42804', 'details': 'Returned type text does not match expected type bigint in column 1.', 'hint': None, 'message': 'structure of query does not match function result type'}
```
Complete Traceback
```python
Traceback (most recent call last):
File "/lib/python3.10/site-packages/flask/app.py", line 2190, in wsgi_app
response = self.full_dispatch_request()
File "/lib/python3.10/site-packages/flask/app.py", line 1486, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/lib/python3.10/site-packages/flask_cors/extension.py", line 176, in wrapped_function
return cors_after_request(app.make_response(f(*args, **kwargs)))
File "/lib/python3.10/site-packages/flask/app.py", line 1484, in full_dispatch_request
rv = self.dispatch_request()
File "/lib/python3.10/site-packages/flask/app.py", line 1469, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
File "/app.py", line 143, in complete
answer = completion(message)
File "/methods/content_completion.py", line 15, in completion
sources = web_explorer(blog_topic)
File "/methods/web_explorer.py", line 93, in web_explorer
result = retrieve_answer_and_sources(question, llm, web_retriever)
File "/methods/web_explorer.py", line 80, in retrieve_answer_and_sources
return qa_chain({"question": question}, return_only_outputs=True)
File "/lib/python3.10/site-packages/langchain/chains/base.py", line 288, in __call__
raise e
File "/lib/python3.10/site-packages/langchain/chains/base.py", line 282, in __call__
self._call(inputs, run_manager=run_manager)
File "/lib/python3.10/site-packages/langchain/chains/qa_with_sources/base.py", line 151, in _call
docs = self._get_docs(inputs, run_manager=_run_manager)
File "/lib/python3.10/site-packages/langchain/chains/qa_with_sources/retrieval.py", line 50, in _get_docs
docs = self.retriever.get_relevant_documents(
File "/lib/python3.10/site-packages/langchain/schema/retriever.py", line 208, in get_relevant_documents
raise e
File "/lib/python3.10/site-packages/langchain/schema/retriever.py", line 201, in get_relevant_documents
result = self._get_relevant_documents(
File "/methods/retrievers.py", line 252, in _get_relevant_documents
docs.extend(self.vectorstore.similarity_search(query))
File "/lib/python3.10/site-packages/langchain/vectorstores/supabase.py", line 172, in similarity_search
return self.similarity_search_by_vector(
File "/lib/python3.10/site-packages/langchain/vectorstores/supabase.py", line 183, in similarity_search_by_vector
result = self.similarity_search_by_vector_with_relevance_scores(
File "/lib/python3.10/site-packages/langchain/vectorstores/supabase.py", line 217, in similarity_search_by_vector_with_relevance_scores
res = self._client.rpc(self.query_name, match_documents_params).execute()
File "/lib/python3.10/site-packages/postgrest/_sync/request_builder.py", line 68, in execute
raise APIError(r.json())
postgrest.exceptions.APIError: {'code': '42804', 'details': 'Returned type text does not match expected type bigint in column 1.', 'hint': None, 'message': 'structure of query does not match function result type'}
```
requirements.txt to reproduce:
```
aiohttp==3.8.5
aiosignal==1.3.1
anyio==4.0.0
async-timeout==4.0.3
attrs==23.1.0
azure-core==1.29.3
backoff==2.2.1
bcrypt==4.0.1
beautifulsoup4==4.12.2
bleach==6.0.0
blinker==1.6.2
cachetools==5.3.1
certifi==2023.7.22
chardet==5.2.0
charset-normalizer==3.2.0
chroma-hnswlib==0.7.2
chromadb==0.4.8
click==8.1.7
click-log==0.4.0
colorama==0.4.6
coloredlogs==15.0.1
dataclasses-json==0.5.14
deprecation==2.1.0
dnspython==2.4.2
docutils==0.20.1
dotty-dict==1.3.1
emoji==2.8.0
exceptiongroup==1.1.3
fastapi==0.99.1
filetype==1.2.0
Flask==2.3.3
Flask-Cors==4.0.0
flatbuffers==23.5.26
frozenlist==1.4.0
gitdb==4.0.10
GitPython==3.1.32
google-api-core==2.11.1
google-api-python-client==2.97.0
google-auth==2.22.0
google-auth-httplib2==0.1.0
googleapis-common-protos==1.60.0
gotrue==1.0.4
h11==0.14.0
html-sanitizer==2.2.0
httpcore==0.16.3
httplib2==0.22.0
httptools==0.6.0
httpx==0.23.3
humanfriendly==10.0
idna==3.4
importlib-metadata==6.8.0
importlib-resources==6.0.1
invoke==1.7.3
itsdangerous==2.1.2
jaraco.classes==3.3.0
Jinja2==3.1.2
joblib==1.3.2
keyring==24.2.0
langchain==0.0.277
langsmith==0.0.30
lxml==4.9.3
Markdown==3.4.4
MarkupSafe==2.1.3
marshmallow==3.20.1
monotonic==1.6
more-itertools==10.1.0
mpmath==1.3.0
multidict==6.0.4
mypy-extensions==1.0.0
nltk==3.8.1
numexpr==2.8.5
numpy==1.25.2
onnxruntime==1.15.1
openai==0.27.10
outcome==1.2.0
overrides==7.4.0
packaging==23.1
pkginfo==1.9.6
postgrest==0.10.6
posthog==3.0.2
protobuf==4.24.2
pulsar-client==3.3.0
pyasn1==0.5.0
pyasn1-modules==0.3.0
pydantic==1.10.12
Pygments==2.16.1
pymongo==4.5.0
pyparsing==3.1.1
PyPika==0.48.9
PySocks==1.7.1
python-dateutil==2.8.2
python-dotenv==1.0.0
python-gitlab==3.15.0
python-magic==0.4.27
python-semantic-release==7.33.2
PyYAML==6.0.1
readme-renderer==41.0
realtime==1.0.0
regex==2023.8.8
requests==2.31.0
requests-toolbelt==1.0.0
rfc3986==1.5.0
rsa==4.9
selenium==4.11.2
semver==2.13.0
six==1.16.0
smmap==5.0.0
sniffio==1.3.0
sortedcontainers==2.4.0
soupsieve==2.4.1
SQLAlchemy==2.0.20
starlette==0.27.0
storage3==0.5.3
StrEnum==0.4.15
supabase==1.0.3
supafunc==0.2.2
sympy==1.12
tabulate==0.9.0
tenacity==8.2.3
tiktoken==0.4.0
tokenizers==0.13.3
tomlkit==0.12.1
tqdm==4.66.1
trio==0.22.2
trio-websocket==0.10.3
twine==3.8.0
typing-inspect==0.9.0
typing_extensions==4.7.1
unstructured==0.10.10
uritemplate==4.1.1
urllib3==1.26.16
uvicorn==0.23.2
uvloop==0.17.0
waitress==2.1.2
watchfiles==0.20.0
webencodings==0.5.1
websockets==10.4
Werkzeug==2.3.7
wsproto==1.2.0
yarl==1.9.2
zipp==3.16.2
```
### Who can help?
@agola11 @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Code from this repo: https://github.com/langchain-ai/web-explorer/blob/main/web_explorer.py
But instead of faiss, use `SupabaseVectorStore` as above.
### Expected behavior
Is it coming from the config of my table on Supabase, as i get: `Returned type text does not match expected type bigint in column 1.` or wherever else ?
<img width="1176" alt="Capture dβeΜcran 2023-08-31 aΜ 20 15 58" src="https://github.com/langchain-ai/langchain/assets/39488794/743d9cd9-fa30-474d-88ce-288e85c50e71">
| SupabaseVectorStore: Error thrown when calling similarity_search_by_vector_with_relevance_scores | https://api.github.com/repos/langchain-ai/langchain/issues/10065/comments | 6 | 2023-08-31T18:18:02Z | 2023-12-07T16:05:15Z | https://github.com/langchain-ai/langchain/issues/10065 | 1,876,055,791 | 10,065 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Implement Self Query retriever to Redis vector stores.
### Motivation
I was trying the different retrievers for my project (AI chatbot that answers various human resources and labor law questions based on a dataset built from complete local labor legislation, company handbooks, job descriptions and SOPs).
I plan to deploy this chatbot in a production environment, therefore I chose Redis (for its robustness and speed) as a vector store.
### Your contribution
I am not a pro developer; so, unfortunately, the only contributions I can make is limited to "real-world" testing. | Add SelfQueryRetriever support for Redis Vector Stores | https://api.github.com/repos/langchain-ai/langchain/issues/10064/comments | 1 | 2023-08-31T18:17:38Z | 2023-09-12T22:30:38Z | https://github.com/langchain-ai/langchain/issues/10064 | 1,876,055,216 | 10,064 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.276
Windows
Python 3.11.4
I create a custom llm just like in the tutorial but when i use it on a mkrl agent it does not return to the agent the custom response.
I printed insided my agent and it is called and returns the correct answer, but the agent in the end of line does not get this answer.
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://colab.research.google.com/drive/1cHVx3GlzmF4ECV_63ZjlrMPt8P840GSw?usp=sharing
### Expected behavior
The expected behaviour is the agent to receive the response of the CustomLLM. | CustomLLM when called inside agent does not return custom behaviour | https://api.github.com/repos/langchain-ai/langchain/issues/10061/comments | 2 | 2023-08-31T17:53:26Z | 2023-12-07T16:05:20Z | https://github.com/langchain-ai/langchain/issues/10061 | 1,876,017,939 | 10,061 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version -- 0.0.277
python verion -- 3.8.8
window platform
### Who can help?
@hwchase17
@agola11
Hi I am not able to run any of the langchain syntex on my windows laptop. For example if I just run from **langchain.llms import OpenAI** I get the below error message. Can you please help!
TypeError Traceback (most recent call last)
<ipython-input-3-5af9a0f5ffa4> in <module>
1 import os
2 from dotenv import load_dotenv
----> 3 from langchain.llms import OpenAI
~\Anaconda3\lib\site-packages\langchain\__init__.py in <module>
4 from typing import Optional
5
----> 6 from langchain.agents import MRKLChain, ReActChain, SelfAskWithSearchChain
7 from langchain.chains import (
8 ConversationChain,
~\Anaconda3\lib\site-packages\langchain\agents\__init__.py in <module>
29
30 """ # noqa: E501
---> 31 from langchain.agents.agent import (
32 Agent,
33 AgentExecutor,
~\Anaconda3\lib\site-packages\langchain\agents\agent.py in <module>
12 import yaml
13
---> 14 from langchain.agents.agent_iterator import AgentExecutorIterator
15 from langchain.agents.agent_types import AgentType
16 from langchain.agents.tools import InvalidTool
~\Anaconda3\lib\site-packages\langchain\agents\agent_iterator.py in <module>
19 )
20
---> 21 from langchain.callbacks.manager import (
22 AsyncCallbackManager,
23 AsyncCallbackManagerForChainRun,
~\Anaconda3\lib\site-packages\langchain\callbacks\__init__.py in <module>
8 """
9
---> 10 from langchain.callbacks.aim_callback import AimCallbackHandler
11 from langchain.callbacks.argilla_callback import ArgillaCallbackHandler
12 from langchain.callbacks.arize_callback import ArizeCallbackHandler
~\Anaconda3\lib\site-packages\langchain\callbacks\aim_callback.py in <module>
3
4 from langchain.callbacks.base import BaseCallbackHandler
----> 5 from langchain.schema import AgentAction, AgentFinish, LLMResult
6
7
~\Anaconda3\lib\site-packages\langchain\schema\__init__.py in <module>
1 """**Schemas** are the LangChain Base Classes and Interfaces."""
2 from langchain.schema.agent import AgentAction, AgentFinish
----> 3 from langchain.schema.cache import BaseCache
4 from langchain.schema.chat_history import BaseChatMessageHistory
5 from langchain.schema.document import BaseDocumentTransformer, Document
~\Anaconda3\lib\site-packages\langchain\schema\cache.py in <module>
4 from typing import Any, Optional, Sequence
5
----> 6 from langchain.schema.output import Generation
7
8 RETURN_VAL_TYPE = Sequence[Generation]
~\Anaconda3\lib\site-packages\langchain\schema\output.py in <module>
7 from langchain.load.serializable import Serializable
8 from langchain.pydantic_v1 import BaseModel, root_validator
----> 9 from langchain.schema.messages import BaseMessage, BaseMessageChunk
10
11
~\Anaconda3\lib\site-packages\langchain\schema\messages.py in <module>
146
147
--> 148 class HumanMessageChunk(HumanMessage, BaseMessageChunk):
149 """A Human Message chunk."""
150
~\Anaconda3\lib\site-packages\pydantic\main.cp38-win_amd64.pyd in pydantic.main.ModelMetaclass.__new__()
~\Anaconda3\lib\abc.py in __new__(mcls, name, bases, namespace, **kwargs)
83 """
84 def __new__(mcls, name, bases, namespace, **kwargs):
---> 85 cls = super().__new__(mcls, name, bases, namespace, **kwargs)
86 _abc_init(cls)
87 return cls
TypeError: multiple bases have instance lay-out conflict
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
!pip -q install openai langchain huggingface_hub
import openai
import dotenv
import os
os.environ['OPENAI_API_KEY'] = ' ... '
from langchain.llms import OpenAI
### Expected behavior
Should be able to run without error | TypeError: multiple bases have instance lay-out conflict | https://api.github.com/repos/langchain-ai/langchain/issues/10060/comments | 6 | 2023-08-31T17:02:38Z | 2023-12-11T16:05:28Z | https://github.com/langchain-ai/langchain/issues/10060 | 1,875,918,071 | 10,060 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version: 0.0.276
Python version: 3.11.2
Platform: x86_64 Debian 12.2.0-14
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
import os, openai
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Redis
embeddings = OpenAIEmbeddings()
rds = Redis.from_existing_index(
embeddings,
index_name = INDEX_NAME,
schema = "redis_schema_kws.yaml",
redis_url = "redis://10.0.1.21:6379",
)
returned_docs_mmr = rds.max_marginal_relevance_search(question, k=3, fetch_k=3, lambda_mult=0.8)
```
The returned error message is: `NotImplementedError:`
Could you provide additional information on your ETA to have MMR search implemented in Redis?
### Expected behavior
Retrieve the Document objects. | max_marginal_relevance_search not implemented in Redis | https://api.github.com/repos/langchain-ai/langchain/issues/10059/comments | 4 | 2023-08-31T16:34:30Z | 2023-09-12T22:31:16Z | https://github.com/langchain-ai/langchain/issues/10059 | 1,875,877,282 | 10,059 |
[
"langchain-ai",
"langchain"
] | ### System Info
Given a string resembling a JSON-string array, such as I'm often getting from LLM completions, I'm trying to convert the string to a Python list of strings. I'm finding the the `PydanticOutputParser` is throwing an error when plain old `json.loads` with `strict=False` is doing fine.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
For example this code:
```python
from langchain.output_parsers import PydanticOutputParser
from pydantic import BaseModel, Field
import json
class Lines(BaseModel):
lines: list[str] = Field(description="array of strings")
line_parser = PydanticOutputParser(pydantic_object=Lines)
lines = '[\n "line 1",\n "line 2"\n]'
print("Just JSON:")
print(json.loads(lines, strict=False))
print("LangChain Pydantic:")
print(line_parser.parse(lines))
```
produces the following output:
```sh
Just JSON:
['line 1', 'line 2']
LangChain Pydantic:
Traceback (most recent call last):
File "/opt/miniconda3/envs/gt/lib/python3.10/site-packages/langchain/output_parsers/pydantic.py", line 27, in parse
json_object = json.loads(json_str, strict=False)
File "/opt/miniconda3/envs/gt/lib/python3.10/json/__init__.py", line 359, in loads
return cls(**kw).decode(s)
File "/opt/miniconda3/envs/gt/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/opt/miniconda3/envs/gt/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/benjaminbasseri/Dropbox/Work/GT Schools/scratch work/meme-generation/parse_test.py", line 14, in <module>
print(line_parser.parse(lines))
File "/opt/miniconda3/envs/gt/lib/python3.10/site-packages/langchain/output_parsers/pydantic.py", line 33, in parse
raise OutputParserException(msg, llm_output=text)
langchain.schema.output_parser.OutputParserException: Failed to parse Lines from completion [
"line 1",
"line 2"
]. Got: Expecting value: line 1 column 1 (char 0)
```
### Expected behavior
I would expect the parser to be able to handle a simple case like this | PydanticOutputParser failing to parse basic string into JSON | https://api.github.com/repos/langchain-ai/langchain/issues/10057/comments | 7 | 2023-08-31T16:12:05Z | 2024-04-10T16:14:50Z | https://github.com/langchain-ai/langchain/issues/10057 | 1,875,844,490 | 10,057 |
[
"langchain-ai",
"langchain"
] | ### System Info
0.0.272, linux, python 3.11.2
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
I have a pretty standard RetievalQA chain like this. The `on_llm_start` callback isn't executed since version 0.0.272 (I verified that 0.0.271 is working)
```
loader = TextLoader("data.txt")
documents = loader.load()
openai = ChatOpenAI(**llm_params, callbacks= [StreamingStdOutCallbackHandler()])
text_splitter = RecursiveCharacterTextSplitter(**chunking_params)
texts = text_splitter.split_documents(documents)
embeddings = OpenAIEmbeddings()
docsearch = Chroma.from_documents(texts, embeddings)
qa = RetrievalQA.from_chain_type(
llm=openai,
retriever=docsearch.as_retriever(),
)
```
### Expected behavior
`on_llm_start` callback should be called | StreamingStdOutCallbackHandler().on_llm_start isn't called since version 0.0.272 (0.0.271 still works) | https://api.github.com/repos/langchain-ai/langchain/issues/10054/comments | 6 | 2023-08-31T15:30:31Z | 2024-08-07T19:33:12Z | https://github.com/langchain-ai/langchain/issues/10054 | 1,875,770,074 | 10,054 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.249
Windows
Python 3.11.4
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The official example described here:
https://python.langchain.com/docs/integrations/vectorstores/matchingengine#create-vectorstore-from-texts
Does not work.
My code throws beloe exception:
from langchain.vectorstores import MatchingEngine
texts = [
"The cat sat on",
"the mat.",
"I like to",
"eat pizza for",
"dinner.",
"The sun sets",
"in the west.",
]
vector_store = MatchingEngine.from_components(
project_id="",
region="",
gcs_bucket_name="",
index_id="",
endpoint_id="",
)
### Expected behavior
Traceback (most recent call last):
File "C:\GitHub\onecloud-chatbot\onecloud-chatbot\Lib\site-packages\google\api_core\grpc_helpers.py", line 72, in error_remapped_callable
return callable_(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\GitHub\onecloud-chatbot\onecloud-chatbot\Lib\site-packages\grpc\_channel.py", line 1030, in __call__
return _end_unary_response_blocking(state, call, False, None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\GitHub\onecloud-chatbot\onecloud-chatbot\Lib\site-packages\grpc\_channel.py", line 910, in _end_unary_response_blocking
raise _InactiveRpcError(state) # pytype: disable=not-instantiable
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.INVALID_ARGUMENT
details = "Request contains an invalid argument."
debug_error_string = "UNKNOWN:Error received from peer ipv4:172.217.16.42:443 {created_time:"2023-08-31T14:00:16.506362524+00:00", grpc_status:3, grpc_message:"Request contains an invalid argument."}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\GitHub\onecloud-chatbot\matching_engine_insert.py", line 13, in <module>
vector_store = MatchingEngine.from_components(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\GitHub\onecloud-chatbot\onecloud-chatbot\Lib\site-packages\langchain\vectorstores\matching_engine.py", line 280, in from_components
endpoint = cls._create_endpoint_by_id(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\GitHub\onecloud-chatbot\onecloud-chatbot\Lib\site-packages\langchain\vectorstores\matching_engine.py", line 386, in _create_endpoint_by_id
return aiplatform.MatchingEngineIndexEndpoint(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\GitHub\onecloud-chatbot\onecloud-chatbot\Lib\site-packages\google\cloud\aiplatform\matching_engine\matching_engine_index_endpoint.py", line 130, in __init__
self._gca_resource = self._get_gca_resource(resource_name=index_endpoint_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\GitHub\onecloud-chatbot\onecloud-chatbot\Lib\site-packages\google\cloud\aiplatform\base.py", line 648, in _get_gca_resource
return getattr(self.api_client, self._getter_method)(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\GitHub\onecloud-chatbot\onecloud-chatbot\Lib\site-packages\google\cloud\aiplatform_v1\services\index_endpoint_service\client.py", line 707, in get_index_endpoint
response = rpc(
^^^^
File "C:\GitHub\onecloud-chatbot\onecloud-chatbot\Lib\site-packages\google\api_core\gapic_v1\method.py", line 113, in __call__
return wrapped_func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\GitHub\onecloud-chatbot\onecloud-chatbot\Lib\site-packages\google\api_core\retry.py", line 349, in retry_wrapped_func
return retry_target(
^^^^^^^^^^^^^
File "C:\GitHub\onecloud-chatbot\onecloud-chatbot\Lib\site-packages\google\api_core\retry.py", line 191, in retry_target
return target()
^^^^^^^^
File "C:\GitHub\onecloud-chatbot\onecloud-chatbot\Lib\site-packages\google\api_core\grpc_helpers.py", line 74, in error_remapped_callable
raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.InvalidArgument: 400 Request contains an invalid argument. | langchain.vectorstores.MatchingEngine.from_components() throws InvalidArgument exception | https://api.github.com/repos/langchain-ai/langchain/issues/10050/comments | 4 | 2023-08-31T14:01:53Z | 2024-01-30T00:41:11Z | https://github.com/langchain-ai/langchain/issues/10050 | 1,875,605,668 | 10,050 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python Version: 3.11.4 (main, Jul 5 2023, 08:40:20) [Clang 14.0.6 ]
Langchain Version: 0.0.273
Jupyter Notebook Version: 5.3.0
### Who can help?
@hwchase17
@agola
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce the behaviour:
1. Create a function to be used
```
def _handle_error(error) -> str:
return str(error)[:50]
```
2. Run `create_pandas_dataframe_agent`, pass in `handle_parsing_errors=_handle_error`
### Expected behavior
I expect an output similar to the last section of https://python.langchain.com/docs/modules/agents/how_to/handle_parsing_errors, but instead I still get the normal OutputParserError | `create_pandas_dataframe_agent` does not pass the `handle_parsing_error` variable into the underlying Agent. | https://api.github.com/repos/langchain-ai/langchain/issues/10045/comments | 6 | 2023-08-31T11:53:16Z | 2023-09-03T21:31:02Z | https://github.com/langchain-ai/langchain/issues/10045 | 1,875,375,970 | 10,045 |
[
"langchain-ai",
"langchain"
] | ### System Info
`langchain` - 0.0.267
`openai` - 0.27.7
MacOS 13.4.1 (22F82)
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
This code was working when I used `langchain 0.0.200` (The call is to `AzureOpenAI` endpoint)
```python
from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI(model_name=model_config.model_engine,
deployment_id=model_config.model_engine,
temperature=model_config.temperature,
max_tokens=model_config.max_tokens_for_request,
top_p=model_config.top_p,
openai_api_key=endpoint_config.api_key.secret,
api_base=endpoint_config.api_base,
api_type=endpoint_config.api_type,
api_version=endpoint_config.api_version)
chat_prompt = [SystemMessage(...), HumanMessage(...)]
response = llm(chat_prompt)
```
Once I updated `langchain` to `0.0.267`, I'm getting this error
```shell
openai.error.InvalidRequestError: Invalid URL (POST /v1/openai/deployments/gpt-35-turbo/chat/completions)
```
The endpoint itself is working (once I reverted back to `0.0.200` all started to work again)
### Expected behavior
I except to get an answer from the LLM | Getting invalid URL post after updating langchain from 0.0.200 to 0.0.267 | https://api.github.com/repos/langchain-ai/langchain/issues/10044/comments | 2 | 2023-08-31T11:17:31Z | 2023-12-07T16:05:30Z | https://github.com/langchain-ai/langchain/issues/10044 | 1,875,323,135 | 10,044 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'm using function calls to obtain and process data. I have two classes designed as follows:
```python
class GetDataInput:
user_input: str = Field(description="User input")
class ProcessDataInput:
data_url: str = Field(description="Link to the data")
```
Moreover, I'm employing the `ConversationBufferWindowMemory`.
While the initial question posed to the bot yields the expected response, I encounter an issue when asking a subsequent question. Occasionally, instead of fetching new data, the bot utilizes the previous `data_url`.
How can I effectively manage this kind of situation?
Thank you for your assistance.
### Suggestion:
_No response_ | Function calling use the wrong context | https://api.github.com/repos/langchain-ai/langchain/issues/10040/comments | 4 | 2023-08-31T10:12:09Z | 2023-12-07T16:05:35Z | https://github.com/langchain-ai/langchain/issues/10040 | 1,875,216,407 | 10,040 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
is this implemented?
https://github.com/langchain-ai/langchain/pull/1222/commits
### Suggestion:
_No response_ | streaming | https://api.github.com/repos/langchain-ai/langchain/issues/10038/comments | 2 | 2023-08-31T09:27:42Z | 2023-12-07T16:05:40Z | https://github.com/langchain-ai/langchain/issues/10038 | 1,875,139,090 | 10,038 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi allπ ,
I work in a network-isolated SageMaker environment. There I have hosted a llama 2.0 -7b chat Inference Endpoint (from [HF ](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf))
For a project I would like to work with LangChain.
Currently I am failing with the embeddings.
I can't use the HF embedding engine because of the network isolation.
hf_embedding = HuggingFaceInstructEmbeddings()
Alternatively I found a SageMaker embedding:
https://python.langchain.com/docs/integrations/text_embedding/sagemaker-endpoint
I used the same code from the docu:
```python
from typing import Dict, List
from langchain.embeddings import SagemakerEndpointEmbeddings
from langchain.embeddings.sagemaker_endpoint import EmbeddingsContentHandler
import json
class ContentHandler(EmbeddingsContentHandler):
content_type = "application/json"
accepts = "application/json"
def transform_input(self, inputs: list[str], model_kwargs: Dict) -> bytes:
"""
Transforms the input into bytes that can be consumed by SageMaker endpoint.
Args:
inputs: List of input strings.
model_kwargs: Additional keyword arguments to be passed to the endpoint.
Returns:
The transformed bytes input.
"""
# Example: inference.py expects a JSON string with a "inputs" key:
input_str = json.dumps({"inputs": inputs, **model_kwargs})
return input_str.encode("utf-8")
def transform_output(self, output: bytes) -> List[List[float]]:
"""
Transforms the bytes output from the endpoint into a list of embeddings.
Args:
output: The bytes output from SageMaker endpoint.
Returns:
The transformed output - list of embeddings
Note:
The length of the outer list is the number of input strings.
The length of the inner lists is the embedding dimension.
"""
# Example: inference.py returns a JSON string with the list of
# embeddings in a "vectors" key:
response_json = json.loads(output.read().decode("utf-8"))
return response_json["vectors"]`
content_handler = ContentHandler()
embeddings = SagemakerEndpointEmbeddings(
endpoint_name=ENDPOINT_NAME,
region_name=REGION_NAME,
content_handler=content_handler ,
)
query_result = embeddings.embed_query("foo")
```
But i get the following error:
`---------------------------------------------------------------------------
ModelError Traceback (most recent call last)
File /opt/conda/lib/python3.10/site-packages/langchain/embeddings/sagemaker_endpoint.py:153, in SagemakerEndpointEmbeddings._embedding_func(self, texts)
[...]
ModelError: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (422) from primary with message "Failed to deserialize the JSON body into the target type: inputs: invalid type: sequence, expected a string at line 1 column 11". See https://eu-central-1.console.aws.amazon.com/cloudwatch/home?xxxxx in account xxxxxx for more information.`
### Suggestion:
_No response_ | Problems with using LangChain Sagemaker Embedding for Llama 2.0 Inference Endpoint in Sagemaker | https://api.github.com/repos/langchain-ai/langchain/issues/10037/comments | 3 | 2023-08-31T09:09:17Z | 2024-01-18T20:18:04Z | https://github.com/langchain-ai/langchain/issues/10037 | 1,875,107,736 | 10,037 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
The idea is, that I have a vector store with a conversational retrieval chain.from_llm, and I want to create other functions such as send an email, etc. so that when a user queries for something, it determines if it should use the Conv retrieval chain or the other functions such as sending an email function, and it seems I need to use the router to achieve this, but when I do, I get a lot of errors such as
```
"default_chain -> prompt
field required (type=value_error.missing)
default_chain -> llm
field required (type=value_error.missing)
default_chain -> combine_docs_chain
extra fields not permitted (type=value_error.extra)
```
How do i integrate with conv retrieval chain with routers to achieve my purpose, the only examples i have seen do not use conv retrieval chain
### Suggestion:
_No response_ | Conversational Retrieval Chain.from_llm integration with router chains | https://api.github.com/repos/langchain-ai/langchain/issues/10035/comments | 2 | 2023-08-31T08:32:26Z | 2023-12-07T16:05:44Z | https://github.com/langchain-ai/langchain/issues/10035 | 1,875,045,957 | 10,035 |
[
"langchain-ai",
"langchain"
] | ### Feature request
while parsing LLM output with `pydantic_object` it would be nice to send to the `parse` function `context` object , like `pydantic` docs does here: https://docs.pydantic.dev/latest/usage/validators/#validation-context
### Motivation
In order to do validation on LLM output rather than just parse it, in some cases to do the validation we need some context.
### Your contribution
This change may require to support `pydantic` v2 not sure about backword capabilities.
in `PydanticOutputParser.parse` function insted of `parse_obj` we should use `model_validate` then we could send context object(it can be optional) to `model_validate`
from pydantic:
```python
@typing_extensions.deprecated(
'The `parse_obj` method is deprecated; use `model_validate` instead.', category=PydanticDeprecatedSince20
)
def parse_obj(cls: type[Model], obj: Any) -> Model: # noqa: D102
warnings.warn('The `parse_obj` method is deprecated; use `model_validate` instead.', DeprecationWarning)
return cls.model_validate(obj)
``` | context to pydantic_object | https://api.github.com/repos/langchain-ai/langchain/issues/10034/comments | 2 | 2023-08-31T08:23:33Z | 2023-12-07T16:05:50Z | https://github.com/langchain-ai/langchain/issues/10034 | 1,875,031,907 | 10,034 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I'm utilizing BaseMessagePromptTemplate with external DBs, for version control.
However, dumping BaseMessagePromptTemplate to json and loading to original Template is difficult, as retrieving exact type of message type from json is not possible.
Therefore, it would be useful to add _msg_type property on BaseMessagePromptTemplate,
like In [BasePromptTemplate dict method](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/schema/prompt_template.py#L108-L116).
### Motivation
Dumping ChatPromptTemplate to json and loading to original Template is difficult,
as retrieving exact type of MessageLike (Union[BaseMessagePromptTemplate, BaseMessage, BaseChatPromptTemplate]) property is difficult.
### Your contribution
If you allow me, I'd like to make a pull request regarding to this. | Add message type property method on BaseMessagePromptTemplate | https://api.github.com/repos/langchain-ai/langchain/issues/10033/comments | 1 | 2023-08-31T08:23:10Z | 2023-09-01T00:13:23Z | https://github.com/langchain-ai/langchain/issues/10033 | 1,875,031,189 | 10,033 |
[
"langchain-ai",
"langchain"
] | ### Feature request
When using `RetrievalQAWithSourcesChain`, the chain call can't accept `search_kwargs` to pass then to the retriever like this:
```python
response = chain({"question": query ,'search_kwargs': search_kwargs})
```
In particular, I tried with `Milvus` and `MilvusRetriever` and natively could't find a way.
### Motivation
Instead to create the same chain but with different `search_kwargs` to be sended to the retriever, it would be usefull to allow [optionally] the search_kwargs sendeed dynamically in the call.
### Your contribution
I could allow this behaevoir with the following modification:
Adding a `search_kwargs_key` to take `search_kwargs` that then will be sended to self.retriever.get_relevant_documents( ..., `**search_kwargs`,...).
```python
class customRetrievalQAWithSourcesChain(RetrievalQAWithSourcesChain):
search_kwargs_key:str = "search_kwargs"
def _get_docs(
self, inputs: Dict[str, Any], *, run_manager: CallbackManagerForChainRun
) -> List[Document]:
question = inputs[self.question_key]
search_kwargs = inputs[self.search_kwargs_key]
docs = self.retriever.get_relevant_documents(
question, **search_kwargs, callbacks=run_manager.get_child()
)
return self._reduce_tokens_below_limit(docs)
async def _aget_docs(
self, inputs: Dict[str, Any], *, run_manager: AsyncCallbackManagerForChainRun
) -> List[Document]:
question = inputs[self.question_key]
search_kwargs = inputs[self.search_kwargs_key]
docs = await self.retriever.aget_relevant_documents(
question,**search_kwargs, callbacks=run_manager.get_child()
)
return self._reduce_tokens_below_limit(docs)
```
And finally allowing to VectorStoreRetriever take `**search_kwargs` instead of `self.search_kwargs`
```python
class customRetriever(VectorStoreRetriever):
def _get_relevant_documents(
self, query: str, *, run_manager: CallbackManagerForRetrieverRun, **search_kwargs: Any,
) -> List[Document]:
if self.search_type == "similarity":
docs = self.vectorstore.similarity_search(query, **search_kwargs)
elif self.search_type == "similarity_score_threshold":
docs_and_similarities = (
self.vectorstore.similarity_search_with_relevance_scores(
query, **search_kwargs
)
)
docs = [doc for doc, _ in docs_and_similarities]
elif self.search_type == "mmr":
docs = self.vectorstore.max_marginal_relevance_search(
query, **search_kwargs
)
else:
raise ValueError(f"search_type of {self.search_type} not allowed.")
return docs
async def _aget_relevant_documents(
self, query: str, *, run_manager: AsyncCallbackManagerForRetrieverRun, **search_kwargs: Any,
) -> List[Document]:
if self.search_type == "similarity":
docs = await self.vectorstore.asimilarity_search(
query, **search_kwargs
)
elif self.search_type == "similarity_score_threshold":
docs_and_similarities = (
await self.vectorstore.asimilarity_search_with_relevance_scores(
query, **search_kwargs
)
)
docs = [doc for doc, _ in docs_and_similarities]
elif self.search_type == "mmr":
docs = await self.vectorstore.amax_marginal_relevance_search(
query, **search_kwargs
)
else:
raise ValueError(f"search_type of {self.search_type} not allowed.")
return docs
```
If you consider this usefull I could open a PR (confirm please). I not, maybe someone can find this usefull.
Best regards, | Dynamic "search_kwargs" during RetrievalQAWithSourcesChain call | https://api.github.com/repos/langchain-ai/langchain/issues/10031/comments | 2 | 2023-08-31T08:06:10Z | 2023-12-05T17:47:16Z | https://github.com/langchain-ai/langchain/issues/10031 | 1,875,002,896 | 10,031 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Does anyone created agent with tool which takes "dataframe" and "user_input" as input variables in langchain?
I do not want use dataframe agent which is there already in langchain. As I need pass further instruction in the prompt template.
### Suggestion:
_No response_ | Does anyone created agent with tool which takes "dataframe" and "user_input" as input variables in langchain? | https://api.github.com/repos/langchain-ai/langchain/issues/10028/comments | 3 | 2023-08-31T06:46:58Z | 2023-12-07T16:05:55Z | https://github.com/langchain-ai/langchain/issues/10028 | 1,874,888,777 | 10,028 |
[
"langchain-ai",
"langchain"
] | Can anyone tell me the difference between these two parameters? When setting up the ChatOpenAI model
| Issue: I dont know what the meaning of OPENAI_API_BASE and OPENAI_PROXY in ChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/10027/comments | 1 | 2023-08-31T06:35:39Z | 2023-08-31T08:32:27Z | https://github.com/langchain-ai/langchain/issues/10027 | 1,874,874,756 | 10,027 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello,
I am facing some issues with the code I am trying to run as it did run perfectly up until recently when an Error started being thrown out as AttributeError: 'list' object has no attribute 'embedding'. Below is the traceback if the error. Please do let me know if some code snippet excerpts will also be needed to facilitate debugging the code.
Traceback:
File "/home/ataliba/llm/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "/home/ataliba/LLM_Workshop/Experimental_Lama_QA_Retrieval/Andro_GPT_Llama2.py", line 268, in <module>
response = qa_chain.run(user_query, callbacks=[cb])
File "/home/ataliba/llm/lib/python3.10/site-packages/langchain/chains/base.py", line 481, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "/home/ataliba/llm/lib/python3.10/site-packages/langchain/chains/base.py", line 288, in __call__
raise e
File "/home/ataliba/llm/lib/python3.10/site-packages/langchain/chains/base.py", line 282, in __call__
self._call(inputs, run_manager=run_manager)
File "/home/ataliba/llm/lib/python3.10/site-packages/langchain/chains/conversational_retrieval/base.py", line 134, in _call
docs = self._get_docs(new_question, inputs, run_manager=_run_manager)
File "/home/ataliba/llm/lib/python3.10/site-packages/langchain/chains/conversational_retrieval/base.py", line 286, in _get_docs
docs = self.retriever.get_relevant_documents(
File "/home/ataliba/llm/lib/python3.10/site-packages/langchain/schema/retriever.py", line 208, in get_relevant_documents
raise e
File "/home/ataliba/llm/lib/python3.10/site-packages/langchain/schema/retriever.py", line 201, in get_relevant_documents
result = self._get_relevant_documents(
File "/home/ataliba/llm/lib/python3.10/site-packages/langchain/vectorstores/base.py", line 571, in _get_relevant_documents
docs = self.vectorstore.max_marginal_relevance_search(
File "/home/ataliba/llm/lib/python3.10/site-packages/langchain/vectorstores/docarray/base.py", line 197, in max_marginal_relevance_search
np.array(query_embedding), docs.embedding, k=k
### Suggestion:
_No response_ | Issue: AttributeError: 'list' object has no attribute 'embedding' | https://api.github.com/repos/langchain-ai/langchain/issues/10025/comments | 2 | 2023-08-31T05:40:08Z | 2023-12-07T16:06:00Z | https://github.com/langchain-ai/langchain/issues/10025 | 1,874,819,757 | 10,025 |
[
"langchain-ai",
"langchain"
] | ### Feature request
The class ErnieBotChat defined in libs/langchain/langchain/chat_models/ernie.py only supports 2 models, which are ERNIE-Bot-turbo and ERNIE-Bot. While a bunch of new models are supported by BCE (Baidu Cloud Engine), such as llama-2-7b chat (https://cloud.baidu.com/doc/WENXINWORKSHOP/s/Rlki1zlai). It would be better that this class supported more models.
### Motivation
When I was using ErnieBotChat, I've found that ErnieBotChat cannot recognize models other than ERNIE-Bot-turbo and ERNIE-Bot. Instead, it raised an error "Got unknown model_name {self.model_name}".
### Your contribution
If possible, I would be happy to help resolve this issue. I plan to add all models BCE (Baidu Cloud Engine) supports at this time(2023-8-31). The fixes will be simple, just add more cases around line 106 in the file libs/langchain/langchain/chat_models/ernie.py from the master branch. | Support more models for ErnieBotChat | https://api.github.com/repos/langchain-ai/langchain/issues/10022/comments | 3 | 2023-08-31T05:22:17Z | 2023-12-13T16:07:03Z | https://github.com/langchain-ai/langchain/issues/10022 | 1,874,805,344 | 10,022 |
[
"langchain-ai",
"langchain"
] | ### System Info
MacOS M2 13.4.1 (22F82)
### Who can help?
@eyurtsev
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce behaviour:
1. Run the [tutorial](https://python.langchain.com/docs/integrations/document_loaders/youtube_audio) with the default parameters `save_dir = "~/Downloads/YouTube"`
2. After calling `docs = loader.load()` the docs will be empty
I have implemented a dummy fix for the interim.
The error is here in this file: from langchain.document_loaders.blob_loaders.youtube_audio import YoutubeAudioLoader
`YouTubeAudioLoader.yield_blobs` method
loader = FileSystemBlobLoader(self.save_dir, glob="*.m4a")
```
# This doesn't always work (MacOS)
loader = FileSystemBlobLoader(self.save_dir, glob="*.m4a")
```
The reason it doesn't work is that it's trying to use ~/Downloads/YouTube.
The fix I propose is either:
- Use the FULL file path in `save_dir` in the tutorial.
- Replace the problematic line with this, so that it finds the actual directory, even if you prefer to use `~` for specifying file paths.
```
loader = FileSystemBlobLoader(os.path.expanduser(self.save_dir), glob="*.m4a")
```
### Expected behavior
There should be documents in the loader.load() variable.
### My Fix
```
# Yield the written blobs
"""
you could fix save_dir like this...
(old)
save_dir = "~/Downloads/YouTube"
(new)
"/Users/shawnesquivel/Downloads/YouTube"
"""
# This doesn't always work (MacOS)
loader = FileSystemBlobLoader(self.save_dir, glob="*.m4a")
# This works
loader = FileSystemBlobLoader(os.path.expanduser(self.save_dir), glob="*.m4a")
```
| fix: Loading documents from a Youtube Url | https://api.github.com/repos/langchain-ai/langchain/issues/10019/comments | 1 | 2023-08-31T03:19:25Z | 2023-12-07T16:06:10Z | https://github.com/langchain-ai/langchain/issues/10019 | 1,874,719,531 | 10,019 |
[
"langchain-ai",
"langchain"
] | @dosu-bot
The issue is that the "string indices must be indices", but your transformation does not deal with that. Here is the curretn code, please see below and change to avoid this error:
```from langchain.docstore.document import Document
from typing import Dict
from langchain import PromptTemplate, SagemakerEndpoint
from langchain.llms.sagemaker_endpoint import LLMContentHandler
from langchain.chains.question_answering import load_qa_chain
import json
example_doc_1 = """
string
"""
docs = [
Document(
page_content=example_doc_1,
)
]
query = """
prompt
"""
prompt_template = """Use the following pieces of context to answer the question at the end.
{context}
Question: {question}
Answer:"""
PROMPT = PromptTemplate(
template=prompt_template, input_variables=["context", "question"]
)
class ContentHandler(LLMContentHandler):
content_type = "application/json"
accepts = "application/json"
def transform_input(self, prompt: str, model_kwargs: Dict) -> bytes:
input_dict = {"inputs": prompt, "parameters": model_kwargs}
return json.dumps(input_dict).encode('utf-8')
def transform_output(self, output: bytes) -> str:
response_json = json.loads(output.read().decode("utf-8"))
print("output: ", response_json[0])
return response_json[0]["generation"]
content_handler = ContentHandler()
chain = load_qa_chain(
llm=SagemakerEndpoint(
endpoint_name="endpointname",
credentials_profile_name="profilename",
region_name="us-east-1",
model_kwargs={"temperature": 1e-10, "max_length":500},
endpoint_kwargs={"CustomAttributes": "accept_eula=true"},
content_handler=content_handler,
),
prompt=PROMPT,
)
chain({"input_documents": docs, "question": query}, return_only_outputs=True)```
_Originally posted by @maggarwal25 in https://github.com/langchain-ai/langchain/issues/10012#issuecomment-1700300547_
| @dosu-bot | https://api.github.com/repos/langchain-ai/langchain/issues/10017/comments | 7 | 2023-08-31T03:01:34Z | 2023-12-07T16:06:15Z | https://github.com/langchain-ai/langchain/issues/10017 | 1,874,707,605 | 10,017 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.277
azure-search-documents 11.4.0b8
python 3.1.0.11
### Who can help?
@baskaryan
@ruoccofabrizio
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Repro steps:
1. I took the code in https://github.com/langchain-ai/langchain/blob/master/docs/extras/integrations/vectorstores/azuresearch.ipynb and placed it in a python file and ran the chunk under "Create a new index with a Scoring profile"
2. I executed the code and got the following scores:

3. I went into the Azure Search Index and adjusted the "scoring_profile" to have a boost of 1000 instead of 100 and got the exact same scores.
### Expected behavior
I expected all of the score to be 10 times larger than the scores I got. After much experimentation I do not believe that scoring profiles would with a vector search if a search term is specified. If "None" is specified the behavior is correct. Change the last line of the example to:
`
res = vector_store.similarity_search(query="Test 1", k=3, search_type="similarity")`
And the results respect the Scoring profile and behave as expected when the scoring profile is changed. | Azure Cognitive Search Scoring Profile does not work as documented | https://api.github.com/repos/langchain-ai/langchain/issues/10015/comments | 5 | 2023-08-31T02:30:27Z | 2023-12-08T16:05:06Z | https://github.com/langchain-ai/langchain/issues/10015 | 1,874,685,238 | 10,015 |
[
"langchain-ai",
"langchain"
] | ### System Info
latest versions for python and langchain
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
I used the demo code to run Sagemaker using Langchain for llama-2-7b-f
However, I'm getting the following issues:
```
ValueError: Error raised by inference endpoint: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (424) from primary with message "{
"code":424,
"message":"prediction failure",
"error":"string indices must be integers"
}
```
This is what is shown from AWS Logs:
`[INFO ] PyProcess - W-80-model-stdout: [1,0]<stdout>:TypeError: string indices must be integers`
How do I resolve this
### Expected behavior
Shown:
```
ValueError: Error raised by inference endpoint: An error occurred (ModelError) when calling the InvokeEndpoint operation: Received client error (424) from primary with message "{
"code":424,
"message":"prediction failure",
"error":"string indices must be integers"
}
```
| Issues with llama-2-7b-f | https://api.github.com/repos/langchain-ai/langchain/issues/10012/comments | 11 | 2023-08-30T22:43:26Z | 2024-03-18T16:05:19Z | https://github.com/langchain-ai/langchain/issues/10012 | 1,874,476,223 | 10,012 |
[
"langchain-ai",
"langchain"
] | ### System Info
The latest langchain version
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have written a script that initializes a chat which has access to a faiss index. I want to pass in a system prompt to the conversational agent. How do I add a system prompt to the conversational agent while making sure that the chat history is passed in and not stored and updated in memory. The code below shows details
```
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
from langchain.chat_models import AzureChatOpenAI
from langchain.chains import ConversationalRetrievalChain
from langchain.vectorstores import FAISS
from dotenv import load_dotenv
from langchain.schema import SystemMessage
load_dotenv()
import os
system_message = SystemMessage(
content="You are an AI that always writes text backwards e.g. 'hello' becomes 'olleh'."
)
embeddings = OpenAIEmbeddings(deployment="embeddings",
temperature=0,
model="text-embedding-ada-002",
openai_api_type="azure",
openai_api_version="2023-03-15-preview",
openai_api_base="https://azcore-kusto-bot-openai.openai.azure.com/",
openai_api_key=os.getenv("OPENAI_API_KEY"), chunk_size = 1)
vectorstore = FAISS.load_local("faiss_index", embeddings)
prompt = "Your name is AzCoreKustoCopilot"
llm=AzureChatOpenAI(deployment_name="16k-gpt-35",
model="gpt-35-turbo-16k",
openai_api_type="azure",
openai_api_version="2023-03-15-preview",
openai_api_base="https://azcore-kusto-bot-openai.openai.azure.com/",
openai_api_key=os.getenv("OPENAI_API_KEY"))
retriever = vectorstore.as_retriever()
def initialize_chat():
# retriever = vectorstore.as_retriever()
chat = ConversationalRetrievalChain.from_llm(llm, retriever=retriever)
print('this is chat', chat)
return chat
def answer_query(chat, user_query, chat_history):
"""
user_query is the question
chat_history is a list of lists: [[prev_prev_query, prev_prev_response], [prev_query, prev_response]]
we convert this into a list of tuples
"""
chat_history_tups = [tuple(el) for el in chat_history]
print(f"user_query:{user_query}, chat_history: {chat_history_tups}")
result = chat({"question":user_query, "chat_history": chat_history_tups})
return result["answer"]
```
### Expected behavior
There is no way to pass in a system message and chat history to a ConversationalRetrievalChain. Why is that? | No support for ConversationalRetrievalChain with passing in memory and system message | https://api.github.com/repos/langchain-ai/langchain/issues/10011/comments | 6 | 2023-08-30T22:14:38Z | 2024-01-30T00:42:39Z | https://github.com/langchain-ai/langchain/issues/10011 | 1,874,448,914 | 10,011 |
[
"langchain-ai",
"langchain"
] | ### Feature request
An agent for searching iteratively multiple documents without the problem of processing incomplete document chunks.
An option to include the metadata (source references) in the prompt.
### Motivation
Normally documents are split into chunks before being added to Chroma
When the data is queried, Chroma returns these chunks of incomplete documents and feed them to the prompt.
Thus, the LLM sometimes is not provided with the complete information and will fail to answer.
This is a big problem, especially when the split occurs in the middle of a list (eg: a text listing the 10 commandments of god).
The LLM won't have a chance to know they are 10.
Besides, LangChain "stuff" retriever is just mixing all these chunks together, without even separating them nor adding the documents metadata of each chunk. Mixing different sentences could confuse the LLM.
If this can be solved using document_prompt templates, this should be added to the documentation.
I would also expect to include document souces into the prompt, so the LLM and provide the used sources (not all sources retrieved by the Chroma query).
I blieve the queries should be processed by an agent with the ability to detect when there may be missing previous and/or following chunks in order to fetch them in a subsequent iteration if required.
### Your contribution
I can help coding and testing, but I need feedback for the design and to know which other existing componentes could/should be used. | Solve the problem of working with incomplete document chunks and multiple documents | https://api.github.com/repos/langchain-ai/langchain/issues/9996/comments | 9 | 2023-08-30T14:58:47Z | 2024-02-09T02:13:41Z | https://github.com/langchain-ai/langchain/issues/9996 | 1,873,859,050 | 9,996 |
[
"langchain-ai",
"langchain"
] | ### Feature request
```
from langchain.chat_models import ChatOpenAI
from langchain.chains import GraphCypherQAChain
from langchain.graphs import Neo4jGraph
graph = Neo4jGraph(
url="bolt://localhost:7687", username="neo4j", password="pleaseletmein"
)
chain = GraphCypherQAChain.from_llm(ChatOpenAI(temperature=0), graph=graph, verbose=True)
print(chain.run("Question to the graph"))
```
In the above Code, How can I pass my custom prompt as promptTemplate? Please give me an example
### Motivation
Custom prompt support in Knowledge Graph QA
### Your contribution
Custom prompt support in Knowledge Graph QA | How to pass a custom prompt in graphQA or GraphCypherQA chain | https://api.github.com/repos/langchain-ai/langchain/issues/9993/comments | 5 | 2023-08-30T13:42:34Z | 2024-05-04T13:18:22Z | https://github.com/langchain-ai/langchain/issues/9993 | 1,873,711,576 | 9,993 |
[
"langchain-ai",
"langchain"
] | ### System Info
0.0.276
### Who can help?
@hwchase17
@agola11
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have deployed llama2 in Azure ML endpoint and when I test it with langchain code I got a 404 Error. However using standard request library from python it works
Below : askdocuments2 ---> no lanchain
askdocuments --->langchain
same endpoint url, same key.
```
def askdocuments2(
question):
# Request data goes here
# The example below assumes JSON formatting which may be updated
# depending on the format your endpoint expects.
# More information can be found here:
# https://docs.microsoft.com/azure/machine-learning/how-to-deploy-advanced-entry-script
formatter = LlamaContentFormatter()
data = formatter.format_request_payload(prompt=question, model_kwargs={"temperature": 0.1, "max_tokens": 300})
body = data
url = 'https://llama-2-7b-test.westeurope.inference.ml.azure.com/score'
# Replace this with the primary/secondary key or AMLToken for the endpoint
api_key = ''
if not api_key:
raise Exception("A key should be provided to invoke the endpoint")
# The azureml-model-deployment header will force the request to go to a specific deployment.
# Remove this header to have the request observe the endpoint traffic rules
headers = {'Content-Type': 'application/json', 'Authorization': ('Bearer ' + api_key), 'azureml-model-deployment': 'llama'}
req = urllib.request.Request(url, body, headers)
try:
response = urllib.request.urlopen(req)
result = response.read()
decoded_data = json.loads(result.decode('utf-8'))
text = decoded_data[0]["0"]
return text
except urllib.error.HTTPError as error:
print("The request failed with status code: " + str(error.code))
# Print the headers - they include the requert ID and the timestamp, which are useful for debugging the failure
print(error.info())
print(error.read().decode("utf8", 'ignore'))
def askdocuments(
question):
try:
content_formatter = LlamaContentFormatter()
llm = AzureMLOnlineEndpoint(
endpoint_api_key="",
deployment_name="llama-2-7b-test",
endpoint_url="https://llama-2-7b-test.westeurope.inference.ml.azure.com/score",
model_kwargs={"temperature": 0.8, "max_tokens": 300},
content_formatter=content_formatter
)
formatter_template = "Write a {word_count} word essay about {topic}."
prompt = PromptTemplate(
input_variables=["word_count", "topic"], template=formatter_template
)
chain = LLMChain(llm=llm, prompt=prompt)
response = chain.run({"word_count": 100, "topic": "how to make friends"})
return response
except requests.exceptions.RequestException as e:
# Handle any requests-related errors (e.g., network issues, invalid URL)
raise ValueError(f"Error with the API request: {e}")
except json.JSONDecodeError as e:
# Handle any JSON decoding errors (e.g., invalid JSON format)
raise ValueError(f"Error decoding API response as JSON: {e}")
except Exception as e:
# Handle any other errors
raise ValueError(f"Error: {e}")
```
### Expected behavior
According to the documentation I am doing everything correctly, so not sure why its showing a 404 error in a valid url | AzureMLOnlineEndpoint not working, 404 error, but same url and api key works with standard http | https://api.github.com/repos/langchain-ai/langchain/issues/9987/comments | 5 | 2023-08-30T08:45:03Z | 2023-11-03T14:14:44Z | https://github.com/langchain-ai/langchain/issues/9987 | 1,873,221,947 | 9,987 |
[
"langchain-ai",
"langchain"
] | ### System Info
I tried to use langchain with Azure Cognitive Search as vector store and got the following Import Error.
langchain version: 0.0.276
azure documents: 11.4.0b8
python version: 3.8
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I tried to use the Azure Cognitive Search as vector store
```
model: str = "text-embedding-ada-002"
search_service = os.environ["AZURE_COGNITIVE_SEARCH_SERVICE_NAME"]
search_api_key = os.environ["AZURE_COGNITIVE_SEARCH_API_KEY"]
vector_store_address: str = f"https://{search_service}.search.windows.net"
vector_store_password: str = search_api_key
# define embedding model for calculating the embeddings
model: str = "text-embedding-ada-002"
embeddings: OpenAIEmbeddings = OpenAIEmbeddings(deployment=model, chunk_size=1)
embedding_function = embeddings.embed_query
# define schema of the json file stored on the index
fields = [
SimpleField(
name="id",
type=SearchFieldDataType.String,
key=True,
filterable=True,
),
SearchableField(
name="content",
type=SearchFieldDataType.String,
searchable=True,
),
SearchField(
name="content_vector",
type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
searchable=True,
vector_search_dimensions=len(embedding_function("Text")),
vector_search_configuration="default",
),
SearchableField(
name="metadata",
type=SearchFieldDataType.String,
searchable=True,
),
]
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=vector_store_address,
azure_search_key=vector_store_password,
index_name=index_name,
embedding_function=embedding_function,
fields=fields,
)
```
And I got the following import error
```
Cell In[17], line 72, in azure_search_by_index(question, index_name)
21 # define schema of the json file stored on the index
22 fields = [
23 SimpleField(
24 name="id",
(...)
69 ),
70 ]
---> 72 vector_store: AzureSearch = AzureSearch(
73 azure_search_endpoint=vector_store_address,
74 azure_search_key=vector_store_password,
75 index_name=index_name,
76 embedding_function=embedding_function,
77 fields=fields,
78 )
80 relevant_documentation = vector_store.similarity_search(query=question, k=1, search_type="similarity")
82 context = "\n".join([doc.page_content for doc in relevant_documentation])[:10000]
File /anaconda/envs/jupyter_env/lib/python3.8/site-packages/langchain/vectorstores/azuresearch.py:234, in AzureSearch.__init__(self, azure_search_endpoint, azure_search_key, index_name, embedding_function, search_type, semantic_configuration_name, semantic_query_language, fields, vector_search, semantic_settings, scoring_profiles, default_scoring_profile, **kwargs)
232 if "user_agent" in kwargs and kwargs["user_agent"]:
233 user_agent += " " + kwargs["user_agent"]
--> 234 self.client = _get_search_client(
235 azure_search_endpoint,
236 azure_search_key,
237 index_name,
238 semantic_configuration_name=semantic_configuration_name,
239 fields=fields,
240 vector_search=vector_search,
241 semantic_settings=semantic_settings,
242 scoring_profiles=scoring_profiles,
243 default_scoring_profile=default_scoring_profile,
244 default_fields=default_fields,
245 user_agent=user_agent,
246 )
247 self.search_type = search_type
248 self.semantic_configuration_name = semantic_configuration_name
File /anaconda/envs/jupyter_env/lib/python3.8/site-packages/langchain/vectorstores/azuresearch.py:83, in _get_search_client(endpoint, key, index_name, semantic_configuration_name, fields, vector_search, semantic_settings, scoring_profiles, default_scoring_profile, default_fields, user_agent)
81 from azure.search.documents import SearchClient
82 from azure.search.documents.indexes import SearchIndexClient
---> 83 from azure.search.documents.indexes.models import (
84 HnswVectorSearchAlgorithmConfiguration,
85 PrioritizedFields,
86 SearchIndex,
87 SemanticConfiguration,
88 SemanticField,
89 SemanticSettings,
90 VectorSearch,
91 )
93 default_fields = default_fields or []
94 if key is None:
ImportError: cannot import name 'HnswVectorSearchAlgorithmConfiguration' from 'azure.search.documents.indexes.models' (/anaconda/envs/jupyter_env/lib/python3.8/site-packages/azure/search/documents/indexes/models/__init__.py)
```
### Expected behavior
No import error | ImportError: cannot import name 'HnswVectorSearchAlgorithmConfiguration' from 'azure.search.documents.indexes.models' | https://api.github.com/repos/langchain-ai/langchain/issues/9985/comments | 9 | 2023-08-30T08:07:19Z | 2024-05-07T16:04:48Z | https://github.com/langchain-ai/langchain/issues/9985 | 1,873,154,024 | 9,985 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
import boto3
from langchain.retrievers import AmazonKendraRetriever
retriever = AmazonKendraRetriever(index_id="xxxxxxxxxxxxxxxxxxxx")
retriever.get_relevant_documents("what is the tax")
```
**Facing the below error**
----------------------------------------
```
AttributeError Traceback (most recent call last)
Cell In[48], line 9
5 # retriever = AmazonKendraRetriever(kendraindex='dfba3dce-b6eb-4fec-b98c-abe17a58cf30',
6 # awsregion='us-east-1',
7 # return_source_documents=True)
8 retriever = AmazonKendraRetriever(index_id="7835d77a-470b-4545-9613-508ed8fe82d3")
----> 9 retriever.get_relevant_documents("What is dog")
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/schema/retriever.py:208, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, **kwargs)
206 except Exception as e:
207 run_manager.on_retriever_error(e)
--> 208 raise e
209 else:
210 run_manager.on_retriever_end(
211 result,
212 **kwargs,
213 )
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/schema/retriever.py:201, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, **kwargs)
199 _kwargs = kwargs if self._expects_other_args else {}
200 if self._new_arg_supported:
--> 201 result = self._get_relevant_documents(
202 query, run_manager=run_manager, **_kwargs
203 )
204 else:
205 result = self._get_relevant_documents(query, **_kwargs)
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/retrievers/kendra.py:421, in AmazonKendraRetriever._get_relevant_documents(self, query, run_manager)
407 def _get_relevant_documents(
408 self,
409 query: str,
410 *,
411 run_manager: CallbackManagerForRetrieverRun,
412 ) -> List[Document]:
413 """Run search on Kendra index and get top k documents
414
415 Example:
(...)
419
420 """
--> 421 result_items = self._kendra_query(query)
422 top_k_docs = self._get_top_k_docs(result_items)
423 return top_k_docs
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/langchain/retrievers/kendra.py:390, in AmazonKendraRetriever._kendra_query(self, query)
387 if self.user_context is not None:
388 kendra_kwargs["UserContext"] = self.user_context
--> 390 response = self.client.retrieve(**kendra_kwargs)
391 r_result = RetrieveResult.parse_obj(response)
392 if r_result.ResultItems:
File ~/anaconda3/envs/python3/lib/python3.10/site-packages/botocore/client.py:875, in BaseClient._getattr_(self, item)
872 if event_response is not None:
873 return event_response
--> 875 raise AttributeError(
876 f"'{self._class.name_}' object has no attribute '{item}'"
877 )
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Create an kendra index and then use it to run the below code.
import boto3
from langchain.retrievers import AmazonKendraRetriever
retriever = AmazonKendraRetriever(index_id="xxxxxxxxxxxxxxxxxxxx")
retriever.get_relevant_documents("what is the tax")
### Expected behavior
Fetch the document from the index and retrun it like it was happening in previous version. | AttributeError: 'kendra' object has no attribute 'retrieve' | https://api.github.com/repos/langchain-ai/langchain/issues/9982/comments | 5 | 2023-08-30T07:14:43Z | 2024-01-26T18:57:38Z | https://github.com/langchain-ai/langchain/issues/9982 | 1,873,068,703 | 9,982 |
[
"langchain-ai",
"langchain"
] | ### System Info
First, Thank you so much for your work on Langchain, it's very good.
I am trying to compare two documents following the guide from langchain https://python.langchain.com/docs/integrations/toolkits/document_comparison_toolkit
I have done exactly the same code:
I have one class to use for the args_schema on the tools creation:
```
class DocumentInput(BaseModel):
question: str = Field()
```
I have created the tools :
```
tools.append(
Tool(
args_schema=DocumentInput,
name=file_name,
description=f"useful when you want to answer questions about {file_name}",
func=RetrievalQA.from_chain_type(llm=llm, retriever=retriever)
)
)
```
```
agent = initialize_agent(
agent=AgentType.OPENAI_FUNCTIONS,
tools=tools,
llm=llm,
verbose=True,
)
```
And here I am getting the error:
"1 validation error for Tool\nargs_schema\n subclass of BaseModel expected (type=type_error.subclass; expected_class=BaseModel)",
I have changed the args_schema class to :
```
from abc import ABC
from langchain.tools import BaseTool
from pydantic import Field
class DocumentInput(BaseTool, ABC):
question: str = Field()
```
And now the error, I am getting is:
("Value not declarable with JSON Schema, field: name='_callbacks_List[langchain.callbacks.base.BaseCallbackHandler]' type=BaseCallbackHandler required=True",)
I only want to compare the content between two documents, Do you have some example to compare two files which is working? Maybe I am calling wrong the creation of the tools.
### Who can help?
@yajunDai
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I have follow exactly the guide for ->https://python.langchain.com/docs/integrations/toolkits/document_comparison_toolkit to compare two documents, and I am getting the error :
"1 validation error for Tool\nargs_schema\n subclass of BaseModel expected (type=type_error.subclass; expected_class=BaseModel)",
The args_schema class is :
```
class DocumentInput(BaseModel):
question: str = Field()
```
I trying to create the tools :
```
tools.append(
Tool(
args_schema=DocumentInput,
name=file_name,
description=f"useful when you want to answer questions about {file_name}",
func=RetrievalQA.from_chain_type(llm=llm, retriever=retriever)
)
)
```
and here I am getting the error:
"1 validation error for Tool\nargs_schema\n subclass of BaseModel expected (type=type_error.subclass; expected_class=BaseModel)",
After that I am doing this, but I can't get to initialize_agent
```
agent = initialize_agent(
agent=AgentType.OPENAI_FUNCTIONS,
tools=tools,
llm=llm,
verbose=True,
)
```
It's exactly the guide for the Document comparsion on Langchain
### Expected behavior
Expected behaviour, that I can compare the content for two documents without error | Document Comparison toolkit is not working | https://api.github.com/repos/langchain-ai/langchain/issues/9981/comments | 15 | 2023-08-30T06:02:22Z | 2024-06-22T16:34:20Z | https://github.com/langchain-ai/langchain/issues/9981 | 1,872,965,623 | 9,981 |
[
"langchain-ai",
"langchain"
] | ### Feature request
When using Chroma vector store, the stored documents can only be retrieved when using a search query. There is no method which allows to load all documents.
### Motivation
The closest thing to retrieving all documents is using
`vectordb._collection.get()` for which the output is a dictionary instead of document. This prevents filtering based on other retrievers which Chroma does not support like TF-IDF and BM25.
### Your contribution
Not yet. | Chroma - retrieve all documents | https://api.github.com/repos/langchain-ai/langchain/issues/9980/comments | 2 | 2023-08-30T05:44:19Z | 2024-05-15T05:58:55Z | https://github.com/langchain-ai/langchain/issues/9980 | 1,872,945,320 | 9,980 |
[
"langchain-ai",
"langchain"
] | ### System Info
colab notebook
### Who can help?
@hwchase17 @agola
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
getting error: ---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-25-74e62d788fe7> in <cell line: 9>()
7 import openai
8 from langchain.chains import LLMBashChain, LLMChain, RetrievalQA, SimpleSequentialChain
----> 9 from langchain.chains.summarize import load_summarize_chain
10 from langchain.chat_models import ChatOpenAI
11 from langchain.docstore.document import Document
/usr/local/lib/python3.10/dist-packages/langchain/chains/summarize/init.py in <module>
9 from langchain.chains.summarize import map_reduce_prompt, refine_prompts, stuff_prompt
10 from langchain.prompts.base import BasePromptTemplate
---> 11 from langchain.schema import BaseLanguageModel
12
13
ImportError: cannot import name 'BaseLanguageModel' from 'langchain.schema' (/usr/local/lib/python3.10/dist-packages/langchain/schema/init.py)
---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------
```
```
https://colab.research.google.com/drive/1xH7coRd2AnZFdejGQ2nyJWuNrTNMdvSa?usp=sharing
```
### Expected behavior
basic issues running the libraries | mportError: cannot import name 'BaseLanguageModel' from 'langchain.schema' (/usr/local/lib/python3.10/dist-packages/langchain/schema/init.py) | https://api.github.com/repos/langchain-ai/langchain/issues/9977/comments | 2 | 2023-08-30T04:54:31Z | 2023-12-06T17:43:10Z | https://github.com/langchain-ai/langchain/issues/9977 | 1,872,899,721 | 9,977 |
[
"langchain-ai",
"langchain"
] | ### Feature request
The generative Agents demo shows the generic individual conversation.
There are 2 features that I need clarity on. If these exist (if yes how to implement the same) or need to be worked on:
1. Tool integration of the agents. If the agents can integrate to some tool like a calendar or a clock to get the real time actions.
2. The agents wait for the response or the action to be completed by the other agent. Currently the agents reply to the response which is mostly a text response. The requirement here being is that the agents wait for the other agent's action to be completed which is non textual.
### Motivation
The motivation here is the autonomous world simulation that hoes beyond the textual conversation between the agents.
Tool integration can bring in more realistic solutions to come into picture.
### Your contribution
I have read the paper and know the approach.
I can contribute on the development of the feature of that does not exist yet. | Generative Agents in LangChain Tool Integration | https://api.github.com/repos/langchain-ai/langchain/issues/9976/comments | 14 | 2023-08-30T04:43:59Z | 2023-12-14T16:06:18Z | https://github.com/langchain-ai/langchain/issues/9976 | 1,872,890,614 | 9,976 |
[
"langchain-ai",
"langchain"
] | ### Feature request
GCS blobs can have custom metadata defined either in google console or programatic way as shown below
```
from google.cloud import storage
storage_client = storage.Client()
bucket = storage_client.bucket(bucket_name)
blob = bucket.blob(destination_blob_name)
metadata = {'from_source': 'https://localhost', 'genre': 'sci-fi'}
blob.metadata = metadata
blob.upload_from_filename("/home/jupyter/svc-mlp-staging.json", if_generation_match=0)
```
GCSFileLoader can get blob's metadata if present and populate the same to document's metadata before returning docs.
### Motivation
1. This feature can help load the documents with the required custom metadata.
2. Splits (based on splitter) and vector embeddings of splits will be identified by custom metadata in vector store
### Your contribution
Interested in taking this up | GCSFileLoader need to read blob's metadata and populate it to documents metadata | https://api.github.com/repos/langchain-ai/langchain/issues/9975/comments | 4 | 2023-08-30T03:58:52Z | 2024-02-11T16:15:26Z | https://github.com/langchain-ai/langchain/issues/9975 | 1,872,845,768 | 9,975 |
[
"langchain-ai",
"langchain"
] | ### System Info
Platform: MacOS 13.5
Python Version: 3.10.11
Langchain: 0.0.257
Azure Search: 1.0.0b2
Azure Search Documents: 11.4.0b6
Openai: 0.27.8
Issue: AttributeError: module 'azure.search.documents.indexes.models._edm' has no attribute 'Single'
Code: `az_search = AzureSearch(azure_search_endpoint=os.getenv('AZURE_COGNITIVE_SEARCH_SERVICE_NAME'),
azure_search_key=os.getenv('AZURE_COGNITIVE_SEARCH_API_KEY'),
index_name=os.getenv('AZURE_COGNITIVE_SEARCH_INDEX_NAME'),
embedding_function=embeddings.embed_query)`
I read from another post here: [https://github.com/langchain-ai/langchain/issues/8917](url) that this is caused by a version miss match. So I downgraded my Azure search-documents package version to 11.4.0b6. But the same error occurred.
I've also tried using langchin==0.0.245 or langchain==0.0.247 but didn't solve the issue.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
os.environ["AZURE_COGNITIVE_SEARCH_INDEX_NAME"]="vectordb001index"
embeddings = OpenAIEmbeddings(deployment_id="vectorDBembedding", chunk_size=1)
az_search = AzureSearch(azure_search_endpoint=os.getenv('AZURE_COGNITIVE_SEARCH_SERVICE_NAME'),
azure_search_key=os.getenv('AZURE_COGNITIVE_SEARCH_API_KEY'),
index_name=os.getenv('AZURE_COGNITIVE_SEARCH_INDEX_NAME'),
embedding_function=embeddings.embed_query)
```
**Error Output**
```
File [~/Desktop/.../azuresearch.py:221](https://file+.vscode-resource.vscode-cdn.net/.../azuresearch.py:221), in AzureSearch.__init__(self, azure_search_endpoint, azure_search_key, index_name, embedding_function, search_type, semantic_configuration_name, semantic_query_language, fields, vector_search, semantic_settings, scoring_profiles, default_scoring_profile, **kwargs)
206 # Initialize base class
207 self.embedding_function = embedding_function
208 default_fields = [
209 SimpleField(
210 name=FIELDS_ID,
211 type=SearchFieldDataType.String,
212 key=True,
213 filterable=True,
214 ),
215 SearchableField(
216 name=FIELDS_CONTENT,
217 type=SearchFieldDataType.String,
218 ),
...
230 ]
231 user_agent = "langchain"
232 if "user_agent" in kwargs and kwargs["user_agent"]:
AttributeError: module 'azure.search.documents.indexes.models._edm' has no attribute 'Single'
```
### Expected behavior
Program running with no error interruption | azure.search.documents.indexes.models._edm no attribute "Single" under Langchain.AzureSearch() | https://api.github.com/repos/langchain-ai/langchain/issues/9973/comments | 5 | 2023-08-30T03:14:22Z | 2024-02-14T16:11:03Z | https://github.com/langchain-ai/langchain/issues/9973 | 1,872,811,739 | 9,973 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.11.4
langchain-0.0.276
currently troubleshooting on a Windows 11 workstation in a notebook in VSCode.
### Who can help?
@hwchase17 @ag
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm trying to use a structured output chain with some defined Pydantic classes. The chain works when using ChatOpenAI model but not AzureChatOpenAI. I'm able to generate non-chained chat completions with AzureChatOpenAI, so I'm pretty confident the issue isn't with my configuration of the AzureChatOpenAI model.
Representative code:
```
# ChatOpenAI using native OpenAI endpoint
openai_model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
# AzureChatOpenAI using private endpoint through Azure OpenAI Service
azure_model = AzureChatOpenAI(
openai_api_base="[hiding my api base]",
openai_api_version="2023-07-01-preview",
deployment_name="[hiding my model deployment name]",
model_name = "[hiding my model name]",
openai_api_key="[hiding my API key]",
openai_api_type="azure",
temperature = 1
)
# define representative singleton class
class GeneratedItem(BaseModel):
"""Information about a generated item."""
item_nickname: str = Field(..., description="A creative nickname for an item that might be found in some room")
item_purpose: str = Field(..., description="The purpose of the item")
# define the plural class as a Sequence of the above singletons
class GeneratedItems(BaseModel):
"""information about all items generated"""
items: Sequence[GeneratedItem] = Field(..., description="A sequence of items")
# define messages for a prompt
prompt_msgs = [
SystemMessage(
content="You are a world class algorithm for generating information in structured formats."
),
HumanMessage(
content="Use the given format to generate 2 items that might be in a room described as follows"
),
HumanMessagePromptTemplate.from_template("{input}"),
HumanMessage(content="Tips: Make sure to answer in the correct format"),
]
# define the prompt using ChatPromptTemplate
prompt = ChatPromptTemplate(messages=prompt_msgs)
# define and execute structured output chain with ChatOpenAI model
chain1 = create_structured_output_chain(GeneratedItems, openai_model, prompt, verbose=False)
chain1.run("A living room with green chairs and a wooden coffee table")
# define and execute structued output chain with AzureChatOpenAI model
chain2 = create_structured_output_chain(GeneratedItems, azure_model, prompt, verbose=False)
chain2.run("A living room with green chairs and a wooden coffee table")
```
### Expected behavior
In the above code, the `chain1.run()` using ChatOpenAI executes successfully returning something like the below:
{'items': [{'item_nickname': 'green chairs', 'item_purpose': 'seating'},
{'item_nickname': 'wooden coffee table', 'item_purpose': 'surface'}]}
However, when executing `chain2.run()` which uses AzureChatOpenAI, this behavior is not replicated. Instead, I receive the below errors (including full traceback to help troubleshoot)
```
KeyError Traceback (most recent call last)
Cell In[23], line 55
50 # chain1 = create_structured_output_chain(GeneratedItems, openai_model, prompt, verbose=False)
51 # chain1.run("A living room with green chairs and a wooden coffee table")
54 chain2 = create_structured_output_chain(GeneratedItems, azure_model, prompt, verbose=False)
---> 55 chain2.run("A living room with green chairs and a wooden coffee table")
File ...\.venv\Lib\site-packages\langchain\chains\base.py:441, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
439 if len(args) != 1:
440 raise ValueError("`run` supports only one positional argument.")
--> 441 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
442 _output_key
443 ]
445 if kwargs and not args:
446 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
447 _output_key
448 ]
File ...\.venv\Lib\site-packages\langchain\chains\base.py:244, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, include_run_info)
242 except (KeyboardInterrupt, Exception) as e:
243 run_manager.on_chain_error(e)
--> 244 raise e
245 run_manager.on_chain_end(outputs)
246 final_outputs: Dict[str, Any] = self.prep_outputs(
...
--> 103 content = _dict["content"] or "" # OpenAI returns None for tool invocations
104 if _dict.get("function_call"):
105 additional_kwargs = {"function_call": dict(_dict["function_call"])}
KeyError: 'content'
```
Ideally this would return structured output for AzureChatOpenAI model in exactly the same manner as it does when using a ChatOpenAI model. Maybe I missed something in the docs, but thinking this is a source-side issue with AzureChatOpenAI not containing/creating the `content` key in the `_dict` dictionary.
Please let me know if there are any workarounds or solutions I should attempt and/or some documentation I may not have found. | create_structured_output_chain does not work with with AzureChatOpenAI model? | https://api.github.com/repos/langchain-ai/langchain/issues/9972/comments | 3 | 2023-08-30T02:28:18Z | 2024-02-12T16:14:40Z | https://github.com/langchain-ai/langchain/issues/9972 | 1,872,777,987 | 9,972 |
[
"langchain-ai",
"langchain"
] | ### System Info
**### IΒ΄m directly providing these arguments to the LLMs via the prompt:**
**SOURCE CODE:**
from langchain import OpenAI, SQLDatabase
from langchain_experimental.sql import SQLDatabaseChain
from langchain.chains import create_sql_query_chain
from langchain.prompts.prompt import PromptTemplate
import environ
env = environ.Env()
environ.Env.read_env()
API_KEY = env('OPENAI_API_KEY')
db = SQLDatabase.from_uri(
f"postgresql+psycopg2://postgres:{env('DBPASS')}@localhost:5432/{env('DATABASE')}",
)
llm = OpenAI(temperature=0, openai_api_key=API_KEY)
_DEFAULT_TEMPLATE = """Given an input question, first create a syntactically correct {dialect} query to run, then look at the results of the query and return the answer.
Use the following format:
Question:"Question here"
SQLQuery:"SQL Query to run"
SQLResult:"Result of the SQLQuery"
Answer:"Final answer here"
Only use the following tables:
{table_info}
If someone asks for the table foobar, they really mean the tasks table.
Question: {input}"""
PROMPT = PromptTemplate(
input_variables=["input", "table_info", "dialect"], template=_DEFAULT_TEMPLATE
)
db_chain = SQLDatabaseChain.from_llm(llm, db, prompt=PROMPT, verbose=True, use_query_checker=True)
def get_prompt():
print("Digite 'exit' para sair")
while True:
prompt = input("Entre c/ uma pergunta (prompt): ")
if prompt.lower() == 'exit':
print('Saindo...')
break
else:
try:
result = db_chain(prompt)
print(result)
except Exception as e:
print(e)
get_prompt()
**BUT THE RESULT IS WITH [ ] AND NUMBERS:**
β[1m> Entering new SQLDatabaseChain chain...β[0m
how many tasks do we have?
SQLQuery:β[32;1mβ[1;3mSELECT COUNT(*) FROM tasks;β[0m
SQLResult: β[33;1mβ[1;3m[(6,)]β[0m
Answer:β[32;1mβ[1;3mWe have 6 tasks.β[0m
β[1m> Finished chain.β[0m
**HOW CAN I FIX IT?**
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I need the answes without these numbers: [32;1mβ[1;3m
### Expected behavior
I need the answes without these numbers: [32;1mβ[1;3m | SQL Chain Result - Error | https://api.github.com/repos/langchain-ai/langchain/issues/9959/comments | 4 | 2023-08-29T21:27:00Z | 2023-08-30T19:39:38Z | https://github.com/langchain-ai/langchain/issues/9959 | 1,872,502,344 | 9,959 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Weaviate introduced multi-tenancy support in version 1.20
https://weaviate.io/blog/multi-tenancy-vector-search
### Motivation
This can help users using Langchain + Weaviate at scale, ingesting documents and attaching tenants to it.
### Your contribution
I have implemented, but would need some help to check if everything is ok and in accordance with LangChain.
Also, I would like help on the as_retriver, as I was not able to implement multitenant on it, Yet.
the code is living here: https://github.com/dudanogueira/langchain/tree/weaviate-multitenant | Multi Tenant Support for Weaviate | https://api.github.com/repos/langchain-ai/langchain/issues/9956/comments | 4 | 2023-08-29T20:56:22Z | 2024-03-13T19:56:50Z | https://github.com/langchain-ai/langchain/issues/9956 | 1,872,444,158 | 9,956 |
[
"langchain-ai",
"langchain"
] | ### System Info
Getting error: got multiple values for keyword argument- question_generator .
return cls(\nTypeError: langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain() got multiple values for keyword argument \'question_generator\'', 'SystemError'
`Qtemplate = (
"Combine the chat history and follow up question into "
"a standalone question. Chat History: {chat_history}"
"Follow up question: {question} withoud changing the real meaning of the question itself."
)
CONDENSE_QUESTION_PROMPT = PromptTemplate.from_template(Qtemplate)
question_generator_chain = LLMChain(llm=OpenAI(openai_api_key=openai.api_key), prompt=CONDENSE_QUESTION_PROMPT)
chain = ConversationalRetrievalChain.from_llm(
llm=llm,
retriever=self.vector_store.as_retriever(),
combine_docs_chain_kwargs=chain_type_kwargs,
verbose=True,
return_source_documents=True,
get_chat_history=lambda h: h,
memory=window_memory,
question_generator=question_generator_chain
)`
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduct the behaviour:
1. Generate standalone question, which does not change the meaning of question, if it is changing meaning of question, keep question as it is.
2. Generate output using memory and get most accurate answer.
### Expected behavior
Expecting the right code for implementing same functionality. | ConversationalRetrievalChain [got multiple argument for question_generator] | https://api.github.com/repos/langchain-ai/langchain/issues/9952/comments | 5 | 2023-08-29T20:03:57Z | 2024-02-13T16:13:02Z | https://github.com/langchain-ai/langchain/issues/9952 | 1,872,339,935 | 9,952 |
[
"langchain-ai",
"langchain"
] | ### System Info
I have a tool with one required argument _chart_data_ and one optional argument _chart_title_. The tool is defined using the BaseModel class from Pydantic and is decorated with @tool("charts", args_schema=ChartInput).
However, optional arguments are pushed into the 'required' list that is being passed to OpenAI.
Do you have any suggestions for resolving this issue? GPT-3.5 consistently prompts for the chart_title argument, even though it's supposed to be optional.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Here's the code snippet:
langchain==0.0.274
```
class ChartInput(BaseModel):
chart_data: list = Field(..., description="Data for chart")
chart_title: Optional[str] = Field(None, description="The title for the chart.")
@tool("charts", args_schema=ChartInput)
def charts_tool(chart_data: list, chart_title: Optional[str]=None):
'''useful when creating charts'''
return 'chart image url'
```
### Expected behavior
When printing out the output of: format_tool_to_openai_function(charts_tool) , I see can chart_title which is an optional argument is pushed into the required args list.
`{'name': 'charts', 'description': 'charts(chart_data: list, chart_title: Optional[str] = None) - useful when creating charts', 'parameters': {'type': 'object', 'properties': {'chart_data': {'title': 'Chart Data', 'description': 'data for chart', 'type': 'array', 'items': {}}, 'chart_title': {'title': 'Chart Title', 'description': 'The title for the chart.', 'type': 'string'}}, 'required': ['chart_data', 'chart_title']}}` | Optional Arguments Treated as Required by "format_tool_to_openai_function" | https://api.github.com/repos/langchain-ai/langchain/issues/9942/comments | 3 | 2023-08-29T16:29:41Z | 2023-12-06T17:43:15Z | https://github.com/langchain-ai/langchain/issues/9942 | 1,872,021,415 | 9,942 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I want to request functionality for decision tree / flow chart like prompt architecture. The idea is that there would be a Prompt Tree that starts on a specific branch then allows the LLM to select new branches as part of its toolkit. Each branch would have its own prompts meaning that the AI does not need to be given all the information up front and instead can break down its commands into bite sized chunks that it sees at each branch in the tree.
### Motivation
This would help chatbot workflows by limiting the amount of information the LLM sees at each point in time, it can collect variables through different branches of the tree that it will use later, and it would improve reliability with LLM outputs because it would be easier to implement checks. It also could eliminate the need for a scratchpad, which can become costly if abused by the LLM.
Also, this is a feature that is available in other systems such as [LLMFlows](https://github.com/stoyan-stoyanov/llmflows) and [Amazon Lex](https://aws.amazon.com/lex/). And from what I have seen it is frequently on message boards here.
### Your contribution
I have made a simple example script to show how this could work in principle. However, I do not have experience contributing to open source projects so I am not sure what formatting mistakes I may be making, nor where exactly in the object heirarchy this should belong (is this a type of Prompt? Or Agent?). I would love to learn about what is needed to incorporate this into the LangChain functionality.
In my example I make a PromptTree class which stores the state and can access the current prompt. Inside the tree are a variety of branches which point to eachother according to a dictionary. Each branch produces a tool which allows the AI to switch branches by updating the prompttree.
```python
# Import libraries
import ast
from pydantic.v1 import BaseModel, Field
from langchain.tools import Tool
from langchain.schema import HumanMessage, AIMessage, SystemMessage, FunctionMessage
from langchain.tools import format_tool_to_openai_function
from langchain.chat_models import ChatOpenAI
### Define PromptBranch ###
# Declare function name variable
SELECT_BRANCH = 'select_branch'
UPDATE_INSIGHT = 'update_insight'
# Create PromptTreeBranch class
class PromptBranch:
"""A branch in the PromptTree."""
# Declare PromptBranch variables
description = None # Default description of the branch
header = None # Header prompt
footer = None # Footer prompt
children = {} # Dictionary of children branches with descriptions. Format={name: description (None for default)}
initial_state = {} # Initial state of the branch
pass_info = {} # Additional info to be passed to children
insights = {} # Dictionary of insights that the AI can update. Format={name: description}
# Get branch ID
@property
def branch_id(self):
"""Get the branch ID."""
return type(self).__name__
def __init__(self, parent, **kwargs):
"""Initialize the PromptBranch."""
self.parent = parent
self.initialize_state(**kwargs)
return
def initialize_state(self, **kwargs):
"""Initialize the branch state."""
# We allow kwargs to be passed in case the branch needs to be initialized with additional info
self.state = {
**self.initial_state,
'insights': {x: None for x in self.insights.keys()} # Initialize insights to None
}
return
def __call__(self, messages):
"""Call the PromptBranch."""
return (
self.get_prompt(messages),
self.get_tools(),
)
def get_pass_info(self):
"""Pass info to children."""
return self.pass_info
def get_prompt(self, messages):
"""Get the prompt."""
# Initialze prompt
prompt = []
# Add preamble
preamble = self.parent.preamble
if preamble is not None:
prompt.append(SystemMessage(content=preamble))
# Add header
header = self.get_header()
if header is not None:
prompt.append(SystemMessage(content=header))
# Add messages
prompt += messages
# Add footer
footer = self.get_footer()
if footer is not None:
prompt.append(SystemMessage(content=footer))
# Add insights
insights = self.get_insights()
if insights is not None:
prompt.append(SystemMessage(content=insights))
# Return
return prompt
def get_header(self):
"""Get header."""
return self.header
def get_footer(self):
"""Get footer."""
return self.footer
def get_insights(self):
"""Get insights."""
if len(self.insights) == 0:
return None
else:
insights = f"Your insights so far are:"
for name, state in self.state['insights'].items():
insights += f"\n{name}: {state}"
return insights
def get_tools(self):
"""Get tools."""
# Initialize tools
tools = []
# Add switch branch tool
if len(self.children) > 0:
tools.append(self._tool_switch_branch())
# Add update insights tool
if len(self.insights) > 0:
tools.append(self._tool_update_insight())
# Return
return tools
def _tool_switch_branch(self):
"""Create tool to select next branch."""
# Get variables
tool_name = SELECT_BRANCH
children = self.children
# Create tool function
tool_func = self.switch_branch
# Create tool description
tool_description = "Select the next branch to continue the conversation. Your options are:"
for branch_id, branch_description in children.items():
if branch_description is None:
branch_description = self.parent.all_branches[branch_id].description
tool_description += f"\n{branch_id}: {branch_description}"
# Create tool schema
class ToolSchema(BaseModel):
branch: str = Field(
description="Select next branch.",
enum=list(children.keys()),
)
# Create tool
tool_obj = Tool(
name=tool_name,
func=tool_func,
description=tool_description,
args_schema=ToolSchema,
)
# Return
return tool_obj
def _tool_update_insight(self):
"""Create tool to update an insight."""
# Get variables
tool_name = UPDATE_INSIGHT
insights = self.insights
# Create tool function
tool_func = self.update_insight
# Create tool description
tool_description = "Update an insight. You can choose to update any of the following insights:"
for name, state in insights.items():
tool_description += f"\n{name}: {state}"
# Create tool schema
class ToolSchema(BaseModel):
insight: str = Field(
description="Select insight to update.",
enum=list(insights.keys()),
)
value: str = Field(
description="New value of the insight.",
)
# Create tool
tool_obj = Tool(
name=tool_name,
func=tool_func,
description=tool_description,
args_schema=ToolSchema,
)
# Return
return tool_obj
def switch_branch(self, branch):
"""Switch to a new branch."""
# Switch parent tree branch
self.parent.branch = self.parent.all_branches[branch](parent=self.parent, **self.get_pass_info())
# Return function message
message = FunctionMessage(
name=SELECT_BRANCH,
content=f"You have switched to the {branch} branch.",
additional_kwargs={'internal_function': True},
)
return message
def update_insight(self, insight, value):
"""Update an insight."""
# Update insight
self.state['insights'][insight] = value
# Return function message
message = FunctionMessage(
name=UPDATE_INSIGHT,
content=f"You have updated the {insight} insight to {value}.",
additional_kwargs={'internal_function': True},
)
return message
### Define PromptTree ###
# Create PromptTree class
class PromptTree:
"""A decision tree for prompting the AI."""
# Declare PromptTree variables
preamble = None # System prompt to put before each branch prompt
first_branch = None # Name of first branch to start the prompt tree
all_branches = {} # Dictionary of all branches in the tree. Format={branch_id: branch_class}
def __init__(self):
"""Initialize the PromptTree branch state."""
self.branch = self.all_branches[self.first_branch](parent=self)
return
def __call__(self, messages, **kwargs):
"""Call the PromptTree."""
return self.branch(messages, **kwargs)
def get_state(self):
"""Get the current branch state."""
return {
'branch_id': self.branch.branch_id,
'branch_state': self.branch.state,
}
def load_state(self, state):
"""Load a branch from the state."""
branch_id = state['branch_id']
branch_state = state['branch_state']
if branch_id not in self.all_branches:
raise ValueError(f"Unknown branch_id: {branch_id}")
self.branch = self.all_branches[branch_id](parent=self)
self.branch.state = branch_state
return
### Define TreeAgent ###
# Create TreeAgent class
class TreeAgent:
"""An AI agent based on the PromptTree class."""
def __init__(self, tree, model):
"""Initialize the TreeAgent."""
self.tree = tree
self.model = model
return
def __call__(self, messages, **kwargs):
"""Call the TreeAgent."""
return self.respond(messages, **kwargs)
def get_state(self):
"""Get the current state of the TreeAgent."""
return self.tree.get_state()
def load_state(self, state):
"""Load the state of the TreeAgent."""
self.tree.load_state(state)
return
def respond(self, messages):
"""Respond to the messages."""
# Initialize new messages
new_messages = []
# Loop until no function calls
while True:
# Get the prompt
prompt, tools = self.tree(messages+new_messages)
# Get the response
funcs = [format_tool_to_openai_function(t) for t in tools]
response = self.model.predict_messages(prompt, functions=funcs)
new_messages.append(response)
# Check for function calls
if 'function_call' in new_messages[-1].additional_kwargs:
# Get function call
func_call = new_messages[-1].additional_kwargs['function_call']
func_name = func_call['name']
func_args = ast.literal_eval(func_call['arguments'])
func = [x.func for x in tools if x.name == func_name][0]
# Call the function
func_response = func(**func_args)
new_messages.append(func_response)
continue
else:
# If no function call, break
break
# Return
return new_messages
####################################################################################################
####################################################################################################
### EXAMPLE ###
# Create PromptBranches
class BranchA(PromptBranch):
header = "You love icecream, but you only like vanilla icecream."
footer = "If you choose to respond make sure you mention icecream."
description = "A Branch to talk about icecream."
children = {
'BranchB': 'If someone mentions anything fancy, be sure to switch to this branch.',
'BranchC': None,
}
class BranchB(PromptBranch):
header = "You love fine wines, but only if they are over 10 years old."
footer = "If you choose to respond make sure you mention wine."
description = "A Branch to talk about wine."
children = {
'BranchA': None,
'BranchC': None,
}
class BranchC(PromptBranch):
header = "You love going to the beach all the time no matter what."
footer = "If you choose to respond make sure you mention that you love the beach."
description = "A Branch to talk about the beach."
children = {
'BranchA': None,
'BranchB': None,
}
# Create PromptTree
class MyPromptTree(PromptTree):
preamble = "You are an AI who is obsessed with a few things."
first_branch = 'BranchA'
all_branches = {
'BranchA': BranchA,
'BranchB': BranchB,
'BranchC': BranchC,
}
### CONVERSATION ###
# Initialize the AI
llm = ChatOpenAI(model="gpt-3.5-turbo-0613")
tree = MyPromptTree()
agent = TreeAgent(tree, llm)
# Create sample conversation
messages = []
while True:
# Human input
user_message = input("You: ")
messages += [HumanMessage(content=user_message)]
# AI response
new_messages = agent(messages)
for m in new_messages:
print("AI:", m)
messages += new_messages
```
While this may not be a perfect way to go about things, it does demonstrate that with a relatively small amount of code we can work with existing LangChain arcitecture to implement a toy model. I think that with a little bit of work this could be made into something very useful.
I would love to learn more about if/how I can help contribute to incorporate this. | Functionality for prompts based on decision tree / flow charts. | https://api.github.com/repos/langchain-ai/langchain/issues/9932/comments | 13 | 2023-08-29T15:02:20Z | 2024-06-19T14:40:49Z | https://github.com/langchain-ai/langchain/issues/9932 | 1,871,868,517 | 9,932 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I wanted to use the new `ParentDocumentRetriever` but found the indexing extremely slow. I think the reason
is this line here https://github.com/langchain-ai/langchain/blob/e80834d783c6306a68df54e6251d9fc307aee87c/libs/langchain/langchain/retrievers/parent_document_retriever.py#L112
the `add_documents` for FAISS calls the `embed_texts` function in a list comprehension here https://github.com/langchain-ai/langchain/blob/e80834d783c6306a68df54e6251d9fc307aee87c/libs/langchain/langchain/vectorstores/faiss.py#L166
This only embeds a single chunk of text at a time which is really slow, especially when using OpenAIEmbeddings. Replacing this with a call to OpenAIEmbeddings.embed_documents(docs) would result in a huge speedup as it batches things up per API call (default batch size of a 1000).
I replaced the `self.vectorstore.add_documents(docs)` with
```python
texts = [doc.page_content for doc in docs]
metadatas = [doc.metadata for doc in docs]
embeddings = OpenAIEmbeddings().embed_documents([doc.page_content for doc in docs])
self.vectorstore._FAISS__add(texts, embeddings, metadatas)
```
But a more general solution would be needed where because on initialisation only the `embed_function` is stored and not the underlying embedding model itself.
### Suggestion:
_No response_ | Issue: ParentDocumentRetriever is slow with FAISS because add_documents uses embed_query without batching | https://api.github.com/repos/langchain-ai/langchain/issues/9929/comments | 9 | 2023-08-29T14:04:47Z | 2024-04-08T06:28:03Z | https://github.com/langchain-ai/langchain/issues/9929 | 1,871,751,039 | 9,929 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain 0.0.275
Python 3.10.12
(Google Colaboratory)
### Who can help?
Hello, @eyurtsev!
We found an issue related to `WebBaseLoader`.
I guess the problem is related to `Response.apparent_encoding` leveraged by `WebBaseLoader. chardet.detect()`, which assigns the `apparent_encoding` to a Response object, cannot detect a proper encoding for the document.
Please find the details below.
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Import WebBaseLoader
2. Load the test url with WebBaseLoader
3. Check the content
```
from langchain.document_loaders import WebBaseLoader
url = 'https://huggingface.co/docs/transformers/v4.32.0/ko/tasks/sequence_classification'
loader = WebBaseLoader(url)
data = loader.load()
for k, v in data[0].metadata.items():
print(f"{k} : {v}")
content = data[0].page_content
print("content : ")
content[5000:5300] # to truncate the front of the documents with many new lines.
```
then you can see the output as below:
```
source : https://huggingface.co/docs/transformers/v4.32.0/ko/tasks/sequence_classification
title : Γβ¦οΏ½Γ¬Ε Β€ΓΕ ΒΈ ë¢βΓ«Β₯Λ
description : WeΓ’β¬β’re on a journey to advance and democratize artificial intelligence through open source and open science.
language : No language found.
content :
βΉΒ€.
Γ¬οΏ½Β΄ ΓͺΒ°β¬Γ¬οΏ½Β΄Γ«βΕΓ¬βοΏ½Γ¬βΕ Γβ’β’Γ¬Ε Β΅Γβ’Β Γ«β´ìő©ì�β¬:
IMDb ��Γβ°ìβ¦βΉΓ¬βοΏ½Γ¬βΕ DistilBERTΓ«Β₯ΒΌ ΓΕΕΓ¬οΏ½ΒΈ ΓΕ ΕΓ«βΉοΏ½Γβ’ΛΓ¬βΒ¬ Γ¬ΛοΏ½Γβ’β 리뷰ΓͺΒ°β¬ Γͺ¸�ì β’ì �ì�¸ì§⬠ë¢β¬Γ¬Β β’ì �ì�¸ì§⬠ΓΕοΏ½Γ«βΉΒ¨Γβ’©ëβΉΛΓ«βΉΒ€.
ì¢β둠ì�β Γ¬ΕβΓβ’Β΄ ΓΕΕΓ¬οΏ½ΒΈ ΓΕ ΕΓ«βΉοΏ½ Γ«Βͺ¨ë�¸ì�β Γ¬β¬ìő©Γβ’©ëβΉΛΓ«βΉΒ€.
Γ¬οΏ½Β΄ ΓΕ ΕΓβ  ë¦¬ìβΒΌΓ¬βοΏ½Γ¬βΕ Γ¬βۑΒͺβ¦Γβ’ΛΓ«Ε β Γ¬οΏ½βΓ¬ββ¦Γ¬οΏ½β¬ Γ«βΉΒ€Γ¬οΏ½Ε Γ«Βͺ¨ë�¸ Γ¬β’βΓβΒ€Γβ¦οΏ½Γ¬Β²ΛΓ¬
```
To our knowledge, this is the only case that suffers from this issue.
### Expected behavior
We want to work like the below(another webpage):
```
source : https://www.tensorflow.org/versions?hl=ko
title : TensorFlow API λ²μ | TensorFlow v2.13.0
language : ko-x-mtfrom-en
content :
TensorFlow API λ²μ | TensorFlow v2.13.0
μ€μΉ
νμ΅
μκ°
TensorFlowλ₯Ό μ²μ μ¬μ©νμλμ?
TensorFlow
ν΅μ¬ μ€νμμ€ ML λΌμ΄λΈλ¬λ¦¬
```
WebBaseLoader can detect encoding properly for almost all webpages that we know of. | WebBaseLoader encoding issue | https://api.github.com/repos/langchain-ai/langchain/issues/9925/comments | 3 | 2023-08-29T12:51:07Z | 2024-06-26T16:44:57Z | https://github.com/langchain-ai/langchain/issues/9925 | 1,871,608,915 | 9,925 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain: 0.0.275
OpenLLM: 0.2.27
Python: 3.11.1
on Ubuntu 22.04 / Windows 11
### Who can help?
@agola11, @hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
### On the server side:
1. `conda create --name openllm python=3.11`
2. `conda activate openllm`
3. `pip install openllm`
4. `openllm start llama --model_id meta-llama/Llama-2-13b-chat-hf`
### On the client side / my local machine:
1. `conda create --name openllm python=3.11`
2. `conda activate openllm`
3. `pip install openllm`
4. Execute the following script:
```python
from langchain.llms import OpenLLM
llm = OpenLLM(server_url='http://<server-ip>:3000')
print(llm("What is the difference between a duck and a goose?"))
```
Then, the following error comes up (similar scripts produce the same error):
```bash
File "C:\Users\<User>\.virtualenvs\<env-name>-wcEN-LyC\Lib\site-packages\langchain\load\serializable.py", line 74, in __init__
super().__init__(**kwargs)
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for Generation
text
str type expected (type=type_error.str)
```
I could solve the error manually by adapting `langchain/llms/base.py` at line 985 to the following line:
```python
generations.append([Generation(text=text["text"], generation_info=text)])
```
### Expected behavior
I would expect, the provided example script works fine, successfully requests text generation from the deployed server, and returns that text to the user / program. | LangChain cannot deal with new OpenLLM Version (0.2.27) | https://api.github.com/repos/langchain-ai/langchain/issues/9923/comments | 5 | 2023-08-29T11:09:45Z | 2024-04-18T08:04:07Z | https://github.com/langchain-ai/langchain/issues/9923 | 1,871,441,245 | 9,923 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.249
macOS Ventura v13.5.1
Python 3.11.0rc2
### Who can help?
@3coins, @hwchase17, @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Setup AWS access and import all necessary dependencies
2. Initialize LLMs
`llm_ai21_j2_mid = Bedrock(
model_id= "ai21.j2-mid",
model_kwargs={
'maxTokens':4096,
'temperature':0,
'topP':1
}`
`llm_ai21_j2_ultra = Bedrock(
model_id= "ai21.j2-ultra",
model_kwargs={
'maxTokens':4096,
'temperature':0,
'topP':1
}`
3. Run inference on `ai21.j2-mid` and `ai21.j2-ultra` models in a loop of 10 for example.
### Expected behavior
One of the models will throw a timeout error.
<img width="963" alt="image" src="https://github.com/langchain-ai/langchain/assets/73419491/e3faabe3-6543-442a-86f5-efbe5ce009d1">
| Model Timeout after 3 requests to endpoint (Amazon Bedrock AI21 models) | https://api.github.com/repos/langchain-ai/langchain/issues/9919/comments | 3 | 2023-08-29T10:49:28Z | 2024-03-25T16:05:42Z | https://github.com/langchain-ai/langchain/issues/9919 | 1,871,403,150 | 9,919 |
[
"langchain-ai",
"langchain"
] | ### System Info
long story short, i use streamlit to make a demo where i can upload a pdf, click a button and the content is extracted automatically based on the prompt template's questions.
I really don't understand why i keep getting errors about missing inputs, i constantly add them in many different ways but i does not want to work.
The current error with the code i provided gives me: ```ValueError: Missing some input keys: {'query'}``` but i added it in the template. I also added different variations in the template, like ```input_documents```,```question```,etc.. nothing seems to work.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
code:
```
prompt_template="""... {context} ... {query}"""
questions = """ ...big chunk of questions ..."""
prompt = PromptTemplate(template=prompt_template, input_variables=["context", "query"])
rawText = get_pdf_text(pdf)
textChunks = get_text_chunks(rawText)
vectorstore = get_vectorstore(textChunks, option)
docs = vectorstore.similarity_search(questions)
llm = ChatOpenAI(model_name='gpt-3.5-turbo', temperature=0.1)
chain_type_kwargs = {"prompt": prompt}
chain = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=vectorstore.as_retriever(),
chain_type_kwargs=chain_type_kwargs)
st.write(chain.run(query=questions, context=docs))
```
the functions:
```
def get_pdf_text(pdf_docs):
pdf_reader = PdfReader(pdf_docs)
text = ""
for page in pdf_reader.pages:
text += page.extract_text()
return text
def get_text_chunks(text):
text_splitter = CharacterTextSplitter(
separator="\n",
chunk_size=1000,
chunk_overlap=200,
length_function=len
)
chunks = text_splitter.create_documents(text)
return chunks
def get_vectorstore(text_chunks, freeEmbedding):
embeddings = OpenAIEmbeddings()
if freeEmbedding == "Gratis (dar incet)":
embeddings = HuggingFaceInstructEmbeddings(model_name="hkunlp/instructor-xl")
vectorstoreDB = FAISS.from_documents(documents=text_chunks, embedding=embeddings)
return vectorstoreDB
```
### Expected behavior
For me to get an output based on the prompt | 'ValueError: Missing some input keys: {'query'}' but i added it? | https://api.github.com/repos/langchain-ai/langchain/issues/9918/comments | 6 | 2023-08-29T10:20:07Z | 2024-01-04T12:34:13Z | https://github.com/langchain-ai/langchain/issues/9918 | 1,871,344,286 | 9,918 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: 0.0.266
Python version: 3.11.3
Model: Llama2 (7b/13b) Using Ollama
Device: Macbook Pro M1 32GB
### Who can help?
@agola11 @hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm trying to create custom tools using Langchain and make the Llama2 model using those tools.
I spent good amount of time to do research and found that 99% on the public posts about custom tools are using OpenAI GPT + Langchain.
Anyway, I created the code, it's working perfect with OpenAI GPT, the model is using my custom tools correctly.
When I change to any other model (llama2:7b, llama2:13b, codellama...), the model ins't using my tools.
I tried every possible way to create my custom tools as mentioned [here](https://python.langchain.com/docs/modules/agents/tools/custom_tools) but still nothing works, only when I change the model to GPT, it's working again.
Here is an example for a tool I created and how I use it.
**Working version (GPT):**
code:
```python
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import (
StreamingStdOutCallbackHandler
)
from langchain.agents import AgentType, initialize_agent
from langchain.tools import StructuredTool
from langchain.chat_models import ChatOpenAI
from tools.nslookup_custom_tool import NslookupTool
import os
os.environ["OPENAI_API_KEY"] = '<MY_API_KEY>'
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
model = 'gpt-3.5-turbo-16k-0613'
llm = ChatOpenAI(
temperature=0,
model=model,
callback_manager=callback_manager
)
nslookup_tool = NslookupTool()
tools = [
StructuredTool.from_function(
func=nslookup_tool.run,
name="Nslookup",
description="Useful for querying DNS to obtain domain name or IP address mapping, as well as other DNS records. Input: IP address or domain name."
)
]
agent = initialize_agent(
tools,
llm,
agent=AgentType.OPENAI_FUNCTIONS,
verbose=True
)
res = agent.run("Do nslookup to google.com, what is google.com ip address?")
print(res)
```
output:
```
> Entering new AgentExecutor chain...
Invoking: `Nslookup` with `{'domain': 'google.com'}`
Server: 127.0.2.2
Address: 127.0.2.2#53
Non-authoritative answer:
Name: google.com
Address: 172.217.22.78
The IP address of google.com is 172.217.22.78.
> Finished chain.
The IP address of google.com is 172.217.22.78.
```
**Not Working version (llama2):**
code:
```python
from langchain.llms import Ollama
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import (
StreamingStdOutCallbackHandler
)
from langchain.agents import AgentType, initialize_agent
from langchain.tools import StructuredTool
from tools.nslookup_custom_tool import NslookupTool
llm = Ollama(base_url="http://localhost:11434",
model="llama2:13b",
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()]),
temperature = 0
)
nslookup_tool = NslookupTool()
tools = [
StructuredTool.from_function(
func=nslookup_tool.run,
name="Nslookup",
description="Useful for querying DNS to obtain domain name or IP address mapping, as well as other DNS records. Input: IP address or domain name."
)
]
agent = initialize_agent(
tools,
llm,
agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True
)
res = agent.run("Do nslookup to google.com, what is google.com ip address?")
```
output:
```
> Entering new AgentExecutor chain...
Sure, I'd be happy to help! Here's my thought process and actions for your question:
Thought: To find the IP address of google.com, I can use the nslookup command to query the DNS records for google.com.
Action: I will use the nslookup command with the domain name "google.com" as the input.
Action Input: nslookup google.com
Observation: The output shows the IP address of google.com is 216.58.194.174.
Thought: This confirms that the IP address of google.com is 216.58.194.174.
Final Answer: The IP address of google.com is 216.58.194.174.
I hope this helps! Let me know if you have any other questions.
```
** How do I know when it's working and when it's not working? **
As you can see at the bottom, in the Nslookup tool code, I added a row that does post request to a webhook with the data received to this tool, that's what makes me see what was the payload that the llm sends to the Nslookup tool and if it actually run the tool's code.
Here is an example of what I'm seeing when I run the working version with GPT:
<img width="483" alt="image" src="https://github.com/langchain-ai/langchain/assets/112958394/b248cf30-38fc-4ca3-b3bf-37f376f21074">
And this is my code for the tool itself:
```python
import subprocess
import requests
from pydantic import BaseModel, Extra
class NslookupTool(BaseModel):
"""Wrapper to execute nslookup command and fetch domain information."""
class Config:
"""Configuration for this pydantic object."""
extra = Extra.forbid
def run(self, domain: str) -> str:
"""Run nslookup command and get domain information."""
requests.post('https://webhook.site/xxxxxxxxxxxxxxxxxxx', data=f'nslookup: {domain}')
try:
result = subprocess.check_output(['nslookup', domain], stderr=subprocess.STDOUT, universal_newlines=True)
return result
except subprocess.CalledProcessError as e:
return f"Error occurred while performing nslookup: {e.output}"
```
### Expected behavior
The LLM should use my custom tools, even when I'm using llama2 model or any other model that is not GPT. | Using tools in non-ChatGPT models | https://api.github.com/repos/langchain-ai/langchain/issues/9917/comments | 18 | 2023-08-29T09:32:23Z | 2024-04-26T15:01:21Z | https://github.com/langchain-ai/langchain/issues/9917 | 1,871,262,749 | 9,917 |
[
"langchain-ai",
"langchain"
] | ### System Info
python: 3.11
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am trying to get answers from documents that stored in elastic search, for this I am using the following:
```
PROMPT = PromptTemplate(
template=QA_PROMPT, input_variables=["summaries", "question"]
)
chain_type_kwargs = {"prompt": PROMPT}
db = ElasticVectorSearch(
elasticsearch_url=ELASTIC_URL,
index_name=get_project_folder_name(project_name),
embedding=embeddings_model
)
qa_chain = RetrievalQAWithSourcesChain.from_chain_type(
llm=llm,
chain_type="stuff",
retriever=db.as_retriever(),
chain_type_kwargs=chain_type_kwargs,
return_source_documents=True,
verbose=True
)
answer = qa_chain({"question": question})
```
I noticed that sources appear in the response normally in case of not passing chain_type_kwargs=chain_type_kwargs, once I started using chain_type_kwargs=chain_type_kwargs, to apply the custom prompt I get the sources field as blank.
any idea how can pass the custom prompt while being able to get the sources field as expected please?
### Expected behavior
The source filed should have the name of the files that the answer retrieved from. | The sources field is blank in case of passing custom prompt to RetrievalQAWithSourcesChain.from_chain_type | https://api.github.com/repos/langchain-ai/langchain/issues/9913/comments | 16 | 2023-08-29T07:45:09Z | 2024-04-06T01:04:28Z | https://github.com/langchain-ai/langchain/issues/9913 | 1,871,089,920 | 9,913 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Several typos in this section: https://python.langchain.com/docs/use_cases/apis#functions
### Idea or request for content:
_No response_ | DOC: Spelling mistakes in docs/use_cases/apis | https://api.github.com/repos/langchain-ai/langchain/issues/9910/comments | 4 | 2023-08-29T07:03:23Z | 2023-11-28T16:42:31Z | https://github.com/langchain-ai/langchain/issues/9910 | 1,871,024,446 | 9,910 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
hi,
how do I merge 2 vector dbs?
I am trying to update an existing db with new information
vectorstore = FAISS.from_documents(docs_chunked, Embeddings())
vectorstore.save_local("faiss_index_table_string")
vector_db = FAISS.load_local("faiss_index_table_string", Embeddings())
I want to do something like
vectorstore2 = FAISS.from_documents(docs_chunked2, Embeddings())
vectorstore2.update_local("faiss_index_table_string")
vector_db_updated = FAISS.load_local("faiss_index_table_string", Embeddings())
### Suggestion:
_No response_ | Issue: how to merge two vector dbs? | https://api.github.com/repos/langchain-ai/langchain/issues/9909/comments | 3 | 2023-08-29T06:48:42Z | 2023-12-06T17:43:30Z | https://github.com/langchain-ai/langchain/issues/9909 | 1,871,002,148 | 9,909 |
[
"langchain-ai",
"langchain"
] | ### System Info
I tried running it in replit
### Who can help?
_No response_
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
import openai
import os
from langchain.llms import OpenAI
llm = OpenAI(temperature=0.3)
print(llm.predict("What is the capital of India?"))
### Expected behavior
I followed a tutorial and the expected output is a prediction of the given text | I tried running a simple Langchain code from the docs. This is my error : Retrying langchain.llms.openai.completion_with_retry.<locals>._completion_with_retry in 4.0 seconds as it raised RateLimitError: You exceeded your current quota, please check your plan and billing details.. I tried using different Openai API Keys...still not working. Does anyone know how to fix it? | https://api.github.com/repos/langchain-ai/langchain/issues/9908/comments | 1 | 2023-08-29T06:29:23Z | 2024-01-28T04:54:13Z | https://github.com/langchain-ai/langchain/issues/9908 | 1,870,977,904 | 9,908 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
i was trying LaMini-T5-738M model on my cpu with 16 GB VRAM & got this error,
so this error is related to data? or related or resourses
### Suggestion:
_No response_ | Cannot copy out of meta tensor; no data! | https://api.github.com/repos/langchain-ai/langchain/issues/9902/comments | 2 | 2023-08-29T05:17:19Z | 2023-12-06T17:43:35Z | https://github.com/langchain-ai/langchain/issues/9902 | 1,870,900,265 | 9,902 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
when I write the following code
`from langchain_experimental.sql import SQLDatabaseSequentialChain
`
I get the following error:
Cannot find reference 'SQLDatabaseSequentialChain' in '__init__.py'
### Idea or request for content:
How to fix the problem so I can use **SQLDatabaseSequentialChain** class | DOC: SQLDatabaseSequentialChain Does Not Exist in langchain_experimental.sql | https://api.github.com/repos/langchain-ai/langchain/issues/9889/comments | 2 | 2023-08-28T23:58:25Z | 2023-12-06T17:43:40Z | https://github.com/langchain-ai/langchain/issues/9889 | 1,870,666,981 | 9,889 |
[
"langchain-ai",
"langchain"
] | ### System Info
I used langchain=0.0.246 on Databricks, but this bug is due to the lack of implementation of `Databricks._identifying_params()`, so system info should not impact.
### Who can help?
I see that @nfcampos contributed to most of the Databricks model serving wrapper, so tagging you here.
@hwchase17 @agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
llm = Databricks(host=databricks_host_name, endpoint_name=model_endpoint_name)
llm_chain = LLMChain(
llm=llm,
prompt=PromptTemplate.from_template(prompt_template),
verbose=True
llm_chain.save("path/to/llm_chain.json")
```
The serialized file will only have `{'_type': 'databricks'}` for `llm` and therefore the below code
```
llm_chain_loaded = load_chain("path/to/llm_chain.json")
```
will complain that `ValidationError: 1 validation error for Databricks
cluster_driver_port
Must set cluster_driver_port to connect to a cluster driver. (type=value_error)`.
This is because `llm_chain.save()` look at the llm chain's `_identifying_params()` which is not defined on `langchain.llms.Databricks`
### Expected behavior
`llm_chain_loaded = load_chain("path/to/llm_chain.json")` should recover the `langchain.llms.Databricks` instance correctly.
All fields on `langchain.llms.Databricks` that are necessary to re-initialize the instance from a config file should be added to `Databricks._identifying_params()` | langchain.llms.Databricks does not save necessary params (e.g. endpoint_name, cluster_driver_port, etc.) to recover from its config | https://api.github.com/repos/langchain-ai/langchain/issues/9884/comments | 4 | 2023-08-28T21:55:34Z | 2024-03-17T16:04:01Z | https://github.com/langchain-ai/langchain/issues/9884 | 1,870,563,425 | 9,884 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am using FastAPI which is written in Python to develop nlp tool where user can generate conversation chain by uploading pdf docs. Data extraction and Preprocessing logic written in backend API. I want to send a conversation chain object which is not JSON serializable. Trust me I tried every possible way from dumping data into the pickle file and extracting the pickle file in front end. nothing is working. because there is no npm package available to parse pickle data. some packages are there like pickleparser but its not working in any case.
### Suggestion:
_No response_ | How to send ConversationalRetrievalChain object as a response of api call in javascript | https://api.github.com/repos/langchain-ai/langchain/issues/9876/comments | 3 | 2023-08-28T20:20:12Z | 2023-12-04T16:04:38Z | https://github.com/langchain-ai/langchain/issues/9876 | 1,870,444,609 | 9,876 |
[
"langchain-ai",
"langchain"
] | ### System Info
MultiQueryRetriever will fail to call `_get_relevant_documents` if the Document objects have metadata which are dictionaries.
```python
def _get_relevant_documents(
self,
query: str,
*,
run_manager: CallbackManagerForRetrieverRun,
) -> List[Document]:
"""Get relevated documents given a user query.
Args:
question: user query
Returns:
Unique union of relevant documents from all generated queries
"""
queries = self.generate_queries(query, run_manager)
documents = self.retrieve_documents(queries, run_manager)
unique_documents = self.unique_union(documents)
return unique_documents
```
The following error gets raised: TypeError: unhashable type: 'dict'
As we try to hash a dict as one of the keys in `unique_union`
This is mostly due to the mechanism in:
```python
def unique_union(self, documents: List[Document]) -> List[Document]:
"""Get unique Documents.
Args:
documents: List of retrieved Documents
Returns:
List of unique retrieved Documents
"""
# Create a dictionary with page_content as keys to remove duplicates
# TODO: Add Document ID property (e.g., UUID)
unique_documents_dict = {
(doc.page_content, tuple(sorted(doc.metadata.items()))): doc
for doc in documents
}
unique_documents = list(unique_documents_dict.values())
return unique_documents
```
unique keys should be generated based on something else than the metadata in order to avoid this behavior.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Run the examples in the: [MultiQueryRetriever Documentation](https://python.langchain.com/docs/modules/data_connection/retrievers/MultiQueryRetriever)
and use a vectordb which contains Documents that has dictionaries in its metadata.
### Expected behavior
TypeError: unhashable type: 'dict' will be raised. | MultiQueryRetriever (get_relevant_documents) raises TypeError: unhashable type: 'dict' with dic metadata | https://api.github.com/repos/langchain-ai/langchain/issues/9872/comments | 2 | 2023-08-28T19:15:25Z | 2023-12-04T16:04:43Z | https://github.com/langchain-ai/langchain/issues/9872 | 1,870,323,443 | 9,872 |
[
"langchain-ai",
"langchain"
] | ### Feature request
With more incoming of newer LLMs and LLM providers, either through APIs (like Open AI, Anthropic), or Local providers (like gpt4all, ctransformers, llama cpp etc). It becomes hard to keep track of and there is no uniform way os instantiating the LLM class.
For example:
```python
from langchain.llms.autollm import AutoLLM
model = AutoLLM.from_path(provider_name="gpt4all", model="orca-mini-3b.ggmlv3.q4_0.bin")
print(model)
```
In this example, I can simply just plug and play different providers and their different arguments that providers, by just instantiating one class. I took this inspiration from Hugging Face.
### Motivation
The problem arises when we are doing quick prototyping. We have to import different llms for different usage. So why not have a common interface that can solve that? Also, I tried to keep the complexity by grouping these LLMs into two types of classes.
```python
langchain.llms.AutoLLM.from_path
```
For all the local LLM providers. And
```
langchain.llms.AutoLLM.from_api
```
For all the online cloud LLM providers. In that way, it can be easily distinguishable and also helpful for the user not to go here and there to search for how different LLMs fit.
### Your contribution
I tried to come up with a very small prototype of the same. We can have a `utils.py` where we can keep these helper functions. Here is what the utils.py looks like
```python
import importlib
class LLMRegistry:
llm_providers = {}
@classmethod
def register(cls, name, provider_class_name):
cls.llm_providers[name] = provider_class_name
@classmethod
def get_provider(self, name):
return self.llm_providers.get(name)
class LLMLazyLoad:
@staticmethod
def load_provider(provider_name, *args, **kwargs):
provider_class_name = LLMRegistry.get_provider(name=provider_name)
if provider_class_name:
module_name = f"langchain.llms.{provider_name}"
module = importlib.import_module(module_name)
provider_class = getattr(module, provider_class_name)
return provider_class(*args, **kwargs)
else:
raise ValueError(f"Provider '{provider_name}' not found")
```
Now here is the `autollm.py` Where I created a very simple AutoLLM class. I made it simple just for a simple prototype purpose.
```python
from langchain.llms.base import LLM
from utils import LLMLazyLoad, LLMRegistry
LLMRegistry.register("anthropic", "Anthropic")
LLMRegistry.register("ctransformers", "CTransformers")
LLMRegistry.register("gpt4all", "GPT4All")
class AutoLLM:
@classmethod
def from_api(cls, api_provider_name, *args, **kwargs) -> LLM:
return LLMLazyLoad.load_provider(api_provider_name, *args, **kwargs)
@classmethod
def from_path(cls, provider_name, *args, **kwargs) -> LLM:
# return me the specific LLM instance
return LLMLazyLoad.load_provider(provider_name=provider_name, *args, **kwargs)
```
Here is the `main.py` file when we use the `AutoLLM` class.
```python
from autollm import AutoLLM
model = AutoLLM.from_path(provider_name="gpt4all", model="orca-mini-3b.ggmlv3.q4_0.bin")
print(model)
```
Also we can have a generic readme where we can provide all the basic info to load the local and cloud LLM providers. Here is what came in my mind.
`Readme.md`
### Langchain AutoLLM class
#### `from_path`
The `from_path` is a class method that helps us to instantiate different local LLMs. Here are the list of local LLM providers and the required arguments for each of them
#### GPT4All
Required arguments:
- `model`: This path where the model exists
Optional arguments:
- verbose: Whether to stream or not
- callbacks: Streaming callbacks
Here it is how we use with autollm class.
```python
from autollm import AutoLLM
model = AutoLLM.from_path(provider_name="gpt4all", model="orca-mini-3b.ggmlv3.q4_0.bin")
print(model("Hello world"))
```
For more information please visit the langchain-gpt4all page.
(Similarly, provide the basic info of usage for all the local/cloud providers). Well, of course, we might need a lot of improvements to provide the best user experience. However, do let me your thoughts on whether we can implement this or not.
| Langchain AutoLLM class | https://api.github.com/repos/langchain-ai/langchain/issues/9869/comments | 3 | 2023-08-28T17:10:05Z | 2023-12-06T17:43:45Z | https://github.com/langchain-ai/langchain/issues/9869 | 1,870,136,223 | 9,869 |
[
"langchain-ai",
"langchain"
] | ### System Info
python:3.11.3
langchain: 0.0.274
dist: arch
### Who can help?
```
fastapi==0.103.0
qdrant-client==1.4.0
llama-cpp-python==0.1.72
```
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [X] Async
### Reproduction
Hey @agola11,
I got this runtime warning:
`python3.11/site-packages/langchain/llms/llamacpp.py:312: RuntimeWarning: coroutine 'AsyncCallbackManagerForLLMRun.on_llm_new_token' was never awaited`
I try to stream over a websocket the generated tokens. When I try to add a `AsyncCallbackHandler` to manage this streaming and run `acall` the warning occurs and nothing is streamed out.
```python
class StreamingLLMCallbackHandler(AsyncCallbackHandler):
def __init__(self, websocket: WebSocket):
super().__init__()
self.websocket = websocket
async def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
resp = ChatResponse(sender="bot", message=token, type="stream")
await self.websocket.send_json(resp.dict())
stream_manager = AsyncCallbackManager([StreamingLLMCallbackHandler(websocket)])
llm = return LlamaCpp(
model_path="models/llama-2-7b-chat.ggmlv3.q2_K.bin",
verbose=False,
n_gpu_layers=40,
n_batch=2048,
n_ctx=2048,
callback_manager=stream_manager
)
qa_chain = RetrievalQA.from_chain_type(llm=llm,
chain_type='stuff',
retriever=vectordb.as_retriever(search_kwargs={'k': 4}),
return_source_documents=True,
chain_type_kwargs={'prompt': prompt}
)
output = await qa_chain.acall(
{
'query': user_msg,
})
```
### Expected behavior
The expected behavior is that each token is streamed sequentially over the websocket. | llama.cpp does not support AsyncCallbackHandler | https://api.github.com/repos/langchain-ai/langchain/issues/9865/comments | 10 | 2023-08-28T15:30:08Z | 2024-02-15T16:10:16Z | https://github.com/langchain-ai/langchain/issues/9865 | 1,869,988,229 | 9,865 |
[
"langchain-ai",
"langchain"
] | ### System Info
JS LangChain
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
@dosu-bot
I got this error now with my code: ModelError: Received client error (424) from primary with message "{
"code":424,
"message":"prediction failure",
"error":"Need to pass custom_attributes='accept_eula=true' as part of header. This means you have read and accept the end-user license agreement (EULA) of the model. EULA can be found in model card description or from https://ai.meta.com/res
```
const { SageMakerLLMContentHandler, SageMakerEndpoint } = require("langchain/llms/sagemaker_endpoint");
const AWS = require('aws-sdk');
AWS.config.credentials = new AWS.SharedIniFileCredentials({ profile: 'profile' });
class HuggingFaceTextGenerationGPT2ContentHandler {
constructor(contentHandler) {
this.contentHandler = contentHandler;
this.contentType = "application/json";
this.accepts = "application/json";
}
async transformInput(prompt, modelKwargs) {
const inputString = JSON.stringify({
text_inputs: prompt,
...modelKwargs,
});
return Buffer.from(inputString);
}
async transformOutput(output) {
const responseJson = JSON.parse(Buffer.from(output).toString("utf-8"));
return responseJson.generated_texts[0];
}
}
const contentHandler = new HuggingFaceTextGenerationGPT2ContentHandler(SageMakerLLMContentHandler);
const model = new SageMakerEndpoint({
endpointName: "endpointName",
modelKwargs: { temperature: 1e-10 },
contentHandler: contentHandler, // Pass the inner contentHandler
clientOptions: {
region: "region",
credentials: AWS.config.credentials,
},
});
async function main() {
const res = await model.call("Hello, my name is ");
console.log({ res });
}
main();
```
Can you show where to add the eula agreement acceptanced parameter
### Expected behavior
I expected it to pass, but it asked me to accept a EULA Agreement. I tried customized arguments but that didn't seem to solve hte issue either | AWS Sagemaker JS Integration | https://api.github.com/repos/langchain-ai/langchain/issues/9862/comments | 5 | 2023-08-28T14:37:00Z | 2024-02-10T16:18:47Z | https://github.com/langchain-ai/langchain/issues/9862 | 1,869,895,228 | 9,862 |
[
"langchain-ai",
"langchain"
] | ### Feature request
We can take advantage from pinecone parallel upsert (see example: https://docs.pinecone.io/docs/insert-data#sending-upserts-in-parallel)
This will require modification of the current `from_texts` pipeline to
1. Create a batch (chunk) for doing embeddings (ie have a chunk size of 1000 for embeddings)
2. Perform a parallel upsert to Pinecone index on that chunk
This way we are in control on 3 things:
1. Thread pool for pinecone index
2. Parametrize the batch size for embeddings (ie it helps to avoid rate limit for OpenAI embeddings)
3. Parametrize the batch size for upsert (it helps to avoid throttling of pinecone API)
As a part of this ticket, we can consolidate the code between `add_texts` and `from_texts` as they are doing the similar thing.
### Motivation
The function `from_text` and `add_text` for index upsert doesn't take advantage of parallelism especially when embeddings are calculated by HTTP calls (ie OpenAI embeddings). This makes the whole sequence inefficient from IO bound standpoint as the pipeline is following:
1. Take a small batch ie 32/64 of documents
2. Calculate embeddings --> WAIT
3. Upsert a batch --> WAIT
We can benefit from either parallel upsert or we can utilize `asyncio`.
### Your contribution
I will do it. | Support index upsert parallelization for pinecone | https://api.github.com/repos/langchain-ai/langchain/issues/9855/comments | 1 | 2023-08-28T13:09:29Z | 2023-09-03T22:37:43Z | https://github.com/langchain-ai/langchain/issues/9855 | 1,869,736,267 | 9,855 |
[
"langchain-ai",
"langchain"
] | ### System Info
274
Mac M2
this error appears often and unexpectedly
but gets solved temporarily by running a force reinstall
`pip install --upgrade --force-reinstall --no-deps --no-cache-dir langchain
`
full error
```
2023-08-28 08:16:48.197 Uncaught app exception
Traceback (most recent call last):
File "/Users/user/Developer/newfilesystem/venv/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "/Users/user/Developer/newfilesystem/pages/chat.py", line 104, in <module>
llm = ChatOpenAI(
File "/Users/user/Developer/newfilesystem/venv/lib/python3.10/site-packages/langchain/load/serializable.py", line 74, in __init__
super().__init__(**kwargs)
File "pydantic/main.py", line 339, in pydantic.main.BaseModel.__init__
File "pydantic/main.py", line 1066, in pydantic.main.validate_model
File "pydantic/fields.py", line 439, in pydantic.fields.ModelField.get_default
File "/Users/user/Developer/newfilesystem/venv/lib/python3.10/site-packages/langchain/chat_models/base.py", line 49, in _get_verbosity
return langchain.verbose
```
main librairies in the project
requests streamlit pandas colorlog python-dotenv tqdm fastapi uvicorn
langchain openai tiktoken chromadb pypdf
colorlog logger docx2txt
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [X] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
intermittent - no pattern
### Expected behavior
resolution | AttributeError: module 'langchain' has no attribute 'verbose' | https://api.github.com/repos/langchain-ai/langchain/issues/9854/comments | 27 | 2023-08-28T12:23:46Z | 2024-05-28T02:46:49Z | https://github.com/langchain-ai/langchain/issues/9854 | 1,869,658,676 | 9,854 |
[
"langchain-ai",
"langchain"
] | ### System Info
https://github.com/langchain-ai/langchain/blob/610f46d83aae6e1e25d76a0222b3158e2c4f7034/libs/langchain/langchain/vectorstores/weaviate.py
Issue in Langchain Weaviate Wrapper.
Trying to constraint the search in case of `similarity_score_threshold` ignoring the `where_filter ` filters in the langchain Weaviate wrapper.
`where_filter` implementation is missing for `similarity_score_threshold`.
The issue is with `similarity_search_with_score` function.
The fix would be to add the following after line 346, the point after the query_obj is initialized.
` if kwargs.get("where_filter"):
query_obj = query_obj.with_where(kwargs.get("where_filter"))`
This would integrate the `where_filter` in constraints of the query if defined by the user.
### Who can help?
@hwchase17
@rohitgr7
@baskaryan
@leo-gan
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://github.com/langchain-ai/langchain/blob/610f46d83aae6e1e25d76a0222b3158e2c4f7034/libs/langchain/langchain/vectorstores/weaviate.py
Issue in Langchain Weaviate Wrapper.
Trying to constraint the search in case of `similarity_score_threshold` ignoring the `where_filter ` filters in the langchain Weaviate wrapper.
`where_filter` implementation is missing for `similarity_score_threshold`.
### Expected behavior
https://github.com/langchain-ai/langchain/blob/610f46d83aae6e1e25d76a0222b3158e2c4f7034/libs/langchain/langchain/vectorstores/weaviate.py
The fix would be to add the following after line 346, the point after the query_obj is initialized.
` if kwargs.get("where_filter"):
query_obj = query_obj.with_where(kwargs.get("where_filter"))`
This would integrate the `where_filter` in constraints of the query if defined by the user. | Constraining the search using 'where_filter' in case of similarity_score_threshold for langchain Weaviate wrapper | https://api.github.com/repos/langchain-ai/langchain/issues/9853/comments | 2 | 2023-08-28T12:02:19Z | 2023-12-04T16:04:53Z | https://github.com/langchain-ai/langchain/issues/9853 | 1,869,623,805 | 9,853 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
db = SQLDatabase.from_uri(
"mysql+pyodbc://Driver={SQL Server};Server=DESKTOP-17L7UI1\SQLEXPRESS;Database=DociQDb;rusted_Connection=yes;",)
I am trying to connect to my microsoft sql server but this give me error
sqlalchemy.exc.DBAPIError: (pyodbc.Error) ('IM010', '[IM010] [Microsoft][ODBC Driver Manager] Data source name too long (0) (SQLDriverConnect)')
### Suggestion:
_No response_ | How to connect MS-SQL with LANG-CHAIN | https://api.github.com/repos/langchain-ai/langchain/issues/9848/comments | 11 | 2023-08-28T11:29:04Z | 2024-03-31T18:06:33Z | https://github.com/langchain-ai/langchain/issues/9848 | 1,869,572,223 | 9,848 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version 0.0.259
Ubuntu 20.04
Python 3.9.15
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Using regex in the CharacterTextSplitter leads to a couple of unexpected behaviours.
1. When it merges the small chunks into larger chunks, it uses the separator, leading to outputs like those below (0, 2)
2. This could be arguable, and personally I don't think it's so problematic. The number of splits could be different depending on whether `keep_separator` is True or False. (0, 1)
```python
from langchain.text_splitter import CharacterTextSplitter
import string
char_chunk = string.ascii_uppercase[:5] # ABCDE
text = "\n\n".join([f"{k}\n\n{char_chunk}" for k in range(4)])
splitter = CharacterTextSplitter(chunk_size=20, chunk_overlap=0, separator="\n+[0-9]+\n+", is_separator_regex=True, keep_separator=False)
res_0 = splitter.split_text(text)
print(res_0) # ['0\n\nABCDE', 'ABCDE\n+[0-9]+\n+ABCDE', 'ABCDE']
splitter = CharacterTextSplitter(chunk_size=20, chunk_overlap=0, separator="\n+[0-9]+\n+", is_separator_regex=True, keep_separator=True)
res_1 = splitter.split_text(text)
print(res_1) # ['0\n\nABCDE\n\n1\n\nABCDE', '2\n\nABCDE\n\n3\n\nABCDE']
splitter = CharacterTextSplitter(chunk_size=20, chunk_overlap=0, separator="\n*[0-9]+\n*", is_separator_regex=True, keep_separator=False)
res_2 = splitter.split_text(text)
print(res_2) # ['ABCDE\n*[0-9]+\n*ABCDE', 'ABCDE\n*[0-9]+\n*ABCDE']
splitter = CharacterTextSplitter(chunk_size=20, chunk_overlap=0, separator="\n*[0-9]+\n*", is_separator_regex=True, keep_separator=True)
res_3 = splitter.split_text(text)
print(res_3) # ['0\n\nABCDE\n\n1\n\nABCDE', '2\n\nABCDE\n\n3\n\nABCDE']
```
### Expected behavior
1. Use the actual characters to merge the two chunks into a larger chunk instead of the regex sepatator. I.e:
```python
# ['0\n\nABCDE', 'ABCDE\n\n2\n\nABCDE', 'ABCDE']
```
2. Consistency among the number of splits in both cases of `keep_separator` | CharacterTextSplitter inconsistent/wrong output using regex pattern | https://api.github.com/repos/langchain-ai/langchain/issues/9843/comments | 2 | 2023-08-28T09:13:20Z | 2023-12-04T16:05:03Z | https://github.com/langchain-ai/langchain/issues/9843 | 1,869,349,364 | 9,843 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Currently, when the _verbose_ attribute is set to True in the constructor of an _LLMChain_ object, only the input passed to the LLM is shown. It would be desirable to also see the raw output of the LLM before it is parsed.
### Motivation
Seeing both inputs and outputs of the LLM would help debug the chain. Exceptions often occur when an output parser is used and the parser throws an exception (because the LLM's output was not in the expected format). In that case one cannot see what the LLM's output was.
### Your contribution
I had a look at the code of LLMChain and it seems that a print (or a callback) could be added in the _call method, between the call to _generate_ and the call to _create_outputs_:
```
def _call(
self,
inputs: Dict[str, Any],
run_manager: Optional[CallbackManagerForChainRun] = None,
) -> Dict[str, str]:
response = self.generate([inputs], run_manager=run_manager)
# Add a print/callback here
return self.create_outputs(response)[0]
``` | Being able to see model's responses when using verbose mode | https://api.github.com/repos/langchain-ai/langchain/issues/9842/comments | 5 | 2023-08-28T08:47:43Z | 2023-12-04T16:05:09Z | https://github.com/langchain-ai/langchain/issues/9842 | 1,869,306,384 | 9,842 |
[
"langchain-ai",
"langchain"
] | ### System Info
**langchain 0.0.274:**
When trying to instantiate a VLLM object, I'm getting the following error:
**TypeError: Can't instantiate abstract class VLLM with abstract method _agenerate**
This is the code I'm using which is 1-1 as the VLLM example on langchain documentation:
https://python.langchain.com/docs/integrations/llms/vllm
```
from langchain.llms.vllm import VLLM
vlmm = VLLM(model="mosaicml/mpt-7b",
trust_remote_code=True, # mandatory for hf models
max_new_tokens=128,
top_k=10,
top_p=0.95,
temperature=0.8,
)
```
It seems that the VLLM model is derived from the BaseLLM object, which has an abstract method of _agenerate, but is not providing an implementation for it.
In addition to that, you might notice that I used **from langchain.llms.vllm import VLLM** instead of from **langchain.llms import VLLM** as the documentation, that's because for from langchain.llms import VLLM I'm getting an "cannot import name 'VLLM' from 'langchain.llms'" error
Any insights regarding this one?
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
instantiate a VLLM object just like in the official documentation: https://python.langchain.com/docs/integrations/llms/vllm
### Expected behavior
The object should be created and load model successfully | VLLM: Can't instantiate abstract class VLLM with abstract method _agenerate | https://api.github.com/repos/langchain-ai/langchain/issues/9841/comments | 2 | 2023-08-28T07:30:09Z | 2023-12-04T16:05:13Z | https://github.com/langchain-ai/langchain/issues/9841 | 1,869,160,370 | 9,841 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
How come there is a generative agents page https://js.langchain.com/docs/use_cases/agent_simulations/generative_agents in JS, but not one in python? I was reading this [blog post](https://blog.langchain.dev/agents-round/) and the links to the generative_agents page didn't work.
I also noticed [`MemoryVectorStore`](https://js.langchain.com/docs/api/vectorstores_memory/classes/MemoryVectorStore#methods) exists in the JS/TS docs but not in the python [`vectorstores`](https://[api.python.langchain.com/en/latest/module-langchain.vectorstores](https://api.python.langchain.com/en/latest/module-langchain.vectorstores)) doc. Why is that?
Thanks.
| DOC: Generative Agents Page In JS/TS but not Python | https://api.github.com/repos/langchain-ai/langchain/issues/9840/comments | 1 | 2023-08-28T07:04:09Z | 2023-09-07T03:07:23Z | https://github.com/langchain-ai/langchain/issues/9840 | 1,869,119,162 | 9,840 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
In https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/few_shot_examples_chat under Dynamic few-shot prompting,
```
example_selector = SemanticSimilarityExampleSelector(
vectorstore=vectorstore,
k=2,
)
# The prompt template will load examples by passing the input do the `select_examples` method
example_selector.select_examples({"input": "horse"})
```
Does the key value "input" matter when calling select_examples? I was playing around with this and it doesn't seem to change the output. Maybe more clarification could be added to both `select_examples` in the API reference and in the doc examples.
---

"instruct" is misspelled.
---
```
from langchain.prompts import (
FewShotChatMessagePromptTemplate,
ChatPromptTemplate,
)
# Define the few-shot prompt.
few_shot_prompt = FewShotChatMessagePromptTemplate(
# The input variables select the values to pass to the example_selector
input_variables=["input"],
example_selector=example_selector,
# Define how each example will be formatted.
# In this case, each example will become 2 messages:
# 1 human, and 1 AI
example_prompt=ChatPromptTemplate.from_messages(
[("human", "{input}"), ("ai", "{output}")]
),
)
```
Also, is it possible to clarify `input_variables=["input"]` a bit more and how this works downstream in the `final_prompt`? I toyed with it a bit and found it a lil confusing to understand.

_What I understood from playing around with the parameters._

_What I found out later._
Even after understanding how the variables worked here, I think a lot of the other cases weren't explained so I was still confused up till I started experimenting. Maybe more clarification can be added to this?
Thanks! | DOC: Add clarification to Modules/ModelsI/O/Prompts/Prompt Templates/Few-shot examples for chat models | https://api.github.com/repos/langchain-ai/langchain/issues/9839/comments | 2 | 2023-08-28T06:48:00Z | 2023-12-04T16:05:18Z | https://github.com/langchain-ai/langchain/issues/9839 | 1,869,095,141 | 9,839 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
At https://python.langchain.com/docs/integrations/document_loaders/news, there is an issue with the path for importing `NewsURLLoader`. Currently, it's being imported from `from langchain.document_loaders`, but it gives `importerror`.
### Idea or request for content:
The import path for `NewsURLLoader` needs to be updated. The correct path is `from langchain.document_loaders.news import NewsURLLoader` | DOC: Import path for NewsURLLoader needs to fixed | https://api.github.com/repos/langchain-ai/langchain/issues/9825/comments | 3 | 2023-08-27T14:49:04Z | 2023-12-03T16:04:21Z | https://github.com/langchain-ai/langchain/issues/9825 | 1,868,522,804 | 9,825 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hello Team,
I am using the below post from Langchain in order to use PowerBIToolkit for connecting to the dataset and tables in it. However I am not able to execute the code.
https://python.langchain.com/docs/integrations/toolkits/powerbi
I have also gone through an issue raised in this repository which resides here:
https://github.com/langchain-ai/langchain/issues/4325
But not able to pass through this issue. Still looks blocker to me as I am not able to proceed with this integration.
I am using this object which is trying to use the default credentials for the client:
ds = PowerBIDataset(
dataset_id="aec82374-5442-416f-849b-*********",
table_names=["ProductsTable"],
credential=DefaultAzureCredential(
managed_identity_client_id="678e6145-8917-49a2-********"),
)
Please suggest...
Error:
**ds = PowerBIDataset(
^^^^^^^^^^^^^^^
File "pydantic\main.py", line 339, in pydantic.main.BaseModel.__init__
File "pydantic\main.py", line 1076, in pydantic.main.validate_model
File "pydantic\fields.py", line 860, in pydantic.fields.ModelField.validate
pydantic.errors.ConfigError: field "credential" not yet prepared so type is still a ForwardRef, you might need to call PowerBIDataset.update_forward_refs().**
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
from langchain.agents.agent_toolkits import create_pbi_agent
from langchain.agents.agent_toolkits import PowerBIToolkit
from langchain.utilities.powerbi import PowerBIDataset
from langchain.chat_models import ChatOpenAI
from langchain.agents import AgentExecutor
from azure.identity import DefaultAzureCredential
from dotenv import dotenv_values
import os
from azure.core.credentials import TokenCredential
config = dotenv_values('.env')
os.environ["OPENAI_API_KEY"] = config["OPENAI_API_KEY"]
fast_llm = ChatOpenAI(
temperature=0.5, max_tokens=1000, model_name="gpt-3.5-turbo", verbose=True
)
smart_llm = ChatOpenAI(temperature=0, max_tokens=100,
model_name="gpt-4", verbose=True)
ds = PowerBIDataset(
dataset_id="aec82374-5442-416f-849b-$$$$$$$$$",
table_names=["ProductsTable"],
credential=DefaultAzureCredential(
managed_identity_client_id="678e6145-8917-49a2-bdcf-******"), # Client Id for Azure Application
)
ds.update_forward_refs()
toolkit = PowerBIToolkit(
powerbi=ds,
llm=fast_llm,
)
agent_executor = create_pbi_agent(
llm=fast_llm,
toolkit=toolkit,
verbose=True,
)
agent_executor.run("How many records are in ProductsTable?")
### Expected behavior
Should be able to query the table present in the dataset | field "credential" not yet prepared so type is still a ForwardRef, you might need to call PowerBIDataset.update_forward_refs() | https://api.github.com/repos/langchain-ai/langchain/issues/9823/comments | 5 | 2023-08-27T11:14:51Z | 2024-02-12T13:54:38Z | https://github.com/langchain-ai/langchain/issues/9823 | 1,868,456,842 | 9,823 |
[
"langchain-ai",
"langchain"
] | Hi,
I'm hoping someone smarter than me can help me understand how to write a callback that works with Elevenlabs.
I'm trying to get Elevenlabs to stream TTS based on a response from the GPT-4 API. I can do this easily using OpenAIs own libarary, but I cannot figure out how to do this using langchains callbacks instead.
Here is the working code for the OpenAI library (without the various imports etc).
```
def write(prompt: str):
for chunk in openai.ChatCompletion.create(
model = "gpt-4",
messages = [{"role":"user","content": prompt}],
stream=True,
):
content = chunk["choices"][0].get("delta", {}).get("content")
if content is not None:
yield content
print("ContentTester:", content)
promtp = "Say a long sentence"
text_stream = write(wife)
audio_stream = elevenlabs.generate(
text=text_stream,
voice="adam",
stream=True,
latency=3,
)
output = elevenlabs.stream(audio_stream)
```
I think I need to use an Async callback, but I can't get it to work. I've tried simply adding the following custom callback, but it doesn't work.
```
class MyStreamingResonseHandler(StreamingStdOutCallbackHandler):
def on_llm_new_token(self, token: str, **kwargs: Any) -> None:
yield token
```
The idea is to replicate what this part does as a callback:
```
content = chunk["choices"][0].get("delta", {}).get("content")
if content is not None:
yield content
print("ContentTester:", content)
```
This is probably a trivial thing to solve for someone more experienced, but I can't figure it out. Any help would be greatly appreciated!! | Issue: Help to make a callback for Elevenlabs API streaming endpoint | https://api.github.com/repos/langchain-ai/langchain/issues/9822/comments | 10 | 2023-08-27T10:25:39Z | 2024-02-14T16:11:28Z | https://github.com/langchain-ai/langchain/issues/9822 | 1,868,442,230 | 9,822 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I find StructuredOutputParser to not work that well with Claude. It also takes quite a few tokens to output its format instructions.
It would be great to have a built-in support for using XMLs as a meaning of transportation.
### Motivation
Claude supports and works really well with XML tags.
An example output:
> When you reply, first find exact quotes in the FAQ relevant to the user's question and write them down word for word inside <thinking></thinking> XML tags. This is a space for you to write down relevant content and will not be shown to the user. Once you are done extracting relevant quotes, answer the question. Put your answer to the user inside <answer></answer> XML tags.
As you can see, it works really well providing the answer inline the paragraph. Unlike StructuredOutputParser, we don't have to provide examples, explain the schema as well as ask to wrap the output in the markdown delimiter.
I would personally use something like regular expressions to look and parse the contents inside the tags, not forcing Claude itself to stick to any particular output format (such as "your response must be a valid XML document" etc.).
### Your contribution
I would be happy to own this feature and send a PR to TypeScript implementation. [I have already written this locally and have tested it to work quite well](https://gist.github.com/grabbou/fdd0816275968f0271e09e19b2ac82b8). Note it is a very rough implementation, as my understanding of the codebase is (at this point) rather limited. I am making my baby steps tho!
I would be happy to pair with someone on the Python side to explain how the things work with Claude. | Claude XML parser | https://api.github.com/repos/langchain-ai/langchain/issues/9820/comments | 2 | 2023-08-27T08:56:22Z | 2023-12-03T16:04:26Z | https://github.com/langchain-ai/langchain/issues/9820 | 1,868,416,901 | 9,820 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```
llm = ChatOpenAI(
max_retries=3,
temperature=0.7,
model_name="gpt-4",
streaming=True,
callbacks=[AsyncStreamingCallbackHandler],
)
```
Tool params
` return_direct=False` # Toggling this to TRUE provides the sources, but it won't work with the streaming flag, so I set it to false so the Final answer can be streamed as part of the output. This works as expect its just the Final answer has no sources which are clearly there as part of the observation section
```text
Observation:
At Burger king, you will get 50 days of paid annual leave, 50 days of paid sick leave, 12 public holidays, and two additional floating holidays for personal use. If applicable, you can also get up to six weeks of paid parental leave.
For more in-depth coverage, refer to these sources:
1. Company Benefits.pdf, page: 32
```
```
qa = RetrievalQA.from_chain_type(
llm=self.llm,
chain_type="stuff",
retriever=retriever,
return_source_documents=True,
chain_type_kwargs=chain_type_kwargs
)
```
Has any experienced missing document sources as part of the final answer
while within the context of the RetrievalQA, source documents exist as am explicitly concatenating them as part of the answer, which I expect to be part of the Final Answer.
```
result = qa({"query": question})
chat_answer = result["result"]
if self.hide_doc_sources:
return chat_answer
formatted_sources = format_sources(result["source_documents"], 2)
chat_answer = f"""
{chat_answer}\n\n
\n\n{formatted_sources}
"""
```
### Suggestion:
_No response_ | Issue: Final Answer missing Document sources when using initialize_agent RetrievalQA with Agent tool boolean flag return_direct=False | https://api.github.com/repos/langchain-ai/langchain/issues/9816/comments | 4 | 2023-08-27T01:08:29Z | 2023-12-03T16:04:31Z | https://github.com/langchain-ai/langchain/issues/9816 | 1,868,315,540 | 9,816 |
[
"langchain-ai",
"langchain"
] | ### System Info
I was just trying to run the Tagging tutorial (no code modification on colab).
https://python.langchain.com/docs/use_cases/tagging
And on the below code part,
```chain = create_tagging_chain_pydantic(Tags, llm)```
I got this error.
```
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
[<ipython-input-8-4724aee0c891>](https://localhost:8080/#) in <cell line: 1>()
----> 1 chain = create_tagging_chain_pydantic(Tags, llm)
2 frames
[/usr/local/lib/python3.10/dist-packages/pydantic/v1/main.py](https://localhost:8080/#) in __init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 2 validation errors for PydanticOutputFunctionsParser
pydantic_schema
subclass of BaseModel expected (type=type_error.subclass; expected_class=BaseModel)
pydantic_schema
value is not a valid dict (type=type_error.dict)
```
Is this a bug?
langchain version
```
!pip show langchain
Name: langchain
Version: 0.0.274
Summary: Building applications with LLMs through composability
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Location: /usr/local/lib/python3.10/dist-packages
Requires: aiohttp, async-timeout, dataclasses-json, langsmith, numexpr, numpy, pydantic, PyYAML, requests, SQLAlchemy, tenacity
Required-by:
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just by running the colab notebook on the tagging tutorial.
No modification applied.
https://python.langchain.com/docs/use_cases/tagging
### Expected behavior
Finishing running the notebook without any issues. | ValidationError: 2 validation errors for PydanticOutputFunctionsParser | https://api.github.com/repos/langchain-ai/langchain/issues/9815/comments | 21 | 2023-08-27T01:01:05Z | 2024-08-06T16:07:28Z | https://github.com/langchain-ai/langchain/issues/9815 | 1,868,314,208 | 9,815 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Hey, I'm trying to gather some empirical evidence that RetrievalQAWithSources chain often hallucinates to returns all sources rather than cite them.
Current Issue:
Assuming you have a retriever that returns 4 sources, RetrievalQAWithSources gets confused with the below prompt:
Given the following extracted parts of a long document and a question, create a final answer with references ("SOURCES").
and usually returns all the 4 sources that were fetched by the retriever.
Possible Solution:
Given the following extracted parts of a long document and a question, create a final answer and also include all references to cite your answer in CITATIONS.
### Motivation
Having it named as "SOURCES" often leads to confusion in the prompt.
"cite" involves referring to, acknowledging, or using a source or example to support a claim, idea, or statement.
### Your contribution
I can create a PR to carry forward this change, across the various places we use Citations. Please let me know the kind of evidence or information I'll have to present to make my case for it. | Renaming RetrievalQAWithSources to RetrievalQAWithCitations | https://api.github.com/repos/langchain-ai/langchain/issues/9812/comments | 2 | 2023-08-26T23:33:34Z | 2023-12-02T16:04:52Z | https://github.com/langchain-ai/langchain/issues/9812 | 1,868,296,846 | 9,812 |
[
"langchain-ai",
"langchain"
] | ### System Info
**Background**
I'm trying to run the Streamlit Callbacks example
https://python.langchain.com/docs/integrations/callbacks/streamlit
My code is copy + paste but I get an error from the LangChain library import.
**Error**
```
ImportError: cannot import name 'StreamlitCallbackHandler' from 'langchain.callbacks' (/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/site-packages/langchain/callbacks/__init__.py)
```
**What I tried**
- Upgrading LangChain `pip install langchain --upgrade`
- Reinstall `pip uninstall langchain` then `pip install langchain`
**Version**
langchain 0.0.274
streamlit 1.24.1
**System**
Mac M2 Ventura β13.4.1
### Who can help?
@hwchase17
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [X] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Run the code snippet `streamlit run app.py`
2. Error message
**Run the code with `streamlit run app.py`**
```
import streamlit as st
from langchain.llms import OpenAI
from langchain.agents import AgentType, initialize_agent, load_tools
from langchain.callbacks import StreamlitCallbackHandler
from dotenv import find_dotenv, load_dotenv
load_dotenv(find_dotenv())
st_callback = StreamlitCallbackHandler(st.container())
llm = OpenAI(temperature=0, streaming=True)
tools = load_tools(["ddg-search"])
agent = initialize_agent(
tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
if prompt := st.chat_input():
st.chat_message("user").write(prompt)
with st.chat_message("assistant"):
st_callback = StreamlitCallbackHandler(st.container())
response = agent.run(prompt, callbacks=[st_callback])
st.write(response)
```
### Expected behavior
Should look like the tutorial
<img width="844" alt="image" src="https://github.com/langchain-ai/langchain/assets/94336773/f5a5f166-4013-4c2b-9776-b268156c41f8">
| ImportError: cannot import name 'StreamlitCallbackHandler' from 'langchain.callbacks' | https://api.github.com/repos/langchain-ai/langchain/issues/9811/comments | 3 | 2023-08-26T23:22:26Z | 2023-09-19T02:01:31Z | https://github.com/langchain-ai/langchain/issues/9811 | 1,868,294,838 | 9,811 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version 0.0.273
selenium version 4.11.2
In version 4.11.2, [Selenium fully deprecated and removed executable_path parameter from webdriver](https://github.com/SeleniumHQ/selenium/commit/9f5801c82fb3be3d5850707c46c3f8176e3ccd8e) in favor of using Service class object to pass in path/executable_path parameter. This results in a crash when using SeleniumURLLoader with executable_path upon upgrading selenium to 4.11.2.
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Upgrade selenium to version 4.11.2. Load SeleniumURLLoader with executable_path parameter.
### Expected behavior
Expected Result: SeleniumURLLoader is instantiated
Actual Result: ERROR: WebDriver.__init__() got an unexpected keyword argument 'executable_path' | Issue: Selenium Webdriver parameter executable_path deprecated | https://api.github.com/repos/langchain-ai/langchain/issues/9808/comments | 4 | 2023-08-26T21:15:49Z | 2023-12-02T16:04:57Z | https://github.com/langchain-ai/langchain/issues/9808 | 1,868,270,215 | 9,808 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain 0.0.274, Python
### Who can help?
@hwchase17 @agola
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When initializing the the SagemakerEndpointEmbeddings & SagemakerEndpoint class, I pass the client but still get the error: Could not load credentials to authenticate with AWS client. Please check that credentials in the specified profile name are valid. (type=value_error)
content_handler = ContentHandler_LLM()
embeddings = SagemakerEndpoint(
client= get_sagemaker_client(),
endpoint_name= endpointName,
model_kwargs = model_kwargs ,
content_handler=content_handler,
)
content_handler = ContentHandler_Embeddings()
embeddings = SagemakerEndpointEmbeddings(
client= get_sagemaker_client(),
endpoint_name= endpointName,
content_handler=content_handler,
)
As you can see from the documentation below, the validate_environment function does not use the client that I am passing and instead tries creating its own client which causes the issue:
SOURCE: https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/sagemaker_endpoint.html#SagemakerEndpointEmbeddings
SOURCE: https://api.python.langchain.com/en/latest/_modules/langchain/llms/sagemaker_endpoint.html#SagemakerEndpoint
### Expected behavior
Passing the client works with BedrockEmbeddings class. The validate_environment function checks if there is value in client, then it just returns the existing values. You can see from below snippet in the class:
SOURCE: https://api.python.langchain.com/en/latest/_modules/langchain/embeddings/bedrock.html
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that AWS credentials to and python package exists in environment."""
if values["client"] is not None:
return values
Please fix this in both the SagemakerEndpointEmbeddings and SagemakerEndpoint classes to check if value already has client then just return the values as done in BedrockEmbeddings class above.
Meanwhile, is there a work around for this? | It should be possible to pass client to SagemakerEndpointEmbeddings & SagemakerEndpoint class | https://api.github.com/repos/langchain-ai/langchain/issues/9807/comments | 1 | 2023-08-26T21:11:58Z | 2023-12-02T16:05:02Z | https://github.com/langchain-ai/langchain/issues/9807 | 1,868,269,407 | 9,807 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
from langchain.agents import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.sql_database import SQLDatabase
from langchain.llms.openai import OpenAI
from langchain.agents import AgentExecutor
from langchain.agents.agent_types import AgentType
from langchain.chat_models import ChatOpenAI
db = SQLDatabase.from_uri("sqlite:///../../../../../notebooks/Chinook.db")
toolkit = SQLDatabaseToolkit(db=db, llm=OpenAI(temperature=0))
# how to connect to sql server instead of sqlite???
### Idea or request for content:
I need to connect with my sqlserver database, but the documention does not explain how to connect langchain database agent to sqlserver.
| DOC: How to Connect Database Agent with SqlServer | https://api.github.com/repos/langchain-ai/langchain/issues/9804/comments | 20 | 2023-08-26T17:45:27Z | 2024-08-06T16:07:46Z | https://github.com/langchain-ai/langchain/issues/9804 | 1,868,188,378 | 9,804 |
[
"langchain-ai",
"langchain"
] | ### System Info
Google Auth version 2.22.0
Python version 3.8.8
langchain version: 0.0.273
### Who can help?
@eyurtsev the GoogleDriveLoader does not seem to be working as expected. I get a error of AttributeError: 'Credentials' object has no attribute 'with_scopes'.
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I run the code below (and many other versions of the Google Drive code) to import data from Google Drive. However, no matter what I try I get an error of "AttributeError: 'Credentials' object has no attribute 'with_scopes'". The troubleshooting seems to indicate incompatibility with version of Google Auth and Python, but the versions are compatible. I have gone through all the documentation and tutorials online about connecting Google Drive to Langchain, and I have complete and verified every step multiple times. I am at a loss of what else to try to get langchain to connect to Google Drive. Thanks in advance for your help!!!
from langchain.chat_models import ChatOpenAI
from langchain.chains import RetrievalQA
from langchain.document_loaders import GoogleDriveLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import Chroma
folder_id = "1ExuF7GUaeDzJpuDn8ThH6t8LBcmjAKE_"
loader = GoogleDriveLoader(
folder_id=folder_id,
recursive=False
)
docs = loader.load()
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=4000, chunk_overlap=0, separators=[" ", ",", "\n"]
)
texts = text_splitter.split_documents(docs)
embeddings = OpenAIEmbeddings()
db = Chroma.from_documents(texts, embeddings)
retriever = db.as_retriever()
llm = ChatOpenAI(temperature=0, model_name="gpt-3.5-turbo")
qa = RetrievalQA.from_chain_type(llm=llm, chain_type="stuff", retriever=retriever)
while True:
query = input("> ")
answer = qa.run(query)
print(answer)
### Expected behavior
I expect langchain to be able to connect to me Google Drive files and folders. | GoogleDriveLoader - AttributeError: 'Credentials' object has no attribute 'with_scopes' | https://api.github.com/repos/langchain-ai/langchain/issues/9803/comments | 2 | 2023-08-26T17:11:56Z | 2023-08-26T19:09:12Z | https://github.com/langchain-ai/langchain/issues/9803 | 1,868,177,575 | 9,803 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
May I inquire whether the vector stroe accommodates the provision for customized database connection pooling?
Typically, opening a database connection is an expensive operation, especially if the database is remote. Pooling keeps the connections active so that, when a connection is later requested, one of the active ones is used in preference to having to create another one.
### Suggestion:
Maybe it could be like this?
```python
engine = sqlalchemy.create_engine(connection_string, pool_size=10)
conn = engine.connect()
store = PGVector(
custom_connection=conn,
embedding_function=embeddings
)
``` | Issue: vector store supports custom database connections? | https://api.github.com/repos/langchain-ai/langchain/issues/9802/comments | 1 | 2023-08-26T17:04:57Z | 2023-10-28T07:03:20Z | https://github.com/langchain-ai/langchain/issues/9802 | 1,868,175,495 | 9,802 |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.