issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 261k ⌀ | issue_title stringlengths 1 925 | issue_comments_url stringlengths 56 81 | issue_comments_count int64 0 2.5k | issue_created_at stringlengths 20 20 | issue_updated_at stringlengths 20 20 | issue_html_url stringlengths 37 62 | issue_github_id int64 387k 2.46B | issue_number int64 1 127k |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```shell
% python3 --version
Python 3.11.9
% pip install 'langchain[all]'
```
### Error Message and Stack Trace (if applicable)
ERROR: Could not build wheels for greenlet, which is required to install pyproject.toml-based projects
### Description
ERROR: Could not build wheels for greenlet, which is required to install pyproject.toml-based projects
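This error usually means pip fell back to building greenlet from source and the build environment is missing pieces (most often a C toolchain on macOS, or no prebuilt wheel for the Python/architecture combination). A small stdlib sketch to collect the relevant facts (the macOS hint is an assumption about this setup):

```python
import platform
import shutil

def diagnose_wheel_build_env():
    """Collect hints about why a C-extension wheel (like greenlet's) fails to build."""
    hints = []
    if not any(shutil.which(c) for c in ("cc", "clang", "gcc")):
        hints.append("no C compiler on PATH (on macOS: xcode-select --install)")
    hints.append(f"python {platform.python_version()} on {platform.machine()}")
    return hints

print(diagnose_wheel_build_env())
```

If a compiler is present, upgrading pip (`python -m pip install --upgrade pip`) can also help, since newer pip versions pick up prebuilt greenlet wheels where they exist.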
### System Info
% python -m langchain_core.sys_info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 22.6.0: Mon Feb 19 19:48:53 PST 2024; root:xnu-8796.141.3.704.6~1/RELEASE_X86_64
> Python Version: 3.11.9 (main, Apr 19 2024, 11:44:45) [Clang 14.0.6 ]
Package Information
-------------------
> langchain_core: 0.2.10
> langsmith: 0.1.82
> langchain_groq: 0.1.6
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| ERROR: Could not build wheels for greenlet, which is required to install pyproject.toml-based projects | https://api.github.com/repos/langchain-ai/langchain/issues/23682/comments | 1 | 2024-06-30T08:34:16Z | 2024-06-30T08:39:22Z | https://github.com/langchain-ai/langchain/issues/23682 | 2,382,149,439 | 23,682 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.utilities import SQLDatabase
host = config["DEFAULT"]["DB_HOST"]
port = config["DEFAULT"]["DB_PORT"]
db_name = config["DEFAULT"]["DB_NAME"]
username = config["DEFAULT"]["DB_USERNAME"]
password = os.getenv(config["DEFAULT"]["DB_PASSWORD"])
url = f"postgresql+psycopg2://{username}:{password}@{host}:{port}/{db_name}"
include_tables = [
"schema",
"schema_field"
]
db = SQLDatabase.from_uri(url, include_tables=include_tables)
```
### Error Message and Stack Trace (if applicable)
```console
{
"name": "ValueError",
"message": "include_tables {'schema_field', 'schema'} not found in database",
"stack": "---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[15], line 16
9 url = f\"postgresql+psycopg2://{username}:{password}@{host}:{port}/{db_name}\"
11 include_tables = [
12 \"schema\",
13 \"schema_field\"
14 ]
---> 16 db = SQLDatabase.from_uri(url, include_tables=include_tables)
File ~/mambaforge/envs/langchain/lib/python3.12/site-packages/langchain_community/utilities/sql_database.py:135, in SQLDatabase.from_uri(cls, database_uri, engine_args, **kwargs)
133 \"\"\"Construct a SQLAlchemy engine from URI.\"\"\"
134 _engine_args = engine_args or {}
--> 135 return cls(create_engine(database_uri, **_engine_args), **kwargs)
File ~/mambaforge/envs/langchain/lib/python3.12/site-packages/langchain_community/utilities/sql_database.py:82, in SQLDatabase.__init__(self, engine, schema, metadata, ignore_tables, include_tables, sample_rows_in_table_info, indexes_in_table_info, custom_table_info, view_support, max_string_length, lazy_table_reflection)
80 missing_tables = self._include_tables - self._all_tables
81 if missing_tables:
---> 82 raise ValueError(
83 f\"include_tables {missing_tables} not found in database\"
84 )
85 self._ignore_tables = set(ignore_tables) if ignore_tables else set()
86 if self._ignore_tables:
ValueError: include_tables {'schema_field', 'schema'} not found in database"
}
```
### Description
Using:
```python
include_tables = [
"schema$raw",
"schema_field$raw"
]
```
instead of the table names without the `$raw` suffix works.
In this PostgreSQL database, there is a `$raw` version of each table (e.g., `schema$raw` and `schema`). The `$raw` version includes all records, while the non-raw version contains the "cleaned up" data.
It appears that `langchain_community.utilities.SQLDatabase` is not handling this situation properly: it only seems to detect the `$raw` tables.
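To narrow down where the filtering happens, it helps to list the table names directly at the SQL level. A minimal stdlib sketch using an in-memory SQLite database as a stand-in for the real PostgreSQL instance (the `$raw` naming mirrors the layout described above; against PostgreSQL the analogous check would be SQLAlchemy's `inspect(engine).get_table_names()`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Simulate the reported layout: a "$raw" table alongside its cleaned-up twin.
conn.execute('CREATE TABLE "schema$raw" (id INTEGER)')
conn.execute('CREATE TABLE "schema" (id INTEGER)')

names = [row[0] for row in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
)]
print(names)  # both names are visible at the SQL level
```

If the equivalent query against PostgreSQL returns both names but `get_table_names()` only returns the `$raw` ones, the tables are likely filtered during reflection; for example, if the non-raw objects are actually views, `SQLDatabase` only picks them up with `view_support=True`.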
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.4.0: Fri Mar 15 00:12:49 PDT 2024; root:xnu-10063.101.17~1/RELEASE_ARM64_T6020
> Python Version: 3.12.3 | packaged by conda-forge | (main, Apr 15 2024, 18:35:20) [Clang 16.0.6 ]
Package Information
-------------------
> langchain_core: 0.2.1
> langchain: 0.2.1
> langchain_community: 0.0.38
> langsmith: 0.1.77
> langchain_groq: 0.1.5
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.0
> langgraph: 0.0.69
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve | langchain_community.utilities.SQLDatabase: include_tables {...} not found in database | https://api.github.com/repos/langchain-ai/langchain/issues/23672/comments | 3 | 2024-06-29T20:58:02Z | 2024-07-08T17:05:58Z | https://github.com/langchain-ai/langchain/issues/23672 | 2,381,956,295 | 23,672 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.vectorstores import Chroma
```
results in
```
TypeError: typing.ClassVar[typing.Collection[str]] is not valid as type argument
```
### Error Message and Stack Trace (if applicable)
--------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[3], line 6
4 import matplotlib.pyplot as plt
5 from langchain_community.embeddings import HuggingFaceEmbeddings
----> 6 from langchain_community.vectorstores import Chroma
7 from tqdm import tqdm
8 from datasets import load_dataset
File <frozen importlib._bootstrap>:1075, in _handle_fromlist(module, fromlist, import_, recursive)
File /home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/langchain_community/vectorstores/__init__.py:514, in __getattr__(name)
512 def __getattr__(name: str) -> Any:
513 if name in _module_lookup:
--> 514 module = importlib.import_module(_module_lookup[name])
515 return getattr(module, name)
516 raise AttributeError(f"module {__name__} has no attribute {name}")
File /home/user/miniconda3/envs/textgen/lib/python3.10/importlib/__init__.py:126, in import_module(name, package)
124 break
125 level += 1
--> 126 return _bootstrap._gcd_import(name[level:], package, level)
File /home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/langchain_community/vectorstores/chroma.py:23
21 from langchain_core.embeddings import Embeddings
22 from langchain_core.utils import xor_args
---> 23 from langchain_core.vectorstores import VectorStore
25 from langchain_community.vectorstores.utils import maximal_marginal_relevance
27 if TYPE_CHECKING:
File /home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/langchain_core/vectorstores.py:755
751 tags = kwargs.pop("tags", None) or [] + self._get_retriever_tags()
752 return VectorStoreRetriever(vectorstore=self, tags=tags, **kwargs)
--> 755 class VectorStoreRetriever(BaseRetriever):
756 """Base Retriever class for VectorStore."""
758 vectorstore: VectorStore
File /home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/pydantic/main.py:282, in ModelMetaclass.__new__(mcs, name, bases, namespace, **kwargs)
279 return isinstance(v, untouched_types) or v.__class__.__name__ == 'cython_function_or_method'
281 if (namespace.get('__module__'), namespace.get('__qualname__')) != ('pydantic.main', 'BaseModel'):
--> 282 annotations = resolve_annotations(namespace.get('__annotations__', {}), namespace.get('__module__', None))
283 # annotation only fields need to come first in fields
284 for ann_name, ann_type in annotations.items():
File /home/user/miniconda3/envs/textgen/lib/python3.10/site-packages/pydantic/typing.py:287, in resolve_annotations(raw_annotations, module_name)
285 value = ForwardRef(value)
286 try:
--> 287 value = _eval_type(value, base_globals, None)
288 except NameError:
289 # this is ok, it can be fixed with update_forward_refs
290 pass
File /home/user/miniconda3/envs/textgen/lib/python3.10/typing.py:327, in _eval_type(t, globalns, localns, recursive_guard)
321 """Evaluate all forward references in the given type t.
322 For use of globalns and localns see the docstring for get_type_hints().
323 recursive_guard is used to prevent infinite recursion with a recursive
324 ForwardRef.
325 """
326 if isinstance(t, ForwardRef):
--> 327 return t._evaluate(globalns, localns, recursive_guard)
328 if isinstance(t, (_GenericAlias, GenericAlias, types.UnionType)):
329 ev_args = tuple(_eval_type(a, globalns, localns, recursive_guard) for a in t.__args__)
File /home/user/miniconda3/envs/textgen/lib/python3.10/typing.py:693, in ForwardRef._evaluate(self, globalns, localns, recursive_guard)
689 if self.__forward_module__ is not None:
690 globalns = getattr(
691 sys.modules.get(self.__forward_module__, None), '__dict__', globalns
692 )
--> 693 type_ = _type_check(
694 eval(self.__forward_code__, globalns, localns),
695 "Forward references must evaluate to types.",
696 is_argument=self.__forward_is_argument__,
697 allow_special_forms=self.__forward_is_class__,
698 )
699 self.__forward_value__ = _eval_type(
700 type_, globalns, localns, recursive_guard | {self.__forward_arg__}
701 )
702 self.__forward_evaluated__ = True
File /home/user/miniconda3/envs/textgen/lib/python3.10/typing.py:167, in _type_check(arg, msg, is_argument, module, allow_special_forms)
164 arg = _type_convert(arg, module=module, allow_special_forms=allow_special_forms)
165 if (isinstance(arg, _GenericAlias) and
166 arg.__origin__ in invalid_generic_forms):
--> 167 raise TypeError(f"{arg} is not valid as type argument")
168 if arg in (Any, NoReturn, Final, TypeAlias):
169 return arg
### Description
After upgrading with `pip install -U langchain_community`, I am still getting the error.
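The frames in the stack trace run through pydantic v1's `resolve_annotations`, so one likely culprit is an old `pydantic` pinned in the environment alongside a newer `langchain-core`. A quick stdlib version check (the `1.10` threshold is an assumption, not a documented requirement):

```python
from importlib import metadata

def parse_version(v):
    """Parse an 'X.Y.Z' version string into a tuple of ints, ignoring suffixes."""
    parts = []
    for piece in v.split(".")[:3]:
        num = ""
        for ch in piece:
            if ch.isdigit():
                num += ch
            else:
                break
        parts.append(int(num) if num else 0)
    return tuple(parts)

def pydantic_looks_too_old(installed, minimum=(1, 10, 0)):
    return parse_version(installed) < minimum

# Example check against the installed package (skipped if pydantic is absent):
try:
    print(pydantic_looks_too_old(metadata.version("pydantic")))
except metadata.PackageNotFoundError:
    print("pydantic not installed")
```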
### System Info
Python 3.10
Oracle Linux 8 | TypeError: typing.ClassVar[typing.Collection[str]] is not valid as type argument | https://api.github.com/repos/langchain-ai/langchain/issues/23664/comments | 1 | 2024-06-29T16:00:09Z | 2024-06-29T16:08:15Z | https://github.com/langchain-ai/langchain/issues/23664 | 2,381,827,643 | 23,664 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
loader = TextLoader(file_path)
# loader = Docx2txtLoader(file_path)
documents = loader.load() # + docx_documents
print("texts doc: =============================")
print(type(documents))
text_splitter = RecursiveCharacterTextSplitter(
chunk_size=800, chunk_overlap=200)
# text_splitter = TokenTextSplitter(chunk_size=512, chunk_overlap=24)
texts = text_splitter.split_documents(documents)
graph = Neo4jGraph()
llm_transformer = LLMGraphTransformer(llm=model)
print("===================load llm_transformer!=========================")
graph_documents = llm_transformer.convert_to_graph_documents(texts)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
```
Traceback (most recent call last):
  File "/work/baichuan/script/langchain/graphRag.py", line 225, in <module>
    graph_documents = llm_transformer.convert_to_graph_documents(texts)
  File "/root/miniconda3/envs/rag/lib/python3.10/site-packages/langchain_experimental/graph_transformers/llm.py", line 762, in convert_to_graph_documents
    return [self.process_response(document) for document in documents]
  File "/root/miniconda3/envs/rag/lib/python3.10/site-packages/langchain_experimental/graph_transformers/llm.py", line 762, in <listcomp>
    return [self.process_response(document) for document in documents]
  File "/root/miniconda3/envs/rag/lib/python3.10/site-packages/langchain_experimental/graph_transformers/llm.py", line 714, in process_response
    nodes_set.add((rel["head"], rel["head_type"]))
TypeError: list indices must be integers or slices, not str
```
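The traceback shows `process_response` indexing `rel["head"]` on a value that is actually a `list`, i.e. the model returned relationships in a different shape than the dict the transformer expects. As a stopgap, the parsed output can be normalized before it reaches the transformer; a hedged sketch (the key names come from the traceback, the list-wrapping behavior is an assumption about this model's output):

```python
def normalize_relationships(parsed):
    """Keep only dict-shaped relationship entries; flatten one level of nesting.

    The expected shape (per the traceback) is a dict with at least
    'head' and 'head_type' keys; anything else is skipped rather than
    crashing with `list indices must be integers or slices, not str`.
    """
    rels = []
    for rel in parsed:
        if isinstance(rel, list):  # some models wrap each relation in a list
            rels.extend(r for r in rel if isinstance(r, dict))
        elif isinstance(rel, dict):
            rels.append(rel)
    return [r for r in rels if "head" in r and "head_type" in r]

# Mixed output as reported: one proper dict, one list-wrapped dict, one junk entry
sample = [
    {"head": "Gene", "head_type": "Concept", "tail": "DNA", "tail_type": "Concept"},
    [{"head": "DNA", "head_type": "Concept"}],
    ["unexpected", "list"],
]
print(len(normalize_relationships(sample)))  # -> 2
```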
### System Info
# Name Version Build Channel
_libgcc_mutex 0.1 conda_forge conda-forge
_openmp_mutex 4.5 2_gnu conda-forge
absl-py 2.1.0 pypi_0 pypi
accelerate 0.21.0 pypi_0 pypi
addict 2.4.0 pypi_0 pypi
aiofiles 23.2.1 pypi_0 pypi
aiohttp 3.9.5 py310h2372a71_0 conda-forge
aiosignal 1.3.1 pyhd8ed1ab_0 conda-forge
altair 5.3.0 pypi_0 pypi
annotated-types 0.7.0 pyhd8ed1ab_0 conda-forge
anyio 4.3.0 pyhd8ed1ab_0 conda-forge
astunparse 1.6.2 pypi_0 pypi
async-timeout 4.0.3 pyhd8ed1ab_0 conda-forge
attrs 23.2.0 pyh71513ae_0 conda-forge
backoff 2.2.1 pypi_0 pypi
beautifulsoup4 4.12.3 pypi_0 pypi
bitsandbytes 0.41.0 pypi_0 pypi
blas 1.0 mkl anaconda
blinker 1.8.2 pypi_0 pypi
brotli-python 1.0.9 py310hd8f1fbe_7 conda-forge
bzip2 1.0.8 h5eee18b_6
ca-certificates 2024.3.11 h06a4308_0
certifi 2024.2.2 py310h06a4308_0
chardet 5.2.0 pypi_0 pypi
charset-normalizer 3.3.2 pyhd8ed1ab_0 conda-forge
click 8.1.7 pypi_0 pypi
cmake 3.29.3 pypi_0 pypi
contourpy 1.2.1 pypi_0 pypi
cudatoolkit 11.4.2 h7a5bcfd_10 conda-forge
cycler 0.12.1 pypi_0 pypi
dataclasses-json 0.6.6 pyhd8ed1ab_0 conda-forge
datasets 2.14.7 pypi_0 pypi
deepdiff 7.0.1 pypi_0 pypi
deepspeed 0.9.5 pypi_0 pypi
dill 0.3.7 pypi_0 pypi
dnspython 2.6.1 pypi_0 pypi
docstring-parser 0.16 pypi_0 pypi
docx2txt 0.8 pypi_0 pypi
einops 0.8.0 pypi_0 pypi
email-validator 2.1.1 pypi_0 pypi
emoji 2.12.1 pypi_0 pypi
exceptiongroup 1.2.1 pypi_0 pypi
faiss 1.7.3 py310cuda112hae2f2aa_0_cuda conda-forge
faiss-gpu 1.7.3 h5b0ac8e_0 conda-forge
fastapi 0.111.0 pypi_0 pypi
fastapi-cli 0.0.4 pypi_0 pypi
ffmpy 0.3.2 pypi_0 pypi
filelock 3.14.0 pypi_0 pypi
filetype 1.2.0 pypi_0 pypi
flask 3.0.3 pypi_0 pypi
flask-cors 4.0.1 pypi_0 pypi
fonttools 4.52.1 pypi_0 pypi
frozenlist 1.4.1 py310h2372a71_0 conda-forge
fsspec 2023.10.0 pypi_0 pypi
gradio-client 0.17.0 pypi_0 pypi
greenlet 1.1.2 py310hd8f1fbe_2 conda-forge
grpcio 1.64.0 pypi_0 pypi
h11 0.14.0 pypi_0 pypi
hjson 3.1.0 pypi_0 pypi
httpcore 1.0.5 pypi_0 pypi
httptools 0.6.1 pypi_0 pypi
httpx 0.27.0 pypi_0 pypi
huggingface-hub 0.17.3 pypi_0 pypi
idna 3.7 pyhd8ed1ab_0 conda-forge
importlib-metadata 7.1.0 pypi_0 pypi
importlib-resources 6.4.0 pypi_0 pypi
intel-openmp 2021.4.0 h06a4308_3561 anaconda
itsdangerous 2.2.0 pypi_0 pypi
jinja2 3.1.4 pypi_0 pypi
joblib 1.2.0 py310h06a4308_0 anaconda
json-repair 0.25.2 pypi_0 pypi
jsonpatch 1.33 pyhd8ed1ab_0 conda-forge
jsonpath-python 1.0.6 pypi_0 pypi
jsonpointer 2.4 py310hff52083_3 conda-forge
jsonschema 4.22.0 pypi_0 pypi
jsonschema-specifications 2023.12.1 pypi_0 pypi
kiwisolver 1.4.5 pypi_0 pypi
langchain 0.2.6 pypi_0 pypi
langchain-community 0.2.6 pypi_0 pypi
langchain-core 0.2.10 pypi_0 pypi
langchain-experimental 0.0.62 pypi_0 pypi
langchain-text-splitters 0.2.2 pypi_0 pypi
langdetect 1.0.9 pypi_0 pypi
langsmith 0.1.82 pypi_0 pypi
ld_impl_linux-64 2.38 h1181459_1
libblas 3.9.0 12_linux64_mkl conda-forge
libfaiss 1.7.3 cuda112hb18a002_0_cuda conda-forge
libfaiss-avx2 1.7.3 cuda112h1234567_0_cuda conda-forge
libffi 3.4.4 h6a678d5_1
libgcc-ng 13.2.0 h77fa898_7 conda-forge
libgfortran-ng 7.5.0 ha8ba4b0_17
libgfortran4 7.5.0 ha8ba4b0_17
libgomp 13.2.0 h77fa898_7 conda-forge
liblapack 3.9.0 12_linux64_mkl conda-forge
libstdcxx-ng 13.2.0 hc0a3c3a_7 conda-forge
libuuid 1.41.5 h5eee18b_0
lit 18.1.6 pypi_0 pypi
loguru 0.7.0 pypi_0 pypi
lxml 5.2.2 pypi_0 pypi
markdown 3.6 pypi_0 pypi
markdown-it-py 3.0.0 pypi_0 pypi
markupsafe 2.1.5 pypi_0 pypi
marshmallow 3.21.2 pyhd8ed1ab_0 conda-forge
matplotlib 3.8.4 pypi_0 pypi
mdurl 0.1.2 pypi_0 pypi
mkl 2021.4.0 h06a4308_640 anaconda
mkl-service 2.4.0 py310h7f8727e_0 anaconda
mkl_fft 1.3.1 py310hd6ae3a3_0 anaconda
mkl_random 1.2.2 py310h00e6091_0 anaconda
mmengine 0.10.4 pypi_0 pypi
mpi 1.0 mpich
mpi4py 3.1.4 py310hfc96bbd_0
mpich 3.3.2 hc856adb_0
mpmath 1.3.0 pypi_0 pypi
multidict 6.0.5 py310h2372a71_0 conda-forge
multiprocess 0.70.15 pypi_0 pypi
mypy_extensions 1.0.0 pyha770c72_0 conda-forge
ncurses 6.4 h6a678d5_0
neo4j 5.22.0 pypi_0 pypi
networkx 3.3 pypi_0 pypi
ninja 1.11.1.1 pypi_0 pypi
nltk 3.8.1 pypi_0 pypi
numpy 1.21.4 pypi_0 pypi
numpy-base 1.24.3 py310h8e6c178_0 anaconda
nvidia-cublas-cu11 11.10.3.66 pypi_0 pypi
nvidia-cuda-cupti-cu11 11.7.101 pypi_0 pypi
nvidia-cuda-nvrtc-cu11 11.7.99 pypi_0 pypi
nvidia-cuda-runtime-cu11 11.7.99 pypi_0 pypi
nvidia-cudnn-cu11 8.5.0.96 pypi_0 pypi
nvidia-cufft-cu11 10.9.0.58 pypi_0 pypi
nvidia-curand-cu11 10.2.10.91 pypi_0 pypi
nvidia-cusolver-cu11 11.4.0.1 pypi_0 pypi
nvidia-cusparse-cu11 11.7.4.91 pypi_0 pypi
nvidia-nccl-cu11 2.14.3 pypi_0 pypi
nvidia-nvtx-cu11 11.7.91 pypi_0 pypi
opencv-python 4.9.0.80 pypi_0 pypi
openssl 3.3.0 h4ab18f5_3 conda-forge
ordered-set 4.1.0 pypi_0 pypi
orjson 3.10.3 py310he421c4c_0 conda-forge
packaging 24.0 pypi_0 pypi
pandas 1.2.5 pypi_0 pypi
peft 0.4.0 pypi_0 pypi
pillow 10.3.0 pypi_0 pypi
pip 24.0 py310h06a4308_0
platformdirs 4.2.2 pypi_0 pypi
protobuf 5.27.0 pypi_0 pypi
psutil 5.9.8 pypi_0 pypi
py-cpuinfo 9.0.0 pypi_0 pypi
pyarrow 16.1.0 pypi_0 pypi
pyarrow-hotfix 0.6 pypi_0 pypi
pydantic 2.7.3 pypi_0 pypi
pydantic-core 2.18.4 pypi_0 pypi
pydub 0.25.1 pypi_0 pypi
pygments 2.18.0 pypi_0 pypi
pyparsing 3.1.2 pypi_0 pypi
pypdf 4.2.0 pypi_0 pypi
pyre-extensions 0.0.29 pypi_0 pypi
pysocks 1.7.1 pyha2e5f31_6 conda-forge
python 3.10.14 h955ad1f_1
python-dateutil 2.9.0.post0 pypi_0 pypi
python-dotenv 1.0.1 pypi_0 pypi
python-iso639 2024.4.27 pypi_0 pypi
python-magic 0.4.27 pypi_0 pypi
python-multipart 0.0.9 pypi_0 pypi
python_abi 3.10 2_cp310 conda-forge
pytz 2024.1 pypi_0 pypi
pyyaml 6.0.1 py310h2372a71_1 conda-forge
rapidfuzz 3.9.3 pypi_0 pypi
readline 8.2 h5eee18b_0
referencing 0.35.1 pypi_0 pypi
regex 2024.5.15 pypi_0 pypi
requests 2.32.2 pyhd8ed1ab_0 conda-forge
rich 13.7.1 pypi_0 pypi
rpds-py 0.18.1 pypi_0 pypi
ruff 0.4.7 pypi_0 pypi
safetensors 0.4.3 pypi_0 pypi
scikit-learn 1.3.0 py310h1128e8f_0 anaconda
scipy 1.10.1 pypi_0 pypi
semantic-version 2.10.0 pypi_0 pypi
sentence-transformers 2.7.0 pypi_0 pypi
sentencepiece 0.2.0 pypi_0 pypi
setuptools 70.0.0 pypi_0 pypi
shellingham 1.5.4 pypi_0 pypi
shtab 1.7.1 pypi_0 pypi
six 1.16.0 pyhd3eb1b0_1 anaconda
sniffio 1.3.1 pyhd8ed1ab_0 conda-forge
soupsieve 2.5 pypi_0 pypi
sqlalchemy 2.0.30 py310hc51659f_0 conda-forge
sqlite 3.45.3 h5eee18b_0
starlette 0.37.2 pypi_0 pypi
sympy 1.12 pypi_0 pypi
tabulate 0.9.0 pypi_0 pypi
tenacity 8.3.0 pyhd8ed1ab_0 conda-forge
tensorboard 2.16.2 pypi_0 pypi
tensorboard-data-server 0.7.2 pypi_0 pypi
termcolor 2.4.0 pypi_0 pypi
threadpoolctl 2.2.0 pyh0d69192_0 anaconda
tiktoken 0.7.0 pypi_0 pypi
tk 8.6.14 h39e8969_0
tokenizers 0.14.1 pypi_0 pypi
tomli 2.0.1 pypi_0 pypi
tomlkit 0.12.0 pypi_0 pypi
toolz 0.12.1 pypi_0 pypi
torch 2.0.0 pypi_0 pypi
tqdm 4.62.3 pypi_0 pypi
transformers 4.34.0 pypi_0 pypi
transformers-stream-generator 0.0.5 pypi_0 pypi
triton 2.0.0 pypi_0 pypi
trl 0.7.11 pypi_0 pypi
typer 0.12.3 pypi_0 pypi
typing-extensions 4.9.0 pypi_0 pypi
typing_inspect 0.9.0 pyhd8ed1ab_0 conda-forge
tyro 0.8.4 pypi_0 pypi
tzdata 2024a h04d1e81_0
ujson 5.10.0 pypi_0 pypi
unstructured 0.14.4 pypi_0 pypi
unstructured-client 0.22.0 pypi_0 pypi
urllib3 2.2.1 pyhd8ed1ab_0 conda-forge
uvicorn 0.30.1 pypi_0 pypi
uvloop 0.19.0 pypi_0 pypi
watchfiles 0.22.0 pypi_0 pypi
websockets 11.0.3 pypi_0 pypi
werkzeug 3.0.3 pypi_0 pypi
wheel 0.43.0 py310h06a4308_0
wikipedia 1.4.0 pypi_0 pypi
wrapt 1.16.0 pypi_0 pypi
xformers 0.0.19 pypi_0 pypi
xxhash 3.4.1 pypi_0 pypi
xz 5.4.6 h5eee18b_1
yaml 0.2.5 h7f98852_2 conda-forge
yapf 0.40.2 pypi_0 pypi
yarl 1.9.4 py310h2372a71_0 conda-forge
zipp 3.18.2 pypi_0 pypi
zlib 1.2.13 h5eee18b_1
| llm_transformer.convert_to_graph_documents TypeError: list indices must be integers or slices, not str | https://api.github.com/repos/langchain-ai/langchain/issues/23661/comments | 17 | 2024-06-29T08:45:40Z | 2024-07-23T16:08:21Z | https://github.com/langchain-ai/langchain/issues/23661 | 2,381,588,395 | 23,661 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/chat/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
ChatHuggingFace does not support structured output, and raises a `NotImplementedError`
### Idea or request for content:
_No response_ | DOC: ChatHuggingFace incorrectly marked as supporting structured output | https://api.github.com/repos/langchain-ai/langchain/issues/23660/comments | 8 | 2024-06-29T05:30:34Z | 2024-07-05T23:08:34Z | https://github.com/langchain-ai/langchain/issues/23660 | 2,381,507,664 | 23,660 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
We observe performance differences between `model.bind_tools(tools, tool_choice="any")` and `model.bind_tools(tools, tool_choice=tool_name)` when `len(tools) == 1`.
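Conceptually the change is to resolve `tool_choice` from the bound tool list before the request is built. A provider-agnostic sketch of just that selection logic (the function name and tool representation are illustrative, not the final API):

```python
def resolve_tool_choice(tools, requested="any"):
    """Pick an explicit tool name when exactly one tool is bound.

    `tools` is a list of dicts with a "name" key. With a single tool,
    forcing that tool by name performs better than "any" on some providers.
    """
    if requested == "any" and len(tools) == 1:
        return tools[0]["name"]
    return requested

print(resolve_tool_choice([{"name": "GradeDocuments"}]))            # -> GradeDocuments
print(resolve_tool_choice([{"name": "search"}, {"name": "math"}]))  # -> any
```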
Implementations would need to be provider specific. | Update with_structured_output to explicitly pass tool name | https://api.github.com/repos/langchain-ai/langchain/issues/23644/comments | 0 | 2024-06-28T21:00:18Z | 2024-07-16T15:32:51Z | https://github.com/langchain-ai/langchain/issues/23644 | 2,381,192,142 | 23,644 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
%pip install langchain==0.2.6
%pip install pymilvus==2.4.4
%pip install langchain_milvus==0.1.1
from langchain_milvus.vectorstores import Milvus
b2 = 10000
b1 = 4
loops, remainder = divmod(len(docs), b2)
while loops >=0:
print(b1 ,b2)
_docs = docs[b1:b2]
db = Milvus.from_documents(documents=_docs,embedding= embed_model, collection_name ='gene', connection_args={'db_name':'<db>','user':dbutils.secrets.get(scope = "milvus" , key = 'MILVUS_USER'), 'password':dbutils.secrets.get(scope = "milvus" , key = 'MILVUS_PASSWORD'), 'host': dbutils.secrets.get(scope = "milvus" , key = 'MILVUS_HOST'), 'port': dbutils.secrets.get(scope = "milvus" , key = 'MILVUS_PORT')}
)
loops -= 1
b1 = b2+1
b2 += 10000
print('done')
db.similarity_search(<query>) #This Works
#############################Now Establishing a Connection First before testing out Similarity Search###################
##Loading the Collection we created earlier, using 'from_documents'
db = Milvus(collection_name= 'gene', embedding_function=embed_model, connection_args={'db_name': '<db>','user':dbutils.secrets.get(scope = "milvus" , key = 'MILVUS_USER'), 'password':dbutils.secrets.get(scope = "milvus" , key = 'MILVUS_PASSWORD'), 'host': dbutils.secrets.get(scope = "milvus" , key = 'MILVUS_HOST'), 'port': dbutils.secrets.get(scope = "milvus" , key = 'MILVUS_PORT')})
db.similarity_search(<query>) ##This Does not work and returns an empty list.
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm trying to create a collection on Milvus (hosted) using LangChain, then load the collection back again and run a similarity search.
- I created the collection and loaded the documents successfully using `Milvus.from_documents`.
- I then ran a similarity search on the object using `db.similarity_search`. It worked fine and gave me accurate results.
- I then tried to establish a connection and load the collection back again by instantiating the `Milvus` class directly, i.e. `db = Milvus(collection_name= , connection_args={})`. It worked without any errors.
- But when I tried to run a similarity search on that object (`db.similarity_search`), it just returned an empty list.
Note:
- The collection exists in Milvus.
- I'm passing `db_name` as a connection argument because I only have access to that particular database within Milvus.
### System Info
%pip install langchain==0.2.6
%pip install pymilvus==2.4.4
%pip install langchain_milvus==0.1.1
Databricks Runtime 14 | Similarity Search Returns Empty when Using Milvus | https://api.github.com/repos/langchain-ai/langchain/issues/23634/comments | 0 | 2024-06-28T14:19:43Z | 2024-06-28T14:22:22Z | https://github.com/langchain-ai/langchain/issues/23634 | 2,380,553,191 | 23,634 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
We recently upgraded our libraries as follows:
```
-langchain = "0.2.0rc2"
+langchain = "^0.2.5"
-langchain-community = "0.2.0rc1"
+langchain-community = "^0.2.5"
-langchain-anthropic = "^0.1.9"
+langchain-anthropic = "^0.1.15"
-langchain-groq = "0.1.3"
+langchain-groq = "^0.1.5"
-langchain-core = "^0.1.52"
+langchain-core = "^0.2.9"
-langgraph = "^0.0.38"
+langgraph = "^0.0.69"
```
And have lost the ability to view the Tavily Search tool call's output in our callback handler.
When we revert packages, the tool output appears again. Our custom callback handler is able to output the Tool call output with every other tool in our stack after the update.
We implement the tavily tool like so:
```python
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_community.utilities.tavily_search import TavilySearchAPIWrapper
def get_tavily_search_tool(callbacks=[]):
search = TavilySearchResults(
name="search",
api_wrapper=TavilySearchAPIWrapper(tavily_api_key="tvly-"),
callbacks=callbacks,
max_results=5
)
return search
```
And our callback handler looks something like this:
```python
async def on_tool_start(
self,
serialized: Dict[str, Any],
input_str: str,
**kwargs: Any,
) -> None:
tool_spec = flatten_dict(serialized)
tool_name = tool_spec.get("name", "Unknown Tool")
tool_description = tool_spec.get("description", "No description available")
self.queue.put_nowait(f"\n\nUsing `{tool_name}` (*{tool_description}*)\n")
async def on_tool_end(
self,
output: str,
color: Optional[str] = None,
observation_prefix: Optional[str] = None,
llm_prefix: Optional[str] = None,
**kwargs: Any,
) -> None:
"""If not the final action, print out observation."""
if observation_prefix is not None:
self.queue.put_nowait(f"\n{observation_prefix}\n")
if len(output) > 10000:
# truncate output to 10.000 characters
output = output[:10000]
output += " ... (truncated to 10,000 characters)"
self.queue.put_nowait(f"\n```json\n{output}\n```\n\n")
else:
if isinstance(output, dict):
pretty_output = json.dumps(output, indent=4)
self.queue.put_nowait(f"\n```json\n{pretty_output}\n```\n\n")
elif isinstance(output, str):
# attempt to parse the output as json
try:
pretty_output = json.dumps(ast.literal_eval(output), indent=4)
self.queue.put_nowait(f"\n```json\n{pretty_output}\n```\n\n")
except:
pretty_output = output
self.queue.put_nowait(f"\n```\n{pretty_output}\n```\n\n")
if llm_prefix is not None:
self.queue.put_nowait(f"\n{llm_prefix}\n")
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
We recently upgraded our libraries and have lost the ability to view the Tavily Search tool call's output in our callback handler.
When we revert the packages, the tool output appears again. After the update, our custom callback handler still displays the tool call output for every other tool in our stack.
langchain==0.1.16
langchain-anthropic==0.1.13
langchain-community==0.0.34
langchain-core==0.1.52
langchain-google-genai==1.0.3
langchain-mistralai==0.1.8
langchain-openai==0.1.6
langchain-text-splitters==0.0.1 | [Community] Tavily Search lost Tool output Callbacks in newest versions | https://api.github.com/repos/langchain-ai/langchain/issues/23632/comments | 2 | 2024-06-28T13:52:27Z | 2024-07-01T21:44:28Z | https://github.com/langchain-ai/langchain/issues/23632 | 2,380,492,978 | 23,632 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
class GradeDocuments(BaseModel):
score: str = Field(
description="Die Frage handelt sich um ein Smalltalk-Thema, 'True' oder 'False'"
)
def question_classifier(state: AgentState):
question = state["question"]
print(f"In question classifier with question: {question}")
system = """<s>[INST] Du bewertest, ob es in sich bei der Frage des Nutzers um ein Smalltalk-Thema handelt oder nicht. \n
Falls es bei der Frage um generelle Smalltalk-Fragen wie zum Beispiel: 'Hallo, wer bist du?' geht, bewerte es als 'True'. \n
Falls es sich bei der Frage um eine spezifische Frage zu einem Thema handelt wie zum Beispiel: 'Nenne mir Vorteile von Multi CLoud' geht, bewerte die Frage mit 'False'.[/INST]"""
grade_prompt = ChatPromptTemplate.from_messages(
[
("system", system),
(
"human",
"Frage des Nutzers: {question}",
),
]
)
    # llm = ChatOpenAI()  # with ChatOpenAI it works; with ChatGroq it somehow no longer does
env_vars = dotenv_values('.env')
load_dotenv()
groq_key = env_vars.get("GROQ_API_KEY")
print("Loading Structured Groq.")
llm = ChatGroq(model_name="mixtral-8x7b-32768", groq_api_key = groq_key)
structured_llm = llm.with_structured_output(GradeDocuments)
grader_llm = grade_prompt | structured_llm
result = grader_llm.invoke({"question": question})
state["is_smalltalk"] = result.score
return state
```
### Error Message and Stack Trace (if applicable)
The error occurs when `grader_llm.invoke` is called:
Error code: 400 - {'error': {'message': 'response_format` does not support streaming', 'type': 'invalid_request_error'}}
### Description
Hi,
I want to use a Groq LLM to get structured output; in my case it should return True or False. The code works fine with ChatOpenAI(), but it fails with Groq, even though structured output should be supported according to the documentation.
I also tried `structured_llm = llm.with_structured_output(GradeDocuments, method="json_mode")` without success.
I also already updated my langchain-groq version.
Does anyone have an idea how to solve this?
EDIT: I also tried with a simple example where it works with ChatOpenAI but not with Groq:
With ChatOpenAI:
```
from langchain_groq import ChatGroq
class GradeDocuments(BaseModel):
"""Boolean values to check for relevance on retrieved documents."""
score: str = Field(
description="Die Frage handelt sich um ein Smalltalk-Thema, 'True' oder 'False'"
)
model = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0) # with this it works
#model = ChatGroq(model_name="mixtral-8x7b-32768", groq_api_key = "")
structured_llm = model.with_structured_output(GradeDocuments)
structured_llm.invoke("Hello, how are you?")
# Returns: GradeDocuments(score='False')
```
With ChatGroq I get the same error as above. But if I use e.g. the Llama 3 model from Groq it works, so it seems to be an issue with the Mixtral 8x7B model.
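As a stopgap while Mixtral rejects `response_format` with streaming, one option is to request JSON in the prompt and parse the raw completion yourself instead of relying on `with_structured_output`. A minimal sketch (this helper is hypothetical, not part of LangChain):

```python
import json

def parse_grade(raw: str) -> dict:
    """Extract the first {...} JSON object from a model reply, tolerating extra prose."""
    start, end = raw.find("{"), raw.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    return json.loads(raw[start : end + 1])
```

The resulting dict could then be validated with the Pydantic model, e.g. `GradeDocuments(**parse_grade(raw))`.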
### System Info
langchain-groq 0.1.5
langchain 0.2.5 | Structured Output with Groq: Error code: 400 - {'error': {'message': 'response_format` does not support streaming', 'type': 'invalid_request_error'}} | https://api.github.com/repos/langchain-ai/langchain/issues/23629/comments | 4 | 2024-06-28T11:52:52Z | 2024-07-15T16:42:34Z | https://github.com/langchain-ai/langchain/issues/23629 | 2,380,257,120 | 23,629 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
### Issue Description
I encountered a problem when using the `Qdrant.from_existing_collection` method in the Langchain Qdrant integration. Here is the code I used:
```python
from langchain_community.vectorstores.qdrant import Qdrant
url = "http://localhost:6333"
collection_name = "unique_case_2020"
qdrant = Qdrant.from_existing_collection(
embedding=embeddings, # Please set according to actual situation
collection_name=collection_name,
url=url
)
```
When I run this code, I get the following error:
```
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[21], line 7
3 url = "http://localhost:6333"
4 collection_name = "unique_case_2020"
----> 7 qdrant = Qdrant.from_existing_collection(
8 embedding=embeddings, # Please set according to actual situation
9 collection_name=collection_name,
10 url=url
11 )
TypeError: Qdrant.from_existing_collection() missing 1 required positional argument: 'path'
```
To resolve this, I added the `path` argument, but encountered another error:
```python
qdrant = Qdrant.from_existing_collection(
embedding=embeddings, # Please set according to actual situation
collection_name=collection_name,
url=url,
path=""
)
```
This raised the following error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[23], line 7
3 url = "http://localhost:6333"
4 collection_name = "unique_case_2020"
----> 7 qdrant = Qdrant.from_existing_collection(
8 embedding=embeddings, # Please set according to actual situation
9 collection_name=collection_name,
10 url=url,
11 path=""
12 )
File ~/.local/lib/python3.10/site-packages/langchain_community/vectorstores/qdrant.py:1397, in Qdrant.from_existing_collection(cls, embedding, path, collection_name, location, url, port, grpc_port, prefer_grpc, https, api_key, prefix, timeout, host, **kwargs)
1374 @classmethod
1375 def from_existing_collection(
1376 cls: Type[Qdrant],
(...)
1390 **kwargs: Any,
1391 ) -> Qdrant:
1392 """
1393 Get instance of an existing Qdrant collection.
1394 This method will return the instance of the store without inserting any new
1395 embeddings
1396 """
-> 1397 client, async_client = cls._generate_clients(
1398 location=location,
1399 url=url,
1400 port=port,
1401 grpc_port=grpc_port,
1402 prefer_grpc=prefer_grpc,
1403 https=https,
1404 api_key=api_key,
1405 prefix=prefix,
1406 timeout=timeout,
1407 host=host,
1408 path=path,
1409 **kwargs,
1410 )
1411 return cls(
1412 client=client,
1413 async_client=async_client,
(...)
1416 **kwargs,
1417 )
File ~/.local/lib/python3.10/site-packages/langchain_community/vectorstores/qdrant.py:2250, in Qdrant._generate_clients(location, url, port, grpc_port, prefer_grpc, https, api_key, prefix, timeout, host, path, **kwargs)
2233 @staticmethod
2234 def _generate_clients(
2235 location: Optional[str] = None,
(...)
2246 **kwargs: Any,
2247 ) -> Tuple[Any, Any]:
2248 from qdrant_client import AsyncQdrantClient, QdrantClient
-> 2250 sync_client = QdrantClient(
2251 location=location,
2252 url=url,
2253 port=port,
2254 grpc_port=grpc_port,
2255 prefer_grpc=prefer_grpc,
2256 https=https,
2257 api_key=api_key,
2258 prefix=prefix,
2259 timeout=timeout,
2260 host=host,
2261 path=path,
2262 **kwargs,
2263 )
2265 if location == ":memory:" or path is not None:
2266 # Local Qdrant cannot co-exist with Sync and Async clients
2267 # We fallback to sync operations in this case
2268 async_client = None
File ~/.local/lib/python3.10/site-packages/qdrant_client/qdrant_client.py:107, in QdrantClient.__init__(self, location, url, port, grpc_port, prefer_grpc, https, api_key, prefix, timeout, host, path, force_disable_check_same_thread, grpc_options, auth_token_provider, **kwargs)
104 self._client: QdrantBase
106 if sum([param is not None for param in (location, url, host, path)]) > 1:
--> 107 raise ValueError(
108 "Only one of <location>, <url>, <host> or <path> should be specified."
109 )
111 if location == ":memory:":
112 self._client = QdrantLocal(
113 location=location,
114 force_disable_check_same_thread=force_disable_check_same_thread,
115 )
ValueError: Only one of <location>, <url>, <host> or <path> should be specified.
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
### Expected Behavior
The `from_existing_collection` method should allow the `path` argument to be optional, as specifying both `url` and `path` leads to a conflict, and `path` should not be mandatory when `url` is provided.
### Actual Behavior
- When `path` is not provided, a `TypeError` is raised indicating that `path` is a required positional argument.
- When `path` is provided, a `ValueError` is raised indicating that only one of `<location>`, `<url>`, `<host>`, or `<path>` should be specified.
### Suggested Fix
- Update the `from_existing_collection` method to make the `path` argument optional.
- Adjust the internal logic to handle cases where `url` is provided without requiring `path`.
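For context, the check that raises the second error (visible in the traceback above) is a simple mutual-exclusion test over the connection arguments. A minimal sketch of that logic:

```python
def validate_connection_args(location=None, url=None, host=None, path=None):
    # Mirrors qdrant_client's rule: at most one connection target may be set.
    provided = [name for name, value in
                (("location", location), ("url", url), ("host", host), ("path", path))
                if value is not None]
    if len(provided) > 1:
        raise ValueError(
            "Only one of <location>, <url>, <host> or <path> should be specified."
        )
    return provided[0] if provided else None
```

Note that `path=""` still counts as provided — an empty string is not `None` — which is why passing an empty `path` alongside `url` also triggers the `ValueError`.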
### Reproduction
1. Use the provided code to instantiate a `Qdrant` object from an existing collection.
2. Observe the `TypeError` when `path` is not provided.
3. Observe the `ValueError` when `path` is provided along with `url`.
Thank you for looking into this issue.
### System Info
### Environment
- Python version: 3.10
- Name: langchain-community
Version: 0.2.2
Summary: Community contributed LangChain integrations.
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Location: /home/lighthouse/.local/lib/python3.10/site-packages
Requires: aiohttp, dataclasses-json, langchain, langchain-core, langsmith, numpy, PyYAML, requests, SQLAlchemy, tenacity
Required-by: langchain-experimental
- Qdrant version: 1.9.x (docker pull) | BUG in langchain_community.vectorstores.qdrant | https://api.github.com/repos/langchain-ai/langchain/issues/23626/comments | 1 | 2024-06-28T09:05:36Z | 2024-08-06T15:00:43Z | https://github.com/langchain-ai/langchain/issues/23626 | 2,379,952,622 | 23,626 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/retrievers/google_vertex_ai_search/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I could not get any responses from Vertex AI Search when referring to a datastore with the chunk option enabled.
```
from langchain_community.retrievers import (
GoogleVertexAIMultiTurnSearchRetriever,
GoogleVertexAISearchRetriever,
)
retriever = GoogleVertexAISearchRetriever(
project_id=PROJECT_ID,
location_id=LOCATION,
data_store_id=DATA_STORE_ID,
max_documents=3,
)
query = "What is Transformer?"
retriever.invoke(query)
```
### Idea or request for content:
When I tried to query the datastore without the chunk option, it returned correct results:
```
from langchain_community.retrievers import (
GoogleVertexAIMultiTurnSearchRetriever,
GoogleVertexAISearchRetriever,
)
retriever = GoogleVertexAISearchRetriever(
project_id=PROJECT_ID,
location_id=LOCATION,
data_store_id=DATA_STORE_ID,
max_documents=3,
)
query = "What is Transformer?"
retriever.invoke(query)
[Document(page_content='2 Background\nThe goal of reducing sequential computation also forms the foundation of the Extended Neural GPU\n[16], ByteNet [18] and ConvS2S [9], all of which use convolutional neural networks as basic building\nblock, computing hidden representations in parallel for all input and output positions. In these models,\nthe number of operations required to relate signals from two arbitrary input or output positions grows\nin the distance between positions, linearly for ConvS2S and logarithmically for ByteNet. This makes\nit more difficult to learn dependencies between distant positions [12]. In the Transformer this is\nreduced to a constant number of operations, albeit at the cost of reduced effective resolution due\nto averaging attention-weighted positions, an effect we counteract with Multi-Head Attention as\ndescribed in section 3.2.\nSelf-attention, sometimes called intra-attention is an attention mechanism rel
``` | Vertex AI Search doesn't return the result with chunked dataset | https://api.github.com/repos/langchain-ai/langchain/issues/23624/comments | 0 | 2024-06-28T08:07:11Z | 2024-06-28T08:09:54Z | https://github.com/langchain-ai/langchain/issues/23624 | 2,379,849,291 | 23,624 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Currently we only support properties that are explicitly listed in OpenAI's [API reference](https://platform.openai.com/docs/api-reference/chat/create#chat-create-messages).
https://github.com/langchain-ai/langchain/blob/a1520357c8053c89cf13caa269636688908d3bf1/libs/partners/openai/langchain_openai/chat_models/base.py#L222
In a popular OpenAI [cookbook](https://cookbook.openai.com/examples/how_to_call_functions_with_chat_models), `"name"` is included with a tool message.
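For reference, the cookbook appends the tool result as a message that carries both the tool call id and the function name; a sketch of that payload shape (the id and content values here are made up):

```python
# A tool-result message in the style of the OpenAI cookbook; "name" is the
# property under discussion. The id and content values are hypothetical.
tool_message = {
    "role": "tool",
    "tool_call_id": "call_abc123",
    "name": "ask_database",
    "content": '{"rows": 3}',
}
```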
Support for "name" would enable use of ChatOpenAI with certain proxies, namely Google Gemini, as described [here](https://github.com/langchain-ai/langchain/pull/23551).
Opened a [discussion](https://community.openai.com/t/is-name-a-supported-parameter-for-tool-messages/843543) on the OpenAI forums to try to get clarity. | openai: add "name" to supported properties for tool messages? | https://api.github.com/repos/langchain-ai/langchain/issues/23601/comments | 0 | 2024-06-27T18:43:53Z | 2024-06-27T18:46:33Z | https://github.com/langchain-ai/langchain/issues/23601 | 2,378,867,844 | 23,601 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
# import dependencies
import json
from langchain.schema import (
AIMessage,
HumanMessage,
SystemMessage,
messages_from_dict,
messages_to_dict)
from dotenv import load_dotenv,find_dotenv
from langchain.chains import ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain.prompts import PromptTemplate
load_dotenv(find_dotenv())
#load existing session messages from db
messages_from_db = self.db_handler.check_into_db(user_id, chat_session_id)
deserialized_messages = self.deserialized_db_messages(messages_from_db)
# Develop ChatMessagesHistory
retrieved_chat_history = ChatMessageHistory(messages= deserialized_messages)
# Create a new ConversationBufferMemory from a ChatMessageHistory class
retrieved_memory = ConversationBufferMemory(
chat_memory=retrieved_chat_history, memory_key="chat_history")
# print(retrieved_memory)
# Build a second Conversational Retrieval Chain
second_chain = ConversationalRetrievalChain.from_llm(
self.llm,
retriever=self.vectordb.as_retriever(),
memory=retrieved_memory,
combine_docs_chain_kwargs={"prompt": self.QA_PROMPT},
get_chat_history=lambda h : h,
verbose=True
)
#*********************************************
answer = second_chain.invoke({"question": question})
#*********************************************
### Error Message and Stack Trace (if applicable)

### Description
I'm also facing the same issue:
the LLMChain rephrases my follow-up question in the wrong way, like below

Can anyone help me? How can I control the behavior of the default prompt template (LLMChain) that rephrases the question incorrectly before it is passed to my custom prompt?
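One lever for this is the chain's condense-question prompt: `ConversationalRetrievalChain.from_llm` accepts a `condense_question_prompt` argument that overrides the default rephrasing template. A sketch of a stricter template, shown with plain string formatting (wrap it in `PromptTemplate.from_template` before passing it to the chain; the exact wording is an assumption):

```python
CONDENSE_QUESTION_TEMPLATE = (
    "Given the following conversation and a follow up question, rephrase the "
    "follow up question to be a standalone question, in its original language. "
    "Do NOT answer the question and do NOT add any new claims or details.\n\n"
    "Chat History:\n{chat_history}\n"
    "Follow Up Input: {question}\n"
    "Standalone question:"
)

def render_condense_prompt(chat_history: str, question: str) -> str:
    # Plain-Python stand-in for PromptTemplate.format, for illustration only.
    return CONDENSE_QUESTION_TEMPLATE.format(
        chat_history=chat_history, question=question
    )
```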
### System Info
(pri_env) abusufyan@abusufyan:~/development/vetrefs-llama/app$ pip show langchain
Name: langchain
Version: 0.2.3
Summary: Building applications with LLMs through composability
Home-page: https://github.com/langchain-ai/langchain
Author:
Author-email:
License: MIT
Location: /home/abusufyan/development/private_VetRef/pri_env/lib/python3.11/site-packages
Requires: aiohttp, langchain-core, langchain-text-splitters, langsmith, numpy, pydantic, PyYAML, requests, SQLAlchemy, tenacity
Required-by: langchain-community | rephrasing follow up question incorrectly | https://api.github.com/repos/langchain-ai/langchain/issues/23587/comments | 0 | 2024-06-27T14:22:11Z | 2024-06-27T14:24:46Z | https://github.com/langchain-ai/langchain/issues/23587 | 2,378,307,191 | 23,587 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from transformers import AutoTokenizer
from langchain_huggingface import ChatHuggingFace
from langchain_huggingface import HuggingFaceEndpoint
import requests
sample = requests.get(
"https://raw.githubusercontent.com/huggingface/blog/main/langchain.md"
).text
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Meta-Llama-3-70B-Instruct")
def n_tokens(text):
return len(tokenizer(text)["input_ids"])
print(f"The number of tokens in the sample is {n_tokens(sample)}")
llm_10 = HuggingFaceEndpoint(
repo_id="meta-llama/Meta-Llama-3-70B-Instruct",
max_new_tokens=10,
cache=False,
seed=123,
)
llm_4096 = HuggingFaceEndpoint(
repo_id="meta-llama/Meta-Llama-3-70B-Instruct",
max_new_tokens=4096,
cache=False,
seed=123,
)
messages = [
(
"system",
"You are a smart AI that has to describe a given text in to at least 1000 characters.",
),
("user", f"Summarize the following text:\n\n{sample}\n"),
]
# native endpoint
response_10_native = llm_10.invoke(messages)
print(f"Native response 10: {n_tokens(response_10_native)} tokens")
response_4096_native = llm_4096.invoke(messages)
print(f"Native response 4096: {n_tokens(response_4096_native)} tokens")
# make sure the native responses are different lengths
assert len(response_10_native) < len(
response_4096_native
), f"Native response 10 should be shorter than native response 4096, 10 `max_new_tokens`: {n_tokens(response_10_native)}, 4096 `max_new_tokens`: {n_tokens(response_4096_native)}"
# chat implementation from langchain_huggingface
chat_model_10 = ChatHuggingFace(llm=llm_10)
chat_model_4096 = ChatHuggingFace(llm=llm_4096)
# chat implementation for 10 tokens
response_10 = chat_model_10.invoke(messages)
print(f"Response 10: {n_tokens(response_10.content)} tokens")
actual_response_tokens_10 = response_10.response_metadata.get(
"token_usage"
).completion_tokens
print(
f"Actual response 10: {actual_response_tokens_10} tokens (always 100 for some reason!)"
)
# chat implementation for 4096 tokens
response_4096 = chat_model_4096.invoke(messages)
print(f"Response 4096: {n_tokens(response_4096.content)} tokens")
actual_response_tokens_4096 = response_4096.response_metadata.get(
"token_usage"
).completion_tokens
print(
f"Actual response 4096: {actual_response_tokens_4096} tokens (always 100 for some reason!)"
)
# assert that the responses are different lengths, which fails because the token usage is always 100
print("-" * 20)
print(f"Output for 10 tokens: {response_10.content}")
print("-" * 20)
print(f"Output for 4096 tokens: {response_4096.content}")
print("-" * 20)
assert len(response_10.content) < len(
response_4096.content
), f"Response 10 should be shorter than response 4096, 10 `max_new_tokens`: {n_tokens(response_10.content)}, 4096 `max_new_tokens`: {n_tokens(response_4096.content)}"
```
This is the output from the script:
```
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
The number of tokens in the sample is 1809
Native response 10: 11 tokens
Native response 4096: 445 tokens
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
Response 10: 101 tokens
Actual response 10: 100 tokens (always 100 for some reason!)
Response 4096: 101 tokens
Actual response 4096: 100 tokens (always 100 for some reason!)
--------------------
Output for 10 tokens: The text announces the launch of a new partner package called `langchain_huggingface` in LangChain, jointly maintained by Hugging Face and LangChain. This package aims to bring the power of Hugging Face's latest developments into LangChain and keep it up-to-date. The package was created by the community, and by becoming a partner package, the time it takes to bring new features from Hugging Face's ecosystem to LangChain's users will be reduced.
The package integrates seamlessly with Lang
--------------------
Output for 4096 tokens: The text announces the launch of a new partner package called `langchain_huggingface` in LangChain, jointly maintained by Hugging Face and LangChain. This package aims to bring the power of Hugging Face's latest developments into LangChain and keep it up-to-date. The package was created by the community, and by becoming a partner package, the time it takes to bring new features from Hugging Face's ecosystem to LangChain's users will be reduced.
The package integrates seamlessly with Lang
--------------------
```
### Error Message and Stack Trace (if applicable)
AssertionError: Response 10 should be shorter than response 4096, 10 `max_new_tokens`: 101, 4096 `max_new_tokens`: 101
### Description
There seems to be an issue when using `langchain_huggingface.llms.huggingface_endpoint.HuggingFaceEndpoint` together with the `langchain_huggingface.chat_models.huggingface.ChatHuggingFace` implementation.
When just using the `HuggingFaceEndpoint`, the parameter `max_new_tokens` is properly implemented, while this does not work properly when wrapping inside `ChatHuggingFace(llm=...)`. The latter implementation always returns a response of 100 tokens, and I am unable to get this to work properly after searching the docs + source code.
I have created a reproducible example using `meta-llama/Meta-Llama-3-70B-Instruct` (as this model is also supported for serverless).
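The always-100 completion length suggests the chat wrapper never forwards the endpoint's `max_new_tokens` into the underlying chat-completion request, which then falls back to a hard default. A hypothetical sketch of the kind of forwarding that appears to be missing (not the actual langchain_huggingface source):

```python
FALLBACK_MAX_TOKENS = 100  # the default the responses appear to hit

def build_chat_kwargs(llm_max_new_tokens=None, call_overrides=None):
    """Forward the endpoint's token limit instead of silently using the fallback."""
    kwargs = {"max_tokens": FALLBACK_MAX_TOKENS}
    if llm_max_new_tokens is not None:
        kwargs["max_tokens"] = llm_max_new_tokens
    # per-call overrides still win over the endpoint-level setting
    kwargs.update(call_overrides or {})
    return kwargs
```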
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:19:05 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T8112
> Python Version: 3.12.3 (main, Apr 9 2024, 08:09:14) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.2.10
> langchain: 0.2.6
> langchain_community: 0.2.5
> langsmith: 0.1.82
> langchain_anthropic: 0.1.15
> langchain_aws: 0.1.7
> langchain_huggingface: 0.0.3
> langchain_openai: 0.1.9
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | ChatHuggingFace + HuggingFaceEndpoint does not properly implement `max_new_tokens` | https://api.github.com/repos/langchain-ai/langchain/issues/23586/comments | 4 | 2024-06-27T14:17:33Z | 2024-07-13T12:15:29Z | https://github.com/langchain-ai/langchain/issues/23586 | 2,378,296,008 | 23,586 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.agent_toolkits import create_sql_agent
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.agents.agent_types import AgentType
from langchain_community.utilities import SQLDatabase
from langchain_huggingface import HuggingFaceEndpoint
llm = HuggingFaceEndpoint(
endpoint_url="endpoint_url",
max_new_tokens=512,
top_k=10,
top_p=0.95,
typical_p=0.95,
temperature=0.01,
repetition_penalty=1.03,
)
db = SQLDatabase.from_uri("sqlite:///Chinook.db?isolation_level=IMMEDIATE")
toolkit = SQLDatabaseToolkit(db=db,llm=llm)
agent_executor = create_sql_agent(
llm=llm,
toolkit=toolkit,
verbose=True,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)
agent_executor.invoke(
"How many genres are there?"
)
### Error Message and Stack Trace (if applicable)
> Entering new SQL Agent Executor chain...
I need to know the table that contains the genres.
Action: sql_db_list_tables
Action Input:
ObservationAlbum, Artist, Customer, Employee, Genre, Invoice, InvoiceLine, MediaType, Playlist, PlaylistTrack, Track Now I know the table that contains the genres is Genre.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database I made a mistake, I should remove the Observation part.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database I made another mistake, I should remove the newline character.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database I made another mistake, I should remove the newline character and the Observation part.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database I made another mistake, I should remove the newline character and the Observation part and the curly brackets.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database I made another mistake, I should remove the newline character and the Observation part and the curly brackets and the single quotes.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database I made another mistake, I should remove the newline character and the Observation part and the curly brackets and the single quotes and the \n.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database I made another mistake, I should remove the newline character and the Observation part and the curly brackets and the single quotes and the \n and the space.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database I made another mistake, I should remove the newline character and the Observation part and the curly brackets and the single quotes and the \n and the space and the error message.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database I made another mistake, I should remove the newline character and the Observation part and the curly brackets and the single quotes and the \n and the space and the error message and the table_names.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database I made another mistake, I should remove the newline character and the Observation part and the curly brackets and the single quotes and the \n and the space and the error message and the table_names and the Observation.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database I made another mistake, I should remove the newline character and the Observation part and the curly brackets and the single quotes and the \n and the space and the error message and the table_names and the Observation and the Error.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database I made another mistake, I should remove the newline character and the Observation part and the curly brackets and the single quotes and the \n and the space and the error message and the table_names and the Observation and the Error and the colon.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database I made another mistake, I should remove the newline character and the Observation part and the curly brackets and the single quotes and the \n and the space and the error message and the table_names and the Observation and the Error and the colon and the table_names.
Action: sql_db_schema
Action Input: Genre
ObservationError: table_names {'Genre\nObservation'} not found in database
> Finished chain.
{'input': 'How many genres are there?',
'output': 'Agent stopped due to iteration limit or time limit.'}
### Description
The SQL agent extracts the table name together with a `\n` line break and the next line's word 'Observation', so the tool receives 'Genre\nObservation' instead of 'Genre'.
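Until the parsing is fixed upstream, one workaround is to sanitize the action input before it reaches the tool, e.g. by truncating at the first newline (hypothetical helper, not part of LangChain):

```python
def clean_action_input(raw: str) -> str:
    # Keep only the first line and strip stray quotes/backticks the model may emit.
    return raw.split("\n", 1)[0].strip().strip("`'\"")
```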
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Fri May 24 14:06:39 UTC 2024
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.10
> langchain: 0.2.6
> langchain_community: 0.2.6
> langsmith: 0.1.82
> langchain_experimental: 0.0.62
> langchain_huggingface: 0.0.3
> langchain_mistralai: 0.1.8
> langchain_openai: 0.1.10
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | SQL Agent extracts the table name with \n linebreaker and next line word 'Observation' | https://api.github.com/repos/langchain-ai/langchain/issues/23585/comments | 2 | 2024-06-27T13:43:13Z | 2024-06-30T20:10:27Z | https://github.com/langchain-ai/langchain/issues/23585 | 2,378,200,472 | 23,585 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
def embed_documents(self, texts: List[str]) -> List[List[float]]:
text_features = []
for text in texts:
# Tokenize the text
tokenized_text = self.tokenizer(text).to('cuda')
def embed_image(self, uris: List[str]) -> List[List[float]]:
try:
from PIL import Image as _PILImage
except ImportError:
raise ImportError("Please install the PIL library: pip install pillow")
# Open images directly as PIL images
pil_images = [_PILImage.open(uri) for uri in uris]
image_features = []
for pil_image in pil_images:
# Preprocess the image for the model
preprocessed_image = self.preprocess(pil_image).unsqueeze(0).to('cuda')
### Error Message and Stack Trace (if applicable)
no gpu support yet!
### Description
no gpu support yet!
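If GPU support is added, the usual pattern is to select the device at init time instead of hard-coding `'cuda'` as in the snippet above. A sketch (assumes torch; falls back to CPU when CUDA is unavailable):

```python
def pick_device(preferred: str = "cuda") -> str:
    """Return the preferred device if usable, otherwise fall back to CPU."""
    try:
        import torch
        if preferred == "cuda" and torch.cuda.is_available():
            return "cuda"
    except ImportError:
        pass
    return "cpu"
```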
### System Info
no gpu support yet! | langchain_experimental openclip no gpu | https://api.github.com/repos/langchain-ai/langchain/issues/23567/comments | 0 | 2024-06-27T05:57:59Z | 2024-06-27T06:00:40Z | https://github.com/langchain-ai/langchain/issues/23567 | 2,377,224,742 | 23,567 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/rag/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
https://python.langchain.com/v0.2/docs/tutorials/rag/#retrieval-and-generation-generate
docs say any LangChain LLM or ChatModel could be substituted in.So where i can find a new model exclude methioned in the doc.
i want to use local model.
Like model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3", device_map="auto")
### Idea or request for content:
As a beginner, I don't know the difference between a ChatModel and a model loaded via from_pretrained, but one outputs correctly and the other raises an error. | DOC: how can i find a new chatmodel to substitute mentioned in the docs | https://api.github.com/repos/langchain-ai/langchain/issues/23566/comments | 4 | 2024-06-27T05:28:13Z | 2024-06-28T06:14:57Z | https://github.com/langchain-ai/langchain/issues/23566 | 2,377,169,955 | 23,566 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
memory = ConversationBufferMemory(memory_key="chat_history")
chat_history=[]
if co.count_documents(query) != 0:
for i in range(0, len(co.find(query)[0]["content"]), 1):
if i % 2 == 0:
chat_history.append(HumanMessage(content=co.find(query)[0]["content"][i]))
else:
chat_history.append(AIMessage(content=co.find(query)[0]["content"][i]))
memory.chat_memory=chat_history
llm = OLLAMA(model=language_model)
print(memory.chat_memory)
tools = load_tools(["google-serper"], llm=llm)
agent = initialize_agent(tools, llm, agent=AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION, verbose=True,memory=memory)
xx=agent.run(content)
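Note that `chat_memory` on `ConversationBufferMemory` expects a chat-message-history object rather than a bare list — wrapping the list first, e.g. `ChatMessageHistory(messages=chat_history)`, may avoid the error below. The even/odd role mapping itself can be sketched in plain Python (hypothetical helper; in the real code each entry is wrapped in `HumanMessage`/`AIMessage`):

```python
def rebuild_history(contents):
    """Map stored message texts to (role, text) pairs: even index = human, odd = ai."""
    return [("human" if i % 2 == 0 else "ai", text)
            for i, text in enumerate(contents)]
```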
### Error Message and Stack Trace (if applicable)
ValueError: variable chat_history should be a list of base messages
### Description
I just want to load the memory into the agent; loading it into a ConversationChain works fine.
### System Info
windows
latest | How do I add memory to an agent? I have tried many approaches and keep getting the error "variable chat_history should be a list of base messages" | https://api.github.com/repos/langchain-ai/langchain/issues/23563/comments | 2 | 2024-06-27T03:46:38Z | 2024-06-28T03:25:59Z | https://github.com/langchain-ai/langchain/issues/23563 | 2,376,933,707 | 23,563 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/langserve/#1-create-new-app-using-langchain-cli-command
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The docs do not mention that remote runnables can take a timeout value as input during declaration. The default value is 5 seconds, so if LLMs take longer than that to respond, there is an error.
https://github.com/langchain-ai/langchainjs/blob/00c7ff15957bf2a5223cfc62878f94bafe9ded22/langchain/src/runnables/remote.ts#L180
This is more relevant for local development.
### Idea or request for content:
Add a description of the optional options parameter, which takes the timeout value, to the docs. I can make a pull request if needed. | DOC: Lack of description of options (and thereby timeout) parameter in RemoteRunnable constructor. | https://api.github.com/repos/langchain-ai/langchain/issues/23537/comments | 0 | 2024-06-26T13:18:25Z | 2024-06-26T13:21:18Z | https://github.com/langchain-ai/langchain/issues/23537 | 2,375,331,115 | 23,537
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
code from this link:
https://python.langchain.com/v0.1/docs/use_cases/query_analysis/techniques/routing/
```
import getpass
import os
os.environ["OPENAI_API_KEY"] = getpass.getpass()
# Optional, uncomment to trace runs with LangSmith. Sign up here: https://smith.langchain.com.
# os.environ["LANGCHAIN_TRACING_V2"] = "true"
# os.environ["LANGCHAIN_API_KEY"] = getpass.getpass()
```
```
from typing import Literal
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI
class RouteQuery(BaseModel):
"""Route a user query to the most relevant datasource."""
datasource: Literal["python_docs", "js_docs", "golang_docs"] = Field(
...,
description="Given a user question choose which datasource would be most relevant for answering their question",
)
llm = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=0)
structured_llm = llm.with_structured_output(RouteQuery)
system = """You are an expert at routing a user question to the appropriate data source.
Based on the programming language the question is referring to, route it to the relevant data source."""
prompt = ChatPromptTemplate.from_messages(
[
("system", system),
("human", "{question}"),
]
)
router = prompt | structured_llm
```
```
question = """Why doesn't the following code work:
from langchain_core.prompts import ChatPromptTemplate
prompt = ChatPromptTemplate.from_messages(["human", "speak in {language}"])
prompt.invoke("french")
"""
router.invoke({"question": question})
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
UnprocessableEntityError Traceback (most recent call last)
Cell In[6], line 8
1 question = """Why doesn't the following code work:
2
3 from langchain_core.prompts import ChatPromptTemplate
(...)
6 prompt.invoke("french")
7 """
----> 8 router.invoke({"question": question})
File c:\Users\MYUSERNAME\AppData\Local\miniconda3\envs\llm-local\lib\site-packages\langchain_core\runnables\base.py:2399, in RunnableSequence.invoke(self, input, config)
2397 try:
2398 for i, step in enumerate(self.steps):
-> 2399 input = step.invoke(
2400 input,
2401 # mark each step as a child run
2402 patch_config(
2403 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
2404 ),
2405 )
2406 # finish the root run
2407 except BaseException as e:
File c:\Users\MYUSERNAME\AppData\Local\miniconda3\envs\llm-local\lib\site-packages\langchain_core\runnables\base.py:4433, in RunnableBindingBase.invoke(self, input, config, **kwargs)
4427 def invoke(
4428 self,
4429 input: Input,
4430 config: Optional[RunnableConfig] = None,
4431 **kwargs: Optional[Any],
4432 ) -> Output:
-> 4433 return self.bound.invoke(
4434 input,
4435 self._merge_configs(config),
4436 **{**self.kwargs, **kwargs},
4437 )
File c:\Users\MYUSERNAME\AppData\Local\miniconda3\envs\llm-local\lib\site-packages\langchain_core\language_models\chat_models.py:170, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
159 def invoke(
160 self,
161 input: LanguageModelInput,
(...)
165 **kwargs: Any,
166 ) -> BaseMessage:
167 config = ensure_config(config)
168 return cast(
169 ChatGeneration,
--> 170 self.generate_prompt(
171 [self._convert_input(input)],
172 stop=stop,
173 callbacks=config.get("callbacks"),
174 tags=config.get("tags"),
175 metadata=config.get("metadata"),
176 run_name=config.get("run_name"),
177 run_id=config.pop("run_id", None),
178 **kwargs,
179 ).generations[0][0],
180 ).message
File c:\Users\MYUSERNAME\AppData\Local\miniconda3\envs\llm-local\lib\site-packages\langchain_core\language_models\chat_models.py:599, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
591 def generate_prompt(
592 self,
593 prompts: List[PromptValue],
(...)
596 **kwargs: Any,
597 ) -> LLMResult:
598 prompt_messages = [p.to_messages() for p in prompts]
--> 599 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File c:\Users\MYUSERNAME\AppData\Local\miniconda3\envs\llm-local\lib\site-packages\langchain_core\language_models\chat_models.py:456, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
454 if run_managers:
455 run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 456 raise e
457 flattened_outputs = [
458 LLMResult(generations=[res.generations], llm_output=res.llm_output) # type: ignore[list-item]
459 for res in results
460 ]
461 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File c:\Users\MYUSERNAME\AppData\Local\miniconda3\envs\llm-local\lib\site-packages\langchain_core\language_models\chat_models.py:446, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
443 for i, m in enumerate(messages):
444 try:
445 results.append(
--> 446 self._generate_with_cache(
447 m,
448 stop=stop,
449 run_manager=run_managers[i] if run_managers else None,
450 **kwargs,
451 )
452 )
453 except BaseException as e:
454 if run_managers:
File c:\Users\MYUSERNAME\AppData\Local\miniconda3\envs\llm-local\lib\site-packages\langchain_core\language_models\chat_models.py:671, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
669 else:
670 if inspect.signature(self._generate).parameters.get("run_manager"):
--> 671 result = self._generate(
672 messages, stop=stop, run_manager=run_manager, **kwargs
673 )
674 else:
675 result = self._generate(messages, stop=stop, **kwargs)
File c:\Users\MYUSERNAME\AppData\Local\miniconda3\envs\llm-local\lib\site-packages\langchain_openai\chat_models\base.py:543, in BaseChatOpenAI._generate(self, messages, stop, run_manager, **kwargs)
541 message_dicts, params = self._create_message_dicts(messages, stop)
542 params = {**params, **kwargs}
--> 543 response = self.client.create(messages=message_dicts, **params)
544 return self._create_chat_result(response)
File c:\Users\MYUSERNAME\AppData\Local\miniconda3\envs\llm-local\lib\site-packages\openai\_utils\_utils.py:277, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
275 msg = f"Missing required argument: {quote(missing[0])}"
276 raise TypeError(msg)
--> 277 return func(*args, **kwargs)
File c:\Users\MYUSERNAME\AppData\Local\miniconda3\envs\llm-local\lib\site-packages\openai\resources\chat\completions.py:590, in Completions.create(self, messages, model, frequency_penalty, function_call, functions, logit_bias, logprobs, max_tokens, n, presence_penalty, response_format, seed, stop, stream, stream_options, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
558 @required_args(["messages", "model"], ["messages", "model", "stream"])
559 def create(
560 self,
(...)
588 timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
589 ) -> ChatCompletion | Stream[ChatCompletionChunk]:
--> 590 return self._post(
591 "/chat/completions",
592 body=maybe_transform(
593 {
594 "messages": messages,
595 "model": model,
596 "frequency_penalty": frequency_penalty,
597 "function_call": function_call,
598 "functions": functions,
599 "logit_bias": logit_bias,
600 "logprobs": logprobs,
601 "max_tokens": max_tokens,
602 "n": n,
603 "presence_penalty": presence_penalty,
604 "response_format": response_format,
605 "seed": seed,
606 "stop": stop,
607 "stream": stream,
608 "stream_options": stream_options,
609 "temperature": temperature,
610 "tool_choice": tool_choice,
611 "tools": tools,
612 "top_logprobs": top_logprobs,
613 "top_p": top_p,
614 "user": user,
615 },
616 completion_create_params.CompletionCreateParams,
617 ),
618 options=make_request_options(
619 extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
620 ),
621 cast_to=ChatCompletion,
622 stream=stream or False,
623 stream_cls=Stream[ChatCompletionChunk],
624 )
File c:\Users\MYUSERNAME\AppData\Local\miniconda3\envs\llm-local\lib\site-packages\openai\_base_client.py:1240, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
1226 def post(
1227 self,
1228 path: str,
(...)
1235 stream_cls: type[_StreamT] | None = None,
1236 ) -> ResponseT | _StreamT:
1237 opts = FinalRequestOptions.construct(
1238 method="post", url=path, json_data=body, files=to_httpx_files(files), **options
1239 )
-> 1240 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File c:\Users\MYUSERNAME\AppData\Local\miniconda3\envs\llm-local\lib\site-packages\openai\_base_client.py:921, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
912 def request(
913 self,
914 cast_to: Type[ResponseT],
(...)
919 stream_cls: type[_StreamT] | None = None,
920 ) -> ResponseT | _StreamT:
--> 921 return self._request(
922 cast_to=cast_to,
923 options=options,
924 stream=stream,
925 stream_cls=stream_cls,
926 remaining_retries=remaining_retries,
927 )
File c:\Users\MYUSERNAME\AppData\Local\miniconda3\envs\llm-local\lib\site-packages\openai\_base_client.py:1020, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
1017 err.response.read()
1019 log.debug("Re-raising status error")
-> 1020 raise self._make_status_error_from_response(err.response) from None
1022 return self._process_response(
1023 cast_to=cast_to,
1024 options=options,
(...)
1027 stream_cls=stream_cls,
1028 )
UnprocessableEntityError: Error code: 422 - {'detail': [{'type': 'enum', 'loc': ['body', 'tool_choice', 'str-enum[ChatCompletionToolChoiceOptionEnum]'], 'msg': "Input should be 'none' or 'auto'", 'input': 'required', 'ctx': {'expected': "'none' or 'auto'"}}, {'type': 'model_attributes_type', 'loc': ['body', 'tool_choice', 'ChatCompletionNamedToolChoice'], 'msg': 'Input should be a valid dictionary or object to extract fields from', 'input': 'required'}]}
```
### Description
When using the routing example shown in the langchain docs, it only works if the "langchain-openai" version is 0.1.8 or lower. The newest versions (0.1.9+) break this logic. Routers are used in my workflow and this is preventing me from upgrading my packages. Please either revert the breaking changes or provide new documentation to support this type of routing functionality.
### System Info
langchain ==0.2.3
langchain-chroma ==0.1.1
langchain-community ==0.2.0
langchain-core ==0.2.3
langchain-experimental ==0.0.59
langchain-google-genai ==1.0.4
langchain-google-vertexai ==1.0.4
langchain-openai ==0.1.10
langchain-text-splitters ==0.2.0
langchainhub ==0.1.15
langgraph ==0.1.1
openai ==1.27.0
platform: windows
python version 3.10.10 | Routing Example Does Not Work with langchain-openai > 0.1.8 | https://api.github.com/repos/langchain-ai/langchain/issues/23536/comments | 5 | 2024-06-26T13:00:33Z | 2024-06-30T15:56:05Z | https://github.com/langchain-ai/langchain/issues/23536 | 2,375,280,952 | 23,536 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
import shlex
import subprocess
from typing import List, Optional

from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.tools import tool

class Shell(BaseModel):
command: str = Field(..., description="The shell command to be executed.")
script_args: Optional[List[str]] = Field(default=None, description="Optional arguments for the shell command.")
inputs: Optional[str] = Field(default=None, description="User inputs for the command, e.g., for Python scripts.")
@tool("shell_tool",args_schema=Shell)
def shell_tool(command, script_args,inputs) -> str:
"""
Execute the given shell command and return its output.
example with shell args: shell_tool('python foo.py',["Hello World","Welcome"],None)
example with user inputs (When the script has input("Enter a nnumber")): shell_tool('python add.py',None,'5\\n6\\n')
example for simple case: shell_tool('python foo.py',None,None)
"""
try:
safe_command = shlex.split(command)
if script_args:
safe_command.extend(script_args)
result = subprocess.Popen(safe_command,stdin=subprocess.PIPE,stdout=subprocess.PIPE,stderr=subprocess.PIPE,text=True)
stdout,stderr=result.communicate(input=inputs)
if result.returncode != 0:
return f"Error: {stderr}"
return stdout
except Exception as e:
return f"Exception occurred: {str(e)}"
shell_tool.invoke('python sum.py',None,'5\n')
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[15], [line 1](vscode-notebook-cell:?execution_count=15&line=1)
----> [1](vscode-notebook-cell:?execution_count=15&line=1) shell_tool.invoke('python sum.py',None,'5\n')
TypeError: BaseTool.invoke() takes from 2 to 3 positional arguments but 4 were given
### Description
I executed this tool but got this error. When I remove the @tool decorator (so it is a normal function) it works, but once the decorator is applied it becomes a tool, and invoking it raises this error. The same problem (incomplete output/response) occurs when I use this tool inside an agent. Can anyone help me?
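One likely fix (an assumption from `BaseTool`'s signature, not verified against this exact setup): a tool's `invoke` accepts a single `input` argument plus an optional config, so with an `args_schema` the arguments should be passed as one dict, e.g. `shell_tool.invoke({"command": "python sum.py", "script_args": None, "inputs": "5\n"})`. For reference, here is a stdlib-only sketch of the tool body itself (hypothetical `run_shell` helper; assumes a POSIX environment where `echo` and `cat` are available):

```python
import shlex
import subprocess

def run_shell(command, script_args=None, inputs=None):
    """Stdlib-only version of the tool body: run a command, feeding it stdin."""
    safe_command = shlex.split(command)
    if script_args:
        safe_command.extend(script_args)
    proc = subprocess.Popen(
        safe_command,
        stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE,
        text=True,
    )
    stdout, stderr = proc.communicate(input=inputs)
    return f"Error: {stderr}" if proc.returncode != 0 else stdout

print(run_shell("echo hello"))         # prints "hello"
print(run_shell("cat", inputs="5\n"))  # stdin string is forwarded to the command
```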
### System Info
langchain==0.2.5
langchain-community==0.2.5
langchain-core==0.2.9
langchain-experimental==0.0.61
langchain-groq==0.1.5
langchain-text-splitters==0.2.1
python 3.12.4
Windows 11 system
| Langchain Tools: TypeError: BaseTool.invoke() takes from 2 to 3 positional arguments but 4 were given | https://api.github.com/repos/langchain-ai/langchain/issues/23533/comments | 2 | 2024-06-26T12:10:08Z | 2024-06-26T13:29:44Z | https://github.com/langchain-ai/langchain/issues/23533 | 2,375,160,398 | 23,533 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
import langchain_core.prompts.chat
### Error Message and Stack Trace (if applicable)
_No response_
### Description
After I renamed xml.py, the same problem showed up with json.py. I can no longer tell whether the problem is on my side or in LangChain itself. How can there even be a file named json.py?
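This is almost certainly not a LangChain bug: a file named `xml.py` (and likewise `json.py`) in your own project directory shadows the standard-library module of the same name, so `import xml.etree.ElementTree` inside `langchain_core` resolves to your file and fails. Renaming those local files fixes it. A quick stdlib-only check, run from your project directory:

```python
import importlib

# If a local xml.py shadows the stdlib package, this raises
# ModuleNotFoundError: No module named 'xml.etree'; otherwise it succeeds.
et = importlib.import_module("xml.etree.ElementTree")
print(et.__name__)

import xml
# __file__ should point into the Python installation, not into your project.
print(xml.__file__)
```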
### System Info
windows
latest | import xml.etree.ElementTree as ET ModuleNotFoundError: No module named 'xml.etree' | https://api.github.com/repos/langchain-ai/langchain/issues/23529/comments | 0 | 2024-06-26T11:24:33Z | 2024-06-26T11:24:33Z | https://github.com/langchain-ai/langchain/issues/23529 | 2,375,074,030 | 23,529 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/llm_chain/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
_No response_
### Idea or request for content:
For novices using LangChain with LangServe it could be really helpful if there was a second simple example showing how to make an application featuring not only model calling with a prompt template, but also a vector database for retrieval. This would show people who are new to LangChain, the audience this page is for, how to build such a chain. Actually, the whole docs don't contain a simple example of LangSmith with retrieval like (`retriever = vectorstore.as_retriever()`). There is one example, which could be pretty complex for learners, here: "https://github.com/langchain-ai/langserve/blob/main/examples/conversational_retrieval_chain/server.py" | DOC: Add second example in Build a Simple LLM Application with LCEL docs page for better understanding | https://api.github.com/repos/langchain-ai/langchain/issues/23518/comments | 0 | 2024-06-26T07:45:02Z | 2024-06-26T07:47:42Z | https://github.com/langchain-ai/langchain/issues/23518 | 2,374,617,272 | 23,518
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.agents import AgentExecutor, create_openai_tools_agent
```
### Error Message and Stack Trace (if applicable)
```
ImportError: cannot import name '_set_config_context' from 'langchain_core.runnables.config'
```
### Description
Following the code from the [streaming agent document](https://python.langchain.com/v0.1/docs/modules/agents/how_to/streaming/), I get this error when importing the modules.
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.11.8 (tags/v3.11.8:db85d51, Feb 6 2024, 22:03:32) [MSC v.1937 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.10
> langchain: 0.2.6
> langchain_community: 0.2.1
> langsmith: 0.1.82
> langchain_text_splitters: 0.2.0
> langchainhub: 0.1.20
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | ImportError: cannot import name '_set_config_context' from 'langchain_core.runnables.config' | https://api.github.com/repos/langchain-ai/langchain/issues/23517/comments | 2 | 2024-06-26T06:31:25Z | 2024-06-26T10:49:12Z | https://github.com/langchain-ai/langchain/issues/23517 | 2,374,460,655 | 23,517 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/document_loaders/rst/
### Checklist
- [x] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
- How to install the dependencies for `UnstructuredRSTLoader` is not mentioned. When I checked the docs of `Unstructured`, I could not find the loader. The only thing I saw was [this](https://python.langchain.com/v0.2/docs/integrations/providers/unstructured/), so I ended up doing `pip install unstructured`. But I still can't use the code.
```py
from langchain_community.document_loaders import UnstructuredRSTLoader
loader = UnstructuredRSTLoader(file_path="test.rst", mode="elements")
docs = loader.load()
print(docs[0])
```
Error:
```
(venv) robin@robin:~/Desktop/playground/FURY-data-script$ python rstparser.py
Traceback (most recent call last):
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/file_utils/file_conversion.py", line 16, in convert_file_to_text
text = pypandoc.convert_file(filename, target_format, format=source_format)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/pypandoc/__init__.py", line 195, in convert_file
raise RuntimeError("source_file is not a valid path")
RuntimeError: source_file is not a valid path
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/robin/Desktop/playground/FURY-data-script/rstparser.py", line 4, in <module>
docs = loader.load()
^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/langchain_core/document_loaders/base.py", line 29, in load
return list(self.lazy_load())
^^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/langchain_community/document_loaders/unstructured.py", line 88, in lazy_load
elements = self._get_elements()
^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/langchain_community/document_loaders/rst.py", line 57, in _get_elements
return partition_rst(filename=self.file_path, **self.unstructured_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/documents/elements.py", line 593, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/file_utils/filetype.py", line 626, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/file_utils/filetype.py", line 582, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/chunking/dispatch.py", line 74, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/partition/rst.py", line 53, in partition_rst
html_text = convert_file_to_html_text_using_pandoc(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/file_utils/file_conversion.py", line 65, in convert_file_to_html_text_using_pandoc
return convert_file_to_text(
^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/utils.py", line 249, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/file_utils/file_conversion.py", line 25, in convert_file_to_text
supported_source_formats, _ = pypandoc.get_pandoc_formats()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/pypandoc/__init__.py", line 546, in get_pandoc_formats
_ensure_pandoc_path()
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/pypandoc/__init__.py", line 797, in _ensure_pandoc_path
raise OSError("No pandoc was found: either install pandoc and add it\n"
OSError: No pandoc was found: either install pandoc and add it
to your PATH or or call pypandoc.download_pandoc(...) or
install pypandoc wheels with included pandoc.
(venv) robin@robin:~/Desktop/playground/FURY-data-script$ python rstparser.py
Traceback (most recent call last):
File "/home/robin/Desktop/playground/FURY-data-script/rstparser.py", line 4, in <module>
docs = loader.load()
^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/langchain_core/document_loaders/base.py", line 29, in load
return list(self.lazy_load())
^^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/langchain_community/document_loaders/unstructured.py", line 88, in lazy_load
elements = self._get_elements()
^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/langchain_community/document_loaders/rst.py", line 57, in _get_elements
return partition_rst(filename=self.file_path, **self.unstructured_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/documents/elements.py", line 593, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/file_utils/filetype.py", line 626, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/file_utils/filetype.py", line 582, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/chunking/dispatch.py", line 74, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/partition/rst.py", line 53, in partition_rst
html_text = convert_file_to_html_text_using_pandoc(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/file_utils/file_conversion.py", line 65, in convert_file_to_html_text_using_pandoc
return convert_file_to_text(
^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/utils.py", line 249, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/unstructured/file_utils/file_conversion.py", line 16, in convert_file_to_text
text = pypandoc.convert_file(filename, target_format, format=source_format)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/pypandoc/__init__.py", line 200, in convert_file
return _convert_input(discovered_source_files, format, 'path', to, extra_args=extra_args,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/pypandoc/__init__.py", line 364, in _convert_input
_ensure_pandoc_path()
File "/home/robin/Desktop/playground/FURY-data-script/venv/lib/python3.12/site-packages/pypandoc/__init__.py", line 797, in _ensure_pandoc_path
raise OSError("No pandoc was found: either install pandoc and add it\n"
OSError: No pandoc was found: either install pandoc and add it
to your PATH or or call pypandoc.download_pandoc(...) or
install pypandoc wheels with included pandoc.
```
I later tried `pip install "unstructured[all-docs]"` but it started downloading `torch` at which point I gave up.
### Idea or request for content:
Things to be added:
- How to install the library and its dependencies.
- LangChain docs should have more details regarding the loader instead of linking to `unstructured`; the docs linked are [outdated](https://unstructured-io.github.io/unstructured/bricks.html#partition-rst) and have moved. | DOC: <Issue related to /v0.2/docs/integrations/document_loaders/rst/> | https://api.github.com/repos/langchain-ai/langchain/issues/23515/comments | 0 | 2024-06-26T06:23:28Z | 2024-08-08T07:08:54Z | https://github.com/langchain-ai/langchain/issues/23515 | 2,374,432,928 | 23,515
[
"langchain-ai",
"langchain"
] | ### URL
https://api.python.langchain.com/en/latest/chains/langchain.chains.conversation.base.ConversationChain.html
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The right way to seed a chat with past history, so that the conversation can be resumed from a middle point, is not clearly documented. The documentation points towards **PromptTemplate**
```
history = {'input': 'What is life?', 'history': 'Human: What is life?\nAI: {}', 'response': '{ "Life" : {\n "Definition" : "A complex and multifaceted phenomenon characterized by the presence of organization, metabolism, homeostasis, and reproduction.",\n "Context" : ["Biology", "Philosophy", "Psychology"],\n "Subtopics" : [\n {"Self-awareness": "The capacity to have subjective experiences, such as sensations, emotions, and thoughts."},\n {"Evolutionary perspective": "A process driven by natural selection, genetic drift, and other mechanisms that shape the diversity of life on Earth."},\n {"Quantum perspective": "A realm where quantum mechanics and general relativity intersect, potentially influencing the emergence of consciousness."}\n ]\n} }'}
PROMPT_TEMPLATE = """
{history}
"""
custom_prompt = PromptTemplate(
input_variables=["history"], template=PROMPT_TEMPLATE
)
chain = ConversationChain(
prompt=custom_prompt,
llm=llm,
memory=ConversationBufferMemory()
)
prompt = "What is life?"
answer = chain.invoke(input=prompt)
```
>
> Error:
> miniconda3/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
> raise validation_error
> pydantic.v1.error_wrappers.ValidationError: 1 validation error for ConversationChain
> __root__
> Got unexpected prompt input variables. The prompt expects ['history'], but got ['history'] as inputs from memory, and input as the normal input key. (type=value_error)
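`ConversationChain` validates that the prompt's input variables are exactly the memory key plus the input key, so a custom prompt must declare both `{history}` and `{input}` (the template text below is illustrative, not taken from the docs). A stdlib sketch of that validation against a corrected template:

```python
import string

# Corrected template: declares BOTH the memory key and the normal input key.
TEMPLATE = """The following is a conversation between a human and an AI.

Current conversation:
{history}
Human: {input}
AI:"""

# ConversationChain-style check: prompt variables == {memory key, input key}.
fields = {name for _, name, _, _ in string.Formatter().parse(TEMPLATE) if name}
print(sorted(fields))  # -> ['history', 'input']
```

With LangChain this template would be passed as `PromptTemplate(input_variables=["history", "input"], template=TEMPLATE)` to `ConversationChain`.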
### Idea or request for content:
Please provide the right and straightforward way to inject history into a conversation. | DOC: Right way to initialize with past history of conversation | https://api.github.com/repos/langchain-ai/langchain/issues/23511/comments | 0 | 2024-06-26T03:08:58Z | 2024-07-03T19:41:13Z | https://github.com/langchain-ai/langchain/issues/23511 | 2,374,088,042 | 23,511
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [ ] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:
def test_chain(chain):
test_queries = [
"What is the capital of France?",
"Explain the process of photosynthesis.",
]
for query in test_queries:
try:
logging.info(f"Running query: {query}")
response = chain.invoke(query)
logging.info(f"Query: {query}")
logging.info(f"Response: {response}")
print(f"Query: {query}")
print(f"Response: {response}\n")
except Exception as e:
logging.error(
f"An error occurred while processing the query '{query}': {e}")
traceback.print_exc()
if __name__ == "__main__":
chain = main()
test_chain(chain)
### Error Message and Stack Trace (if applicable)
TypeError('can only concatenate str (not "ChatPromptValue") to str')Traceback (most recent call last):
File "/usr/local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1626, in _call_with_config
context.run(
File "/usr/local/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 347, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "/usr/local/lib/python3.10/site-packages/langchain_core/runnables/passthrough.py", line 456, in _invoke
**self.mapper.invoke(
File "/usr/local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3142, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
File "/usr/local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3142, in <dictcomp>
output = {key: future.result() for key, future in zip(steps, futures)}
File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 458, in result
return self.__get_result()
File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/usr/local/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2499, in invoke
input = step.invoke(
File "/usr/local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3963, in invoke
return self._call_with_config(
File "/usr/local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1626, in _call_with_config
context.run(
File "/usr/local/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 347, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "/usr/local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3837, in _invoke
output = call_func_with_variable_args(
File "/usr/local/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 347, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "/usr/local/lib/python3.10/site-packages/transformers/pipelines/text_generation.py", line 263, in __call__
return super().__call__(text_inputs, **kwargs)
File "/usr/local/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1243, in __call__
return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
File "/usr/local/lib/python3.10/site-packages/transformers/pipelines/base.py", line 1249, in run_single
model_inputs = self.preprocess(inputs, **preprocess_params)
File "/usr/local/lib/python3.10/site-packages/transformers/pipelines/text_generation.py", line 288, in preprocess
prefix + prompt_text,
TypeError: can only concatenate str (not "ChatPromptValue") to str
### Description
I expect to see an answer generated by the LLM, but I always run into this error: `TypeError('can only concatenate str (not "ChatPromptValue") to str')`, even though the chain is valid.
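A likely cause (an assumption, since the chain construction isn't shown): the prompt step emits a `ChatPromptValue`, while a raw `transformers` text-generation pipeline expects a plain string. Inserting a conversion step between the prompt and the pipeline is a common fix. A minimal sketch with a hypothetical stand-in class, not LangChain's actual types:

```python
# Hypothetical glue step: convert a prompt value to a plain string
# before it reaches a component that only accepts `str`.
def to_str(prompt_value):
    # ChatPromptValue exposes .to_string(); fall back to str() otherwise.
    to_string = getattr(prompt_value, "to_string", None)
    return to_string() if callable(to_string) else str(prompt_value)


class FakePromptValue:
    """Stand-in for ChatPromptValue, just for illustration."""
    def to_string(self):
        return "System: ...\nHuman: What is the capital of France?"


print(to_str(FakePromptValue()))
```

In a real chain this would be wired as something like `prompt | to_str | pipeline` (the wiring here is an illustration, not the reporter's code).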
### System Info
pip freeze | grep langchain
langchain==0.1.13
langchain-community==0.0.31
langchain-core==0.1.52
langchain-openai==0.1.1
langchain-qdrant==0.1.1
langchain-text-splitters==0.0.2 | Issue with RunnableAssign<answer>: TypeError('can only concatenate str (not "ChatPromptValue") to str')Traceback (most recent call last): | https://api.github.com/repos/langchain-ai/langchain/issues/23505/comments | 3 | 2024-06-25T21:47:08Z | 2024-07-02T00:52:56Z | https://github.com/langchain-ai/langchain/issues/23505 | 2,373,718,098 | 23,505 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
@tool
def request_bing(query: str) -> str:
    """
    Searches the internet for additional information.
    Specifically useful when you need to answer questions about current events or the current state of the world.
    Prefer Content related to finance.
    """
    url = "https://api.bing.microsoft.com/v7.0/search"
    headers = {"Ocp-Apim-Subscription-Key": os.getenv("AZURE_KEY")}
    params = {"q": query}
    response = requests.get(url, headers=headers, params=params)
    response.raise_for_status()
    data = response.json()
    snippets_list = [result['snippet'] for result in data['webPages']['value']]
    snippets = "\n".join(snippets_list)
    return snippets
```
### Error Message and Stack Trace (if applicable)
```openai.APIError: The model produced invalid content.```
### Description
I'm using a LangChain ReAct agent with tools, and starting from June 23rd I've been receiving lots of exceptions:
```openai.APIError: The model produced invalid content.```
I suspect that something changed on the OpenAI side for function calling. Could you please shed some light on it?
I'm using gpt-4o as the LLM, with the Bing search tool defined as above.
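Until the upstream behavior is understood, a simple mitigation is to retry the agent call on this error. A generic, stdlib-only sketch (`flaky` is an illustrative stand-in for the agent invocation; in practice `retryable` would be `openai.APIError`, neither of which is the reporter's code):

```python
import time


def with_retries(fn, attempts=3, delay=1.0, retryable=(RuntimeError,)):
    """Call fn(), retrying on the given exception types with linear backoff."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except retryable:
            if attempt == attempts:
                raise
            time.sleep(delay * attempt)


# Example: a flaky callable that fails twice, then succeeds.
calls = {"n": 0}

def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("The model produced invalid content.")
    return "ok"

print(with_retries(flaky, attempts=3, delay=0.0))  # ok
```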
### System Info
langchain==0.2.4
langchain-community==0.2.4
langchain-core==0.2.6
langchain-openai==0.1.8
langchain-text-splitters==0.2.1 | openai.APIError: The model produced invalid content. | https://api.github.com/repos/langchain-ai/langchain/issues/23407/comments | 6 | 2024-06-25T16:18:40Z | 2024-07-05T04:42:37Z | https://github.com/langchain-ai/langchain/issues/23407 | 2,373,099,869 | 23,407 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/extraction_examples/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I attempted the example provided in the given link. While it executed flawlessly with OpenAI, I encountered a `TypeError` when running it with Cohere.
The error message was: `TypeError: BaseCohere.chat() received an unexpected keyword argument 'method'`.
### Idea or request for content:
I believe the documentation is not up to date with the Cohere chat API and requires some amendments. | DOC: <Issue related to /v0.2/docs/how_to/extraction_examples/> | https://api.github.com/repos/langchain-ai/langchain/issues/23396/comments | 2 | 2024-06-25T11:29:23Z | 2024-06-26T11:11:20Z | https://github.com/langchain-ai/langchain/issues/23396 | 2,372,440,766 | 23,396
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.text_splitter import RecursiveCharacterTextSplitter
if __name__ == "__main__":
    # Wrong behaviour, using \s instead of regular space
    splitter_keep = RecursiveCharacterTextSplitter(
        separators=[r"\s"],
        keep_separator=False,
        is_separator_regex=True,
        chunk_size=15,
        chunk_overlap=0,
        strip_whitespace=False)
    assert splitter_keep.split_text("Hello world")[0] == r"Hello\sworld"

    # Expected behaviour, keeping regular space
    splitter_no_keep = RecursiveCharacterTextSplitter(
        separators=[r"\s"],
        keep_separator=True,
        is_separator_regex=True,
        chunk_size=15,
        chunk_overlap=0,
        strip_whitespace=False)
    assert splitter_no_keep.split_text("Hello world")[0] == r"Hello world"
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am using the `langchain` library to split text using regex separators. I expect the output chunks to contain the original separator characters, but when the `keep_separator` flag is `False`, the merged chunks contain the literal regex pattern (e.g. `\s`) instead of the matched separator.
Possible code pointer where the problem might be coming from: [libs/text-splitters/langchain_text_splitters/character.py#L98](https://github.com/langchain-ai/langchain/blob/master/libs/text-splitters/langchain_text_splitters/character.py#L98)
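The underlying mechanics can be illustrated with plain `re` (a simplified sketch of what the splitter does internally, not actual LangChain code): with `keep_separator=True` the pattern is wrapped in a capturing group, so the matched separator text survives the split and can be re-attached during merging; with `keep_separator=False` the matches are discarded, leaving only the pattern string available to any step that later re-inserts a separator.

```python
import re

text = "Hello world"
pattern = r"\s"

# keep_separator=True path: the capturing group keeps the matched " ".
kept = [s for s in re.split(f"({pattern})", text) if s]
print(kept)     # ['Hello', ' ', 'world']

# keep_separator=False path: matches are discarded entirely, so a merge
# that re-inserts the separator only has the literal pattern to use.
dropped = [s for s in re.split(pattern, text) if s]
print(dropped)  # ['Hello', 'world']
```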
### System Info
langchain==0.2.5
langchain-core==0.2.9
langchain-text-splitters==0.2.1
Platform: Apple M1 Pro
macOS: 14.5 (23F79)
python version: Python 3.12.3
| RecursiveCharacterTextSplitter uses regex value instead of original separator when merging and keep_separator is false | https://api.github.com/repos/langchain-ai/langchain/issues/23394/comments | 2 | 2024-06-25T09:39:09Z | 2024-06-25T13:26:20Z | https://github.com/langchain-ai/langchain/issues/23394 | 2,372,195,599 | 23,394 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
1. Install the required packages: `!pip install --upgrade langchain e2b langchain-community`
2. Set up the environment variables for E2B and OpenAI API keys.
3. Run the following python code:
```
from langchain_community.tools import E2BDataAnalysisTool
import os
from langchain.agents import AgentType, initialize_agent
from langchain_openai import ChatOpenAI

os.environ["E2B_API_KEY"] = "<E2B_API_KEY>"
os.environ["OPENAI_API_KEY"] = "<OPENAI_API_KEY>"


def save_artifact(artifact):
    print("New matplotlib chart generated:", artifact.name)
    file = artifact.download()
    basename = os.path.basename(artifact.name)
    with open(f"./charts/{basename}", "wb") as f:
        f.write(file)


e2b_data_analysis_tool = E2BDataAnalysisTool(
    env_vars={"MY_SECRET": "secret_value"},
    on_stdout=lambda stdout: print("stdout:", stdout),
    on_stderr=lambda stderr: print("stderr:", stderr),
    on_artifact=save_artifact,
)
```
### Error Message and Stack Trace (if applicable)
Error Message
_ImportError: cannot import name 'DataAnalysis' from 'e2b' (c:\Users\sarthak kaushik\OneDrive\Desktop\Test_Project_Python\e2b\myenv\Lib\site-packages\e2b\__init__.py)
The above exception was the direct cause of the following exception:
ImportError: Unable to import e2b, please install with `pip install e2b`_
### Description
When trying to use the _**E2BDataAnalysisTool**_ from the _**langchain_community.tools**_ module, I'm encountering an **ImportError.** The error suggests that the DataAnalysis class cannot be imported from the e2b package.
**Expected Behavior:**
The E2BDataAnalysisTool should initialize without any import errors.
**Additional Context**
I have already installed the e2b package as suggested in the error message, but the issue persists.
**Possible Solution**
It seems that there might be a discrepancy between the expected structure of the e2b package and what's actually installed. Could there be a version mismatch or a change in the package structure that hasn't been reflected in the LangChain community tools?
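One way to confirm such a mismatch before constructing the tool is to check whether the symbol is importable at all. A generic, stdlib-only sketch (the `e2b` call mentioned in the comment is only an example; `DataAnalysis` may simply have been removed in newer e2b releases, which is an assumption):

```python
import importlib


def find_symbol(module_name: str, symbol: str) -> bool:
    """Return True if `symbol` can be imported from `module_name`."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(mod, symbol)


# e.g. find_symbol("e2b", "DataAnalysis") returning False would pinpoint
# the mismatch before E2BDataAnalysisTool raises. Stdlib sanity check:
print(find_symbol("json", "loads"))  # True
```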
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.12.3
Package Information
-------------------
> langchain_core: 0.2.9
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.82
> langchain_openai: 0.1.9
> langchain_text_splitters: 0.2.1 | E2B DataAnalysisTool() function not working correctly | https://api.github.com/repos/langchain-ai/langchain/issues/23392/comments | 3 | 2024-06-25T09:26:19Z | 2024-07-27T18:24:58Z | https://github.com/langchain-ai/langchain/issues/23392 | 2,372,167,852 | 23,392 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_core.messages import AIMessage
from langchain_openai import ChatOpenAI
c = ChatOpenAI()
test = AIMessage(content='Hello, this is a test AI message that contains text and tool use.', tool_calls=[{'name': 'test_tool_call', 'args': {}, 'id': 'test_tool_call_id'}])
c.get_num_tokens_from_messages(messages = [test])
```
### Error Message and Stack Trace (if applicable)
```
ValueError Traceback (most recent call last)
Cell In[4], line 10
      6 c = ChatOpenAI()
      8 test = AIMessage(content='Hello, this is a test AI message that contains text and tool use.', tool_calls=[{'name': 'test_tool_call', 'args': {}, 'id': 'test_tool_call_id'}])
---> 10 c.get_num_tokens_from_messages(messages = [test])

File ~/.cache/pypoetry/virtualenvs/learnwise-chat-RkYLlhmr-py3.12/lib/python3.12/site-packages/langchain_openai/chat_models/base.py:777, in BaseChatOpenAI.get_num_tokens_from_messages(self, messages)
    775     num_tokens += _count_image_tokens(*image_size)
    776 else:
--> 777     raise ValueError(
    778         f"Unrecognized content block type\n\n{val}"
    779     )
    780 else:
    781     # Cast str(value) in case the message value is not a string
    782     # This occurs with function messages
    783     num_tokens += len(encoding.encode(value))
ValueError: Unrecognized content block type
{'type': 'function', 'id': 'test_tool_call_id', 'function': {'name': 'test_tool_call', 'arguments': '{}'}}
```
### Description
Caused by the new `isinstance` block in https://github.com/langchain-ai/langchain/pull/23147/files
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Thu, 13 Jun 2024 16:25:55 +0000
> Python Version: 3.12.4 (main, Jun 7 2024, 06:33:07) [GCC 14.1.1 20240522]
Package Information
-------------------
> langchain_core: 0.2.9
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.81
> langchain_cli: 0.0.24
> langchain_cohere: 0.1.8
> langchain_mongodb: 0.1.6
> langchain_openai: 0.1.9
> langchain_text_splitters: 0.2.1
> langchainhub: 0.1.20
> langgraph: 0.1.1
> langserve: 0.2.1 | [Regression] ChatOpenAI.get_num_tokens_from_messages breaks with tool calls since version 0.1.9 | https://api.github.com/repos/langchain-ai/langchain/issues/23388/comments | 1 | 2024-06-25T07:59:41Z | 2024-06-25T20:27:48Z | https://github.com/langchain-ai/langchain/issues/23388 | 2,371,973,835 | 23,388 |
[
"langchain-ai",
"langchain"
] | Proposal for a new feature below by @baptiste-pasquier
### Checked
- [X] I searched existing ideas and did not find a similar one
- [X] I added a very descriptive title
- [X] I've clearly described the feature request and motivation for it
### Feature request
Add the ability to filter out documents with a similarity score less than a score_threshold in the `MultiVectorRetriever`.
### Motivation
The `VectorStoreRetriever` base class has a `"similarity_score_threshold"` option for `search_type`, which adds the ability to filter out any documents with a similarity score less than a score_threshold by calling the `.similarity_search_with_relevance_scores()` method instead of `.similarity_search()`.
This feature is not implemented in the `MultiVectorRetriever` class.
### Proposal (If applicable)
In the `_get_relevant_documents` method of `MultiVectorRetriever`, replace:
https://github.com/langchain-ai/langchain/blob/b20c2640dac79551685b8aba095ebc6125df928c/libs/langchain/langchain/retrievers/multi_vector.py#L63-L68
With :
```python
if self.search_type == "similarity":
    sub_docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
elif self.search_type == "similarity_score_threshold":
    sub_docs_and_similarities = (
        self.vectorstore.similarity_search_with_relevance_scores(
            query, **self.search_kwargs
        )
    )
    sub_docs = [sub_doc for sub_doc, _ in sub_docs_and_similarities]
elif self.search_type == "mmr":
    sub_docs = self.vectorstore.max_marginal_relevance_search(
        query, **self.search_kwargs
    )
else:
    raise ValueError(f"search_type of {self.search_type} not allowed.")
```
As in the `VectorStoreRetriever` base class :
https://github.com/langchain-ai/langchain/blob/b20c2640dac79551685b8aba095ebc6125df928c/libs/core/langchain_core/vectorstores.py#L673-L687
_Originally posted by @baptiste-pasquier in https://github.com/langchain-ai/langchain/discussions/19404_ | Add "similarity_score_threshold" option for MultiVectorRetriever class | https://api.github.com/repos/langchain-ai/langchain/issues/23387/comments | 2 | 2024-06-25T07:42:59Z | 2024-06-26T16:52:57Z | https://github.com/langchain-ai/langchain/issues/23387 | 2,371,940,278 | 23,387 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [x] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
def insert_data_into_vector_db(file_path, source_column=None):
    """
    Main function to get data from the source, create embeddings, and insert them into the database.
    """
    output = None
    logger.info("Embedding Started...")
    logger.info(f"Collection Name: {COLLECTION_NAME}")
    split_docs, err = fetch_data_from_source(file_path, source_column)
    # failure
    if split_docs is None:
        print('None')
    else:
        print('value')
        for docs in split_docs:
            print(docs)
    PGVector.from_documents(
        embedding=EMBEDDINGS_FUNCTION,
        documents=split_docs,
        collection_name=COLLECTION_NAME,
        connection_string=CONNECTION_STRING,
        use_jsonb=True
    )
    if err:
        output = f"Embedding failed with the error - {err}"
        logger.error(output)
    else:
        output = "Embedding Completed..."
        logger.info(output)
    return output
```
### Error Message and Stack Trace (if applicable)
"Traceback (most recent call last):\n File \"/var/task/app.py\", line 216, in lambda_handler\n return handle_valid_file(event, safe_filename, file_path, file_content)\n File \"/var/task/app.py\", line 133, in handle_valid_file\n body = handle_uploaded_file_success(\n File \"/var/task/app.py\", line 90, in handle_uploaded_file_success\n output = insert_data_into_vector_db(file_path, source_column)\n File \"/var/task/ai/embeddings/create.py\", line 128, in insert_data_into_vector_db\n PGVector.from_documents(\n File \"/var/task/langchain_community/vectorstores/pgvector.py\", line 1139, in from_documents\n return cls.from_texts(\n File \"/var/task/langchain_community/vectorstores/pgvector.py\", line 1009, in from_texts\n embeddings = embedding.embed_documents(list(texts))\n File \"/var/task/langchain_community/embeddings/baichuan.py\", line 111, in embed_documents\n return self._embed(texts)\n File \"/var/task/langchain_community/embeddings/baichuan.py\", line 85, in _embed\n response.raise_for_status()\n File \"/var/task/requests/models.py\", line 1024, in raise_for_status\n raise HTTPError(http_error_msg, response=self)\nrequests.exceptions.HTTPError: 400 Client Error: for url: http://api.baichuan-ai.com/v1/embeddings"
### Description
I'm encountering a `400 Client Error` when attempting to embed documents using `langchain_community.embeddings` with `PGVector`, preventing successful embedding.
In a separate script where I didn't use `PGVector`, document embedding worked properly. This suggests there may be an issue specifically with `PGVector`.
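One thing worth ruling out (an assumption, since the 400 response body isn't shown): `PGVector.from_texts` embeds every chunk in one pass, so a single empty or over-long chunk can fail the whole request against some embedding APIs. A pre-filter sketch (`max_chars` is an illustrative limit, not Baichuan's documented one):

```python
def clean_chunks(texts, max_chars=2000):
    """Drop empty chunks and hard-truncate oversized ones before embedding."""
    cleaned = []
    for t in texts:
        t = t.strip()
        if not t:
            continue
        cleaned.append(t[:max_chars])
    return cleaned


print(clean_chunks(["   ", "hello", "x" * 5000], max_chars=10))
# ['hello', 'xxxxxxxxxx']
```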
### System Info
langchain==0.2.5
langchain-community==0.2.5
langchain-core==0.2.9
langchain-openai==0.1.9
langchain-text-splitters==0.2.1
Platform : Ubuntu
Python Version : 3.10.13 | PGVector.from_documents is not working | https://api.github.com/repos/langchain-ai/langchain/issues/23386/comments | 0 | 2024-06-25T07:16:33Z | 2024-06-29T15:24:23Z | https://github.com/langchain-ai/langchain/issues/23386 | 2,371,888,192 | 23,386 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/extraction_examples/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I'm getting the following error:
```
Traceback (most recent call last):
  File "Z:\llm_images\extract_info.py", line 148, in <module>
    response = chain.invoke(
               ^^^^^^^^^^^^^
  File "Z:\conda_env\ccs\Lib\site-packages\langchain_core\runnables\base.py", line 2504, in invoke
    input = step.invoke(input, config)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "Z:\conda_env\ccs\Lib\site-packages\langchain_core\output_parsers\base.py", line 169, in invoke
    return self._call_with_config(
           ^^^^^^^^^^^^^^^^^^^^^^^
  File "Z:\conda_env\ccs\Lib\site-packages\langchain_core\runnables\base.py", line 1598, in _call_with_config
    context.run(
  File "Z:\conda_env\ccs\Lib\site-packages\langchain_core\runnables\config.py", line 380, in call_func_with_variable_args
    return func(input, **kwargs) # type: ignore[call-arg]
           ^^^^^^^^^^^^^^^^^^^^^
  File "Z:\conda_env\ccs\Lib\site-packages\langchain_core\output_parsers\base.py", line 170, in <lambda>
    lambda inner_input: self.parse_result(
                        ^^^^^^^^^^^^^^^^^^
  File "Z:\conda_env\ccs\Lib\site-packages\langchain_core\output_parsers\openai_tools.py", line 196, in parse_result
    pydantic_objects.append(name_dict[res["type"]](**res["args"]))
                            ~~~~~~~~~^^^^^^^^^^^^^
KeyError: 'Person'
```
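The `KeyError: 'Person'` comes from the parser looking up the tool call's `type` in its registered-schema map (the `name_dict[res["type"]]` line in the traceback). The usual fix is to make sure the `Person` schema is in the schema list passed to the parser or `with_structured_output`. A plain-dict illustration of the failing lookup (values are hypothetical, not LangChain internals):

```python
# Schemas the parser knows about, keyed by class name.
name_dict = {"Data": dict}          # 'Person' was never registered

tool_call = {"type": "Person", "args": {"name": "Ada"}}

schema = name_dict.get(tool_call["type"])
print(schema is None)  # True -> name_dict[tool_call["type"]] raises KeyError
```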
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/how_to/extraction_examples/> | https://api.github.com/repos/langchain-ai/langchain/issues/23383/comments | 11 | 2024-06-25T06:07:12Z | 2024-06-25T09:22:16Z | https://github.com/langchain-ai/langchain/issues/23383 | 2,371,735,246 | 23,383 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Below is the example code to reproduce the issue:
```
def fetch_config_from_header(config: Dict[str, Any], req: Request) -> Dict[str, Any]:
    """ All supported types: 'name', 'cache', 'verbose', 'callbacks', 'tags', 'metadata', 'custom_get_token_ids', 'callback_manager', 'client', 'async_client', 'model_name', 'temperature', 'model_kwargs', 'openai_api_key', 'openai_api_base', 'openai_organization', 'openai_proxy', 'request_timeout', 'max_retries', 'streaming', 'n', 'max_tokens', 'tiktoken_model_name', 'default_headers', 'default_query', 'http_client', 'http_async_client']"""
    config = config.copy()
    configurable = config.get("configurable", {})
    if "x-model-name" in req.headers:
        configurable["model_name"] = req.headers["x-model-name"]
    else:
        raise HTTPException(401, "No model name provided")
    if "x-api-key" in req.headers:
        configurable["default_headers"] = {
            "Content-Type": "application/json",
            "api-key": req.headers["x-api-key"]
        }
    else:
        raise HTTPException(401, "No API key provided")
    if "x-model-kwargs" in req.headers:
        configurable["model_kwargs"] = json.loads(req.headers["x-model-kwargs"])
    else:
        raise HTTPException(401, "No model arguments provided")
    configurable["openai_api_base"] = f"https://someendpoint.com/{req.headers['x-model-name']}"
    config["configurable"] = configurable
    return config


chat_model = ChatOpenAI(
    model_name="some_model",
    model_kwargs={},
    default_headers={},
    openai_api_key="placeholder",
    openai_api_base="placeholder").configurable_fields(
    model_name=ConfigurableField(id="model_name"),
    model_kwargs=ConfigurableField(id="model_kwargs"),
    default_headers=ConfigurableField(id="default_headers"),
    openai_api_base=ConfigurableField(id="openai_api_base"),
)

chain = prompt_template | chat_model | StrOutputParser()

add_routes(
    app,
    chain.with_types(input_type=InputChat),
    path="/some_chain",
    disabled_endpoints=["playground"],
    per_req_config_modifier=fetch_config_from_header,
)
```
### Error Message and Stack Trace (if applicable)
I attached only the relevant part of the traceback
```
Traceback (most recent call last):
File "/venv/lib/python3.12/site-packages/uvicorn/protocols/http/httptools_impl.py", line 399, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 70, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__ await super().__call__(scope, receive, send)
File "/venv/lib/python3.12/site-packages/starlette/applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "/venv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 186, in __call__
raise exc
File "/venv/lib/python3.12/site-packages/starlette/middleware/errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "/venv/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/venv/lib/python3.12/site-packages/starlette/routing.py", line 756, in __call__
await self.middleware_stack(scope, receive, send)
File "/venv/lib/python3.12/site-packages/starlette/routing.py", line 776, in app
await route.handle(scope, receive, send)
File "/venv/lib/python3.12/site-packages/starlette/routing.py", line 297, in handle
await self.app(scope, receive, send)
File "/venv/lib/python3.12/site-packages/starlette/routing.py", line 77, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/venv/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/venv/lib/python3.12/site-packages/starlette/routing.py", line 72, in app
response = await func(request)
^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/fastapi/routing.py", line 278, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/fastapi/routing.py", line 191, in run_endpoint_function
return await dependant.call(**values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/langserve/server.py", line 530, in invoke
return await api_handler.invoke(request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/langserve/api_handler.py", line 835, in invoke
output = await invoke_coro
^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 4585, in ainvoke
return await self.bound.ainvoke(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2541, in ainvoke
input = await step.ainvoke(input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/langchain_core/runnables/configurable.py", line 123, in ainvoke
return await runnable.ainvoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 191, in ainvoke
llm_result = await self.agenerate_prompt(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 611, in agenerate_prompt
return await self.agenerate(
^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 570, in agenerate
raise exceptions[0]
File "/venv/lib/python3.12/site-packages/langchain_core/language_models/chat_models.py", line 757, in _agenerate_with_cache
result = await self._agenerate(
^^^^^^^^^^^^^^^^^^^^^^
File "/venv/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 667, in _agenerate
response = await self.async_client.create(messages=message_dicts, **params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
```
### Description
In https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/language_models/chat_models.py, `kwargs` still contains the information inside `agenerate_prompt()`, as shown below.
```
async def agenerate_prompt(
    self,
    prompts: List[PromptValue],
    stop: Optional[List[str]] = None,
    callbacks: Callbacks = None,
    **kwargs: Any,
) -> LLMResult:
    prompt_messages = [p.to_messages() for p in prompts]
    return await self.agenerate(
        prompt_messages, stop=stop, callbacks=callbacks, **kwargs
    )
```
Values of `prompt_messages` and `kwargs` in `agenerate_prompt()`:
```
langchain_core.language_models.chat_model.py BaseChatModel.agenerate_prompt
prompt_messages: [[SystemMessage(content='some messages')]]
kwargs: {'tags': [], 'metadata': {'__useragent': 'python-requests/2.32.3', '__langserve_version': '0.2.2', '__langserve_endpoint': 'invoke', 'model_name': 'some_model', 'openai_api_base': 'https://someendpoint.com/some_model', 'run_name': None, 'run_id': None}
```
However, when `agenerate()` is called from `agenerate_prompt()`, `kwargs` is empty, as shown below.
```
async def agenerate(
self,
messages: List[List[BaseMessage]],
stop: Optional[List[str]] = None,
callbacks: Callbacks = None,
*,
tags: Optional[List[str]] = None,
metadata: Optional[Dict[str, Any]] = None,
run_name: Optional[str] = None,
run_id: Optional[uuid.UUID] = None,
**kwargs: Any,
) -> LLMResult:
"""Asynchronously pass a sequence of prompts to a model and return generations.
This method should make use of batched calls for models that expose a batched
API.
Use this method when you want to:
1. take advantage of batched calls,
2. need more output from the model than just the top generated value,
3. are building chains that are agnostic to the underlying language model
type (e.g., pure text completion models vs chat models).
Args:
messages: List of list of messages.
stop: Stop words to use when generating. Model output is cut off at the
first occurrence of any of these substrings.
callbacks: Callbacks to pass through. Used for executing additional
functionality, such as logging or streaming, throughout generation.
**kwargs: Arbitrary additional keyword arguments. These are usually passed
to the model provider API call.
Returns:
An LLMResult, which contains a list of candidate Generations for each input
prompt and additional model provider-specific output.
"""
params = self._get_invocation_params(stop=stop, **kwargs)
```
Values of `params` and `kwargs` in `agenerate()`:
```
langchain_core.language_models.chat_models.py BaseChatModel.agenerate
params: {'model': 'some_model', 'model_name': 'some_model', 'stream': False, 'n': 1, 'temperature': 0.7, 'user': 'some_user', '_type': 'openai-chat', 'stop': None}
kwargs: {}
```
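The behavior follows from ordinary Python keyword binding rather than from data being lost: `agenerate` declares `tags`, `metadata`, `run_name`, and `run_id` as keyword-only parameters, so those keys are bound to the named parameters and never land in `**kwargs`. A minimal sketch with toy functions (not the actual LangChain code):

```python
def agenerate_prompt(**kwargs):
    # Forwards everything on, just like BaseChatModel.agenerate_prompt does.
    return agenerate(**kwargs)


def agenerate(*, tags=None, metadata=None, run_name=None, run_id=None, **kwargs):
    # Keys matching the named keyword-only parameters are consumed
    # before **kwargs is assembled, so they never appear in it.
    return kwargs


print(agenerate_prompt(tags=[], metadata={"model_name": "some_model"}, run_id=None))
# -> {}
```

So the values are not silently dropped; they are picked up by `agenerate`'s named parameters, and only keys matching none of them would survive into its `**kwargs`.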
### System Info
```
langchain==0.2.5
langchain-community==0.2.5
langchain-core==0.2.9
langchain-experimental==0.0.60
langchain-openai==0.1.9
langchain-text-splitters==0.2.1
langgraph==0.1.1
langserve==0.2.2
langsmith==0.1.82
openai==1.35.3
platform = linux
python version = 3.12.4
``` | BaseChatModel.agenerate_prompt() not passing kwargs correctly to BaseChatModel.agenerate() | https://api.github.com/repos/langchain-ai/langchain/issues/23381/comments | 1 | 2024-06-25T04:48:29Z | 2024-06-25T09:51:30Z | https://github.com/langchain-ai/langchain/issues/23381 | 2,371,644,357 | 23,381 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_aws import ChatBedrock
from langchain_experimental.llms.anthropic_functions import AnthropicFunctions
from dotenv import load_dotenv
load_dotenv()
# Initialize the LLM with the required parameters
llm = ChatBedrock(
model_id="anthropic.claude-3-haiku-20240307-v1:0",
model_kwargs={"temperature": 0.1},
region_name="us-east-1"
)
# Initialize AnthropicFunctions with the LLM
base_model = AnthropicFunctions(llm=llm)
# Define the function parameters for the model
functions = [
{
"name": "get_current_weather",
"description": "Get the current weather in a given location",
"parameters": {
"type": "object",
"properties": {
"location": {
"type": "string",
"description": "The city and state, e.g. San Francisco, CA",
},
"unit": {
"type": "string",
"enum": ["celsius", "fahrenheit"],
},
},
"required": ["location"],
},
}
]
# Bind the functions to the model without causing keyword conflicts
model = base_model.bind(
functions=functions,
function_call={"name": "get_current_weather"}
)
# Invoke the model with the provided input
res = model.invoke("What's the weather in San Francisco?")
# Extract and print the function call from the response
function_call = res.additional_kwargs.get("function_call")
print("function_call", function_call)
### Error Message and Stack Trace (if applicable)
TypeError: langchain_core.language_models.chat_models.BaseChatModel.generate_prompt() got multiple values for keyword argument 'callbacks'
### Description
I am trying to use function calling with Anthropic's models through Bedrock. Please help me fix this problem.
### System Info
I use the latest version.
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/sql_qa/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
```python
from langchain_community.tools.sql_database.tool import QuerySQLDataBaseTool
execute_query = QuerySQLDataBaseTool(db=db)
write_query = create_sql_query_chain(llm, db)
chain = write_query | execute_query
chain.invoke({"question": "How many employees are there"})
```
The result generated by write_query is not entirely SQL, so execute_query will report an error.
My workaround:
```python
import re

def extract_sql(txt: str) -> str:
    code_block_pattern = r'```sql(.*?)```'
    code_blocks = re.findall(code_block_pattern, txt, re.DOTALL)
    if code_blocks:
        return code_blocks[0]
    return ""

chain = write_query | extract_sql | execute_query
```
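As a sanity check, the extraction helper can be exercised on a typical LLM response (the sample response text is illustrative):

```python
import re


def extract_sql(txt: str) -> str:
    """Return the contents of the first ```sql fenced block, or ""."""
    code_blocks = re.findall(r"```sql(.*?)```", txt, re.DOTALL)
    return code_blocks[0] if code_blocks else ""


response = "Here is the query:\n```sql\nSELECT COUNT(*) FROM Employee;\n```"
print(extract_sql(response).strip())  # -> SELECT COUNT(*) FROM Employee;
```

Because plain functions are coerced into runnables, `extract_sql` can sit directly between `write_query` and `execute_query` in the chain.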
The prompt is also updated so the model always returns its SQL in a fenced block:
"""
Use the following format to return:
```sql
write sql here
```
"""
### Idea or request for content:
_No response_ | DOC:Some questions about Execute SQL query | https://api.github.com/repos/langchain-ai/langchain/issues/23378/comments | 1 | 2024-06-25T02:35:56Z | 2024-07-08T02:06:44Z | https://github.com/langchain-ai/langchain/issues/23378 | 2,371,502,868 | 23,378 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_openai import ChatOpenAI
from langchain_community.graphs import RdfGraph
from langchain.chains import GraphSparqlQAChain
end_point = "https://brickschema.org/schema/1.0.3/Brick.ttl"
graph = RdfGraph(query_endpoint=end_point, standard="ttl")
```
### Error Message and Stack Trace (if applicable)
```
ValueError: Invalid standard. Supported standards are: ('rdf', 'rdfs', 'owl').
```
### Description
RdfGraph has no support for reading the .ttl file format.
The .ttl extension denotes Turtle (Terse RDF Triple Language), another way to serialize RDF. Turtle is often the preferred choice because of its human-readable syntax.
### System Info
```
pip install langchain==0.2.5
pip install langchain-openai==0.1.9
pip install rdflib==7.0.0
``` | When will langchain_community.graphs.RdfGraph support reading .ttl serialization format ? | https://api.github.com/repos/langchain-ai/langchain/issues/23372/comments | 1 | 2024-06-24T21:09:44Z | 2024-07-17T20:15:40Z | https://github.com/langchain-ai/langchain/issues/23372 | 2,371,098,095 | 23,372 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/tools/google_search/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The current documentation for langchain_community.utilities.google_search.GoogleSearchAPIWrapper appears to be outdated. The class GoogleSearchAPIWrapper has been marked as deprecated since version 0.0.33, yet the page still contains detailed usage instructions, which may confuse users.
### Idea or request for content:
Clearly mark the class as deprecated at the beginning of the documentation.
Remove or simplify the setup instructions to reflect the deprecated status.
Provide alternative recommendations or direct users to updated tools and methods if available. | Issue related to /v0.2/docs/integrations/tools/google_search/ | https://api.github.com/repos/langchain-ai/langchain/issues/23371/comments | 0 | 2024-06-24T21:08:11Z | 2024-06-24T22:23:01Z | https://github.com/langchain-ai/langchain/issues/23371 | 2,371,095,092 | 23,371 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain.callbacks.base import BaseCallbackHandler
class CustomCallbackHandler(BaseCallbackHandler):
def on_agent_finish(self, finish, **kwargs):
if "Agent stopped due to iteration limit or time limit" in finish.return_values.get('output', ''):
finish.return_values['output'] = "I'm having difficulty finding an answer. Please rephrase your question."
class ChatBot:
def __init__(self, llm, tasks):
self.llm = llm
self.tasks = tasks
self.agent = self._init_agent()
def _init_agent(self):
tools = get_tools() # Function to get the tools
agent = create_tool_calling_agent(llm=self.llm, tools=tools, callback_handler=CustomCallbackHandler())
return agent
def send_message(self, message):
if not message.strip():
return "You didn't ask a question. How can I assist you further?"
response = self.agent.run(message)
if "tool was called" in response:
# Logically verify if the tool was actually called
# This is where the inconsistency occurs
print("Tool call indicated but not verified")
return response
# Usage
llm = "mock_llm_instance" # Replace with actual LLM instance
tasks = ["task1", "task2"]
chatbot = ChatBot(llm, tasks)
response = chatbot.send_message("Trigger tool call")
print(response)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The chatbot sometimes responds with a message indicating that a tool was called, even though the tool was not actually executed. This inconsistency suggests an issue within the Langchain library, particularly in the agent's tool-calling mechanism.
**Steps to Reproduce:**
1. Initiate a session with the chatbot.
2. Send a message that should trigger a tool call.
3. Observe the response: it sometimes indicates the tool was called, but in reality, it wasn't.
**Expected Behavior:**
1. When the chatbot indicates that a tool was called, the tool should actually be executed and return a result.
**Actual Behavior:**
1. The chatbot occasionally indicates that a tool was called without actually executing it.
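One way to make the discrepancy observable is to wrap each tool so every invocation is recorded, then compare the recorded calls against what the response claims. A minimal, framework-agnostic sketch (the tool name and wrapper are illustrative, not LangChain APIs):

```python
class CallRecorder:
    """Wraps a callable and records every invocation."""

    def __init__(self, fn):
        self.fn = fn
        self.calls = []

    def __call__(self, *args, **kwargs):
        self.calls.append((args, kwargs))
        return self.fn(*args, **kwargs)


def get_weather(city: str) -> str:
    return f"Sunny in {city}"


weather_tool = CallRecorder(get_weather)
weather_tool("Paris")

# After an agent run, an empty `calls` list while the response text claims
# a tool was used is direct evidence of a hallucinated tool call.
print(len(weather_tool.calls))  # -> 1
```

Logging from inside the tool itself (rather than trusting the model's output text) is what distinguishes a real tool execution from a hallucinated one.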
### System Info
langchain==0.2.3
langchain-community==0.0.33
langchain-core==0.2.4
langchain-openai==0.1.8
langchain-text-splitters==0.2.0
langchainhub==0.1.15
| Langchain hallucinates and responds without actually calling the tool | https://api.github.com/repos/langchain-ai/langchain/issues/23365/comments | 2 | 2024-06-24T18:54:19Z | 2024-06-24T19:03:03Z | https://github.com/langchain-ai/langchain/issues/23365 | 2,370,889,775 | 23,365 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [ ] I used the GitHub search to find a similar question and didn't find it.
- [ ] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import json
import gradio as gr
import typing_extensions
import os
import boto3
from langchain.prompts.prompt import PromptTemplate
from langchain_community.graphs import Neo4jGraph
from langchain.chains import GraphCypherQAChain
from langchain.memory import ConversationBufferMemory
from langchain.globals import set_debug
from langchain_community.chat_models import BedrockChat
set_debug(True)
LANGCHAIN_TRACING_V2="true" # false
LANGCHAIN_ENDPOINT="https://api.smith.langchain.com"
LANGCHAIN_PROJECT=os.getenv("LANGCHAIN_PROJECT")
LANGCHAIN_API_KEY=os.getenv("LANGCHAIN_API_KEY")
# pass
REGION_NAME = os.getenv("AWS_REGION")
SERVICE_NAME = os.getenv("AWS_SERVICE_NAME")
AWS_ACCESS_KEY_ID = os.getenv("AWS_ACCESS_KEY_ID")
AWS_SECRET_ACCESS_KEY = os.getenv("AWS_SECRET_ACCESS_KEY")
NEO4J_URI = os.getenv("NEO4J_URI")
NEO4J_USERNAME = os.getenv("NEO4J_USERNAME")
NEO4J_PASSWORD = os.getenv("NEO4J_PASSWORD")
bedrock = boto3.client(
service_name=SERVICE_NAME,
region_name=REGION_NAME,
endpoint_url=f'https://{SERVICE_NAME}.{REGION_NAME}.amazonaws.com',
aws_access_key_id = AWS_ACCESS_KEY_ID,
aws_secret_access_key = AWS_SECRET_ACCESS_KEY
)
CYPHER_GENERATION_TEMPLATE = """You are an expert Neo4j Cypher translator who understands the question in english and convert to Cypher strictly based on the Neo4j Schema provided and following the instructions below:
<instructions>
* Generate Cypher query compatible ONLY for Neo4j Version 5
* Do not use EXISTS, SIZE keywords in the cypher. Use alias when using the WITH keyword
* Please do not use same variable names for different nodes and relationships in the query.
* Use only Nodes and relationships mentioned in the schema
* Always enclose the Cypher output inside 3 backticks (```)
* Always do a case-insensitive and fuzzy search for any properties related search. Eg: to search for a Person name use `toLower(c.name) contains 'neo4j'`
* Cypher is NOT SQL. So, do not mix and match the syntaxes.
* Every Cypher query always starts with a MATCH keyword.
* Always do fuzzy search for any properties related search. Eg: when the user asks for "karn" instead of "karna", make sure to search for a Person name using use `toLower(c.name) contains 'karn'`
* Always understand the gender of the Person node and map relationship accordingly. Eg: when asked Who is Karna married to, search for HUSBAND_OF relationship coming out of Karna instead of WIFE_OF relationship.
</instructions>
Schema:
<schema>
{schema}
</schema>
The samples below follow the instructions and the schema mentioned above. So, please follow the same when you generate the cypher:
<samples>
Human: Who is the husband of Kunti?
Assistant: ```MATCH (p:Person)-[:WIFE_OF]->(husband:Person) WHERE toLower(p.name) contains "kunti" RETURN husband.name```
Human: Who are the parents of Karna?
Assistant: ```MATCH (p1:Person)<-[:FATHER_OF]-(father:Person) OPTIONAL MATCH (p2:Person)<-[:MOTHER_OF]-(mother:Person) WHERE toLower(p1.name) contains "karna" OR toLower(p2.name) contains "karna" RETURN coalesce(father.name, mother.name) AS parent_name```
Human: Who is Kunti married to?
Assistant: ```MATCH (p:Person)-[:WIFE_OF]->(husband:Person) WHERE toLower(p.name) contains "kunti" RETURN husband.name```
Human: Who killed Ghatotakach?
Assistant: ```MATCH (killer:Person)-[:KILLED]->(p:Person) WHERE toLower(p.name) contains "ghatotakach" RETURN killer.name```
Human: Who are the siblings of Karna?
Assistant: ```MATCH (p1:Person)<-[:FATHER_OF]-(father)-[:FATHER_OF]->(sibling) WHERE sibling <> p1 and toLower(p1.name) contains "karna" RETURN sibling.name AS SiblingName UNION MATCH (p2:Person)<-[:MOTHER_OF]-(mother)-[:MOTHER_OF]->(sibling) WHERE sibling <> p2 and toLower(p2.name) contains "karna" RETURN sibling.name AS SiblingName```
Human: Tell me the names of top 5 characters in Mahabharata.
Assistant: ```MATCH (p:Person) WITH p, COUNT(*) AS rel_count RETURN p, COUNT(*) AS rel_count ORDER BY rel_count DESC LIMIT 5```
</samples>
Human: {question}
Assistant:
"""
CYPHER_GENERATION_PROMPT = PromptTemplate(
input_variables=["schema","question"], validate_template=True, template=CYPHER_GENERATION_TEMPLATE
)
graph = Neo4jGraph(
url=NEO4J_URI,
username=NEO4J_USERNAME,
password=NEO4J_PASSWORD,
)
llm = BedrockChat(
model_id="anthropic.claude-v2",
client=bedrock,
model_kwargs = {
"temperature":0,
"top_k":1, "top_p":0.1,
"anthropic_version":"bedrock-2023-05-31",
"max_tokens_to_sample": 2048
}
)
chain = GraphCypherQAChain.from_llm(
llm,
graph=graph,
cypher_prompt=CYPHER_GENERATION_PROMPT,
verbose=True,
return_direct=True
)
def chat(que):
r = chain.invoke(que)
print(r)
summary_prompt_tpl = f"""Human:
Fact: {json.dumps(r['result'])}
* Summarise the above fact as if you are answering this question "{r['query']}"
* When the fact is not empty, assume the question is valid and the answer is true
* Do not return helpful or extra text or apologies
* Just return summary to the user. DO NOT start with Here is a summary
* List the results in rich text format if there are more than one results
Assistant:
"""
return llm.invoke(summary_prompt_tpl).content
memory = ConversationBufferMemory(memory_key = "chat_history", return_messages = True)
def chat_response(input_text,history):
try:
return chat(input_text)
except:
# a bit of protection against exposed error messages
# we could log these situations in the backend to revisit later in development
return "I'm sorry, there was an error retrieving the information you requested."
# Define your custom CSS
custom_css = """
/* Custom CSS for the chat interface */
.gradio-container {
# background: #f0f0f0; /* Change background color */
    border: 0;
border-radius: 15px; /* Add border radius */
}
.primary.svelte-cmf5ev{
background: linear-gradient(90deg, #9848FC 0%, #DC8855 100%);
# background-clip: text;
# -webkit-background-clip: text;
# -webkit-text-fill-color: transparent;
}
.v-application .secondary{
background-color: #EEEEEE !important
}
# /* Custom CSS for the chat input */
# .gradio-chat-input input[type="text"] {
# background-color: #ffffff; /* Change input background color */
# border-radius: 5px; /* Add border radius */
# border: 1px solid #cccccc; /* Change border color */
# }
# /* Custom CSS for the chat button */
# .gradio-chat-input button {
# # background-color: #ff0000; /* Change button background color */
# # border-radius: 5px; /* Add border radius */
# # color: #ffffff; /* Change text color */
# background: linear-gradient(90deg, #9848FC 0%, #DC8855 100%);
# background-clip: text;
# -webkit-background-clip: text;
# -webkit-text-fill-color: transparent;
# }
"""
interface = gr.ChatInterface(fn = chat_response,
theme = "soft",
chatbot = gr.Chatbot(height=430),
undo_btn = None,
clear_btn = "\U0001F5D1 Clear Chat",
css=custom_css,
examples = ["Who killed Ghatotakach?",
"Who are the parents of Karna?",
"Who are the kids of Kunti?",
"Who are the siblings of Karna?",
"Tell me the names of top 5 characters in Mahabharata.",
"Why did the Mahabharata war happen?",
"Who killed Karna, and why?",
"Why did the Pandavas have to go live in the forest for 12 years?",
"How did the Pandavas receive knowledge from sages and saintly persons during their time in the forest?",
#"What were the specific austerities that Arjuna had to perform in the Himalayan mountains to please Lord Shiva?",
#"How did Lord Krishna's presence in the forest affect the Pandavas' experience during their exile?",
"What were the specific challenges and difficulties that Yudhisthira and his brothers faced in their daily lives as inhabitants of the forest?",
#"How did Bhima cope with the challenges of living as an ascetic in the forest? Did he face any particular difficulties or struggles during their time in exile?"
])
# Launch the interface
interface.launch(share=True)
```
### Error Message and Stack Trace (if applicable)
```
ValueError('Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: #: subject must not be valid against schema {"required":["messages"]}#: extraneous key [max_tokens_to_sample] is not permitted, please reformat your input and try again.')

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/langchain_community/llms/bedrock.py", line 545, in _prepare_input_and_invoke
    response = self.client.invoke_model(**request_options)
  File "/usr/local/lib/python3.10/site-packages/botocore/client.py", line 565, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/usr/local/lib/python3.10/site-packages/botocore/client.py", line 1021, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: #: subject must not be valid against schema {"required":["messages"]}#: extraneous key [max_tokens_to_sample] is not permitted, please reformat your input and try again.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 156, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/usr/local/lib/python3.10/site-packages/langchain_community/chains/graph_qa/cypher.py", line 316, in _call
    generated_cypher = self.cypher_generation_chain.run(
  File "/usr/local/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 168, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 600, in run
    return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
  File "/usr/local/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 168, in warning_emitting_wrapper
    return wrapped(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 383, in __call__
    return self.invoke(
  File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 166, in invoke
    raise e
  File "/usr/local/lib/python3.10/site-packages/langchain/chains/base.py", line 156, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/usr/local/lib/python3.10/site-packages/langchain/chains/llm.py", line 126, in _call
    response = self.generate([inputs], run_manager=run_manager)
  File "/usr/local/lib/python3.10/site-packages/langchain/chains/llm.py", line 138, in generate
    return self.llm.generate_prompt(
  File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 599, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 456, in generate
    raise e
  File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 446, in generate
    self._generate_with_cache(
  File "/usr/local/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 671, in _generate_with_cache
    result = self._generate(
  File "/usr/local/lib/python3.10/site-packages/langchain_community/chat_models/bedrock.py", line 300, in _generate
    completion, usage_info = self._prepare_input_and_invoke(
  File "/usr/local/lib/python3.10/site-packages/langchain_community/llms/bedrock.py", line 552, in _prepare_input_and_invoke
    raise ValueError(f"Error raised by bedrock service: {e}")
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: #: subject must not be valid against schema {"required":["messages"]}#: extraneous key [max_tokens_to_sample] is not permitted, please reformat your input and try again.
```
### Description
I am trying to integrate Bedrock Runtime with Langchain but it is continuously failing with `ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: #: subject must not be valid against schema {\"required\":[\"messages\"]}#: extraneous key [max_tokens_to_sample] is not permitted, please reformat your input and try again.`
This does seem like the issue mentioned [here](https://repost.aws/questions/QUcTMammSKSL-mTcrz-JW4OA/bedrock-error-when-calling) and somewhat related to issue mentioned [here](https://github.com/langchain-ai/langchain/issues/11130). However, I am unable to find any code example of implementing Bedrock Runtime with Langchain and how to update the prompt and LLM accordingly. Any help in this regard would be much appreciated.
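For what it's worth, the error message itself points at the likely cause: Claude 3 models on Bedrock use the Messages API, which requires a `messages` payload and a `max_tokens` parameter instead of the legacy `max_tokens_to_sample`. A hedged sketch of adapting the `model_kwargs` (key names taken from the error text; verify against the Bedrock documentation for your model):

```python
model_kwargs = {
    "temperature": 0,
    "top_k": 1,
    "top_p": 0.1,
    "anthropic_version": "bedrock-2023-05-31",
    "max_tokens_to_sample": 2048,
}

# The Claude 3 Messages API rejects max_tokens_to_sample; rename it.
if "max_tokens_to_sample" in model_kwargs:
    model_kwargs["max_tokens"] = model_kwargs.pop("max_tokens_to_sample")

print(model_kwargs["max_tokens"])  # -> 2048
```

With the renamed key, the resulting dict can be passed as `model_kwargs` to `BedrockChat` in place of the original one.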
### System Info
neo4j-driver
gradio==4.29.0
langchain==0.2.5
awscli
langchain-community
langchain-aws
botocore
boto3 | ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: #: subject must not be valid against schema {\"required\":[\"messages\"]}#: extraneous key [max_tokens_to_sample] is not permitted, please reformat your input and try again. | https://api.github.com/repos/langchain-ai/langchain/issues/23352/comments | 2 | 2024-06-24T13:20:30Z | 2024-06-25T03:43:12Z | https://github.com/langchain-ai/langchain/issues/23352 | 2,370,242,732 | 23,352 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/chat/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
_No response_
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/integrations/chat/> | https://api.github.com/repos/langchain-ai/langchain/issues/23346/comments | 0 | 2024-06-24T08:43:45Z | 2024-06-24T08:46:12Z | https://github.com/langchain-ai/langchain/issues/23346 | 2,369,618,335 | 23,346 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/tools/wikidata/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The current documentation does not cover the positional arguments required by the WikidataAPIWrapper. It needs the arguments wikidata_mw and wikidata_rest.
Link to faulty documentation page: https://python.langchain.com/v0.2/docs/integrations/tools/wikidata/
### Idea or request for content:
Please update the code with the required arguments, and please explain from where we can get it. If needed, I can work on it as well. Thanks! | DOC: Missing positional arguments in Wikidata Langchain documentation. Path: /v0.2/docs/integrations/tools/wikidata/ | https://api.github.com/repos/langchain-ai/langchain/issues/23344/comments | 1 | 2024-06-24T07:59:20Z | 2024-06-27T06:04:49Z | https://github.com/langchain-ai/langchain/issues/23344 | 2,369,519,742 | 23,344 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/providers/replicate/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
This page doesn't include import statements, e.g. `from langchain_community.llms import Replicate`.
### Idea or request for content:
Add that line at the top | The documentation for the Integration for Replicate is missing import statements | https://api.github.com/repos/langchain-ai/langchain/issues/23342/comments | 0 | 2024-06-24T07:28:40Z | 2024-06-24T07:31:10Z | https://github.com/langchain-ai/langchain/issues/23342 | 2,369,458,309 | 23,342 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
text_splitter = SemanticChunker(HuggingFaceEmbeddings(), breakpoint_threshold_type="gradient")
### Error Message and Stack Trace (if applicable)
```
File "/Users/guertethiaf/Documents/jamstack/muniai/anabondsbackend/main.py", line 33, in chunk_text_semantically
text_splitter = SemanticChunker(HuggingFaceEmbeddings(), breakpoint_threshold_type="gradient")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_experimental/text_splitter.py", line 124, in __init__
self.breakpoint_threshold_amount = BREAKPOINT_DEFAULTS[
^^^^^^^^^^^^^^^^^^^^
KeyError: 'gradient'
```
### Description
When trying to use the SemanticChunker with 'gradient' as a breakpoint_threshold_type I noticed it always gave me a key error.
After checking `/Library/Frameworks/Python.framework/Versions/3.12/lib/python3.12/site-packages/langchain_experimental/text_splitter.py` I noticed the option wasn't present.
The option is present in the repository — it was merged about a week ago — so it has presumably not yet shipped in the released package on PyPI.
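Until a release containing the new type is installed (`pip install -U langchain-experimental`), one defensive pattern is to fall back to a supported breakpoint type when the requested one is missing. The defaults below are hypothetical stand-ins mirroring the shape of `BREAKPOINT_DEFAULTS`, not the library's actual values:

```python
# Hypothetical defaults mirroring langchain_experimental's BREAKPOINT_DEFAULTS;
# an installed release without a "gradient" entry raises exactly this KeyError.
BREAKPOINT_DEFAULTS = {
    "percentile": 95,
    "standard_deviation": 3,
    "interquartile": 1.5,
}

requested = "gradient"
try:
    amount = BREAKPOINT_DEFAULTS[requested]
except KeyError:
    # Fall back to a type the installed version supports.
    requested, amount = "percentile", BREAKPOINT_DEFAULTS["percentile"]

print(requested, amount)  # -> percentile 95
```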
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:30:27 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T8103
> Python Version: 3.12.4 (v3.12.4:8e8a4baf65, Jun 6 2024, 17:33:18) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.2.9
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.81
> langchain_experimental: 0.0.61
> langchain_huggingface: 0.0.3
> langchain_openai: 0.1.9
> langchain_text_splitters: 0.2.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| The gradient option in SemanticChunker (langchain_experimental) is not available when installing from pip | https://api.github.com/repos/langchain-ai/langchain/issues/23340/comments | 1 | 2024-06-23T20:03:37Z | 2024-06-24T11:40:37Z | https://github.com/langchain-ai/langchain/issues/23340 | 2,368,869,398 | 23,340 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code is used to create a Ranker object which downloads the model into a custom directory and then FlashrankRerank is initialized.
```python
def create_ms_marco_mini_llm():
logger.info("Download model ms_marco_mini_llm")
model_name = "ms-marco-MiniLM-L-12-v2"
ranker = Ranker(model_name=model_name, max_length=1024, cache_dir=model_dir)
return FlashrankRerank(client=ranker, model=model_name)
```
The validate_environment method in FlashrankRerank goes to create another Ranker object causing it to download model into a different directory
```python
@root_validator(pre=True)
def validate_environment(cls, values: Dict) -> Dict:
"""Validate that api key and python package exists in environment."""
try:
from flashrank import Ranker
except ImportError:
raise ImportError(
"Could not import flashrank python package. "
"Please install it with `pip install flashrank`."
)
values["model"] = values.get("model", DEFAULT_MODEL_NAME)
values["client"] = Ranker(model_name=values["model"])
return values
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
FlashrankRerank should check if the client is already initialized before creating a ranker.
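A minimal sketch of that check — illustrative only, with stand-in names, not the actual LangChain implementation — would build a default client only when none was supplied:

```python
def ensure_client(values: dict, make_default_client) -> dict:
    """Build a default client only when the caller did not supply one."""
    if values.get("client") is None:
        values["client"] = make_default_client(values.get("model", "default-model"))
    return values


# A caller-supplied client (standing in for a Ranker built with a custom cache_dir)
values = ensure_client({"client": "preconfigured-ranker"}, lambda m: f"ranker-for-{m}")
print(values["client"])  # preconfigured-ranker
```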
### System Info
langchain==0.2.5
langchain-anthropic==0.1.15
langchain-aws==0.1.6
langchain-community==0.2.5
langchain-core==0.2.7
langchain-experimental==0.0.61
langchain-openai==0.1.8
langchain-text-splitters==0.2.1
openinference-instrumentation-langchain==0.1.19 | FlashrankRerank validate_environment creates a Ranker client even if a custom client is passed | https://api.github.com/repos/langchain-ai/langchain/issues/23338/comments | 1 | 2024-06-23T18:43:30Z | 2024-06-24T11:11:21Z | https://github.com/langchain-ai/langchain/issues/23338 | 2,368,826,147 | 23,338 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# https://python.langchain.com/v0.2/docs/how_to/message_history/
from langchain_community.chat_message_histories import SQLChatMessageHistory
from langchain_core.messages import HumanMessage
from langchain_core.runnables.history import RunnableWithMessageHistory

# llm is assumed to be defined earlier (a chat model, per the tutorial)


def get_session_history(session_id):
    return SQLChatMessageHistory(session_id, "sqlite:///memory.db")


runnable_with_history = RunnableWithMessageHistory(
    llm,
    get_session_history,
)

runnable_with_history.invoke(
    [HumanMessage(content="hi - im bob!")],
    config={"configurable": {"session_id": "1"}},
)

runnable_with_history.invoke(
    [HumanMessage(content="what is my name?")],
    config={"configurable": {"session_id": "1"}},
)
```
### Error Message and Stack Trace (if applicable)
Error in RootListenersTracer.on_llm_end callback: AttributeError("'str' object has no attribute 'type'")
<img width="1122" alt="image" src="https://github.com/langchain-ai/langchain/assets/31367145/1db9de46-0822-4c78-80e1-6a6ea8633f21">
### Description
I'm following the document https://python.langchain.com/v0.2/docs/how_to/message_history/ and got this error.
### System Info
from conda list
langchain 0.2.1 pypi_0 pypi
langchain-chroma 0.1.1 pypi_0 pypi
langchain-community 0.2.1 pypi_0 pypi
langchain-core 0.2.2 pypi_0 pypi
langchain-openai 0.1.8 pypi_0 pypi
langchain-text-splitters 0.2.0 pypi_0 pypi
langgraph 0.0.66 pypi_0 pypi
langserve 0.2.2 pypi_0 pypi
langsmith 0.1.63 pyhd8ed1ab_0 conda-forge | Error in RootListenersTracer.on_llm_end callback: AttributeError("'str' object has no attribute 'type'") | https://api.github.com/repos/langchain-ai/langchain/issues/23311/comments | 2 | 2024-06-23T08:47:47Z | 2024-06-23T09:59:18Z | https://github.com/langchain-ai/langchain/issues/23311 | 2,368,450,526 | 23,311 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:
```python
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import DuckDB
db = DuckDB.from_texts(
['a', 'v', 'asdfadf', '893yhrfa'],
HuggingFaceEmbeddings())
```
### Error Message and Stack Trace (if applicable)
```
CatalogException Traceback (most recent call last)
Cell In[3], line 2
1 texts = ['a', 'v', 'asdfadf', '893yhrfa']
----> 2 db = DuckDB.from_texts(texts,
3 HuggingFaceEmbeddings())
4 db.similarity_search('ap', k=5)
File /opt/conda/lib/python3.10/site-packages/langchain_community/vectorstores/duckdb.py:287, in DuckDB.from_texts(cls, texts, embedding, metadatas, **kwargs)
278 instance = DuckDB(
279 connection=connection,
280 embedding=embedding,
(...)
284 table_name=table_name,
285 )
286 # Add texts and their embeddings to the DuckDB vector store
--> 287 instance.add_texts(texts, metadatas=metadatas, **kwargs)
289 return instance
File /opt/conda/lib/python3.10/site-packages/langchain_community/vectorstores/duckdb.py:194, in DuckDB.add_texts(self, texts, metadatas, **kwargs)
191 if have_pandas:
192 # noinspection PyUnusedLocal
193 df = pd.DataFrame.from_dict(data) # noqa: F841
--> 194 self._connection.execute(
195 f"INSERT INTO {self._table_name} SELECT * FROM df",
196 )
197 return ids
CatalogException: Catalog Error: Table with name df does not exist!
Did you mean "pg_am"?
LINE 1: INSERT INTO embeddings SELECT * FROM df
```
### Description
* I am trying to use DuckDB as vector storage
* I expect to get a vector storage instance connected to DuckDB
* Instead it throws an error on initialization
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Tue Dec 19 13:14:11 UTC 2023
> Python Version: 3.10.13 | packaged by conda-forge | (main, Dec 23 2023, 15:36:39) [GCC 12.3.0]
Package Information
-------------------
> langchain_core: 0.2.9
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.81
> langchain_text_splitters: 0.2.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | SQL problem in langchain-community `langchain_community.vectorstores.duckdb`:194 | https://api.github.com/repos/langchain-ai/langchain/issues/23308/comments | 3 | 2024-06-23T02:21:20Z | 2024-07-01T13:12:07Z | https://github.com/langchain-ai/langchain/issues/23308 | 2,368,150,297 | 23,308 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/chatbot/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
It would be nice to understand what those "Parent run 181a1f04-9176-4837-80e8-ce74866775a2 not found for run ad402c5a-8341-4c62-ac58-cdf923b3b9ec. Treating as a root run." messages mean.
Are they harmless?
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/tutorials/chatbot/> | https://api.github.com/repos/langchain-ai/langchain/issues/23307/comments | 1 | 2024-06-22T16:46:20Z | 2024-06-25T18:59:03Z | https://github.com/langchain-ai/langchain/issues/23307 | 2,367,902,893 | 23,307 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
class MyVanna(ChromaDB_VectorStore, GoogleGeminiChat):
    def __init__(self, config=None):
        ChromaDB_VectorStore.__init__(self, config=config)
        GoogleGeminiChat.__init__(self, config=config)


vn = MyVanna(config={
    "path": "../....",
    "api_key": os.getenv("GOOGLE_API_KEY"),
    "model_name": "gemini-1.5-pro",
})


@tool('vanna_tool')
def vanna_tool(qa: str):
    """....."""
    ....
    return .....


class Response(BaseModel):
    """The format for your final response by the agent should be in a Json format agent_response and sql_query"""
    agent_response: str = Field(description="""..... """)
    sql_query: str = Field("", description="The full sql query returned by the `vanna_tool` agent and '' if no response.")


tools = [vanna_tool]
llm = ChatVertexAI(model="gemini-pro")
...
parser = PydanticOutputParser(pydantic_object=Response)
prompt.partial_variables = {"format_instructions": parser.get_format_instructions()}

agent = create_tool_calling_agent(llm, tools, prompt)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,
    return_intermediate_steps=True,
)
```
### Error Message and Stack Trace (if applicable)
in case of OpenAI:

```
> Entering new AgentExecutor chain...
Invoking: `vanna_tool` with `{'qa': "Display the '...' column values for the week between 2 to 8 in 2024."}`
...
```
**(right output)**
in case of Gemini:

````
> Entering new AgentExecutor chain...
```json
{"agent_response": "Here are the ... values for weeks 2 to 8 in 2024:\n\n```\n opec \nweek \n2 77680000 \n3 77900000 \n4 78110000 \n5 78310000 \n6 78500000 \n7 78680000 \n```\n\nI hope this is helpful!", "sql_query": "SELECT ...FROM ...WHERE week BETWEEN 2 AND 8 AND YEAR_ = 2024;"}
```
````
**(random output without invoking `vanna_tool`)**
### Description
Asking a question that should invoke `vanna_tool`: when using OpenAI or Mistral, it works. When using any Gemini model, the agent produces a made-up response without ever entering `vanna_tool`.
### System Info
langchain==0.2.5
langchain-community==0.2.5
langchain-core==0.2.9
langchain-experimental==0.0.61
langchain-openai==0.1.9
langchain-google-genai==1.0.6
langchain-google-vertexai==1.0.5
google-generativeai==0.5.4 | AgentExecutor is not choosing the tool (vanna_tool) when using any Gemini model (it is working properly when using OpenAI or Mistral) | https://api.github.com/repos/langchain-ai/langchain/issues/23298/comments | 2 | 2024-06-22T07:58:50Z | 2024-06-22T09:42:35Z | https://github.com/langchain-ai/langchain/issues/23298 | 2,367,677,918 | 23,298 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from typing import List

from langchain_core.output_parsers import JsonOutputParser
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI

msg = "what queries must i run?"


class Step(BaseModel):
    step_name: str = Field(description="...")
    tool_to_use: str = Field(description="...")
    tool_input: str = Field(description="...")
    depends_on: List[str] = Field(description="...")


class PlanOutput(BaseModel):
    task: str = Field(description="...")
    steps: List[Step] = Field(description="...")


parser = JsonOutputParser(pydantic_object=PlanOutput)
llm = ChatOpenAI(...)

prompt = ChatPromptTemplate.from_messages(
    [("user", "...{input} Your output must follow this format: {format}")]
)
chain = prompt | llm | parser
chain.invoke({"format": parser.get_format_instructions(), "input": msg})
```
### Error Message and Stack Trace (if applicable)
2024-06-22 11:21:03,116 - agent.py - 90 - ERROR - Traceback (most recent call last):
File "/root/classifier/.venv/lib/python3.9/site-packages/langchain_core/output_parsers/json.py", line 66, in parse_result
return parse_json_markdown(text)
File "/root/classifier/.venv/lib/python3.9/site-packages/langchain_core/utils/json.py", line 147, in parse_json_markdown
return _parse_json(json_str, parser=parser)
File "/root/classifier/.venv/lib/python3.9/site-packages/langchain_core/utils/json.py", line 160, in _parse_json
return parser(json_str)
File "/root/classifier/.venv/lib/python3.9/site-packages/langchain_core/utils/json.py", line 120, in parse_partial_json
return json.loads(s, strict=strict)
File "/usr/lib/python3.9/json/__init__.py", line 359, in loads
return cls(**kw).decode(s)
File "/usr/lib/python3.9/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.9/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 6 column 22 (char 109)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/mnt/d/python_projects/azure-openai-qa-bot/nat-sql/src/agent.py", line 69, in talk
for s in ap.app.stream({"task": inp, 'session_id': sid}, config=args):
File "/root/classifier/.venv/lib/python3.9/site-packages/langgraph/pregel/__init__.py", line 963, in stream
_panic_or_proceed(done, inflight, step)
File "/root/classifier/.venv/lib/python3.9/site-packages/langgraph/pregel/__init__.py", line 1489, in _panic_or_proceed
raise exc
File "/usr/lib/python3.9/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/root/classifier/.venv/lib/python3.9/site-packages/langgraph/pregel/retry.py", line 66, in run_with_retry
task.proc.invoke(task.input, task.config)
File "/root/classifier/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2399, in invoke
input = step.invoke(
File "/root/classifier/.venv/lib/python3.9/site-packages/langgraph/utils.py", line 95, in invoke
ret = context.run(self.func, input, **kwargs)
File "/mnt/d/python_projects/azure-openai-qa-bot/nat-sql/src/action_plan.py", line 138, in _plan_steps
plan = self.planner.invoke({"task": state['task'], 'chat_history': hist if not self.no_mem else [],
File "/root/classifier/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2399, in invoke
input = step.invoke(
File "/root/classifier/.venv/lib/python3.9/site-packages/langchain_core/output_parsers/base.py", line 169, in invoke
return self._call_with_config(
File "/root/classifier/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 1509, in _call_with_config
context.run(
File "/root/classifier/.venv/lib/python3.9/site-packages/langchain_core/runnables/config.py", line 365, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "/root/classifier/.venv/lib/python3.9/site-packages/langchain_core/output_parsers/base.py", line 170, in <lambda>
lambda inner_input: self.parse_result(
File "/root/classifier/.venv/lib/python3.9/site-packages/langchain_core/output_parsers/json.py", line 69, in parse_result
raise OutputParserException(msg, llm_output=text) from e
langchain_core.exceptions.OutputParserException: Invalid json output:

````
```json
{
    "task": "what Queries tmust i run",
    "steps": [
        {
            "step_name": "Step#1",
            "tool_to_use": Document_Search_Tool,
            "tool_input": "What queries must I run?",
            "depends_on": []
        }
    ]
}
```
````
### Description
Sometimes, despite adding a JSON output parser to the LLM chain, the LLM may enclose the generated JSON within `` ```json ... ``` `` tags.
This causes the JSON output parser to fail. It would be nice if the parser could check for this enclosure and remove it before parsing the JSON.

### System Info
```
langchain==0.2.1
langchain-chroma==0.1.1
langchain-cli==0.0.24
langchain-community==0.2.1
langchain-core==0.2.3
langchain-openai==0.1.8
langchain-text-splitters==0.2.0
langchain-visualizer==0.0.33
langchainhub==0.1.19
``` | JsonOutputParser fails at times when the LLM encloses the output JSON within ``` json ... ``` | https://api.github.com/repos/langchain-ai/langchain/issues/23297/comments | 1 | 2024-06-22T06:06:18Z | 2024-06-23T09:37:38Z | https://github.com/langchain-ai/langchain/issues/23297 | 2,367,595,218 | 23,297 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/multimodal_prompts/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The examples in this page show the image as part of the message as:
```json
{"type" : "image_url", "image_url" : "data:image/jpeg;base64,{image_data}"}
```
However, this will result in a 400 response from OpenAI because the `image_url` value must be an object and not a string. This is the proper schema:
```json
{"type" : "image_url", "image_url" : {"url" : "data:image/jpeg;base64,{image_data}"}}
```
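For illustration, a full multimodal message content list in the corrected shape, built with plain Python dicts and placeholder data:

```python
image_data = "<base64-encoded-bytes>"  # placeholder

message_content = [
    {"type": "text", "text": "Describe this image."},
    {
        "type": "image_url",
        # the value must be an object with a "url" key, not a bare string
        "image_url": {"url": f"data:image/jpeg;base64,{image_data}"},
    },
]
print(type(message_content[1]["image_url"]).__name__)  # dict
```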
### Idea or request for content:
Rewrite the documentation examples to have the proper schema. I believe the other [multi modal page](https://python.langchain.com/v0.2/docs/how_to/multimodal_inputs/) has the correct one.
[OpenAI documentation reference](https://platform.openai.com/docs/guides/vision/quick-start) | DOC: <Issue related to /v0.2/docs/how_to/multimodal_prompts/> Image message structure is incorrect for OpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/23294/comments | 2 | 2024-06-22T01:35:54Z | 2024-06-24T15:58:52Z | https://github.com/langchain-ai/langchain/issues/23294 | 2,367,440,695 | 23,294 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from this tutorial: https://python.langchain.com/v0.2/docs/tutorials/sql_qa/
```python
from langchain_community.agent_toolkits import SQLDatabaseToolkit
from langchain_core.messages import SystemMessage
from langchain_core.messages import HumanMessage
from langgraph.prebuilt import create_react_agent

# db and llm are assumed to be defined earlier in the tutorial
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
tools = toolkit.get_tools()

SQL_PREFIX = """You are an agent designed to interact with a SQL database.
Given an input question, create a syntactically correct SQLite query to run, then look at the results of the query and return the answer.
Unless the user specifies a specific number of examples they wish to obtain, always limit your query to at most 5 results.
You can order the results by a relevant column to return the most interesting examples in the database.
Never query for all the columns from a specific table, only ask for the relevant columns given the question.
You have access to tools for interacting with the database.
Only use the below tools. Only use the information returned by the below tools to construct your final answer.
You MUST double check your query before executing it. If you get an error while executing a query, rewrite the query and try again.

DO NOT make any DML statements (INSERT, UPDATE, DELETE, DROP etc.) to the database.

To start you should ALWAYS look at the tables in the database to see what you can query.
Do NOT skip this step.
Then you should query the schema of the most relevant tables."""

system_message = SystemMessage(content=SQL_PREFIX)
agent_executor = create_react_agent(llm, tools, messages_modifier=system_message)

for s in agent_executor.stream(
    {"messages": [HumanMessage(content="Which country's customers spent the most?")]}
):
    print(s)
    print("----")
```
### Error Message and Stack Trace (if applicable)
The llm returns hallucinated query result, but the generated sql query looks reasonable. no error messages were surfaced.
### Description
I am using the above code to create a SQL agent. The code runs and generates reasonable SQL queries, but the query results are all hallucinated rather than actual results from the database. I am wondering how the agent is connected to the db, since the `create_react_agent` arguments don't include `db`, and why the `sql_db_query` tool doesn't execute against the SQL database.
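As far as we understand, the db connection lives inside the tools rather than in the `create_react_agent` arguments: each tool returned by `toolkit.get_tools()` closes over the `db` it was constructed with, and the agent runtime executes whichever tool call the model emits. A standard-library analogy (hypothetical names, not the LangChain internals):

```python
import sqlite3


def make_query_tool(conn):
    """A 'tool' is just a callable that closes over the database handle."""
    def sql_db_query(query: str):
        return conn.execute(query).fetchall()
    return sql_db_query


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (country TEXT, total REAL)")
conn.execute("INSERT INTO customers VALUES ('USA', 10.0), ('Brazil', 20.0)")

tool = make_query_tool(conn)  # the agent only ever sees this callable
print(tool("SELECT country FROM customers ORDER BY total DESC LIMIT 1"))  # [('Brazil',)]
```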
### System Info
langchain==0.2.5
langchain-aws==0.1.6
langchain-chroma==0.1.0
langchain-community==0.2.5
langchain-core==0.2.7
langchain-experimental==0.0.51
langchain-text-splitters==0.2.1
@dosu | V2.0 create_react_agent doesn't execute generated query on sql database | https://api.github.com/repos/langchain-ai/langchain/issues/23293/comments | 2 | 2024-06-22T01:30:15Z | 2024-06-28T01:45:09Z | https://github.com/langchain-ai/langchain/issues/23293 | 2,367,431,738 | 23,293 |
[
"langchain-ai",
"langchain"
] | > @TheJerryChang will it also stop the llm`s completion process? I am using langchain llama cpp with conversationalretrievalchain
Did you solve this? It seems I'm facing the same issue.
> > @TheJerryChang will it also stop the llm`s completion process? I am using langchain llama cpp with conversationalretrievalchain
>
> For my use case, Im using Chat model and did not try with completion process. How do you do the streaming? Did you use chain.stream()? I personally think as long as it supports calling chain.stream() to proceed the streaming, it then could be interrupted by raising an exception during the iteration
I'm using the .astream_events method of AgentExecutor. I'm trying to figure out how to stream an agent response but also be able to cancel it before it finishes generating.
_Originally posted by @Spider-netizen in https://github.com/langchain-ai/langchain/issues/11959#issuecomment-1975388015_
Hi,
Can this be achieved?
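For context, here is a generic asyncio sketch of interrupting an event stream from the consumer side (stand-in names, not the real `AgentExecutor`/LlamaCpp objects; whether breaking out actually halts the underlying generation loop is exactly the open question):

```python
import asyncio


async def fake_astream_events():
    """Stand-in for agent_executor.astream_events(...)."""
    for i in range(100):
        await asyncio.sleep(0)  # yield control, like awaiting the next token
        yield {"event": "on_llm_stream", "chunk": f"token-{i}"}


async def consume(stop_after: int):
    received = []
    async for event in fake_astream_events():
        received.append(event["chunk"])
        if len(received) >= stop_after:
            break  # leaving the async-for closes the generator upstream
    return received


chunks = asyncio.run(consume(3))
print(chunks)  # ['token-0', 'token-1', 'token-2']
```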
Thanks. | Cancelling Llama CPP Generation Using Agent's astream_events | https://api.github.com/repos/langchain-ai/langchain/issues/23282/comments | 1 | 2024-06-21T20:51:36Z | 2024-07-26T04:27:17Z | https://github.com/langchain-ai/langchain/issues/23282 | 2,367,232,107 | 23,282 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Dockerfile:
```
# LLM Installs
RUN pip3 install \
langchain \
langchain-community \
langchain-pinecone \
langchain-openai
```
Python Imports
``` python
import langchain
from langchain_community.document_loaders import PyPDFLoader, TextLoader, Docx2txtLoader, UnstructuredHTMLLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_pinecone import PineconeVectorStore
from langchain_openai import OpenAIEmbeddings, OpenAI, ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import RetrievalQAWithSourcesChain, ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.load.dump import dumps
```
### Error Message and Stack Trace (if applicable)
```
2024-05-30 13:28:59 from langchain_openai import OpenAIEmbeddings, OpenAI, ChatOpenAI
2024-05-30 13:28:59 File "/usr/local/lib/python3.9/site-packages/langchain_openai/__init__.py", line 1, in <module>
2024-05-30 13:28:59 from langchain_openai.chat_models import (
2024-05-30 13:28:59 File "/usr/local/lib/python3.9/site-packages/langchain_openai/chat_models/__init__.py", line 1, in <module>
2024-05-30 13:28:59 from langchain_openai.chat_models.azure import AzureChatOpenAI
2024-05-30 13:28:59 File "/usr/local/lib/python3.9/site-packages/langchain_openai/chat_models/azure.py", line 9, in <module>
2024-05-30 13:28:59 from langchain_core.language_models.chat_models import LangSmithParams
2024-05-30 13:28:59 ImportError: cannot import name 'LangSmithParams' from 'langchain_core.language_models.chat_models' (/usr/local/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py)
```
### Description
I am trying to import langchain_openai with the newest version released last night (0.1.8), and it cannot find the `LangSmithParams` module.
Moving back a version with `langchain-openai==0.1.7` makes it work again. Something in this new update broke the import.
### System Info
Container is running python 3.9 on Rocky Linux 8
```
# Install dependencies
RUN dnf -y install epel-release
RUN dnf -y install \
httpd \
python39 \
unzip \
xz \
git-core \
ImageMagick \
wget
RUN pip3 install \
psycopg2-binary \
pillow \
lxml \
pycryptodomex \
six \
pytz \
jaraco.functools \
requests \
supervisor \
flask \
flask-cors \
flask-socketio \
mako \
boto3 \
botocore==1.34.33 \
gotenberg-client \
docusign-esign \
python-dotenv \
htmldocx \
python-docx \
beautifulsoup4 \
pypandoc \
pyetherpadlite \
html2text \
PyJWT \
sendgrid \
auth0-python \
authlib \
openai==0.27.7 \
pinecone-client==3.1.0 \
pinecone-datasets==0.7.0 \
tiktoken==0.4.0
# Installing LLM requirements
RUN pip3 install \
langchain \
langchain-community \
langchain-pinecone \
langchain-openai==0.1.7 \
pinecone-client \
pinecone-datasets \
unstructured \
poppler-utils \
tiktoken \
pypdf \
python-dotenv \
docx2txt
``` | langchain-openai==0.1.8 is completely broken | https://api.github.com/repos/langchain-ai/langchain/issues/23278/comments | 1 | 2024-06-21T19:24:45Z | 2024-07-24T19:05:47Z | https://github.com/langchain-ai/langchain/issues/23278 | 2,367,126,718 | 23,278 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Dockerfile:
```
# LLM Installs
RUN pip3 install \
langchain \
langchain-community \
langchain-pinecone \
langchain-openai
```
Python Imports
``` python
import langchain
from langchain_community.document_loaders import PyPDFLoader, TextLoader, Docx2txtLoader, UnstructuredHTMLLoader
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_pinecone import PineconeVectorStore
from langchain_openai import OpenAIEmbeddings, OpenAI, ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import RetrievalQAWithSourcesChain, ConversationalRetrievalChain
from langchain.memory import ConversationBufferMemory
from langchain.load.dump import dumps
```
### Error Message and Stack Trace (if applicable)
```
2024-05-30 13:28:59 from langchain_openai import OpenAIEmbeddings, OpenAI, ChatOpenAI
2024-05-30 13:28:59 File "/usr/local/lib/python3.9/site-packages/langchain_openai/__init__.py", line 1, in <module>
2024-05-30 13:28:59 from langchain_openai.chat_models import (
2024-05-30 13:28:59 File "/usr/local/lib/python3.9/site-packages/langchain_openai/chat_models/__init__.py", line 1, in <module>
2024-05-30 13:28:59 from langchain_openai.chat_models.azure import AzureChatOpenAI
2024-05-30 13:28:59 File "/usr/local/lib/python3.9/site-packages/langchain_openai/chat_models/azure.py", line 9, in <module>
2024-05-30 13:28:59 from langchain_core.language_models.chat_models import LangSmithParams
2024-05-30 13:28:59 ImportError: cannot import name 'LangSmithParams' from 'langchain_core.language_models.chat_models' (/usr/local/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py)
```
### Description
I am trying to import langchain_openai with the newest version released last night (0.1.8), and it cannot find the `LangSmithParams` module.
Moving back a version with `langchain-openai==0.1.7` makes it work again. Something in this new update broke the import.
### System Info
Container is running python 3.9 on Rocky Linux 8
```
# Install dependencies
RUN dnf -y install epel-release
RUN dnf -y install \
httpd \
python39 \
unzip \
xz \
git-core \
ImageMagick \
wget
RUN pip3 install \
psycopg2-binary \
pillow \
lxml \
pycryptodomex \
six \
pytz \
jaraco.functools \
requests \
supervisor \
flask \
flask-cors \
flask-socketio \
mako \
boto3 \
botocore==1.34.33 \
gotenberg-client \
docusign-esign \
python-dotenv \
htmldocx \
python-docx \
beautifulsoup4 \
pypandoc \
pyetherpadlite \
html2text \
PyJWT \
sendgrid \
auth0-python \
authlib \
openai==0.27.7 \
pinecone-client==3.1.0 \
pinecone-datasets==0.7.0 \
tiktoken==0.4.0
# Installing LLM requirements
RUN pip3 install \
langchain \
langchain-community \
langchain-pinecone \
langchain-openai==0.1.7 \
pinecone-client \
pinecone-datasets \
unstructured \
poppler-utils \
tiktoken \
pypdf \
python-dotenv \
docx2txt
``` | langchain-openai==0.1.8 is now broken | https://api.github.com/repos/langchain-ai/langchain/issues/23277/comments | 1 | 2024-06-21T19:11:03Z | 2024-06-21T19:11:33Z | https://github.com/langchain-ai/langchain/issues/23277 | 2,367,109,149 | 23,277 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
test | test | https://api.github.com/repos/langchain-ai/langchain/issues/23274/comments | 0 | 2024-06-21T18:49:28Z | 2024-07-01T15:04:50Z | https://github.com/langchain-ai/langchain/issues/23274 | 2,367,082,015 | 23,274 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
def set_embed(self, query: str) -> None:
    # Load model configurations
    load_dotenv(self.model_conf)

    # Load app configurations
    config = ConfigParser(interpolation=None)
    config.read('app.ini')
    apim_params = config['apim']

    # Set env variables
    os.environ["OPENAI_API_KEY"] = apim_params['OPENAI_API_KEY']

    # Set emb_model name variables
    #embed_model = os.getenv('EMBEDDING_MODEL_TYPE')
    embed_model = os.getenv('MODEL_TYPE_EMBEDDING')
    print(embed_model)

    # Set apim request parameters
    params: Mapping[str, str] = {
        'api-version': os.getenv('OPENAI_API_VERSION')
    }
    headers: Mapping[str, str] = {
        'Content-Type': apim_params['CONTENT_TYPE'],
        'Ocp-Apim-Subscription-Key': os.getenv('OCP-APIM-SUBSCRIPTION-KEY')
    }
    client = httpx.Client(
        base_url=os.getenv('AZURE_OPENAI_ENDPOINT'),
        params=params,
        headers=headers,
        verify=apim_params['CERT_PATH']
    )
    print(client.params)
    print(client.headers)

    try:
        # Load embedding model (note: the original snippet passed azure_deployment
        # twice, which is a syntax error; the duplicate is commented out here)
        self.embed = AzureOpenAIEmbeddings(
            model='text-embedding-ada-002',
            azure_deployment=embed_model,
            # azure_deployment='text-embedding-ada-002',
            chunk_size=2048,
            http_client=client)
        print(self.embed)
        result = self.embed.embed_query(query)
        print(f'{embed_model} model initialized')
    except Exception as e:
        raise Exception(f'ApimUtils-set_embed : Error while initializing embedding model - {e}')
```
### Error Message and Stack Trace (if applicable)
ApimUtils-set_embed : Error while initializing embedding model - Error code: 400 - {'statusCode': 400, 'message': "Unable to parse and estimate tokens from incoming request. Please ensure incoming request is of one of the following types: 'Chat Completion', 'Completion', 'Embeddings' and works with current prompt estimation mode of 'Auto'."}
### Description
When using the `AzureOpenAIEmbeddings` class with our Azure APIM in front of our Azure OpenAI services, requests break inside our APIM policy, which captures/calculates prompt and completion tokens from the request. We believe this is because the `AzureOpenAIEmbeddings` class sends a list of integers, e.g. `b'{"input": [[3923, 374, 279, 4611, 96462, 46295, 58917, 30]], "model": "text-embedding-ada-002", "encoding_format": "base64"}'`, rather than `[str]` built from the query text.
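To make the shape difference concrete, here is a plain-dict sketch of the two request bodies (token values copied from the observed payload; the example query text is a placeholder). As far as we can tell, setting `check_embedding_ctx_length=False` on the embeddings class is the switch that makes LangChain send the string form instead of the token form, though whether that satisfies the APIM policy is untested:

```python
# Body LangChain sends by default: the input pre-tokenized into lists of ints
tokenized_body = {
    "input": [[3923, 374, 279, 4611, 96462, 46295, 58917, 30]],
    "model": "text-embedding-ada-002",
    "encoding_format": "base64",
}

# Body a token-counting gateway policy can parse: the input as plain strings
string_body = {
    "input": ["example query text"],
    "model": "text-embedding-ada-002",
}

print(type(tokenized_body["input"][0]).__name__, type(string_body["input"][0]).__name__)  # list str
```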
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.11.5 (tags/v3.11.5:cce6ba9, Aug 24 2023, 14:38:34) [MSC v.1936 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.5
> langchain: 0.2.3
> langchain_community: 0.2.4
> langsmith: 0.1.75
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.1 | LangChain AzureOpenAIEmbeddings issue when passing list of ints vs [str] | https://api.github.com/repos/langchain-ai/langchain/issues/23268/comments | 2 | 2024-06-21T16:08:45Z | 2024-07-10T11:18:04Z | https://github.com/langchain-ai/langchain/issues/23268 | 2,366,841,828 | 23,268 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Gemini now allows a developer to create a context cache with the system instructions, contents, tools, and model information already set, and then reference this context as part of a standard query. It must be explicitly cached (i.e., it is not automatic as part of a request or reply), and a cache expiration can be set (and later changed).
It does not appear to be supported in Vertex AI at this time.
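As a sketch of that explicit flow in the `google-generativeai` SDK (call names follow the AI Studio docs referenced below, but treat the exact signatures as assumptions; the API calls are left commented because they require a key):

```python
import datetime


def cache_ttl(minutes: int) -> datetime.timedelta:
    """TTL helper: explicit caches expire, and the TTL can be updated later."""
    return datetime.timedelta(minutes=minutes)


# import google.generativeai as genai
# from google.generativeai import caching
#
# cache = caching.CachedContent.create(
#     model="models/gemini-1.5-flash-001",
#     system_instruction="You answer questions about the attached transcript.",
#     contents=[transcript_doc],   # hypothetical pre-loaded content
#     ttl=cache_ttl(30),
# )
# model = genai.GenerativeModel.from_cached_content(cached_content=cache)
# response = model.generate_content("Summarise the transcript.")

assert cache_ttl(30).total_seconds() == 1800
```

A LangChain integration would need to decide where cache creation fits (e.g., ahead of chain construction) and how a cached-content handle is referenced on each call.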
Open issues:
* Best paradigm to add to cache or integrate with LangChain history system
* Best paradigm to reference
References:
* AI Studio / genai: https://ai.google.dev/gemini-api/docs/caching?lang=python
* LangChain.js: https://github.com/langchain-ai/langchainjs/issues/5841 | google-genai [feature]: Context Caching | https://api.github.com/repos/langchain-ai/langchain/issues/23259/comments | 1 | 2024-06-21T12:50:57Z | 2024-08-06T16:51:24Z | https://github.com/langchain-ai/langchain/issues/23259 | 2,366,484,658 | 23,259 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
``` python
import os
import json
from pathlib import Path
from langchain_community.cache import SQLiteCache
from typing import Callable, List
model_list = [
'ChatAnthropic', # <- has several instance of this bug, not only SQLiteCache
'ChatBaichuan',
'ChatCohere',
'ChatCoze',
'ChatDeepInfra',
'ChatEverlyAI',
'ChatFireworks',
'ChatFriendli',
'ChatGooglePalm',
'ChatHunyuan',
'ChatLiteLLM',
'ChatOctoAI',
'ChatOllama',
'ChatOpenAI',
'ChatPerplexity',
'ChatYuan2',
'ChatZhipuAI'
# Below are the models I didn't test, as well as the reason why I haven't
# 'ChatAnyscale', # needs a model name
# 'ChatDatabricks', # needs some params
# 'ChatHuggingFace', # needs a modelname
# 'ChatJavelinAIGateway', # needs some params
# 'ChatKinetica', # not installed
# 'ChatKonko', # not installed
# 'ChatLiteLLMRouter', # needs router arg
# 'ChatLlamaCpp', #needs some params
# 'ChatMLflowAIGateway', # not installed
# 'ChatMaritalk', # needs some params
# 'ChatMlflow', # not installed
# 'ChatMLX', # needs some params
# 'ChatPremAI', # not installed
# 'ChatSparkLLM', # issue with api key
# 'ChatTongyi', # not installed
# 'ChatVertexAI', # not insalled
# 'ChatYandexGPT', # needs some params
]
# import the models
for m in model_list:
exec(f"from langchain_community.chat_models import {m}")
# set fake api keys
for m in model_list:
backend = m[4:].upper()
os.environ[f"{backend}_API_KEY"] = "aaaaaa"
os.environ[f"{backend}_API_TOKEN"] = "aaaaaa"
os.environ[f"{backend}_TOKEN"] = "aaaaaa"
os.environ["GOOGLE_API_KEY"] = "aaaaaa"
os.environ["HUNYUAN_APP_ID"] = "aaaaaa"
os.environ["HUNYUAN_SECRET_ID"] = "aaaaaa"
os.environ["HUNYUAN_SECRET_KEY"] = "aaaaaa"
os.environ["PPLX_API_KEY"] = "aaaaaa"
os.environ["IFLYTEK_SPARK_APP_ID"] = "aaaaaa"
os.environ["SPARK_API_KEY"] = "aaaaaa"
os.environ["DASHSCOPE_API_KEY"] = "aaaaaa"
os.environ["YC_API_KEY"] = "aaaaaa"
# create two brand new cache
Path("test_cache.db").unlink(missing_ok=True)
c1 = SQLiteCache(database_path="test_cache.db")
c2 = SQLiteCache(database_path="test_cache.db")
def recur_dict_check(val: dict) -> List[str]:
"find which object is causing the issue"
found = []
for k, v in val.items():
if " object at " in str(v):
if isinstance(v, dict):
found.append(recur_dict_check(v))
else:
found.append(v)
# flatten the list
out = []
for f in found:
if isinstance(f, list):
out.extend(f)
else:
out.append(f)
assert out
out = [str(o) for o in out]
return out
def check(chat_model: Callable, verbose: bool = False) -> bool:
"check a given chatmodel"
llm1 = chat_model(
cache=c1,
)
llm2 = chat_model(
cache=c2,
)
backend = llm1.get_lc_namespace()[-1]
str1 = llm1._get_llm_string().split("---")[0]
str2 = llm2._get_llm_string().split("---")[0]
if verbose:
print(f"LLM1:\n{str1}")
print(f"LLM2:\n{str2}")
if str1 == str2:
print(f"{backend.title()} does not have the bug")
return True
else:
print(f"{backend.title()} HAS the bug")
j1, j2 = json.loads(str1), json.loads(str2)
assert j1.keys() == j2.keys()
diff1 = recur_dict_check(j1)
diff2 = recur_dict_check(j2)
assert len(diff1) == len(diff2)
diffs = [str(v).split("object at ")[0] for v in diff1 + diff2]
assert all(diffs.count(elem) == 2 for elem in diffs)
print(f"List of buggy objects for model {backend.title()}:")
for d in diff1:
print(f" - {d}")
# for k, v in j1
return False
failed = []
for model in model_list:
if not check(locals()[model]):
failed.append(model)
print(f"The culprit is at least SQLiteCache repr string:\n{c1}\n{c2}")
c1.__class__.__repr__ = lambda x=None : "<langchain_community.cache.SQLiteCache>"
c2.__class__.__repr__ = lambda x=None : "<langchain_community.cache.SQLiteCache>"
print(f"Now fixed:\n{c1}\n{c2}\n")
# Anthropic still has issues
assert not check(locals()["ChatAnthropic"])
for model in failed:
if model == "ChatAnthropic": # anthropic actually has more issues!
continue
assert check(locals()[model]), model
print("Fixed it for most models!")
print(f"Models with the issue: {len(failed)} / {len(model_list)}")
for f in failed:
print(f" - {f}")
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Being affected by [this bug](https://github.com/langchain-ai/langchain/issues/22389) in [my DocToolsLLM project](https://github.com/thiswillbeyourgithub/DocToolsLLM/), I ended up using ChatOpenAI directly whenever the requested model is an OpenAI one, instead of ChatLiteLLM for all models.
The other day I noticed that my SQLiteCache was being systematically ignored only by ChatOpenAI, and I eventually found the culprit:
- To know whether a value is present in the cache, the prompt AND a string characterizing the LLM are used.
- The method used to characterize the LLM is `_get_llm_string()`.
- This method's implementation is inconsistent across chat models, so its output can contain the unfiltered `__repr__` of objects such as the cache, callbacks, etc.
- The issue is that for many instances, `__repr__` returns something like `<langchain_community.cache.SQLiteCache object at SOME_ADDRESS>`, which differs between otherwise identical instances.
- I found that manually setting the `__repr__` on the superclass of those objects is a viable workaround.
To help you fix this ASAP, I wrote a loop that checks all chat models and reports which instances are causing the issue.
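A minimal standalone illustration of the address-based `__repr__` problem (no LangChain required):

```python
class DefaultReprCache:
    """Stands in for SQLiteCache: inherits object's address-based __repr__."""


class StableReprCache:
    """Same idea with a stable, address-free __repr__."""

    def __repr__(self) -> str:
        return "<StableReprCache>"


# Two equivalent caches stringify differently, so any llm string that
# embeds repr(cache) never matches across processes -> cache misses.
a, b = DefaultReprCache(), DefaultReprCache()
assert repr(a) != repr(b)
assert " object at 0x" in repr(a)

# With a stable __repr__, identical configurations stringify identically.
assert repr(StableReprCache()) == repr(StableReprCache())
```

This is why patching `__repr__` on the cache class makes `_get_llm_string()` deterministic again.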
### System Info
python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #35~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue May 7 09:00:52 UTC 2
> Python Version: 3.11.7 (main, Jun 12 2024, 12:57:34) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.7
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.77
> langchain_mistralai: 0.1.8
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | BUG: Many chat models never uses SQLiteCache because of the cache instance's __repr__ method changes! | https://api.github.com/repos/langchain-ai/langchain/issues/23257/comments | 6 | 2024-06-21T10:57:12Z | 2024-06-21T15:22:28Z | https://github.com/langchain-ai/langchain/issues/23257 | 2,366,287,565 | 23,257 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
``` python
from dotenv import load_dotenv
import streamlit as st
from langchain_community.document_loaders import PyPDFLoader, WebBaseLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain.chains import ConversationalRetrievalChain, StuffDocumentsChain, LLMChain
from langchain.memory import ConversationBufferMemory
import os
load_dotenv()
def load_documents(global_pdf_path, external_pdf_path=None, input_url=None):
""" This functionality of loading global PDF knowledge base is currently placed inside load_documents function
which is not a feasible approach has to perform the global pdf load only once hence the below funcion need
to be placed somewhere else"""
# Load the global internal knowledge base PDF
global_loader = PyPDFLoader(global_pdf_path)
global_docs = global_loader.load()
documents = global_docs
if external_pdf_path:
# Load the external input knowledge base PDF
external_loader = PyPDFLoader(external_pdf_path)
external_docs = external_loader.load()
documents += external_docs
if input_url:
# Load URL content
url_loader = WebBaseLoader(input_url)
url_docs = url_loader.load()
documents += url_docs
# Split the documents into smaller chunks
text_splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=200)
split_docs = text_splitter.split_documents(documents)
return split_docs
def create_vector_store(documents):
embeddings = OpenAIEmbeddings(api_key=os.environ['OPENAI_API_KEY'])
vector_store = FAISS.from_documents(documents, embeddings)
return vector_store
def get_LLM_response(query, task, content_type):
llm = ChatOpenAI(api_key=os.environ['OPENAI_API_KEY'])
# Create a prompt template
prompt = ChatPromptTemplate.from_template(f"""You are a Marketing Assistant
<context>
{{context}}
</context>
Question: {{input}}""")
# Create document chain
document_chain = StuffDocumentsChain(
llm_chain=LLMChain(llm=llm, prompt=prompt),
document_variable_name="context"
)
retriever = vector_store.as_retriever()
question_generator_template = PromptTemplate(
input_variables=[
"chat_history",
"input_key",
],
template= (
"""
Combine the chat history and follow up question into a standalone question.
Chat History: {chat_history}
Follow up question: {question}
""")
)
question_generator_chain = LLMChain(
llm=llm,
prompt=question_generator_template,
)
# Create retrieval chain
retrieval_chain = ConversationalRetrievalChain(
combine_docs_chain=document_chain,
question_generator=question_generator_chain,
retriever=retriever,
memory=ConversationBufferMemory(memory_key="chat_history", input_key="")
)
# Get the response
response = retrieval_chain.invoke({"question": query, "context": documents, "input": ""})
return response["answer"]
# Code for Frontend begins here
st.set_page_config(page_title="Linkenite AI", page_icon='🤖', layout='centered', initial_sidebar_state='collapsed')
st.header("🤖Linkenite Marketing Assistant")
prompt = st.text_input("Enter the prompt")
task = st.selectbox("Please select the input you want to provide", ('PDF', 'URL'), key=1)
content_type = st.selectbox("Select the type of content you want to generate", ("Blog", "Technical Blog", "Whitepaper", "Case Studies", "LinkedIn Post", "Social Media Post"), key=2)
input_file = None
input_url = None
if task == 'PDF':
input_file = st.file_uploader("Upload a PDF file", type="pdf")
# Work on tracking the path of the uploaded pdf
elif task == 'URL':
input_url = st.text_input("Enter the URL")
submit = st.button("Generate")
if submit and (input_file or input_url):
global_pdf_path = "input_kb.pdf"
external_pdf_path = None
if input_file:
# The input pdf file's path has to be used below inplace of "input_kb.pdf"
with open("input_kb.pdf", "wb") as f:
f.write(input_file.read())
external_pdf_path = "input_kb.pdf"
documents = load_documents(global_pdf_path, external_pdf_path, input_url)
vector_store = create_vector_store(documents)
context = " ".join([doc.page_content for doc in documents])
response = get_LLM_response(prompt, context, vector_store)
st.write(response)
def set_bg_from_url(url, opacity=1):
# Set background image using HTML and CSS
st.markdown(
f"""
<style>
body {{
background: url('{url}') no-repeat center center fixed;
background-size: cover;
opacity: {opacity};
}}
</style>
""",
unsafe_allow_html=True
)
# Set background image from URL
set_bg_from_url("https://cdn.create.vista.com/api/media/medium/231856778/stock-photo-smartphone-laptop-black-background-marketing-lettering-icons?token=", opacity=0.775)
```
### Error Message and Stack Trace (if applicable)
(venv) PS C:\Users\User\Desktop\Linkenite\MarketingAI MVP> streamlit run apporiginal.py
USER_AGENT environment variable not set, consider setting it to identify your requests.
C:\Users\User\Desktop\Linkenite\MarketingAI MVP\venv\Lib\site-packages\langchain_core\_api\deprecation.py:139: LangChainDeprecationWarning: The class `LLMChain` was deprecated in LangChain 0.1.17 and will be removed in 0.3.0. Use RunnableSequence, e.g., `prompt | llm` instead.
warn_deprecated(
C:\Users\User\Desktop\Linkenite\MarketingAI MVP\venv\Lib\site-packages\langchain_core\_api\deprecation.py:139: LangChainDeprecationWarning: The class `ConversationalRetrievalChain` was deprecated in LangChain 0.1.17 and will be removed in 0.3.0. Use create_history_aware_retriever together with create_retrieval_chain (see example in docstring) instead.
warn_deprecated(
2024-06-21 12:54:19.810 Uncaught app exception
Traceback (most recent call last):
File "C:\Users\User\Desktop\Linkenite\MarketingAI MVP\venv\Lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 589, in _run_script
exec(code, module.__dict__)
File "C:\Users\User\Desktop\Linkenite\MarketingAI MVP\apporiginal.py", line 147, in <module>
response = get_LLM_response(prompt, context, vector_store)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\Desktop\Linkenite\MarketingAI MVP\apporiginal.py", line 109, in get_LLM_response
response = retrieval_chain.invoke({"question": query, "context": documents, "input": ""})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\Desktop\Linkenite\MarketingAI MVP\venv\Lib\site-packages\langchain\chains\base.py", line 166, in invoke
raise e
File "C:\Users\User\Desktop\Linkenite\MarketingAI MVP\venv\Lib\site-packages\langchain\chains\base.py", line 161, in invoke
final_outputs: Dict[str, Any] = self.prep_outputs(
^^^^^^^^^^^^^^^^^^
File "C:\Users\User\Desktop\Linkenite\MarketingAI MVP\venv\Lib\site-packages\langchain\chains\base.py", line 460, in prep_outputs
self.memory.save_context(inputs, outputs)
File "C:\Users\User\Desktop\Linkenite\MarketingAI MVP\venv\Lib\site-packages\langchain\memory\chat_memory.py", line 55, in save_context
input_str, output_str = self._get_input_output(inputs, outputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\User\Desktop\Linkenite\MarketingAI MVP\venv\Lib\site-packages\langchain\memory\chat_memory.py", line 51, in _get_input_output
return inputs[prompt_input_key], outputs[output_key]
~~~~~~^^^^^^^^^^^^^^^^^^
KeyError: ''
### Description
I am trying to invoke a retrieval chain with three parameters, {"question": query, "context": documents, "input": ""}, and the call throws a KeyError related to [ output_key ].
When I passed output_key like this: {"question": query, "context": documents, "input": "", "output_key": ""}, it gives another error.
The error comes from line 51 of langchain/memory/chat_memory.py ->
in _get_input_output
return inputs[prompt_input_key], outputs[output_key]
~~~~~~^^^^^^^^^^^^^^^^^^
KeyError: ''
Stopping...
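A pure-Python mimic of the `_get_input_output` lookup shows why an empty `input_key` fails and how explicit keys avoid it (a sketch; in the real chain this would mean constructing the memory with explicit keys, e.g. `ConversationBufferMemory(memory_key="chat_history", input_key="question", output_key="answer")` — key names assumed):

```python
def get_input_output(inputs, outputs, input_key, output_key):
    # Simplified mimic of langchain.memory.chat_memory._get_input_output:
    # it indexes both dicts with the configured keys.
    return inputs[input_key], outputs[output_key]


inputs = {"question": "What is X?", "context": "...", "input": ""}
outputs = {"answer": "X is ..."}

# input_key="" looks up inputs[""] -> KeyError: '' (the reported traceback)
try:
    get_input_output(inputs, outputs, input_key="", output_key="answer")
except KeyError as e:
    print("KeyError:", e)

# Keys that actually exist in the dicts succeed.
q, a = get_input_output(inputs, outputs, input_key="question", output_key="answer")
assert (q, a) == ("What is X?", "X is ...")
```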
### System Info
"pip freeze | grep langchain"
platform (windows)
python version (3.12.2)
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.12.4 (tags/v3.12.4:8e8a4ba, Jun 6 2024, 19:30:16) [MSC v.1940 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.9
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.77
> langchain_chroma: 0.1.1
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | KeyError: '' : output_key not found [in _get_input_output return inputs[prompt_input_key], outputs[output_key]] | https://api.github.com/repos/langchain-ai/langchain/issues/23255/comments | 1 | 2024-06-21T09:54:31Z | 2024-06-24T13:26:06Z | https://github.com/langchain-ai/langchain/issues/23255 | 2,366,174,937 | 23,255 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/document_loaders/chatgpt_loader/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
When we run this code with the downloaded conversations.json file, we get the following error
---
File "C:\New_LLM_Camp\myenv\Lib\site-packages\langchain_community\document_loaders\chatgpt.py", line 54, in load
concatenate_rows(messages[key]["message"], title)
File "C:\New_LLM_Camp\myenv\Lib\site-packages\langchain_community\document_loaders\chatgpt.py", line 25, in concatenate_rows
date = datetime.datetime.fromtimestamp(message["create_time"]).strftime(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: 'NoneType' object cannot be interpreted as an integer
---
The format of ChatGPT's transcript export file seems to have changed from what is defined in chatgpt.py.
Also, the loader seems to work only for text-only chats, not for chats that contain images.
---
[chatgpt.py]
text = "".join(
[
concatenate_rows(messages[key]["message"], title)
for idx, key in enumerate(messages)
if not (
idx == 0
and messages[key]["message"]["author"]["role"] == "system"
)
]
)
---
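A None-safe rewrite of the timestamp handling (a sketch, not the actual library fix; the output format is approximated from chatgpt.py) avoids the `NoneType` crash for messages whose `create_time` is null:

```python
import datetime


def concatenate_rows(message: dict, title: str) -> str:
    """None-tolerant sketch of chatgpt.py's concatenate_rows."""
    if not message:
        return ""
    sender = message.get("author", {}).get("role", "unknown")
    parts = message.get("content", {}).get("parts") or [""]
    # Image chats can put dicts in "parts"; keep only plain text here.
    text = parts[0] if isinstance(parts[0], str) else ""
    ts = message.get("create_time")
    date = (
        datetime.datetime.fromtimestamp(ts).strftime("%Y-%m-%d %H:%M:%S")
        if ts is not None
        else "unknown time"
    )
    return f"{title} - {sender} on {date}: {text}\n\n"
```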
### Idea or request for content:
_No response_ | Error when running ChatGPTLoader | https://api.github.com/repos/langchain-ai/langchain/issues/23252/comments | 0 | 2024-06-21T08:57:03Z | 2024-06-21T08:59:39Z | https://github.com/langchain-ai/langchain/issues/23252 | 2,366,065,661 | 23,252 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
In text_splitter.py (SemanticChunker)
```python
def _calculate_sentence_distances(
self, single_sentences_list: List[str]
) -> Tuple[List[float], List[dict]]:
"""Split text into multiple components."""
_sentences = [
{"sentence": x, "index": i} for i, x in enumerate(single_sentences_list)
]
sentences = combine_sentences(_sentences, self.buffer_size)
embeddings = self.embeddings.embed_documents(
[x["combined_sentence"] for x in sentences]
)
for i, sentence in enumerate(sentences):
sentence["combined_sentence_embedding"] = embeddings[i] << Failed here since embeddings size is less than i at a later point
return calculate_cosine_distances(sentences)
```
### Error Message and Stack Trace (if applicable)
```python
Traceback (most recent call last):
File "/Users/A72281951/telly/venv/ingestion/lib/python3.10/site-packages/click/core.py", line 1078, in main
rv = self.invoke(ctx)
File "/Users/A72281951/telly/venv/ingestion/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/Users/A72281951/telly/venv/ingestion/lib/python3.10/site-packages/click/core.py", line 783, in invoke
return __callback(*args, **kwargs)
File "/Users/A72281951/telly/telly-backend/ingestion/main.py", line 132, in start
store.load_data_to_db(configured_spaces)
File "/Users/A72281951/telly/telly-backend/ingestion/common/utils.py", line 70, in wrapper
value = func(*args, **kwargs)
File "/Users/A72281951/telly/telly-backend/ingestion/agent/store/db.py", line 86, in load_data_to_db
for docs in self.ingest_data(spaces):
File "/Users/A72281951/telly/telly-backend/ingestion/agent/store/db.py", line 77, in ingest_data
documents.extend(self.chunker.split_documents(docs))
File "/Users/A72281951/telly/venv/ingestion/lib/python3.10/site-packages/langchain_experimental/text_splitter.py", line 258, in split_documents
return self.create_documents(texts, metadatas=metadatas)
File "/Users/A72281951/telly/venv/ingestion/lib/python3.10/site-packages/langchain_experimental/text_splitter.py", line 243, in create_documents
for chunk in self.split_text(text):
File "/Users/A72281951/telly/venv/ingestion/lib/python3.10/site-packages/langchain_experimental/text_splitter.py", line 201, in split_text
distances, sentences = self._calculate_sentence_distances(single_sentences_list)
File "/Users/A72281951/telly/venv/ingestion/lib/python3.10/site-packages/langchain_experimental/text_splitter.py", line 186, in _calculate_sentence_distances
sentence["combined_sentence_embedding"] = embeddings[i]
IndexError: list index out of range
```
### Description
* I am trying to chunk a list of documents and it fails with this
* I am using SemanticChunker from langchain-experimental~=0.0.61
* breakpoint_threshold = percentile and breakpoint_threshold amount = 95.0
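A defensive length check (a sketch, not the library's code) would make the mismatch fail loudly instead of with `IndexError`, and also shows whether the embeddings backend returned fewer vectors than combined sentences:

```python
def attach_embeddings(sentences, embeddings):
    """Attach one embedding per combined sentence, validating lengths first.

    sentences: list of dicts like {"combined_sentence": ...}; embeddings:
    list of vectors from embed_documents (same order assumed).
    """
    if len(embeddings) != len(sentences):
        raise ValueError(
            f"embedding backend returned {len(embeddings)} vectors "
            f"for {len(sentences)} combined sentences"
        )
    for sentence, emb in zip(sentences, embeddings):
        sentence["combined_sentence_embedding"] = emb
    return sentences


docs = [{"combined_sentence": "a b"}, {"combined_sentence": "c d"}]
try:
    attach_embeddings(docs, [[0.1, 0.2]])  # one vector short -> loud failure
except ValueError as e:
    print(e)

ok = attach_embeddings(docs, [[0.1], [0.2]])
assert ok[0]["combined_sentence_embedding"] == [0.1]
```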
### System Info
langchain==0.2.5
langchain-community==0.2.5
langchain-core==0.2.9
langchain-experimental==0.0.61
langchain-google-vertexai==1.0.5
langchain-postgres==0.0.8
langchain-text-splitters==0.2.1
Mac M3
Python 3.10.14 | SemanticChunker: list index out of range | https://api.github.com/repos/langchain-ai/langchain/issues/23250/comments | 7 | 2024-06-21T08:04:16Z | 2024-07-05T08:55:34Z | https://github.com/langchain-ai/langchain/issues/23250 | 2,365,969,512 | 23,250 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
llama3_groq_model = ChatGroq(temperature=0, groq_api_key="gsk_")
def run_tools(query):
serp_search_tool = serp_search
tools = [serp_search_tool]
tools_by_name = {tool.name:tool for tool in tools}
tool_calls=[]
while True:
model = llama3_groq_model
llm_with_tool = model.bind_tools(tools)
res = llm_with_tool.invoke(query)
tool_calls = res.tool_calls
break
if tool_calls:
name = tool_calls[-1]['name']
args = tool_calls[-1]['args']
print(f'Running Tool {name}...')
rs = tools_by_name[name].invoke(args)
else:
rs = res.content
name = ''
args = {}
return {'result': rs, 'last_tool_calls':tool_calls}
```
### Error Message and Stack Trace (if applicable)
Expected response:
content='' additional_kwargs={'tool_calls': [{'id': 'call_7d3a', 'function': {'arguments': '{"keyword":"đài quảng bình"}', 'name': 'serp_search'}, 'type': 'function'}]} response_metadata={'token_usage': {'completion_time': 0.128152054, 'completion_tokens': 47, 'prompt_time': 0.197270744, 'prompt_tokens': 932, 'queue_time': None, 'total_time': 0.32542279799999996, 'total_tokens': 979}, 'model_name': 'llama3-70b-8192', 'system_fingerprint': 'fp_c1a4bcec29', 'finish_reason': 'tool_calls', 'logprobs': None} id='run-faa4fae3-93ab-4a13-8e5b-9e2269c1594f-0' tool_calls=[{'name': 'serp_search', 'args': {'keyword': 'đài quảng bình'}, 'id': 'call_7d3a'}]
Unexpected response:
content='assistant<|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|><|start_header_id|>' response_metadata={'token_usage': {'completion_time': 11.079835881, 'completion_tokens': 4000, 'prompt_time': 0.165631979, 'prompt_tokens': 935, 'queue_time': None, 'total_time': 11.24546786, 'total_tokens': 4935}, 'model_name': 'llama3-70b-8192', 'system_fingerprint': 'fp_2f30b0b571', 'finish_reason': 'length', 'logprobs': None} id='run-a89e17f6-bea2-4db6-8969-94820098f2dc-0'
### Description
When the model calls a tool, it sometimes returns the response as expected, but sometimes it responds like the error above.
The <|start_header_id|> token is duplicated many times and the request takes too long to finish. I have to wait for the response, run a check, and rerun the process to get the right response.
Beyond that, the success rate of a normal chain involving a prompt and parser is low too. The good thing is LangGraph does the rerun, so I don't have to worry much about it, but it still takes a lot of time.
I only encountered this problem since yesterday. Before that, it worked flawlessly.
I updated langchain, langgraph, langsmith and langchain_groq to the latest versions yesterday. I think that caused the problem.
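Until the root cause is fixed, a hypothetical guard like this can detect the runaway header tokens and retry (a sketch; the marker string and retry count are assumptions):

```python
def invoke_with_retry(invoke, query, max_retries=3):
    """Re-run the call when the reply is just repeated header tokens."""
    content = ""
    for _ in range(max_retries):
        last = invoke(query)
        # Works for both AIMessage-like objects (.content) and raw strings.
        content = getattr(last, "content", str(last))
        if "<|start_header_id|>" not in content:
            return last
    raise RuntimeError(f"model kept emitting header tokens: {content[:80]!r}")


# Demo with a fake model: first reply is garbled, second is clean.
replies = iter(["<|start_header_id|>" * 3, "Rome is the capital of Italy."])
result = invoke_with_retry(lambda q: next(replies), "capital of Italy?")
assert result == "Rome is the capital of Italy."
```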
### System Info
aiohttp==3.9.5
aiosignal==1.3.1
alembic==1.13.1
annotated-types==0.6.0
anthropic==0.28.1
anyio==4.3.0
appdirs==1.4.4
asgiref==3.8.1
asttokens==2.4.1
async-timeout==4.0.3
attrs==23.2.0
Babel==2.15.0
backoff==2.2.1
bcrypt==4.1.3
beautifulsoup4==4.12.3
blinker==1.8.2
boto3==1.34.127
botocore==1.34.127
Brotli==1.1.0
bs4==0.0.2
build==1.2.1
cachetools==5.3.3
catalogue==2.0.10
certifi==2024.2.2
cffi==1.16.0
charset-normalizer==3.3.2
chroma-hnswlib==0.7.3
chromadb==0.4.24
click==8.1.7
coloredlogs==15.0.1
comm==0.2.2
courlan==1.1.0
crewai==0.28.8
crewai-tools==0.2.3
cryptography==42.0.6
dataclasses-json==0.6.5
dateparser==1.2.0
debugpy==1.8.1
decorator==5.1.1
defusedxml==0.7.1
Deprecated==1.2.14
deprecation==2.1.0
dirtyjson==1.0.8
distro==1.9.0
docstring-parser==0.15
embedchain==0.1.102
exceptiongroup==1.2.1
executing==2.0.1
faiss-cpu==1.8.0
faiss-gpu==1.7.2
fast-pytorch-kmeans==0.2.0.1
fastapi==0.110.3
filelock==3.14.0
flatbuffers==24.3.25
free-proxy==1.1.1
frozenlist==1.4.1
fsspec==2024.3.1
git-python==1.0.3
gitdb==4.0.11
GitPython==3.1.43
google==3.0.0
google-ai-generativelanguage==0.6.4
google-api-core==2.19.0
google-api-python-client==2.133.0
google-auth==2.29.0
google-auth-httplib2==0.2.0
google-cloud-aiplatform==1.50.0
google-cloud-bigquery==3.21.0
google-cloud-core==2.4.1
google-cloud-resource-manager==1.12.3
google-cloud-storage==2.16.0
google-crc32c==1.5.0
google-generativeai==0.5.4
google-resumable-media==2.7.0
googleapis-common-protos==1.63.0
gptcache==0.1.43
graphviz==0.20.3
greenlet==3.0.3
groq==0.5.0
grpc-google-iam-v1==0.13.0
grpcio==1.63.0
grpcio-status==1.62.2
h11==0.14.0
html2text==2024.2.26
htmldate==1.8.1
httpcore==1.0.5
httplib2==0.22.0
httptools==0.6.1
httpx==0.27.0
huggingface-hub==0.23.0
humanfriendly==10.0
idna==3.7
importlib-metadata==7.0.0
importlib_resources==6.4.0
iniconfig==2.0.0
instructor==0.5.2
ipykernel==6.29.4
ipython==8.24.0
itsdangerous==2.2.0
jedi==0.19.1
Jinja2==3.1.3
jiter==0.4.2
jmespath==1.0.1
joblib==1.4.2
jsonpatch==1.33
jsonpointer==2.4
jupyter_client==8.6.1
jupyter_core==5.7.2
jusText==3.0.0
kubernetes==29.0.0
lancedb==0.5.7
langchain==0.2.5
langchain-anthropic==0.1.11
langchain-aws==0.1.3
langchain-chroma==0.1.0
langchain-community==0.2.5
langchain-core==0.2.8
langchain-experimental==0.0.60
langchain-google-genai==1.0.3
langchain-groq==0.1.5
langchain-openai==0.1.6
langchain-text-splitters==0.2.1
langchainhub==0.1.18
langgraph==0.0.57
langsmith==0.1.80
lark==1.1.9
llama-index==0.10.36
llama-index-agent-openai==0.2.4
llama-index-cli==0.1.12
llama-index-embeddings-openai==0.1.9
llama-index-indices-managed-llama-cloud==0.1.6
llama-index-llms-openai==0.1.18
llama-index-multi-modal-llms-openai==0.1.5
llama-index-program-openai==0.1.6
llama-index-question-gen-openai==0.1.3
llama-index-readers-file==0.1.22
llama-index-readers-llama-parse==0.1.4
llama-parse==0.4.2
lxml==5.1.1
Mako==1.3.3
markdown-it-py==3.0.0
MarkupSafe==2.1.5
marshmallow==3.21.2
matplotlib-inline==0.1.7
mdurl==0.1.2
minify_html==0.15.0
mmh3==4.1.0
monotonic==1.6
mpmath==1.3.0
multidict==6.0.5
mutagen==1.47.0
mypy-extensions==1.0.0
nest-asyncio==1.6.0
networkx==3.3
nodeenv==1.8.0
numpy==1.26.4
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu12==2.20.5
nvidia-nvjitlink-cu12==12.4.127
nvidia-nvtx-cu12==12.1.105
oauthlib==3.2.2
onnxruntime==1.17.3
openai==1.25.1
opentelemetry-api==1.24.0
opentelemetry-exporter-otlp-proto-common==1.24.0
opentelemetry-exporter-otlp-proto-grpc==1.24.0
opentelemetry-exporter-otlp-proto-http==1.24.0
opentelemetry-instrumentation==0.45b0
opentelemetry-instrumentation-asgi==0.45b0
opentelemetry-instrumentation-fastapi==0.45b0
opentelemetry-proto==1.24.0
opentelemetry-sdk==1.24.0
opentelemetry-semantic-conventions==0.45b0
opentelemetry-util-http==0.45b0
orjson==3.10.2
outcome==1.3.0.post0
overrides==7.7.0
packaging==23.2
pandas==2.2.2
parso==0.8.4
pexpect==4.9.0
pillow==10.3.0
platformdirs==4.2.1
playwright==1.43.0
pluggy==1.5.0
posthog==3.5.0
prompt-toolkit==3.0.43
proto-plus==1.23.0
protobuf==4.25.3
psutil==5.9.8
ptyprocess==0.7.0
pulsar-client==3.5.0
pure-eval==0.2.2
py==1.11.0
pyarrow==16.0.0
pyarrow-hotfix==0.6
pyasn1==0.6.0
pyasn1_modules==0.4.0
pycparser==2.22
pycryptodomex==3.20.0
pydantic==2.7.1
pydantic_core==2.18.2
pyee==11.1.0
PyGithub==1.59.1
Pygments==2.18.0
PyJWT==2.8.0
pylance==0.9.18
PyNaCl==1.5.0
pyparsing==3.1.2
pypdf==4.2.0
PyPika==0.48.9
pyproject_hooks==1.1.0
pyright==1.1.361
pysbd==0.3.4
PySocks==1.7.1
pytest==8.2.0
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
pytube==15.0.0
pytz==2024.1
PyYAML==6.0.1
pyzmq==26.0.3
random-user-agent==1.0.1
rank-bm25==0.2.2
ratelimiter==1.2.0.post0
redis==5.0.4
regex==2023.12.25
requests==2.31.0
requests-file==2.0.0
requests-oauthlib==2.0.0
retry==0.9.2
rich==13.7.1
rsa==4.9
s3transfer==0.10.1
safetensors==0.4.3
schema==0.7.7
scikit-learn==1.4.2
scipy==1.13.0
selenium==4.20.0
semver==3.0.2
sentence-transformers==2.7.0
shapely==2.0.4
six==1.16.0
smmap==5.0.1
sniffio==1.3.1
sortedcontainers==2.4.0
soupsieve==2.5
SQLAlchemy==2.0.29
stack-data==0.6.3
starlette==0.37.2
striprtf==0.0.26
sympy==1.12
tavily-python==0.3.3
tenacity==8.2.3
threadpoolctl==3.5.0
tiktoken==0.6.0
tld==0.13
tldextract==5.1.2
tokenizers==0.19.1
tomli==2.0.1
torch==2.3.0
tornado==6.4
tqdm==4.66.4
trafilatura==1.9.0
traitlets==5.14.3
transformers==4.40.1
trio==0.25.0
trio-websocket==0.11.1
triton==2.3.0
typer==0.9.4
types-requests==2.32.0.20240602
typing-inspect==0.9.0
typing_extensions==4.11.0
tzdata==2024.1
tzlocal==5.2
ujson==5.9.0
undetected-playwright==0.3.0
uritemplate==4.1.1
urllib3==2.2.1
uuid6==2024.1.12
uvicorn==0.29.0
uvloop==0.19.0
watchfiles==0.21.0
wcwidth==0.2.13
websocket-client==1.8.0
websockets==12.0
wrapt==1.16.0
wsproto==1.2.0
yarl==1.9.4
youtube-transcript-api==0.6.2
yt-dlp==2023.12.30
zipp==3.18.1
platform: Ubuntu 22.04 LTS
python version: 3.10.12 | Running Groq using llama3 model keep getting un-formatted output | https://api.github.com/repos/langchain-ai/langchain/issues/23248/comments | 2 | 2024-06-21T07:28:20Z | 2024-06-27T08:20:11Z | https://github.com/langchain-ai/langchain/issues/23248 | 2,365,908,695 | 23,248 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from dotenv import load_dotenv
from langchain_core.globals import set_debug
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI
set_debug(True)
load_dotenv()
model = ChatOpenAI(
api_key=os.getenv('OPENAI_API_KEY'),
base_url=os.getenv('OPENAI_BASE_URL'),
model="gpt-3.5-turbo"
)
messages = [
SystemMessage(content="Translate the following from English into Italian"),
HumanMessage(content="hi!"),
]
if __name__ == "__main__":
print(model.invoke(messages))
```
### Error Message and Stack Trace (if applicable)
```
[llm/start] [llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"System: Translate the following from English into Italian\nHuman: hi!"
]
}
```
### Description
Shouldn't it be like this?
```
[llm/start] [llm:ChatOpenAI] Entering LLM run with input:
{
"prompts": [
"System": "Translate the following from English into Italian",
"Human": "hi!"
]
}
```
### System Info
I have tried it in two conda envs:
---
langchain 0.2.5
windows 11
python Python 3.11.9
---
langchain 0.1.10
windows 11
python Python 3.11.7 | Misleading logs | https://api.github.com/repos/langchain-ai/langchain/issues/23239/comments | 2 | 2024-06-21T00:13:49Z | 2024-06-21T01:57:47Z | https://github.com/langchain-ai/langchain/issues/23239 | 2,365,465,210 | 23,239 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
https://github.com/langchain-ai/langchain/blob/bf7763d9b0210d182409d35f538ddb97c9d2c0ad/libs/core/langchain_core/tools.py#L291-L310
### Error Message and Stack Trace (if applicable)
_No response_
### Description
My function looks like `f(a: str, b: list[str])`.
When the LLM returns a str that looks like `"'A', ['B', 'C']"`,
this becomes `input_args.validate({'a': "'A', ['B', 'C']"})`, which will never pass.
But simply calling `input_args.validate(tool_input)` works fine.
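A minimal, pure-Python sketch of the mismatch (the function below is a hypothetical stand-in, not the real `_parse_input`): a plain-string input is wrapped under the first schema field, so a multi-argument schema can never validate.

```python
# Hypothetical stand-in for _parse_input: a str input is stuffed under
# the first schema field, so f(a: str, b: list[str]) cannot validate.
def parse_input(tool_input, field_names):
    if isinstance(tool_input, str) and field_names:
        return {field_names[0]: tool_input}  # everything lands in 'a'
    return tool_input

parsed = parse_input("'A', ['B', 'C']", ["a", "b"])
print(parsed)  # {'a': "'A', ['B', 'C']"} -- 'b' is missing, so validation fails
```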
### System Info
langchain 0.2.5
langchain_core 0.2.7
windows
python 3.12 | Wrong parse _parse_input when tool_input is str. | https://api.github.com/repos/langchain-ai/langchain/issues/23230/comments | 0 | 2024-06-20T17:55:55Z | 2024-06-20T17:58:27Z | https://github.com/langchain-ai/langchain/issues/23230 | 2,364,969,551 | 23,230 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
embedding = ZhipuAIEmbeddings(
api_key="xxx"
)
text = "This is a test query."
query_result = embedding.embed_query(text)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/edy/PycharmProjects/Jupyter-Notebook/langchain_V_0_2_0/vecotr_stores_and_retrievers2.py", line 35, in <module>
query_result = embedding.embed_query(text)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/edy/PycharmProjects/Jupyter-Notebook/venv/lib/python3.11/site-packages/langchain_community/embeddings/zhipuai.py", line 60, in embed_query
resp = self.embed_documents([text])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/edy/PycharmProjects/Jupyter-Notebook/venv/lib/python3.11/site-packages/langchain_community/embeddings/zhipuai.py", line 74, in embed_documents
resp = self._client.embeddings.create(model=self.model, input=texts)
^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'FieldInfo' object has no attribute 'embeddings'
### Description
The `_client` field in the `ZhipuAIEmbeddings` class cannot be correctly initialized.
Rename `_client` to `client` and update the `embed_documents` function.
Example:
```python
client: Any = Field(default=None, exclude=True)
values["client"] = ZhipuAI(api_key=values["api_key"])
def embed_documents(self, texts: List[str]) -> List[List[float]]:
resp = self.client.embeddings.create(model=self.model, input=texts)
embeddings = [r.embedding for r in resp.data]
return embeddings
```
### System Info
langchain==0.2.5
langchain-chroma==0.1.1
langchain-community==0.2.5
langchain-core==0.2.9
langchain-experimental==0.0.61
langchain-text-splitters==0.2.1
langchain-weaviate==0.0.2
platform mac
python version 3.11 | The ZhipuAIEmbeddings class is not working. | https://api.github.com/repos/langchain-ai/langchain/issues/23215/comments | 0 | 2024-06-20T09:35:13Z | 2024-06-20T13:04:52Z | https://github.com/langchain-ai/langchain/issues/23215 | 2,363,985,738 | 23,215 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
test | test | https://api.github.com/repos/langchain-ai/langchain/issues/23195/comments | 0 | 2024-06-19T20:17:33Z | 2024-06-21T19:27:23Z | https://github.com/langchain-ai/langchain/issues/23195 | 2,363,068,260 | 23,195 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
We want to add docstring linting to langchain-core, langchain, langchain-text-splitters, and partner packages. This requires adding the following to each package's pyproject.toml:
```toml
[tool.ruff.lint]
select = [
...
"D", # pydocstyle
]
[tool.ruff.lint.pydocstyle]
convention = "google"
[tool.ruff.per-file-ignores]
"tests/**" = ["D"] # ignore docstring checks for tests
```
this will likely cause a number of new linting errors which then need to be fixed. there should be a separate pr for each package. here's a reference for langchain-openai (linting errors have not yet been fixed) https://github.com/langchain-ai/langchain/pull/23187 | Add docstring linting to core, langchain, partner packages | https://api.github.com/repos/langchain-ai/langchain/issues/23188/comments | 1 | 2024-06-19T18:12:49Z | 2024-06-21T07:36:12Z | https://github.com/langchain-ai/langchain/issues/23188 | 2,362,917,741 | 23,188 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.chat_history import BaseChatMessageHistory
```
### Error Message and Stack Trace (if applicable)
```python
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/christophebornet/Library/Caches/pypoetry/virtualenvs/ragstack-ai-langchain-F6idWdWf-py3.9/lib/python3.9/site-packages/langchain_core/chat_history.py", line 29, in <module>
from langchain_core.runnables import run_in_executor
File "/Users/christophebornet/Library/Caches/pypoetry/virtualenvs/ragstack-ai-langchain-F6idWdWf-py3.9/lib/python3.9/site-packages/langchain_core/runnables/__init__.py", line 39, in <module>
from langchain_core.runnables.history import RunnableWithMessageHistory
File "/Users/christophebornet/Library/Caches/pypoetry/virtualenvs/ragstack-ai-langchain-F6idWdWf-py3.9/lib/python3.9/site-packages/langchain_core/runnables/history.py", line 16, in <module>
from langchain_core.chat_history import BaseChatMessageHistory
ImportError: cannot import name 'BaseChatMessageHistory' from partially initialized module 'langchain_core.chat_history' (most likely due to a circular import) (/Users/christophebornet/Library/Caches/pypoetry/virtualenvs/ragstack-ai-langchain-F6idWdWf-py3.9/lib/python3.9/site-packages/langchain_core/chat_history.py)
```
### Description
On latest master, importing BaseChatMessageHistory fails because of a circular dependency. (it works with the very recent langchain-core 0.2.9)
I suspect this is caused by https://github.com/langchain-ai/langchain/pull/23136
### System Info
langchain==0.2.5
langchain-astradb==0.3.3
langchain-community==0.2.5
langchain-core @ git+https://github.com/langchain-ai/langchain.git@4fe8403bfbb81e7780179a3b164aa22c694e2ece#subdirectory=libs/core
langchain-openai==0.1.8
langchain-text-splitters==0.2.1 | Crash due to circular dependency on BaseChatMessageHistory | https://api.github.com/repos/langchain-ai/langchain/issues/23175/comments | 2 | 2024-06-19T14:21:37Z | 2024-06-19T17:43:37Z | https://github.com/langchain-ai/langchain/issues/23175 | 2,362,515,421 | 23,175 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from langchain_community.chat_models.moonshot import MoonshotChat
os.environ["MOONSHOT_API_KEY"] = "{my_api_key}"
chat = MoonshotChat()
```
### Error Message and Stack Trace (if applicable)
File "/foo/bar/venv/lib/python3.12/site-packages/langchain_community/chat_models/moonshot.py", line 45, in validate_environment
"api_key": values["moonshot_api_key"].get_secret_value(),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'get_secret_value'
### Description
`get_from_dict_or_env` returns a `SecretStr` when the api_key is set through the constructor (`MoonshotChat(api_key={})`), but when the api_key is set through the OS environment, which is the approach mentioned in the [docs](https://python.langchain.com/v0.2/docs/integrations/chat/moonshot/), it returns a plain `str`.
So the exception is raised: AttributeError: 'str' object has no attribute 'get_secret_value'
Solution: convert the result of `get_from_dict_or_env`; if it's an instance of `str`, convert it to `SecretStr`.
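A minimal sketch of that coercion (the `SecretStr` stand-in below is illustrative only; the real class comes from pydantic):

```python
# Minimal stand-in for pydantic's SecretStr, for illustration only.
class SecretStr:
    def __init__(self, value: str):
        self._value = value

    def get_secret_value(self) -> str:
        return self._value


def coerce_api_key(api_key):
    """Wrap a plain str (e.g. read from the OS environment) so that
    .get_secret_value() always works downstream."""
    if isinstance(api_key, str):
        return SecretStr(api_key)
    return api_key


key = coerce_api_key("sk-test")
print(key.get_secret_value())  # sk-test
```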
### System Info
AttributeError: 'str' object has no attribute 'get_secret_value' | MoonshotChat fails when setting the moonshot_api_key through the OS environment. | https://api.github.com/repos/langchain-ai/langchain/issues/23174/comments | 0 | 2024-06-19T14:13:12Z | 2024-06-19T16:28:25Z | https://github.com/langchain-ai/langchain/issues/23174 | 2,362,496,250 | 23,174 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code using AzureSearch as vectorstore for Azure Cognitive Search always gives me invalid json format:
import os
from langchain_openai import AzureChatOpenAI
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain_openai import AzureOpenAIEmbeddings
from typing import List
from langchain.chains import LLMChain
from langchain.output_parsers import PydanticOutputParser
from langchain_core.prompts import PromptTemplate
from pydantic import BaseModel, Field
from langchain_community.vectorstores.azuresearch import AzureSearch
api_key = os.getenv("AZURE_OPENAI_API_KEY")
api_version = os.getenv("AZURE_API_VERSION")
azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
vector_store_password = os.getenv("AZURE_SEARCH_ADMIN_KEY")
vector_store_address = os.getenv("AZURE_SEARCH_ENDPOINT")
service_name = os.getenv("AZURE_SERVICE_NAME")
azure_deployment="text-embedding-ada-002"
azure_openai_api_version=api_version
azure_endpoint=azure_endpoint
azure_openai_api_key=api_key
embeddings: AzureOpenAIEmbeddings = AzureOpenAIEmbeddings(
azure_deployment=azure_deployment,
openai_api_version=azure_openai_api_version,
azure_endpoint=azure_endpoint,
api_key=azure_openai_api_key,
)
index_name: str = "test-index"
vector_store: AzureSearch = AzureSearch(
azure_search_endpoint=vector_store_address,
azure_search_key=vector_store_password,
index_name=index_name,
embedding_function=embeddings.embed_query,
)
model= os.getenv('modelName')
# Output parser will split the LLM result into a list of queries
class LineList(BaseModel):
# "lines" is the key (attribute name) of the parsed output
lines: List[str] = Field(description="Lines of text")
class LineListOutputParser(PydanticOutputParser):
def __init__(self) -> None:
super().__init__(pydantic_object=LineList)
def parse(self, text: str) -> LineList:
lines = text.strip().split("\n")
return LineList(lines=lines)
output_parser = LineListOutputParser()
QUERY_PROMPT = PromptTemplate(
input_variables=["question"],
template="""You are an AI language model assistant. Your task is to generate five
different versions of the given user question to retrieve relevant documents from a vector
database. By generating multiple perspectives on the user question, your goal is to help
the user overcome some of the limitations of the distance-based similarity search.
Provide these alternative questions separated by newlines.
Original question: {question}""",
)
llm = AzureChatOpenAI(temperature=0,api_key=api_key,api_version=api_version,model=model)
llm_chain = LLMChain(llm=llm, prompt=QUERY_PROMPT, output_parser=output_parser)
retriever = MultiQueryRetriever(
retriever=vector_store.as_retriever(), llm_chain=llm_chain, parser_key="lines"
)
print(type(retriever.llm_chain.output_parser))
print(retriever.llm_chain.output_parser)
unique_docs = retriever.invoke(query="What is Llama-2?")
print(unique_docs)
### Error Message and Stack Trace (if applicable)
Exception has occurred: OutputParserException
langchain_core.exceptions.OutputParserException: Invalid json output: Can you provide information on Llama-2?
Could you explain the concept of Llama-2?
What does Llama-2 refer to?
File Python\Python312\Lib\json\decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
^^^^^^^^^^^^^^^^^^^^^^
StopIteration: 0
During handling of the above exception, another exception occurred:
File "Python\Python312\Lib\site-packages\langchain_core\output_parsers\json.py", line 66, in parse_result
return parse_json_markdown(text)
^^^^^^^^^^^^^^^^^^^^^^^^^
File \Python\Python312\Lib\site-packages\langchain_core\utils\json.py", line 147, in parse_json_markdown
return _parse_json(json_str, parser=parser)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File \Python\Python312\Lib\site-packages\langchain_core\utils\json.py", line 160, in _parse_json
return parser(json_str)
^^^^^^^^^^^^^^^^
File Python\Python312\Lib\site-packages\langchain_core\utils\json.py", line 120, in parse_partial_json
return json.loads(s, strict=strict)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File \Python\Python312\Lib\json\__init__.py", line 359, in loads
return cls(**kw).decode(s)
^^^^^^^^^^^^^^^^^^^
File \Python\Python312\Lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File \Python\Python312\Lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
The above exception was the direct cause of the following exception:
File Python\Python312\Lib\site-packages\langchain_core\output_parsers\json.py", line 69, in parse_result
raise OutputParserException(msg, llm_output=text) from e
File Python\Python312\Lib\site-packages\langchain_core\output_parsers\pydantic.py", line 60, in parse_result
json_object = super().parse_result(result)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File Python\Python312\Lib\site-packages\langchain\chains\llm.py", line 284, in create_outputs
self.output_key: self.output_parser.parse_result(generation),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File Python\Python312\Lib\site-packages\langchain\chains\llm.py", line 127, in _call
return self.create_outputs(response)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File Python\Python312\Lib\site-packages\langchain\chains\base.py", line 166, in invoke
raise e
File Python\Python312\Lib\site-packages\langchain\chains\base.py", line 166, in invoke
raise e
File Python\Python312\Lib\site-packages\langchain\retrievers\multi_query.py", line 182, in generate_queries
response = self.llm_chain.invoke(
^^^^^^^^^^^^^^^^^^^^^^
File Python\Python312\Lib\site-packages\langchain\retrievers\multi_query.py", line 165, in _get_relevant_documents
queries = self.generate_queries(query, run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File Python\Python312\Lib\site-packages\langchain_core\retrievers.py", line 221, in invoke
raise e
File Python\Python312\Lib\site-packages\langchain_core\retrievers.py", line 221, in invoke
raise e
File Python\Python312\Lib\site-packages\langchain_core\retrievers.py", line 355, in get_relevant_documents
return self.invoke(query, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File Python\Python312\Lib\site-packages\langchain_core\_api\deprecation.py", line 168, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
unique_docs = retriever.get_relevant_documents(query="What is Llama-2")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File Python\Python312\Lib\site-packages\langchain_core\output_parsers\json.py", line 69, in parse_result
raise OutputParserException(msg, llm_output=text) from e
langchain_core.exceptions.OutputParserException: Invalid json output: Can you provide information on Llama-2?
Could you explain the concept of Llama-2?
What does Llama-2 refer to?
### Description
I'm trying to use MultiQueryRetriever with an Azure Cognitive Search vector store. I'm following the example explained in the LangChain documentation, https://python.langchain.com/v0.1/docs/modules/data_connection/retrievers/MultiQueryRetriever/, and used the AzureSearch vector store. Is there anything I'm missing when using this vector store with MultiQueryRetriever? I always see the error langchain_core.exceptions.OutputParserException: Invalid json output.
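Judging by the stack trace, `LLMChain` calls `parse_result`, and `PydanticOutputParser.parse_result` attempts to JSON-parse the raw text before the overridden `parse` is ever reached. A possible workaround (a sketch under that assumption, not a verified fix) is to do the line splitting with plain string handling instead of a `PydanticOutputParser` subclass:

```python
def split_queries(text: str) -> list:
    """Split the LLM's newline-separated output into one query per line."""
    return [line.strip() for line in text.strip().split("\n") if line.strip()]


print(split_queries("Q1?\nQ2?\n\nQ3?"))  # ['Q1?', 'Q2?', 'Q3?']
```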
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.12.2 (tags/v3.12.2:6abddd9, Feb 6 2024, 21:26:36) [MSC v.1937 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.9
> langchain: 0.2.5
> langchain_community: 0.0.13
> langsmith: 0.1.80
> langchain_chroma: 0.1.1
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.1
> langgraph: 0.0.69
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve | MultiQuery Retriever Using AzureSearch vector store always returns a invalid json format error | https://api.github.com/repos/langchain-ai/langchain/issues/23171/comments | 2 | 2024-06-19T14:02:04Z | 2024-06-23T06:50:53Z | https://github.com/langchain-ai/langchain/issues/23171 | 2,362,472,977 | 23,171 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Using the below code for semantic cache
```
from langchain.globals import set_llm_cache
from langchain_openai import OpenAI
from langchain.cache import RedisSemanticCache
from langchain_huggingface import HuggingFaceEmbeddings
import time, os
from langchain_openai import AzureChatOpenAI
llm = AzureChatOpenAI(<my credentials>)
huggingface_embedding = HuggingFaceEmbeddings(model_name="all-MiniLM-L6-v2")
set_llm_cache(
RedisSemanticCache(redis_url="redis://127.0.0.1:6379", embedding=huggingface_embedding)
)
question = "What is capital of Japan?"
res = llm.invoke(question)
```
both redis db and redis python client I installed.
```redis-5.0.6```
```redis-cli 7.2.5```
Still, it gives the following error:
```
[BUG]ValueError: Redis failed to connect: Redis cannot be used as a vector database without RediSearch >=2.4Please head to https://redis.io/docs/stack/search/quick_start/to know more about installing the RediSearch module within Redis Stack.
```
But the strange thing is that there is no **2.4** version of the **RediSearch** Python client available on PyPI.
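Note that RediSearch is a Redis *server* module (shipped with Redis Stack), not a Python package, which is likely why no 2.4 release exists on PyPI. Below is a rough, pure-Python sketch of the kind of module check LangChain performs on connect (module names and the integer version encoding are assumptions based on `MODULE LIST` output; the actual check lives in `langchain_community.utilities.redis`):

```python
# Versions are reported as ints, e.g. 20400 for 2.4.0 (assumption).
REQUIRED_MODULES = [
    {"name": "search", "ver": 20400},
    {"name": "searchlight", "ver": 20400},
]


def has_required_module(installed_modules):
    """Return True if any installed server module satisfies a requirement."""
    for required in REQUIRED_MODULES:
        for module in installed_modules:
            if (module["name"] == required["name"]
                    and int(module["ver"]) >= int(required["ver"])):
                return True
    return False


# A plain redis-server (no Redis Stack) reports no modules at all:
print(has_required_module([]))                                  # False
print(has_required_module([{"name": "search", "ver": 20612}]))  # True
```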
### Error Message and Stack Trace (if applicable)
_No response_
### Description
### System Info
```
python 3.9
ubuntu machine
langchain==0.1.12
langchain-community==0.0.36
langchain-core==0.2.9
``` | [BUG]ValueError: Redis failed to connect: Redis cannot be used as a vector database without RediSearch >=2.4Please head to https://redis.io/docs/stack/search/quick_start/to know more about installing the RediSearch module within Redis Stack | https://api.github.com/repos/langchain-ai/langchain/issues/23168/comments | 1 | 2024-06-19T11:19:13Z | 2024-06-19T11:47:18Z | https://github.com/langchain-ai/langchain/issues/23168 | 2,362,099,035 | 23,168 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI
from langchain_core.runnables import ConfigurableField
class Joke(BaseModel):
setup: str = Field(description="The setup of the joke")
punchline: str = Field(description="The punchline to the joke")
model = ChatOpenAI(model="gpt-3.5-turbo-0125", temperature=2, api_key="API-KEY").configurable_fields(
temperature=ConfigurableField(id="temperature", name="Temperature", description="The temperature of the model")
)
structured_llm = model.with_structured_output(Joke)
## This line does not raise an exception meaning the temperature field is not passed to the llm
structured_llm.with_config(configurable={"temperature" : 20}).invoke("Tell me a joke about cats")
## This raises exception as expected, as temperature is above 2
model.with_config(configurable={"temperature" : 20}).invoke("Tell me a joke about cats")
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Using `.with_config(configurable={})` together with `.with_structured_output()` in ChatOpenAI breaks the propagation of configurable fields.
This issue might exist in other provider implementations too.
### System Info
langchain==0.2.3
langchain-anthropic==0.1.15
langchain-community==0.2.4
langchain-core==0.2.5
langchain-google-vertexai==1.0.5
langchain-openai==0.1.8
langchain-text-splitters==0.2.1 | ChatOpenAI with_structured output breaks Runnable ConfigurableFields | https://api.github.com/repos/langchain-ai/langchain/issues/23167/comments | 3 | 2024-06-19T11:15:24Z | 2024-06-21T10:49:35Z | https://github.com/langchain-ai/langchain/issues/23167 | 2,362,092,354 | 23,167 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code of JSON Loader, which is there in Langchain Documentation
import { JSONLoader } from "langchain/document_loaders/fs/json";
const loader = new JSONLoader("src/document_loaders/example_data/example.json");
const docs = await loader.load();
### Error Message and Stack Trace (if applicable)
I am trying to initiate a conversation with JSON files: I want to load the JSON file's content into a `docs` variable and then perform the required steps to ask questions about it using the OpenAI API and LangChain. It is unable to understand the context and also fails to identify the properties and their values.
Following is my json file.
{
"name": "OpenAI",
"description": "A research and deployment company focused on AI.",
"endpoints": [
{
"path": "/completions",
"method": "POST",
"required_parameters": ["model", "prompt"]
},
{
"path": "/edits",
"method": "POST",
"required_parameters": ["model", "input", "instruction"]
}
]
}
### Description
I asked it the question:
"What are the method and required_parameters in the /completions endpoint?"
Output: *(screenshot not preserved)*
### System Info
Node js
Langchain
Windows | JSON Loader is not working as expected. | https://api.github.com/repos/langchain-ai/langchain/issues/23166/comments | 0 | 2024-06-19T10:37:20Z | 2024-06-19T10:39:51Z | https://github.com/langchain-ai/langchain/issues/23166 | 2,362,004,538 | 23,166 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
# Check retrieval
query = "What are the EV / NTM and NTM rev growth for MongoDB, Cloudflare, and Datadog?"
docs = retriever_multi_vector_img.invoke(query, limit=6)
# We get 4 docs
len(docs)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
`retriever_multi_vector_img.invoke(query)` no longer has a way to limit or increase the number of docs returned and subsequently passed to the LLM. This defaults to 4, and no information can be found on the issue.
You can see the incorrect use in this cookbook: https://github.com/langchain-ai/langchain/blob/master/cookbook/Multi_modal_RAG.ipynb
```
# Check retrieval
query = "What are the EV / NTM and NTM rev growth for MongoDB, Cloudflare, and Datadog?"
docs = retriever_multi_vector_img.invoke(query, limit=6)
# We get 4 docs
len(docs)
```
Where 6 was the limit and 4 is returned. How can we force more docs to be returned?
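For illustration, a pure-Python sketch of the suspected mechanism (function name and default are assumptions, not the actual LangChain code): the limit passed to `invoke` is ignored, while the underlying search defaults to k=4, so the knob would have to be set wherever the search itself is configured (e.g. `search_kwargs` at retriever construction):

```python
# Stand-in for the underlying vectorstore search with its default k.
def similarity_search(docs, query, k=4):
    return docs[:k]  # real code would rank by embedding similarity


corpus = [f"doc-{i}" for i in range(10)]
print(len(similarity_search(corpus, "q")))       # 4 -- the observed default
print(len(similarity_search(corpus, "q", k=6)))  # 6 -- only when k is set here
```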
### System Info
Using langchain_core 0.2.8 | Can't Specify Top-K retrieved Documents in Multimodal Retrievers using Invoke() | https://api.github.com/repos/langchain-ai/langchain/issues/23158/comments | 1 | 2024-06-19T03:37:39Z | 2024-07-01T07:01:34Z | https://github.com/langchain-ai/langchain/issues/23158 | 2,361,196,976 | 23,158 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_core.prompts import PromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_experimental.llms.ollama_functions import OllamaFunctions
from typing import Optional
import json
# Schema for structured response
class AuditorOpinion(BaseModel):
opinion: Optional[str] = Field(
None,
description="The auditor's opinion on the financial statements. Values are: 'Unqualified Opinion', "
"'Qualified Opinion', 'Adverse Opinion', 'Disclaimer of Opinion'."
)
def load_markdown_file(file_path):
with open(file_path, 'r') as file:
return file.read()
path = "data/auditor_opinion_1.md"
markdown_text = load_markdown_file(path)
# Prompt template
prompt = PromptTemplate.from_template(
"""
what is the auditor's opinion
Human: {question}
AI: """
)
# Chain
llm = OllamaFunctions(model="llama3", format="json", temperature=0)
structured_llm = llm.with_structured_output(AuditorOpinion)
chain = prompt | structured_llm
alex = chain.invoke(markdown_text)
response_dict = alex.dict()
# Serialize the dictionary to a JSON string with indentation for readability
readable_json = json.dumps(response_dict, indent=2, ensure_ascii=False)
# Print the readable JSON
print(readable_json)
```
### Error Message and Stack Trace (if applicable)
```
langchain_experimental/llms/ollama_functions.py", line 400, in _generate
raise ValueError(
ValueError: 'llama3' did not respond with valid JSON.
```
### Description
Trying to get structured output from markdown text using with_structured_output
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:14:38 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6020
> Python Version: 3.9.6 (default, Feb 3 2024, 15:58:27)
[Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.2.9
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.79
> langchain_experimental: 0.0.61
> langchain_google_genai: 1.0.5
> langchain_google_vertexai: 1.0.4
> langchain_mistralai: 0.1.8
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.0
> langchainhub: 0.1.16
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| /langchain_experimental/llms/ollama_functions.py", line 400, in _generate raise ValueError( ValueError: 'llama3' did not respond with valid JSON. | https://api.github.com/repos/langchain-ai/langchain/issues/23156/comments | 2 | 2024-06-19T02:34:35Z | 2024-06-24T06:09:34Z | https://github.com/langchain-ai/langchain/issues/23156 | 2,361,114,911 | 23,156 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/installation/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Was very useful
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/how_to/installation/> | https://api.github.com/repos/langchain-ai/langchain/issues/23140/comments | 0 | 2024-06-18T22:28:56Z | 2024-06-18T22:31:22Z | https://github.com/langchain-ai/langchain/issues/23140 | 2,360,849,759 | 23,140 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/memory/zep_memory/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Hi, since this example is in the v0.2 docs, it should be written using the current LangChain Expression Language (LCEL) syntax. Or is the OSS version of Zep not compatible with LCEL? It's kind of confusing. Thanks
### Idea or request for content:
_No response_ | Out of date with LangChain Expression Language. DOC: <Issue related to /v0.2/docs/integrations/memory/zep_memory/> | https://api.github.com/repos/langchain-ai/langchain/issues/23129/comments | 0 | 2024-06-18T18:11:25Z | 2024-06-18T18:13:54Z | https://github.com/langchain-ai/langchain/issues/23129 | 2,360,427,696 | 23,129 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I have inserted documents into a Cosmos DB with the NoSQL API; the insertion works well. The documents contain metadata (one of the fields is `claim_id`). I want to run a search over only a subset of documents by filtering on `claim_id`.
Here is the code, but it doesn't seem to work. It always returns results without taking the filter into account, and `k` also keeps its default value of 4.
```python
retriever = vector_search.as_retriever(
    search_type='similarity',
    search_kwargs={
        'k': 3,
        'filter': {"claim_id": 1}
    }
)

from langchain.chains import RetrievalQA

qa_stuff = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
    verbose=True,
    return_source_documents=True,
)

query = "what is prompt engineering?"
response = qa_stuff.invoke(query)
print(response)
```
### Error Message and Stack Trace (if applicable)
no error, but unexpected behavior
### Description
I want to query on documents that have only claim_id=1 as metadata.
The returned result shows that the filtering does not work, it seems ignored
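While debugging, one hedged workaround is to over-fetch and post-filter the retrieved documents client-side on the `claim_id` metadata. The sketch below uses plain dicts as stand-ins for LangChain `Document` objects (which expose a `.metadata` dict) so it runs anywhere; it is not the Cosmos DB pre-filter API itself:

```python
# Plain dicts stand in for LangChain Document objects so this snippet is
# self-contained.
docs = [
    {"page_content": "prompt engineering intro", "metadata": {"claim_id": 1}},
    {"page_content": "unrelated claim", "metadata": {"claim_id": 2}},
    {"page_content": "more on prompts", "metadata": {"claim_id": 1}},
]

def post_filter(docs, claim_id, k=3):
    """Keep only documents whose metadata matches, then truncate to k."""
    matching = [d for d in docs if d["metadata"].get("claim_id") == claim_id]
    return matching[:k]

print([d["page_content"] for d in post_filter(docs, claim_id=1)])
# ['prompt engineering intro', 'more on prompts']
```

If the store-side filter really is being ignored, this at least confirms whether the `claim_id` metadata is being stored and returned as expected.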
### System Info
ai21==2.6.0
ai21-tokenizer==0.10.0
aiohttp==3.9.5
aiosignal==1.3.1
annotated-types==0.7.0
anyio==4.4.0
asttokens==2.4.1
attrs==23.2.0
azure-core==1.30.2
azure-cosmos==4.7.0
certifi==2024.6.2
charset-normalizer==3.3.2
colorama==0.4.6
comm==0.2.2
dataclasses-json==0.6.7
debugpy==1.8.1
decorator==5.1.1
distro==1.9.0
executing==2.0.1
filelock==3.15.1
frozenlist==1.4.1
fsspec==2024.6.0
greenlet==3.0.3
h11==0.14.0
httpcore==1.0.5
httpx==0.27.0
huggingface-hub==0.23.4
idna==3.7
ipykernel==6.29.4
ipython==8.25.0
jedi==0.19.1
jsonpatch==1.33
jsonpointer==3.0.0
jupyter_client==8.6.2
jupyter_core==5.7.2
langchain==0.2.5
langchain-community==0.2.5
langchain-core==0.2.7
langchain-openai==0.1.8
langchain-text-splitters==0.2.1
langsmith==0.1.77
marshmallow==3.21.3
matplotlib-inline==0.1.7
multidict==6.0.5
mypy-extensions==1.0.0
nest-asyncio==1.6.0
numpy==1.26.4
openai==1.34.0
orjson==3.10.5
packaging==24.1
parso==0.8.4
platformdirs==4.2.2
prompt_toolkit==3.0.47
psutil==5.9.8
pure-eval==0.2.2
pydantic==2.7.4
pydantic_core==2.18.4
Pygments==2.18.0
python-dateutil==2.9.0.post0
python-dotenv==1.0.1
pywin32==306
PyYAML==6.0.1
pyzmq==26.0.3
regex==2024.5.15
requests==2.32.3
sentencepiece==0.2.0
six==1.16.0
sniffio==1.3.1
SQLAlchemy==2.0.30
stack-data==0.6.3
tenacity==8.4.1
tiktoken==0.7.0
tokenizers==0.19.1
tornado==6.4.1
tqdm==4.66.4
traitlets==5.14.3
typing-inspect==0.9.0
typing_extensions==4.12.2
urllib3==2.2.2
wcwidth==0.2.13
yarl==1.9.4
| Cannot filter with metadata with azure_cosmos_db_no_sql | https://api.github.com/repos/langchain-ai/langchain/issues/23089/comments | 1 | 2024-06-18T16:01:06Z | 2024-06-20T08:52:36Z | https://github.com/langchain-ai/langchain/issues/23089 | 2,360,210,529 | 23,089 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.vectorstores.weaviate import Weaviate
vectorstore = Weaviate(
client=client,
index_name="coll_summary",
text_key="summary"
)
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[19], [line 1](vscode-notebook-cell:?execution_count=19&line=1)
----> [1](vscode-notebook-cell:?execution_count=19&line=1) vectorstore = Weaviate(
[2](vscode-notebook-cell:?execution_count=19&line=2) client=client,
[3](vscode-notebook-cell:?execution_count=19&line=3) index_name="coll_summary",
[4](vscode-notebook-cell:?execution_count=19&line=4) text_key="summary"
[5](vscode-notebook-cell:?execution_count=19&line=5) )
File ~/weaviate.py:105, in Weaviate.__init__(self, client, index_name, text_key, embedding, attributes, relevance_score_fn, by_text)
[100](https://file+.vscode-resource.vscode-cdn.net//weaviate.py:100) raise ImportError(
[101](https://file+.vscode-resource.vscode-cdn.net/weaviate.py:101) "Could not import weaviate python package. "
[102](https://file+.vscode-resource.vscode-cdn.net/weaviate.py:102) "Please install it with `pip install weaviate-client`."
[103](https://file+.vscode-resource.vscode-cdn.net/weaviate.py:103) )
[104](https://file+.vscode-resource.vscode-cdn.net/weaviate.py:104) if not isinstance(client, weaviate.Client):
--> [105](https://file+.vscode-resource.vscode-cdn.net/weaviate.py:105) raise ValueError(
[106](https://file+.vscode-resource.vscode-cdn.net/weaviate.py:106) f"client should be an instance of weaviate.Client, got {type(client)}"
[107](https://file+.vscode-resource.vscode-cdn.net/weaviate.py:107) )
[108](https://file+.vscode-resource.vscode-cdn.net/weaviate.py:108) self._client = client
[109](https://file+.vscode-resource.vscode-cdn.net/weaviate.py:109) self._index_name = index_name
ValueError: client should be an instance of weaviate.Client, got <class 'weaviate.client.WeaviateClient'>
```
### Description
It seems that a Weaviate client is now an instance of `weaviate.client.WeaviateClient`, not `weaviate.Client`. This means that the instantiation of the vector store fails.
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 20.6.0: Thu Jul 6 22:12:47 PDT 2023; root:xnu-7195.141.49.702.12~1/RELEASE_X86_64
> Python Version: 3.11.3 (main, May 24 2024, 22:45:35) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.2.8
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.79
> langchain_text_splitters: 0.2.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
```
weaviate client library: `4.6.1` | Typechecking for `weaviate.Client` no longer up to date with class name in `weaviate-client`? | https://api.github.com/repos/langchain-ai/langchain/issues/23088/comments | 1 | 2024-06-18T15:48:25Z | 2024-08-05T02:46:51Z | https://github.com/langchain-ai/langchain/issues/23088 | 2,360,183,842 | 23,088 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/time_weighted_vectorstore/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
It's not noted in the documentation that `TimeWeightedVectorStoreRetriever` works only with the FAISS vector DB (if that is indeed the case?).
### Idea or request for content:
_No response_ | DOC: TimeWeightedVectorStoreRetriever works with FAISS VectorDB only | https://api.github.com/repos/langchain-ai/langchain/issues/23077/comments | 1 | 2024-06-18T12:25:05Z | 2024-07-09T18:47:21Z | https://github.com/langchain-ai/langchain/issues/23077 | 2,359,738,996 | 23,077 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import pandas as pd
from langchain.agents import create_react_agent, AgentExecutor
from langchain.llms import VertexAI
from langchain_experimental.tools.python.tool import PythonAstREPLTool
from langchain.prompts import PromptTemplate
from langchain_google_vertexai import VertexAI
# --- Create a Sample DataFrame ---
df = pd.DataFrame({"Age": [25, 30, 35, 40], "Value": [10, 20, 30, 40]})
# --- Initialize Vertex AI ---
llm = VertexAI(model_name="gemini-pro", temperature=0)
# --- Tool ---
python_tool = PythonAstREPLTool(locals={"df": df})
# --- Chain-of-Thought Prompt Template ---
prompt_template = """Tu es un assistant expert en Pandas.
Tu dois répondre aux questions en utilisant **uniquement** le format suivant pour **chaque étape** de ton raisonnement :
Thought: [ta réflexion ici]
Action: [l'outil que tu veux utiliser]
Action Input: [le code à exécuter par l'outil]
Observation: [le résultat de l'exécution du code]
Outils : {tool_names} {tools}
**Ces mots-clés ne doivent jamais être traduits ni transformés :**
- Action:
- Thought:
- Action Input:
- Observation:
Voici les colonnes disponibles dans le DataFrame : {df.columns}
Question: {input}
Thought: Pour répondre à cette question, je dois d'abord trouver le nom de la colonne qui contient les âges.
Action: python_repl_ast
Action Input: print(df.columns)
Observation:
Thought: Maintenant que j'ai la liste des colonnes, je peux utiliser la colonne 'Age' et la fonction `mean()` de Pandas pour calculer la moyenne des âges.
Action: python_repl_ast
Action Input: print(df['Age'].mean())
Observation:
{agent_scratchpad} """
prompt = PromptTemplate(
input_variables=["input", "agent_scratchpad", "df.columns"], template=prompt_template
)
# --- Create ReAct agent ---
react_agent = create_react_agent(
    llm=llm, tools=[python_tool], prompt=prompt, stop_sequence=False
)

# --- Agent Executor ---
agent_executor = AgentExecutor(
    agent=react_agent,
    tools=[python_tool],
    verbose=True,
    handle_parsing_errors=True,
    max_iterations=5,
)

# --- Main Execution Loop ---
test_questions = ["Calcule la moyenne des âges"]

for question in test_questions:
    print(f"Question: {question}")
    try:
        response = agent_executor.invoke(
            {"input": question, "df": df, "df.columns": df.columns}
        )
        print(f"Answer: {response['output']}")
    except Exception as e:
        print(f"An error occurred: {e}")
```
### Error Message and Stack Trace (if applicable)
** is not a valid tool, try one of [python_repl_ast].
### Description
I am encountering a persistent issue where the React agent fails to recognize and utilize the "python_repl_ast" tool correctly, despite it being defined in the list of tools.
Steps to Reproduce:
1. Define a Pandas DataFrame.
2. Initialize `VertexAI` from `langchain_google_vertexai`.
3. Create the `python_repl_ast` tool using `PythonAstREPLTool`, passing the DataFrame.
4. Define a prompt that instructs the agent to use `python_repl_ast` to perform a calculation on the DataFrame (e.g., calculate the mean of a column).
5. Create the React agent using `create_react_agent`, passing the tool.
6. Run the agent with a question related to the DataFrame.
Expected Behavior:
The agent should correctly interpret the "Action" and "Action Input" instructions in the prompt, execute the Python code using python_repl_ast and return the result in the "Observation" section.
Actual Behavior:
The agent repeatedly returns the error message "** python_repl_ast
** is not a valid tool, try one of [python_repl_ast]."
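The error message suggests the model is emitting the tool name wrapped in markdown emphasis markers, so the exact string match against the registered tool names fails. Here is a simplified illustration of that match (the real agent's output parser is more involved), plus the kind of cleanup that would make it succeed:

```python
registered_tools = {"python_repl_ast"}

raw_action = "** python_repl_ast\n**"  # what the model appears to emit
print(raw_action in registered_tools)  # False -> "is not a valid tool"

def clean_tool_name(name: str) -> str:
    # Strip whitespace and stray markdown emphasis markers around the name.
    return name.strip().strip("*").strip()

print(clean_tool_name(raw_action) in registered_tools)  # True
```

This points at two possible fixes: tighten the prompt so the model stops decorating keywords, or normalize the parsed action name before the tool lookup.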
### System Info
Windows, vs code, python 3.10, langchain 0.2.2 | React Agent Fails to Recognize "python_repl_ast" Tool | https://api.github.com/repos/langchain-ai/langchain/issues/23076/comments | 0 | 2024-06-18T12:12:31Z | 2024-06-18T12:15:09Z | https://github.com/langchain-ai/langchain/issues/23076 | 2,359,714,784 | 23,076 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from azure.search.documents.indexes.models import (
SemanticSearch,
SemanticConfiguration,
SemanticPrioritizedFields,
SemanticField,
ScoringProfile,
SearchableField,
SearchField,
SearchFieldDataType,
SimpleField,
TextWeights,
)
index_fields = [
SimpleField(
name="id",
type=SearchFieldDataType.String,
key=True,
filterable=True,
),
SearchableField(
name="content",
type=SearchFieldDataType.String,
searchable=True,
),
SearchField(
name="content_vector",
type=SearchFieldDataType.Collection(SearchFieldDataType.Single),
searchable=True,
vector_search_dimensions=len(embeddings.embed_query("Text")),
vector_search_configuration="default",
vector_search_profile_name = "my-vector-profile"
),
SearchableField(
name="metadata",
type=SearchFieldDataType.String,
searchable=True,
)]
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "d:\kf-ds-genai-python\kf_skybot_env\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 600, in _run_script
exec(code, module.__dict__)
File "D:\kf-ds-genai-python\skybot\pages\Skybot_Home.py", line 42, in <module>
init_index()
File "D:\kf-ds-genai-python\skybot\vectorstore.py", line 87, in init_index
vector_store: AzureSearch = AzureSearch(
File "d:\kf-ds-genai-python\kf_skybot_env\lib\site-packages\langchain_community\vectorstores\azuresearch.py", line 310, in __init__
self.client = _get_search_client(
File "d:\kf-ds-genai-python\kf_skybot_env\lib\site-packages\langchain_community\vectorstores\azuresearch.py", line 220, in _get_search_client
index_client.create_index(index)
File "d:\kf-ds-genai-python\kf_skybot_env\lib\site-packages\azure\core\tracing\decorator.py", line 94, in wrapper_use_tracer
return func(*args, **kwargs)
File "d:\kf-ds-genai-python\kf_skybot_env\lib\site-packages\azure\search\documents\indexes\_search_index_client.py", line 219, in create_index
result = self._client.indexes.create(patched_index, **kwargs)
return func(*args, **kwargs)
File "d:\kf-ds-genai-python\kf_skybot_env\lib\site-packages\azure\search\documents\indexes\_generated\operations\_indexes_operations.py", line 402, in create
raise HttpResponseError(response=response, model=error)
azure.core.exceptions.HttpResponseError: (InvalidRequestParameter) The request is invalid. Details: definition : The vector field 'content_vector' must have the property 'vectorSearchProfile' set.
Code: InvalidRequestParameter
Message: The request is invalid. Details: definition : The vector field 'content_vector' must have the property 'vectorSearchProfile' set.
Exception Details: (InvalidField) The vector field 'content_vector' must have the property 'vectorSearchProfile' set. Parameters: definition
Code: InvalidField
Message: The vector field 'content_vector' must have the property 'vectorSearchProfile' set. Parameters: definition
```
### Description
I'm using the latest version of langchain==0.2.5 and azure-search-documents==11.4.0. But when I try to create an Azure Search Index by defining the index fields, I get the error " The vector field 'content_vector' must have the property 'vectorSearchProfile' set.". This error did not occur in the older versions of langchain and azure-search-documents. But I need to use the latest versions of these for certain features, and I'm not able to get around this issue.
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19041
> Python Version: 3.9.2 (tags/v3.9.2:1a79785, Feb 19 2021, 13:44:55) [MSC v.1928 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.7
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.77
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
@hwchase17 | The vector field 'content_vector' must have the property 'vectorSearchProfile' set. | https://api.github.com/repos/langchain-ai/langchain/issues/23070/comments | 0 | 2024-06-18T07:33:33Z | 2024-06-18T07:36:55Z | https://github.com/langchain-ai/langchain/issues/23070 | 2,359,155,891 | 23,070 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/rag/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
error say the data which llm receives,its datatype dont have a attribute:shape;
```
from langchain.chains import create_retrieval_chain
from langchain.chains.combine_documents import create_stuff_documents_chain
from langchain_core.prompts import ChatPromptTemplate
system_prompt = (
"You are an assistant for question-answering tasks. "
"Use the following pieces of retrieved context to answer "
"the question. If you don't know the answer, say that you "
"don't know. Use three sentences maximum and keep the "
"answer concise."
"\n\n"
"{context}"
)
prompt = ChatPromptTemplate.from_messages(
[
("system", system_prompt),
("human", "{input}"),
]
)
question_answer_chain = create_stuff_documents_chain(llm, prompt)
rag_chain = create_retrieval_chain(retriever, question_answer_chain)
response = rag_chain.invoke({"input": "What is Task Decomposition?"})
print(response["answer"])
```
The traceback looks like this:
```
File "/home/desir/PycharmProjects/pdf_parse/rag/create_stuff_chain.py", line 28, in <module>
response = rag_chain.invoke({"input": "文章主旨"})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4573, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2504, in invoke
input = step.invoke(input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/passthrough.py", line 469, in invoke
return self._call_with_config(self._invoke, input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1598, in _call_with_config
context.run(
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 380, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/passthrough.py", line 456, in _invoke
**self.mapper.invoke(
^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3149, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3149, in <dictcomp>
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/concurrent/futures/_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4573, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2504, in invoke
input = step.invoke(input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3976, in invoke
return self._call_with_config(
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1598, in _call_with_config
context.run(
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 380, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3844, in _invoke
output = call_func_with_variable_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 380, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/transformers/models/mistral/modeling_mistral.py", line 1139, in forward
outputs = self.model(
^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/transformers/models/mistral/modeling_mistral.py", line 937, in forward
batch_size, seq_length = input_ids.shape
^^^^^^^^^^^^^^^
AttributeError: 'ChatPromptValue' object has no attribute 'shape'
```
My model is loaded from the local disk:
```
import os
import time
import gc
import torch
print(torch.version.cuda)
gc.collect()
torch.cuda.empty_cache()
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
os.environ['HTTP_PROXY'] = 'http://127.0.0.1:7890'
os.environ['HTTPS_PROXY'] = 'http://127.0.0.1:7890'
cache_dir = os.path.expanduser("~/.mistral")
cache_mistral_tokenizer = AutoTokenizer.from_pretrained(pretrained_model_name_or_path=cache_dir)
cache_mistral_model = AutoModelForCausalLM.from_pretrained(cache_dir)
```
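The traceback shows the prompt value from the chain being handed straight to a raw `transformers` model, which expects token-ID tensors (objects with a `.shape`). Below is a minimal stand-in illustrating the mismatch and the kind of adapter step that renders the prompt to text first — the class and function names here are made-up stand-ins, not the real LangChain/transformers types:

```python
from types import SimpleNamespace

class FakePromptValue:
    """Stand-in for LangChain's ChatPromptValue: renders to text, has no .shape."""
    def __init__(self, text):
        self.text = text

    def to_string(self):
        return self.text

def fake_raw_model(input_ids):
    # A raw transformers forward pass starts by unpacking tensor dimensions.
    batch_size, seq_length = input_ids.shape  # fails on a prompt value
    return (batch_size, seq_length)

def toy_tokenize(text):
    # Toy tokenizer: just report a (batch, sequence-length) shape.
    return SimpleNamespace(shape=(1, len(text.split())))

pv = FakePromptValue("What is Task Decomposition?")
try:
    fake_raw_model(pv)  # this is effectively what the chain does today
except AttributeError as err:
    print(err)  # 'FakePromptValue' object has no attribute 'shape'

# Adapter: render the prompt to text, tokenize, then call the model.
print(fake_raw_model(toy_tokenize(pv.to_string())))  # (1, 4)
```

In the actual stack this corresponds to wrapping the local model in a LangChain-compatible wrapper (e.g. `HuggingFacePipeline`) rather than piping a prompt template straight into a bare `AutoModelForCausalLM`.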
### Idea or request for content:
How can I modify the data before passing it to the LLM?
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.prompts import PromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_experimental.llms.ollama_functions import OllamaFunctions

# Schema for structured response
class Person(BaseModel):
    name: str = Field(description="The person's name", required=True)
    height: float = Field(description="The person's height", required=True)
    hair_color: str = Field(description="The person's hair color")

# Prompt template
prompt = PromptTemplate.from_template(
    """Alex is 5 feet tall.
Claudia is 1 feet taller than Alex and jumps higher than him.
Claudia is a brunette and Alex is blonde.
Human: {question}
AI: """
)

# Chain
llm = OllamaFunctions(model="phi3", format="json", temperature=0)
structured_llm = llm.with_structured_output(Person)
chain = prompt | structured_llm

alex = chain.invoke("Describe Alex")
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Trying to extract structured output
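For context on what the failing step expects: `with_structured_output` needs the model's reply to carry a parseable payload matching the schema, and the `tool_calls` error means the reply carried none. A hedged stdlib stand-in for that validation step (a dataclass instead of the pydantic model, and a made-up reply string):

```python
import json
from dataclasses import dataclass

# Stand-in for the pydantic Person schema above, stdlib only.
@dataclass
class Person:
    name: str
    height: float
    hair_color: str = ""

def parse_structured(reply: str) -> Person:
    """Validate a (made-up) model reply against the schema."""
    data = json.loads(reply)
    return Person(
        name=data["name"],
        height=float(data["height"]),
        hair_color=data.get("hair_color", ""),
    )

reply = '{"name": "Alex", "height": 5.0, "hair_color": "blonde"}'
print(parse_structured(reply))  # Person(name='Alex', height=5.0, hair_color='blonde')
```

Running this kind of check on the raw model output is a quick way to tell whether the model ever produced a usable payload before blaming the wrapper.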
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:14:38 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6020
> Python Version: 3.9.6 (default, Feb 3 2024, 15:58:27)
[Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.2.8
> langchain: 0.2.5
> langchain_community: 0.2.5
> langsmith: 0.1.79
> langchain_experimental: 0.0.61
> langchain_google_genai: 1.0.5
> langchain_google_vertexai: 1.0.4
> langchain_mistralai: 0.1.8
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.0
> langchainhub: 0.1.16
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| ValueError: `tool_calls` missing from AIMessage: {message} | https://api.github.com/repos/langchain-ai/langchain/issues/23065/comments | 3 | 2024-06-18T05:20:22Z | 2024-08-06T15:06:10Z | https://github.com/langchain-ai/langchain/issues/23065 | 2,358,931,924 | 23,065 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
chain = LLMChain(
    llm=self.bedrock.llm,
    prompt=self.prompt_template,
)
chain_result = chain.predict(statement=text).strip()
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm facing an issue similar to #3512 .
Using Langchain in a Flask App, hosted in an Azure Web App. Calling Anthropic Claude3 Haiku model in AWS Bedrock.
The first LangChain request takes about 2 minutes to return; the following ones return quickly. After about 7 idle minutes, the first request takes too long again.
I can't reproduce this issue locally. It only happens in the Azure environment.
When testing with the boto3 AWS Python SDK, the requests return fast every time, with no issues.
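Until the root cause is found, a common mitigation for idle cold starts is a lightweight keep-alive that pings the backend on a timer. A hedged stdlib sketch — the `ping` callable is a placeholder for a cheap model call, not a real Bedrock request:

```python
import threading
import time

def start_keepalive(ping, interval_seconds=300.0):
    """Call ping() every interval_seconds until the returned event is set."""
    stop = threading.Event()

    def loop():
        while not stop.wait(interval_seconds):
            try:
                ping()
            except Exception:
                pass  # a failed ping must not kill the keep-alive thread

    threading.Thread(target=loop, daemon=True).start()
    return stop

# Demo with a trivial ping and a short interval; in the app, ping would be a
# minimal request issued well inside the ~7-minute idle window.
calls = []
stop = start_keepalive(lambda: calls.append(1), interval_seconds=0.01)
time.sleep(0.05)
stop.set()
print(len(calls) > 0)  # True
```

This masks rather than fixes the latency, but it can confirm whether the slowdown is idle-connection related.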
### System Info
langchain==0.2.3
linux slim-bookworm
python:3.12.3
container image: python:3.12.3-slim-bookworm
| Request Timeout / Taking too long | https://api.github.com/repos/langchain-ai/langchain/issues/23060/comments | 2 | 2024-06-18T00:00:54Z | 2024-06-18T09:27:15Z | https://github.com/langchain-ai/langchain/issues/23060 | 2,358,523,743 | 23,060 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/vectorstores/azure_cosmos_db_no_sql/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The current code does not work; it fails on document insertion with the following error:
TypeError: AzureCosmosDBNoSqlVectorSearch._from_kwargs() missing 1 required keyword-only argument: 'cosmos_database_properties'
### Idea or request for content:
_No response_ | DOC: Azure Cosmos DB No SQL | https://api.github.com/repos/langchain-ai/langchain/issues/23018/comments | 2 | 2024-06-17T20:48:31Z | 2024-06-22T17:26:58Z | https://github.com/langchain-ai/langchain/issues/23018 | 2,358,230,408 | 23,018 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
_No response_ | Add image token counting to ChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/23000/comments | 3 | 2024-06-17T20:29:37Z | 2024-06-19T17:41:48Z | https://github.com/langchain-ai/langchain/issues/23000 | 2,358,200,622 | 23,000 |
[
"langchain-ai",
"langchain"
] | ### URL
https://github.com/langchain-ai/langchain/blob/c6b7db6587c5397e320b84cbd7cd25c7c4b743e5/docs/docs/how_to/toolkits.mdx#L4
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The link "Integration" points to [a wrong URL](https://github.com/langchain-ai/langchain/blob/c6b7db6587c5397e320b84cbd7cd25c7c4b743e5/docs/integrations/toolkits).
[This is](https://github.com/langchain-ai/langchain/tree/c6b7db6587c5397e320b84cbd7cd25c7c4b743e5/docs/docs/integrations/toolkits) the correct URL (`/docs/docs` instead of `/docs/`).
### Idea or request for content:
_No response_ | DOC: wrong link URL | https://api.github.com/repos/langchain-ai/langchain/issues/22992/comments | 1 | 2024-06-17T18:31:46Z | 2024-06-18T06:41:38Z | https://github.com/langchain-ai/langchain/issues/22992 | 2,357,979,748 | 22,992 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/tool_calling/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The following example should not be included on this page. The `llm_with_tools` instantiation examples use different LLMs that are not compatible with `PydanticToolsParser`, which creates confusion and wastes time:

```python
from langchain_core.output_parsers.openai_tools import PydanticToolsParser

chain = llm_with_tools | PydanticToolsParser(tools=[Multiply, Add])
chain.invoke(query)
```
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/how_to/tool_calling/> | https://api.github.com/repos/langchain-ai/langchain/issues/22989/comments | 3 | 2024-06-17T16:50:08Z | 2024-06-22T15:54:26Z | https://github.com/langchain-ai/langchain/issues/22989 | 2,357,794,583 | 22,989 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.embeddings import OCIGenAIEmbeddings
m = OCIGenAIEmbeddings(
model_id="MY EMBEDDING MODEL",
compartment_id="MY COMPARTMENT ID"
)
response = m.embed_documents([str(n) for n in range(0, 100)])
```
### Error Message and Stack Trace (if applicable)
(stack trace is sanitized to remove identifying information)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "oci_generative_ai.py", line 192, in embed_documents
response = self.client.embed_text(invocation_obj)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "generative_ai_inference_client.py", line 298, in embed_text
return retry_strategy.make_retrying_call(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "retry.py", line 308, in make_retrying_call
response = func_ref(*func_args, **func_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "base_client.py", line 535, in call_api
response = self.request(request, allow_control_chars, operation_name, api_reference_link)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "circuitbreaker.py", line 159, in wrapper
return call(function, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "circuitbreaker.py", line 170, in call
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "base_client.py", line 726, in request
self.raise_service_error(request, response, service_code, message, operation_name, api_reference_link, target_service, request_endpoint, client_version, timestamp, deserialized_data)
File "base_client.py", line 891, in raise_service_error
raise exceptions.ServiceError(
oci.exceptions.ServiceError: {'target_service': 'generative_ai_inference', 'status': 400, 'code': '400', 'opc-request-id': 'XYZ', 'message': 'Inputs must be provided, support inputs array size less than 96.', 'operation_name': 'embed_text', 'timestamp': 'XYZ', 'client_version': 'Oracle-PythonSDK/2.128.2', 'request_endpoint': 'XYZ', 'logging_tips': 'To get more info on the failing request, refer to https://docs.oracle.com/en-us/iaas/tools/python/latest/logging.html for ways to log the request/response details.', 'troubleshooting_tips': "See https://docs.oracle.com/iaas/Content/API/References/apierrors.htm#apierrors_400__400_400 for more information about resolving this error. Also see https://docs.oracle.com/iaas/api/#/en/generative-ai-inference/20231130/EmbedTextResult/EmbedText for details on this operation's requirements. If you are unable to resolve this generative_ai_inference issue, please contact Oracle support and provide them this full error message."}
### Description
The OCI embeddings service has a batch size limit of 96. Requests with more than 96 inputs receive a service error from the embedding service. This can be fixed by adding a `batch_size` parameter to the embedding class and updating the `embed_documents` function like so:
```python
def embed_documents(self, texts: List[str]) -> List[List[float]]:
    """Call out to OCIGenAI's embedding endpoint.

    Args:
        texts: The list of texts to embed.

    Returns:
        List of embeddings, one for each text.
    """
    from oci.generative_ai_inference import models

    if self.model_id.startswith(CUSTOM_ENDPOINT_PREFIX):
        serving_mode = models.DedicatedServingMode(endpoint_id=self.model_id)
    else:
        serving_mode = models.OnDemandServingMode(model_id=self.model_id)

    embeddings = []

    def split_texts():
        for i in range(0, len(texts), self.batch_size):
            yield texts[i : i + self.batch_size]

    for chunk in split_texts():
        invocation_obj = models.EmbedTextDetails(
            serving_mode=serving_mode,
            compartment_id=self.compartment_id,
            truncate=self.truncate,
            inputs=chunk,
        )
        response = self.client.embed_text(invocation_obj)
        embeddings.extend(response.data.embeddings)

    return embeddings
```
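The batching arithmetic in the proposed fix can be checked in isolation. This is a standalone sketch (the helper name is illustrative and independent of the OCI SDK); with the default batch size of 96, the 100-input example from above splits into two requests:

```python
def split_into_batches(texts, batch_size=96):
    """Yield successive slices of at most batch_size items (illustrative helper)."""
    for i in range(0, len(texts), batch_size):
        yield texts[i : i + batch_size]

# 100 inputs -> one batch of 96 and one batch of 4, so no single
# request exceeds the service's 96-input limit.
batches = list(split_into_batches([str(n) for n in range(100)]))
print([len(b) for b in batches])  # → [96, 4]
```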
### System Info
Mac (x86)
```shell
% python --version
Python 3.12.2
% pip freeze |grep langchain
-e git+ssh://git@github.com/langchain-ai/langchain.git@c437b1aab734d5f15d4bdcd4a2d989808d244ad9#egg=langchain&subdirectory=libs/langchain
-e git+ssh://git@github.com/langchain-ai/langchain.git@c437b1aab734d5f15d4bdcd4a2d989808d244ad9#egg=langchain_community&subdirectory=libs/community
-e git+ssh://git@github.com/langchain-ai/langchain.git@c437b1aab734d5f15d4bdcd4a2d989808d244ad9#egg=langchain_core&subdirectory=libs/core
-e git+ssh://git@github.com/langchain-ai/langchain.git@c437b1aab734d5f15d4bdcd4a2d989808d244ad9#egg=langchain_experimental&subdirectory=libs/experimental
-e git+ssh://git@github.com/langchain-ai/langchain.git@c437b1aab734d5f15d4bdcd4a2d989808d244ad9#egg=langchain_openai&subdirectory=libs/partners/openai
-e git+ssh://git@github.com/langchain-ai/langchain.git@c437b1aab734d5f15d4bdcd4a2d989808d244ad9#egg=langchain_text_splitters&subdirectory=libs/text-splitters
``` | OCI Embeddings service should use batch size 96 by default. | https://api.github.com/repos/langchain-ai/langchain/issues/22985/comments | 0 | 2024-06-17T15:29:15Z | 2024-06-17T15:31:49Z | https://github.com/langchain-ai/langchain/issues/22985 | 2,357,634,791 | 22,985 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
pip install --upgrade --quiet pymilvus[model] langchain-milvus
### Error Message and Stack Trace (if applicable)

### Description
I cannot install `langchain-milvus`.
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.10.7 (tags/v3.10.7:6cc6b13, Sep 5 2022, 14:08:36) [MSC v.1933 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.5
> langchain: 0.2.3
> langchain_community: 0.2.4
> langsmith: 0.1.77
> langchain_cli: 0.0.23
> langchain_google_cloud_sql_mysql: 0.2.2
> langchain_google_vertexai: 1.0.5
> langchain_openai: 0.1.6
> langchain_text_splitters: 0.2.1
> langchainhub: 0.1.17
> langserve: 0.2.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph | langchain-milvus install error | https://api.github.com/repos/langchain-ai/langchain/issues/22976/comments | 1 | 2024-06-17T11:33:00Z | 2024-06-17T12:42:37Z | https://github.com/langchain-ai/langchain/issues/22976 | 2,357,114,648 | 22,976 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
import requests
import yaml
# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = "your-openai-api-key"
from langchain_community.agent_toolkits.openapi import planner
from langchain_openai.chat_models import ChatOpenAI
from langchain_community.agent_toolkits.openapi.spec import reduce_openapi_spec
from langchain.requests import RequestsWrapper
from requests.packages.urllib3.exceptions import InsecureRequestWarning
# Disable SSL warnings
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
import certifi
os.environ['SSL_CERT_FILE'] = certifi.where()
print(os.environ.get('NO_PROXY'))
with open("swagger.yaml") as f:
data = yaml.load(f, Loader=yaml.FullLoader)
swagger_api_spec = reduce_openapi_spec(data)
def construct_superset_aut_headers(url=None):
import requests
url = "https://your-superset-url/api/v1/security/login"
payload = {
"username": "your-username",
"password": "your-password",
"provider": "db",
"refresh": True
}
headers = {
"Content-Type": "application/json"
}
response = requests.post(url, json=payload, headers=headers, verify=False)
data = response.json()
return {"Authorization": f"Bearer {data['access_token']}"}
llm = ChatOpenAI(model='gpt-4o')
swagger_requests_wrapper = RequestsWrapper(headers=construct_superset_aut_headers())
superset_agent = planner.create_openapi_agent(swagger_api_spec, swagger_requests_wrapper, llm, allow_dangerous_requests=True, handle_parsing_errors=True)
superset_agent.run(
"Tell me the number and types of charts and dashboards available."
)
```
### Error Message and Stack Trace (if applicable)
Entering new AgentExecutor chain...
Action: api_planner
Action Input: I need to find the right API calls to get the number and types of charts and dashboards available.
Observation: 1. **Evaluate whether the user query can be solved by the API documented below:**
...
Observation: Use the `requests_get` tool to retrieve a list of charts. is not a valid tool, try one of [requests_get, requests_post].
Thought: To proceed with the plan, I will first retrieve a list of charts using the **GET /api/v1/chart/** endpoint and extract the necessary information.
...
Plan:
1. Retrieve a list of charts using the **GET /api/v1/chart/** endpoint.
2. Extract the count of charts and their IDs.
3. Retrieve a list of dashboards using the **GET /api/v1/dashboard/** endpoint.
4. Extract the count of dashboards and their IDs.
...
Action: Use the `requests_get` tool to retrieve a list of charts.
Action Input:
{
"url": "https://your-superset-url/api/v1/chart/",
"params": {},
"output_instructions": "Extract the count of charts and ids of the charts"
}
...
Traceback (most recent call last):
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/urllib3/connectionpool.py", line 467, in _make_request
self._validate_conn(conn)
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/urllib3/connectionpool.py", line 1099, in _validate_conn
conn.connect()
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/urllib3/connection.py", line 653, in connect
sock_and_verified = _ssl_wrap_socket_and_match_hostname(
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/urllib3/connection.py", line 806, in _ssl_wrap_socket_and_match_hostname
ssl_sock = ssl_wrap_socket(
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/urllib3/util/ssl_.py", line 465, in ssl_wrap_socket
ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls, server_hostname)
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/urllib3/util/ssl_.py", line 509, in _ssl_wrap_socket_impl
return ssl_context.wrap_socket(sock, server_hostname=server_hostname)
File "~/anaconda3/envs/superset/lib/python3.11/ssl.py", line 517, in wrap_socket
return self.sslsocket_class._create(
File "~/anaconda3/envs/superset/lib/python3.11/ssl.py", line 1104, in _create
self.do_handshake()
File "~/anaconda3/envs/superset/lib/python3.11/ssl.py", line 1382, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)
...
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/urllib3/connectionpool.py", line 793, in urlopen
response = self._make_request(
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/urllib3/connectionpool.py", line 491, in _make_request
raise new_e
urllib3.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)
...
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/requests/adapters.py", line 589, in send
resp = conn.urlopen(
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/urllib3/connectionpool.py", line 847, in urlopen
retries = retries.increment(
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/urllib3/util/retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='your-superset-url', port=443): Max retries exceeded with url: /api/v1/chart/ (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)')))
...
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "~/git/forPR/superset/openapi-agent.py", line 46, in <module>
superset_agent.run(
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 148, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain/chains/base.py", line 600, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 148, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain/chains/base.py", line 383, in __call__
return self.invoke(
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain/chains/base.py", line 166, in invoke
raise e
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain/agents/agent.py", line 1433, in _call
next_step_output = self._take_next_step(
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain/agents/agent.py", line 1139, in _take_next_step
[
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain/agents/agent.py", line 1139, in <listcomp>
[
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain/agents/agent.py", line 1224, in _iter_next_step
yield self._perform_agent_action(
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain/agents/agent.py", line 1246, in _perform_agent_action
observation = tool.run(
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain_core/tools.py", line 452, in run
raise e
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain_core/tools.py", line 413, in run
else context.run(self._run, *tool_args, **tool_kwargs)
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain_community/agent_toolkits/openapi/planner.py", line 88, in _run
str, self.requests_wrapper.get(data["url"], params=data_params)
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain_community/utilities/requests.py", line 154, in get
return self._get_resp_content(self.requests.get(url, **kwargs))
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/langchain_community/utilities/requests.py", line 31, in get
return requests.get(url, headers=self.headers, auth=self.auth, verify=self.verify, **kwargs)
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/requests/api.py", line 73, in get
return request("get", url, params=params, **kwargs)
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "~/anaconda3/envs/superset/lib/python3.11/site-packages/requests/adapters.py", line 620, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: HTTPSConnectionPool(host='your-superset-url', port=443): Max retries exceeded with url: /api/v1/chart/ (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1006)')))
### Description
I am building an agent that calls an API by reading the Swagger spec of an API server whose TLS certificate is broken.
With the code above, I run into the error message shown.
Of course, resolving the certificate issue would be the best solution,
but it would be even better if a temporary workaround were available through an option.
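As an illustration of the kind of escape hatch I mean: at the Python TLS level this already exists (standard-library `ssl` sketch below; exposing an equivalent `verify=False`-style option through `RequestsWrapper` is the feature being requested here, not existing behavior):

```python
import ssl

# Stdlib sketch of "disable verification": an SSLContext with hostname
# checking and certificate verification turned off. Only suitable as a
# temporary workaround for servers with broken certificates.
ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE

print(ctx.verify_mode == ssl.CERT_NONE)  # → True
```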
### System Info
langchain==0.2.4
langchain-community==0.2.4
langchain-core==0.2.6
langchain-openai==0.1.8
langchain-text-splitters==0.2.1 | RequestsWrapper initialization for API Endpoint where SSL authentication fails | https://api.github.com/repos/langchain-ai/langchain/issues/22975/comments | 0 | 2024-06-17T11:31:32Z | 2024-06-18T03:12:43Z | https://github.com/langchain-ai/langchain/issues/22975 | 2,357,111,081 | 22,975 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```py
import tenacity
```
### Error Message and Stack Trace (if applicable)
> ```
> > import tenacity
> ...
> lib/python3.11/site-packages/tenacity/__init__.py", line 653, in <module>
> from tenacity.asyncio import AsyncRetrying # noqa:E402,I100
> ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> ModuleNotFoundError: No module named 'tenacity.asyncio'
> ```
### Description
Tenacity 8.4.0 has a bug: importing it fails with `ModuleNotFoundError: No module named 'tenacity.asyncio'`.
https://github.com/langchain-ai/langchain/blob/892bd4c29be34c0cc095ed178be6d60c6858e2ec/libs/core/pyproject.toml#L15
https://github.com/jd/tenacity/issues/471
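Until this is fixed upstream, a common interim workaround (the exact bound here is my assumption based on this report) is to pin tenacity below the broken release in your requirements:

```
# requirements.txt (workaround; bound assumed from this report)
tenacity<8.4.0
```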
### System Info
- Python 3.11
- tenacity 8.4.0 | It will occurred error if a dependent library `tenacity` is upgraded to `8.4.0`. | https://api.github.com/repos/langchain-ai/langchain/issues/22972/comments | 35 | 2024-06-17T08:20:25Z | 2024-06-18T14:34:29Z | https://github.com/langchain-ai/langchain/issues/22972 | 2,356,709,439 | 22,972 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/rag/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
My code:

```python
from langchain_core.prompts import PromptTemplate

template = """Use the following pieces of context to answer the question at the end.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
Use three sentences maximum and keep the answer as concise as possible.
Always say "感谢提问!" at the end of the answer,总是用中文回答问题,可以使用英语描述专业词汇.

{context}

Question: {question}

Helpful Answer:"""
custom_rag_prompt = PromptTemplate.from_template(template)

rag_chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | custom_rag_prompt
    | cache_mistral_model
    | StrOutputParser()
)

while True:
    user_input = input("请输入问题或命令(输入 q 退出): ")
    if user_input.lower() == "q":
        break
    for chunk in rag_chain.stream(user_input):
        print(chunk, end="", flush=True)
```

The error says:

Traceback (most recent call last):
File "/home/desir/PycharmProjects/pdf_parse/rag/cohere.py", line 141, in <module>
for chunk in rag_chain.stream(user_input):
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2873, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2860, in transform
yield from self._transform_stream_with_config(
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1865, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2822, in _transform
for output in final_pipeline:
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/output_parsers/transform.py", line 50, in transform
yield from self._transform_stream_with_config(
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1829, in _transform_stream_with_config
final_input: Optional[Input] = next(input_for_tracing, None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4057, in transform
for output in self._transform_stream_with_config(
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1865, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 4025, in _transform
output = call_func_with_variable_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 380, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/transformers/models/mistral/modeling_mistral.py", line 1139, in forward
outputs = self.model(
^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1532, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1541, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/desir/soft/anaconda3/envs/pdf_parse/lib/python3.11/site-packages/transformers/models/mistral/modeling_mistral.py", line 937, in forward
batch_size, seq_length = input_ids.shape
^^^^^^^^^^^^^^^
**AttributeError: 'StringPromptValue' object has no attribute 'shape'**
### What happened, please?
### Idea or request for content:
i don't know | DOC: <Issue related to /v0.2/docs/tutorials/rag/> | https://api.github.com/repos/langchain-ai/langchain/issues/22971/comments | 3 | 2024-06-17T07:30:48Z | 2024-06-19T05:01:46Z | https://github.com/langchain-ai/langchain/issues/22971 | 2,356,605,589 | 22,971 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Using the standard TileDB code located in `tiledb.py`.
### Error Message and Stack Trace (if applicable)
There's no specific error to provide.
### Description
The TileDB source code doesn't have a `from_documents` method, despite the online instructions saying it does.
### System Info
windows 10 | Why doesn't the TileDB vector store implementation have a "from_documents" method when the instructions say it does... | https://api.github.com/repos/langchain-ai/langchain/issues/22964/comments | 3 | 2024-06-17T02:01:06Z | 2024-06-17T09:34:03Z | https://github.com/langchain-ai/langchain/issues/22964 | 2,356,180,425 | 22,964 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/sql_qa/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I am quite new to SQL agents; however, I think there should be a line of code where the new `retriever_tool` is added to the list of tools available to the agent. I can't see how these two blocks of code work together:
```
retriever_tool = create_retriever_tool(
retriever,
name="search_proper_nouns",
description=description,
)
```
AND
```
agent = create_react_agent(llm, tools, messages_modifier=system_message)
```
without an intermediate step like ```tools.add(retriever_tool)``` or something to that effect.
If I am wrong, please explain how the agent will know about `retriever_tool`.
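To make concrete what I mean by an intermediate step, here is a minimal sketch (the tool values are placeholder strings standing in for actual tool objects, and I'm assuming `tools` is a plain Python list):

```python
# Placeholder stand-ins for the actual tool objects from the tutorial.
tools = ["sql_db_query", "sql_db_schema"]
retriever_tool = "search_proper_nouns"

# The step that seems to be missing from the tutorial text:
tools.append(retriever_tool)

print(tools)  # → ['sql_db_query', 'sql_db_schema', 'search_proper_nouns']
```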
Arindam
### Idea or request for content:
_No response_ | DOC: There seems to be a code line missing from the given example to connect `retriever_tool` to the `tools` list <Issue related to /v0.2/docs/tutorials/sql_qa/> | https://api.github.com/repos/langchain-ai/langchain/issues/22963/comments | 1 | 2024-06-17T00:14:48Z | 2024-06-17T12:57:18Z | https://github.com/langchain-ai/langchain/issues/22963 | 2,356,069,720 | 22,963 |
[
"langchain-ai",
"langchain"
] | ### URL
https://api.python.langchain.com/en/latest/tools/langchain_community.tools.ddg_search.tool.DuckDuckGoSearchResults.html
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The documentation indicates that JSON should be returned by this tool (https://python.langchain.com/v0.2/docs/integrations/tools/ddg/), but instead it's a plain string that is not in JSON format (and also can't be parsed reliably through simple text search alone).
### Idea or request for content:
Adjust the tool to return either JSON or a Python object with the results (title, URL, etc.); the Python object is my personal preference.
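A sketch of the kind of structured result object I have in mind (the class and field names are my suggestion, not an existing LangChain API):

```python
from dataclasses import dataclass


@dataclass
class SearchResult:
    """Suggested shape for one DuckDuckGo result (hypothetical, not current API)."""
    title: str
    link: str
    snippet: str


# With objects like this, callers could access fields directly instead of
# text-searching a flattened string.
result = SearchResult(title="Example", link="https://example.com", snippet="...")
print(result.title)  # → Example
```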