| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### Feature request
I propose the addition of a new feature, a BinaryPyPdf loader, to the existing Langchain document loaders. This loader handles PDF files supplied in binary form (raw bytes), providing a more efficient way of processing PDF documents within the Langchain project.
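For illustration, a minimal sketch of what such a loader could look like (hypothetical — the class name and interface are placeholders, not the code in the upcoming PR):
```python
import io
from typing import List

from pypdf import PdfReader
from langchain.docstore.document import Document


class BinaryPyPdfLoader:
    """Load a PDF supplied as raw bytes, yielding one Document per page."""

    def __init__(self, data: bytes, source: str = "binary-pdf"):
        self.data = data
        self.source = source

    def load(self) -> List[Document]:
        reader = PdfReader(io.BytesIO(self.data))
        return [
            Document(
                page_content=page.extract_text() or "",
                metadata={"source": self.source, "page": i},
            )
            for i, page in enumerate(reader.pages)
        ]
```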
### Motivation
As a Langchain enthusiast, I noticed that the current document loaders lack a dedicated loader for handling PDF files in binary format. This often leads to inefficiencies and limitations when working with PDF documents. The addition of a BinaryPyPdf loader would address this gap and enhance the overall functionality and versatility of the Langchain document loaders.
### Your contribution
I have already developed a BinaryPyPdf loader using `pypdf` that is ready for integration into the Langchain project. I am prepared to submit a PR for this feature, following the guidelines outlined in the [`CONTRIBUTING.MD`](https://github.com/langchain-ai/langchain/blob/master/.github/CONTRIBUTING.md). I look forward to the opportunity to contribute to the project and enhance its capabilities. | Addition of BinaryPyPdf Loader for Langchain Document Loaders | https://api.github.com/repos/langchain-ai/langchain/issues/13916/comments | 1 | 2023-11-27T16:04:01Z | 2024-03-13T19:55:56Z | https://github.com/langchain-ai/langchain/issues/13916 | 2,012,607,901 | 13,916 |
[
"langchain-ai",
"langchain"
] | ### System Info
You cannot run `poetry install --with test` on a fresh clone:
```
╭─ username@comp ~/path/to/coding
╰─➤ cd langchain2
ls
╭─username@comp ~/path/to/langchain2 ‹master›
╰─➤ ls
CITATION.cff Makefile cookbook libs pyproject.toml
LICENSE README.md docker poetry.lock templates
MIGRATE.md SECURITY.md docs poetry.toml
╭─username@comp ~/path/to/langchain2 ‹master›
╰─➤ poetry install --with test
Creating virtualenv langchain-monorepo in /path/to/langchain2/.venv
Installing dependencies from lock file
Package operations: 165 installs, 1 update, 0 removals
• Downgrading pip (23.3.1 -> 23.2.1)
• Installing attrs (23.1.0)
• Installing rpds-py (0.10.3)
• Installing referencing (0.30.2)
• Installing six (1.16.0)
• Installing jsonschema-specifications (2023.7.1)
• Installing platformdirs (3.11.0)
• Installing python-dateutil (2.8.2)
• Installing traitlets (5.11.1)
• Installing types-python-dateutil (2.8.19.14)
• Installing arrow (1.3.0)
• Installing entrypoints (0.4)
• Installing fastjsonschema (2.18.1)
• Installing jsonschema (4.19.1)
• Installing jupyter-core (5.3.2)
• Installing nest-asyncio (1.5.8)
• Installing pycparser (2.21)
• Installing pyzmq (25.1.1)
• Installing tornado (6.3.3)
• Installing cffi (1.16.0)
• Installing fqdn (1.5.1)
• Installing idna (3.4)
• Installing isoduration (20.11.0)
• Installing jsonpointer (2.4)
• Installing jupyter-client (7.4.9)
• Installing markupsafe (2.1.3)
• Installing nbformat (5.9.2)
• Installing ptyprocess (0.7.0)
• Installing rfc3339-validator (0.1.4)
• Installing rfc3986-validator (0.1.1)
• Installing soupsieve (2.5)
• Installing uri-template (1.3.0)
• Installing webcolors (1.13)
• Installing webencodings (0.5.1)
• Installing argon2-cffi-bindings (21.2.0)
• Installing asttokens (2.4.0)
• Installing beautifulsoup4 (4.12.2)
• Installing bleach (6.0.0)
• Installing defusedxml (0.7.1)
• Installing executing (2.0.0)
• Installing jinja2 (3.1.2)
• Installing jupyterlab-pygments (0.2.2)
• Installing mistune (3.0.2)
• Installing nbclient (0.7.4)
• Installing packaging (23.2)
• Installing pandocfilters (1.5.0)
• Installing parso (0.8.3)
• Installing pure-eval (0.2.2)
• Installing pygments (2.16.1)
• Installing python-json-logger (2.0.7)
• Installing pyyaml (6.0.1)
• Installing sniffio (1.3.0)
• Installing terminado (0.17.1)
• Installing tinycss2 (1.2.1)
• Installing wcwidth (0.2.8)
• Installing anyio (3.7.1)
• Installing appnope (0.1.3)
• Installing argon2-cffi (23.1.0)
• Installing backcall (0.2.0)
• Installing certifi (2023.7.22)
• Installing charset-normalizer (3.3.0)
• Installing decorator (5.1.1)
• Installing jedi (0.19.1)
• Installing jupyter-events (0.7.0)
• Installing jupyter-server-terminals (0.4.4)
• Installing matplotlib-inline (0.1.6)
• Installing nbconvert (7.8.0)
• Installing overrides (7.4.0)
• Installing pexpect (4.8.0)
• Installing pickleshare (0.7.5)
• Installing prometheus-client (0.17.1)
• Installing prompt-toolkit (3.0.39)
• Installing send2trash (1.8.2)
• Installing stack-data (0.6.3)
• Installing urllib3 (2.0.6)
• Installing websocket-client (1.6.3)
• Installing babel (2.13.0)
• Installing comm (0.1.4)
• Installing debugpy (1.8.0)
• Installing ipython (8.12.3)
• Installing json5 (0.9.14)
• Installing jupyter-server (2.7.3)
• Installing psutil (5.9.5)
• Installing requests (2.31.0)
• Installing async-lru (2.0.4)
• Installing ipykernel (6.25.2)
• Installing jupyter-lsp (2.2.0)
• Installing jupyterlab-server (2.25.0)
• Installing notebook-shim (0.2.3)
• Installing fastcore (1.4.2)
• Installing ipython-genutils (0.2.0)
• Installing jupyterlab (4.0.6)
• Installing jupyterlab-widgets (3.0.9)
• Installing mdurl (0.1.2)
• Installing qtpy (2.4.0)
• Installing typing-extensions (4.8.0)
• Installing widgetsnbextension (4.0.9)
• Installing alabaster (0.7.13)
• Installing annotated-types (0.5.0)
• Installing docutils (0.17.1)
• Installing frozenlist (1.4.0)
• Installing ghapi (0.1.22)
• Installing imagesize (1.4.1)
• Installing ipywidgets (8.1.1)
• Installing jupyter-console (6.6.3)
• Installing markdown-it-py (2.2.0)
• Installing multidict (6.0.4)
• Installing mypy-extensions (1.0.0)
• Installing notebook (7.0.4)
• Installing pydantic-core (2.10.1)
• Installing qtconsole (5.4.4)
• Installing sphinxcontrib-applehelp (1.0.4)
• Installing snowballstemmer (2.2.0)
• Installing sphinxcontrib-devhelp (1.0.2)
• Installing sphinxcontrib-htmlhelp (2.0.1)
• Installing sphinxcontrib-jsmath (1.0.1)
• Installing sphinxcontrib-qthelp (1.0.3)
• Installing sphinxcontrib-serializinghtml (1.1.5)
• Installing zipp (3.17.0)
• Installing aiosignal (1.3.1)
• Installing async-timeout (4.0.3)
• Installing click (8.1.7)
• Installing fastrelease (0.1.17)
• Installing importlib-metadata (6.8.0)
• Installing jupyter (1.0.0)
• Installing marshmallow (3.20.1)
• Installing mdit-py-plugins (0.3.5)
• Installing pathspec (0.11.2)
• Installing pydantic (2.4.2)
• Installing sphinx (4.5.0)
• Installing sqlalchemy (2.0.21)
• Installing tabulate (0.9.0)
• Installing tokenize-rt (5.2.0)
• Installing typing-inspect (0.9.0)
• Installing yarl (1.9.2)
• Installing aiohttp (3.8.5): Failed
ChefBuildError
Backend subprocess exited when trying to invoke build_wheel
*********************
* Accelerated build *
*********************
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-13-arm64-cpython-312
creating build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web_ws.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/worker.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/multipart.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web_response.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/client_ws.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/test_utils.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/tracing.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web_exceptions.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web_middlewares.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/http_exceptions.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web_app.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/streams.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web_protocol.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/log.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/client.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web_urldispatcher.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web_request.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/http_websocket.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/client_proto.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/locks.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/__init__.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web_runner.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web_server.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/base_protocol.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/payload.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/client_reqrep.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/http.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web_log.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/resolver.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/formdata.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/payload_streamer.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web_routedef.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/connector.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/client_exceptions.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/typedefs.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/hdrs.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/web_fileresponse.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/http_writer.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/tcp_helpers.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/helpers.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/http_parser.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/cookiejar.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/pytest_plugin.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/abc.py -> build/lib.macosx-13-arm64-cpython-312/aiohttp
running egg_info
writing aiohttp.egg-info/PKG-INFO
writing dependency_links to aiohttp.egg-info/dependency_links.txt
writing requirements to aiohttp.egg-info/requires.txt
writing top-level names to aiohttp.egg-info/top_level.txt
reading manifest file 'aiohttp.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching 'aiohttp' anywhere in distribution
warning: no previously-included files matching '*.pyc' found anywhere in distribution
warning: no previously-included files matching '*.pyd' found anywhere in distribution
warning: no previously-included files matching '*.so' found anywhere in distribution
warning: no previously-included files matching '*.lib' found anywhere in distribution
warning: no previously-included files matching '*.dll' found anywhere in distribution
warning: no previously-included files matching '*.a' found anywhere in distribution
warning: no previously-included files matching '*.obj' found anywhere in distribution
warning: no previously-included files found matching 'aiohttp/*.html'
no previously-included directories found matching 'docs/_build'
adding license file 'LICENSE.txt'
writing manifest file 'aiohttp.egg-info/SOURCES.txt'
copying aiohttp/_cparser.pxd -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/_find_header.pxd -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/_headers.pxi -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/_helpers.pyi -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/_helpers.pyx -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/_http_parser.pyx -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/_http_writer.pyx -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/_websocket.pyx -> build/lib.macosx-13-arm64-cpython-312/aiohttp
copying aiohttp/py.typed -> build/lib.macosx-13-arm64-cpython-312/aiohttp
creating build/lib.macosx-13-arm64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_cparser.pxd.hash -> build/lib.macosx-13-arm64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_find_header.pxd.hash -> build/lib.macosx-13-arm64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_helpers.pyi.hash -> build/lib.macosx-13-arm64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_helpers.pyx.hash -> build/lib.macosx-13-arm64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_http_parser.pyx.hash -> build/lib.macosx-13-arm64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_http_writer.pyx.hash -> build/lib.macosx-13-arm64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/_websocket.pyx.hash -> build/lib.macosx-13-arm64-cpython-312/aiohttp/.hash
copying aiohttp/.hash/hdrs.py.hash -> build/lib.macosx-13-arm64-cpython-312/aiohttp/.hash
running build_ext
building 'aiohttp._websocket' extension
creating build/temp.macosx-13-arm64-cpython-312
creating build/temp.macosx-13-arm64-cpython-312/aiohttp
clang -fno-strict-overflow -Wsign-compare -Wunreachable-code -fno-common -dynamic -DNDEBUG -g -O3 -Wall -isysroot /Library/Developer/CommandLineTools/SDKs/MacOSX13.sdk -I/private/var/folders/z4/nphh3sds4zsckwzc8kcht7h00000gn/T/tmpmjkdtpaa/.venv/include -I/opt/homebrew/opt/python@3.12/Frameworks/Python.framework/Versions/3.12/include/python3.12 -c aiohttp/_websocket.c -o build/temp.macosx-13-arm64-cpython-312/aiohttp/_websocket.o
aiohttp/_websocket.c:1475:17: warning: 'Py_OptimizeFlag' is deprecated [-Wdeprecated-declarations]
if (unlikely(!Py_OptimizeFlag)) {
^
/opt/homebrew/opt/python@3.12/Frameworks/Python.framework/Versions/3.12/include/python3.12/cpython/pydebug.h:13:1: note: 'Py_OptimizeFlag' has been explicitly marked deprecated here
Py_DEPRECATED(3.12) PyAPI_DATA(int) Py_OptimizeFlag;
^
/opt/homebrew/opt/python@3.12/Frameworks/Python.framework/Versions/3.12/include/python3.12/pyport.h:317:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
aiohttp/_websocket.c:2680:27: warning: 'ma_version_tag' is deprecated [-Wdeprecated-declarations]
return likely(dict) ? __PYX_GET_DICT_VERSION(dict) : 0;
^
aiohttp/_websocket.c:1118:65: note: expanded from macro '__PYX_GET_DICT_VERSION'
#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag)
^
/opt/homebrew/opt/python@3.12/Frameworks/Python.framework/Versions/3.12/include/python3.12/cpython/dictobject.h:22:5: note: 'ma_version_tag' has been explicitly marked deprecated here
Py_DEPRECATED(3.12) uint64_t ma_version_tag;
^
/opt/homebrew/opt/python@3.12/Frameworks/Python.framework/Versions/3.12/include/python3.12/pyport.h:317:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
aiohttp/_websocket.c:2692:36: warning: 'ma_version_tag' is deprecated [-Wdeprecated-declarations]
return (dictptr && *dictptr) ? __PYX_GET_DICT_VERSION(*dictptr) : 0;
^
aiohttp/_websocket.c:1118:65: note: expanded from macro '__PYX_GET_DICT_VERSION'
#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag)
^
/opt/homebrew/opt/python@3.12/Frameworks/Python.framework/Versions/3.12/include/python3.12/cpython/dictobject.h:22:5: note: 'ma_version_tag' has been explicitly marked deprecated here
Py_DEPRECATED(3.12) uint64_t ma_version_tag;
^
/opt/homebrew/opt/python@3.12/Frameworks/Python.framework/Versions/3.12/include/python3.12/pyport.h:317:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
aiohttp/_websocket.c:2696:56: warning: 'ma_version_tag' is deprecated [-Wdeprecated-declarations]
if (unlikely(!dict) || unlikely(tp_dict_version != __PYX_GET_DICT_VERSION(dict)))
^
aiohttp/_websocket.c:1118:65: note: expanded from macro '__PYX_GET_DICT_VERSION'
#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag)
^
/opt/homebrew/opt/python@3.12/Frameworks/Python.framework/Versions/3.12/include/python3.12/cpython/dictobject.h:22:5: note: 'ma_version_tag' has been explicitly marked deprecated here
Py_DEPRECATED(3.12) uint64_t ma_version_tag;
^
/opt/homebrew/opt/python@3.12/Frameworks/Python.framework/Versions/3.12/include/python3.12/pyport.h:317:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
aiohttp/_websocket.c:2741:9: warning: 'ma_version_tag' is deprecated [-Wdeprecated-declarations]
__PYX_PY_DICT_LOOKUP_IF_MODIFIED(
^
aiohttp/_websocket.c:1125:16: note: expanded from macro '__PYX_PY_DICT_LOOKUP_IF_MODIFIED'
if (likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) {\
^
aiohttp/_websocket.c:1118:65: note: expanded from macro '__PYX_GET_DICT_VERSION'
#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag)
^
/opt/homebrew/opt/python@3.12/Frameworks/Python.framework/Versions/3.12/include/python3.12/cpython/dictobject.h:22:5: note: 'ma_version_tag' has been explicitly marked deprecated here
Py_DEPRECATED(3.12) uint64_t ma_version_tag;
^
/opt/homebrew/opt/python@3.12/Frameworks/Python.framework/Versions/3.12/include/python3.12/pyport.h:317:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
aiohttp/_websocket.c:2741:9: warning: 'ma_version_tag' is deprecated [-Wdeprecated-declarations]
__PYX_PY_DICT_LOOKUP_IF_MODIFIED(
^
aiohttp/_websocket.c:1129:30: note: expanded from macro '__PYX_PY_DICT_LOOKUP_IF_MODIFIED'
__pyx_dict_version = __PYX_GET_DICT_VERSION(DICT);\
^
aiohttp/_websocket.c:1118:65: note: expanded from macro '__PYX_GET_DICT_VERSION'
#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag)
^
/opt/homebrew/opt/python@3.12/Frameworks/Python.framework/Versions/3.12/include/python3.12/cpython/dictobject.h:22:5: note: 'ma_version_tag' has been explicitly marked deprecated here
Py_DEPRECATED(3.12) uint64_t ma_version_tag;
^
/opt/homebrew/opt/python@3.12/Frameworks/Python.framework/Versions/3.12/include/python3.12/pyport.h:317:54: note: expanded from macro 'Py_DEPRECATED'
#define Py_DEPRECATED(VERSION_UNUSED) __attribute__((__deprecated__))
^
aiohttp/_websocket.c:3042:55: error: no member named 'ob_digit' in 'struct _longobject'
const digit* digits = ((PyLongObject*)x)->ob_digit;
~~~~~~~~~~~~~~~~~~ ^
aiohttp/_websocket.c:3097:55: error: no member named 'ob_digit' in 'struct _longobject'
const digit* digits = ((PyLongObject*)x)->ob_digit;
~~~~~~~~~~~~~~~~~~ ^
aiohttp/_websocket.c:3238:55: error: no member named 'ob_digit' in 'struct _longobject'
const digit* digits = ((PyLongObject*)x)->ob_digit;
~~~~~~~~~~~~~~~~~~ ^
aiohttp/_websocket.c:3293:55: error: no member named 'ob_digit' in 'struct _longobject'
const digit* digits = ((PyLongObject*)x)->ob_digit;
~~~~~~~~~~~~~~~~~~ ^
aiohttp/_websocket.c:3744:47: error: no member named 'ob_digit' in 'struct _longobject'
const digit* digits = ((PyLongObject*)b)->ob_digit;
~~~~~~~~~~~~~~~~~~ ^
6 warnings and 5 errors generated.
error: command '/usr/bin/clang' failed with exit code 1
at ~/.local/pipx/venvs/poetry/lib/python3.12/site-packages/poetry/installation/chef.py:164 in _prepare
160│
161│ error = ChefBuildError("\n\n".join(message_parts))
162│
163│ if error is not None:
→ 164│ raise error from None
165│
166│ return path
167│
168│ def _prepare_sdist(self, archive: Path, destination: Path | None = None) -> Path:
Note: This error originates from the build backend, and is likely not a problem with poetry but with aiohttp (3.8.5) not supporting PEP 517 builds. You can verify this by running 'pip wheel --no-cache-dir --use-pep517 "aiohttp (==3.8.5)"'.
• Installing black (23.10.1)
• Installing colorama (0.4.6)
• Installing dataclasses-json (0.6.1)
• Installing dnspython (2.4.2)
• Installing jsonpatch (1.33)
• Installing jupyter-cache (0.6.1)
• Installing langsmith (0.0.63)
• Installing livereload (2.6.3)
• Installing myst-parser (0.18.1)
• Installing nbdev (1.2.0)
• Installing numpy (1.24.4): Failed
ChefBuildError
Backend 'setuptools.build_meta:__legacy__' is not available.
Traceback (most recent call last):
File "/Users/username/.local/pipx/venvs/poetry/lib/python3.12/site-packages/pyproject_hooks/_in_process/_in_process.py", line 77, in _build_backend
obj = import_module(mod_path)
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/python@3.12/3.12.0/Frameworks/Python.framework/Versions/3.12/lib/python3.12/importlib/__init__.py", line 90, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1381, in _gcd_import
File "<frozen importlib._bootstrap>", line 1354, in _find_and_load
File "<frozen importlib._bootstrap>", line 1304, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "<frozen importlib._bootstrap>", line 1381, in _gcd_import
File "<frozen importlib._bootstrap>", line 1354, in _find_and_load
File "<frozen importlib._bootstrap>", line 1325, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 929, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 994, in exec_module
File "<frozen importlib._bootstrap>", line 488, in _call_with_frames_removed
File "/private/var/folders/z4/nphh3sds4zsckwzc8kcht7h00000gn/T/tmphs3uy5rx/.venv/lib/python3.12/site-packages/setuptools/__init__.py", line 10, in <module>
import distutils.core
ModuleNotFoundError: No module named 'distutils'
at ~/.local/pipx/venvs/poetry/lib/python3.12/site-packages/poetry/installation/chef.py:164 in _prepare
160│
161│ error = ChefBuildError("\n\n".join(message_parts))
162│
163│ if error is not None:
→ 164│ raise error from None
165│
166│ return path
167│
168│ def _prepare_sdist(self, archive: Path, destination: Path | None = None) -> Path:
Note: This error originates from the build backend, and is likely not a problem with poetry but with numpy (1.24.4) not supporting PEP 517 builds. You can verify this by running 'pip wheel --no-cache-dir --use-pep517 "numpy (==1.24.4)"'.
• Installing numpydoc (1.2)
• Installing pydata-sphinx-theme (0.8.1)
• Installing sphinxcontrib-jquery (4.1)
• Installing tenacity (8.2.3)
Warning: The file chosen for install of executing 2.0.0 (executing-2.0.0-py2.py3-none-any.whl) is yanked. Reason for being yanked: Released 2.0.1 which is equivalent but added 'python_requires = >=3.5' so that pip install with Python 2 uses the previous version 1.2.0.
```
Here is my poetry info:
```
╰─➤ poetry env info 1 ↵
Virtualenv
Python: 3.12.0
Implementation: CPython
Path: /Users/username/path/to/langchain2/.venv
Executable: /Users/username/path/to/langchain2/.venv/bin/python
Valid: True
```
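For what it's worth, the two build failures above look like Python-version incompatibilities rather than a lockfile problem: the `ob_digit` compile errors come from aiohttp 3.8.5 predating Python 3.12, and the missing `distutils` (removed in Python 3.12) breaks numpy 1.24.4's legacy build backend. A plausible workaround — an assumption on my part, not verified here — is to recreate the virtualenv on Python 3.11 (`poetry env use python3.11`) before running `poetry install --with test`.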
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I believe you should be able to do:
1. `git clone https://github.com/langchain-ai/langchain.git` and `cd` into the repo
2. `poetry install --with test`
### Expected behavior
I would expect that all packages specified by the lockfile could be installed successfully. | poetry install --with test issue | https://api.github.com/repos/langchain-ai/langchain/issues/13912/comments | 2 | 2023-11-27T14:07:30Z | 2023-11-27T23:07:40Z | https://github.com/langchain-ai/langchain/issues/13912 | 2,012,359,254 | 13,912 |
[
"langchain-ai",
"langchain"
] | ### Feature request
MultiVectorRetriever is really helpful for adding summaries and hypothetical queries of our documents to improve retrieval, but only those derived representations are stored in the vectorstore; the full document lives in a BaseStore (in-memory or local file).
The main issues are that:
- the in-memory store does not persist across restarts
- the file store creates a huge number of files

Why not keep the original document in the vectorstore as well, instead of using an external file/memory store?
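For context, the current pattern looks roughly like this (a sketch; the vectorstore and embedding choices are placeholders):
```python
from langchain.retrievers.multi_vector import MultiVectorRetriever
from langchain.storage import InMemoryStore
from langchain.vectorstores import Chroma
from langchain.embeddings import OpenAIEmbeddings

vectorstore = Chroma(collection_name="summaries", embedding_function=OpenAIEmbeddings())
store = InMemoryStore()  # the full parent documents live here, not in the vectorstore

retriever = MultiVectorRetriever(vectorstore=vectorstore, docstore=store, id_key="doc_id")
# The vectorstore holds the summaries / hypothetical questions, while
# store.mset([...]) holds the parent documents — the part lost on restart.
```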
### Motivation
Keep documents, questions, and summaries in the same vectorstore.
### Your contribution
I could work on that but I would like to know your point of view. | MultiVector Retriever BaseStore | https://api.github.com/repos/langchain-ai/langchain/issues/13909/comments | 12 | 2023-11-27T11:33:30Z | 2024-07-24T05:23:56Z | https://github.com/langchain-ai/langchain/issues/13909 | 2,012,075,695 | 13,909 |
[
"langchain-ai",
"langchain"
] | ### System Info
Ubuntu 23.10
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain import OpenAI, SQLDatabase
from snowflake.snowpark import Session
from langchain.chains import create_sql_query_chain
from dotenv import load_dotenv
import os
from urllib.parse import quote
load_dotenv()
# use the env vars in comments above to set the vars below
OpenAI_API_KEY = os.getenv("OPENAI_API_KEY")
snowflake_account = os.getenv("ACCOUNT")
username = os.getenv("USER")
password = os.getenv("SNOWSQL_PWD")
warehouse = os.getenv("WAREHOUSE")
database = 'LANGCHAIN_DEMO_DB' #os.getenv("DATABASE")
schema = 'PUBLIC' #os.getenv("SCHEMA")
role = os.getenv("ROLE")
# print out all env vars using f-strings each on a separate line but x out password
print(f"OpenAI_API_KEY: {'x' * len(OpenAI_API_KEY)}")
print(f"snowflake_account: {snowflake_account}")
#print(f"username: {username}")
#print(f"password: {password}")
print(f"warehouse: {warehouse}")
print(f"database: {database}")
print(f"schema: {schema}")
print(f"role: {role}")
encoded_password = quote(password, safe='')
```
but it works in my Jupyter notebook:

*(screenshot omitted: the notebook runs the import successfully, while the `.py` file on the right shows a red underline under it)*
https://medium.com/@muriithicliffernest/snowflake-langchain-generating-sql-queries-from-natural-language-queries-12c4e2918631 is the tutorial I followed for the .ipynb.
```
pip install --upgrade pip
pip install "snowflake-snowpark-python[pandas]" snowflake-sqlalchemy
pip install langchain openai langchain-experimental jupyter
```
are the instructions to install packages in that Medium article, so I matched versions for both the conda env I'm using for the .py and the .ipynb. Even if I use the same `langchain-snowflake` env for both, the error is still there. See the red line under `from langchain import OpenAI, SQLDatabase` in the right half of the image, which shows `langchain-sql.py`.
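A triage note (an assumption, not a confirmed diagnosis): a red underline in only the `.py` file usually means the editor/linter resolves a different interpreter for it than the notebook kernel uses. Independently, the explicit submodule paths are the stable spelling of these imports in this era and may sidestep the warning:
```python
# Equivalent imports via explicit submodules (available in langchain 0.0.3xx):
from langchain.llms import OpenAI
from langchain.utilities import SQLDatabase
```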
### Expected behavior
The import should work, no red line. | imports of OpenAI and SQLDatabase don't work in .py file | https://api.github.com/repos/langchain-ai/langchain/issues/13906/comments | 3 | 2023-11-27T11:04:57Z | 2024-02-09T02:11:48Z | https://github.com/langchain-ai/langchain/issues/13906 | 2,012,028,998 | 13,906 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Sometimes when interacting with the bot using the RetrievalQA chain, it just stops at `Entering new RetrievalQA chain...` — no response is ever returned. I am using `qa.acall` together with an async callback handler.
How can this be fixed? Hanging with no response is unacceptable.
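A minimal way to at least surface the hang instead of waiting forever (a sketch; `qa` is the RetrievalQA chain described above):
```python
import asyncio

async def ask(question: str):
    # Fail loudly after 60 seconds rather than stalling silently.
    return await asyncio.wait_for(qa.acall({"query": question}), timeout=60)
```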
### Suggestion:
_No response_ | Issue: Retrieval QA Chain not giving response after Entering new RetrievalQA chain... | https://api.github.com/repos/langchain-ai/langchain/issues/13900/comments | 1 | 2023-11-27T07:12:34Z | 2024-03-13T20:02:37Z | https://github.com/langchain-ai/langchain/issues/13900 | 2,011,642,730 | 13,900 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
llm = ChatOpenAI(model=gpt_4, temperature=0, api_key=os.environ['OPENAI_API_KEY'])
llm_chain = LLMChain(llm=llm, prompt=react_prompt)
tool_names = [tool.name]
agent = LLMSingleActionAgent(
    llm_chain=llm_chain,
    output_parser=react_output_parser,
    stop=["\nObservation:"],
    allowed_tools=tool_names,
    max_execution_time=240,
    max_iterations=120,
    handle_parsing_errors=True
)
agent_executor = AgentExecutor.from_agent_and_tools(agent=agent, tools=[tool], verbose=True)
response = agent_executor.run(textual_description)
```
This is my setup for the AgentExecutor. It is prompted to solve OpenAI Gym's Taxi problem and to stop only after the passenger is dropped off at the destination. But, as the title suggests, the AgentExecutor chain finishes before reaching the stopping limits or achieving the stopping condition.
Also, when I use a GPT-3 model it occasionally stops following the ReAct template, which raises errors because my output parser cannot process the output. I wonder if there is a way to prevent that.
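One thing worth checking (a guess from reading the snippet, not a confirmed diagnosis): `max_execution_time`, `max_iterations`, and `handle_parsing_errors` are `AgentExecutor` parameters, so as written the executor falls back to its default `max_iterations=15`, which would explain runs ending early. A sketch with the limits relocated:
```python
# Hypothetical fix: pass the limits to the executor, which actually reads them.
agent_executor = AgentExecutor.from_agent_and_tools(
    agent=agent,
    tools=[tool],
    verbose=True,
    max_iterations=120,
    max_execution_time=240,
    handle_parsing_errors=True,
)
```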
### Suggestion:
_No response_ | Issue: AgentExecutor stopping before reaching the set max_iteration and max_execution_time limits without meeting the stop condition | https://api.github.com/repos/langchain-ai/langchain/issues/13897/comments | 4 | 2023-11-27T04:43:45Z | 2023-11-29T13:52:34Z | https://github.com/langchain-ai/langchain/issues/13897 | 2,011,474,672 | 13,897 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.316
python==3.10.13
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm using the code from https://python.langchain.com/docs/integrations/llms/chatglm
Here is the full error output:
```
Traceback (most recent call last):
File "C:\Users\vic\Desktop\chatGLM\.conda\lib\site-packages\requests\models.py", line 971, in json
return complexjson.loads(self.text, **kwargs)
File "C:\Users\vic\Desktop\chatGLM\.conda\lib\json\__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "C:\Users\vic\Desktop\chatGLM\.conda\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Users\vic\Desktop\chatGLM\.conda\lib\json\decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\vic\AppData\Roaming\Python\Python310\site-packages\langchain\llms\chatglm.py", line 107, in _call
parsed_response = response.json()
File "C:\Users\vic\Desktop\chatGLM\.conda\lib\site-packages\requests\models.py", line 975, in json
raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Users\vic\Desktop\chatGLM\test_server.py", line 36, in <module>
print(llm_chain.run(question))
File "C:\Users\vic\AppData\Roaming\Python\Python310\site-packages\langchain\chains\base.py", line 503, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
File "C:\Users\vic\AppData\Roaming\Python\Python310\site-packages\langchain\chains\base.py", line 308, in __call__
raise e
File "C:\Users\vic\AppData\Roaming\Python\Python310\site-packages\langchain\chains\base.py", line 302, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Users\vic\AppData\Roaming\Python\Python310\site-packages\langchain\chains\llm.py", line 93, in _call
response = self.generate([inputs], run_manager=run_manager)
File "C:\Users\vic\AppData\Roaming\Python\Python310\site-packages\langchain\chains\llm.py", line 103, in generate
return self.llm.generate_prompt(
File "C:\Users\vic\AppData\Roaming\Python\Python310\site-packages\langchain\llms\base.py", line 497, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "C:\Users\vic\AppData\Roaming\Python\Python310\site-packages\langchain\llms\base.py", line 646, in generate
output = self._generate_helper(
File "C:\Users\vic\AppData\Roaming\Python\Python310\site-packages\langchain\llms\base.py", line 534, in _generate_helper
raise e
File "C:\Users\vic\AppData\Roaming\Python\Python310\site-packages\langchain\llms\base.py", line 521, in _generate_helper
self._generate(
File "C:\Users\vic\AppData\Roaming\Python\Python310\site-packages\langchain\llms\base.py", line 1043, in _generate
self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
File "C:\Users\vic\AppData\Roaming\Python\Python310\site-packages\langchain\llms\chatglm.py", line 120, in _call
raise ValueError(
ValueError: Error raised during decoding response from inference endpoint: Expecting value: line 1 column 1 (char 0).
```
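Since the wrapper got an HTTP 200 but could not parse JSON, it may help to check what the endpoint actually returns (a sketch; `endpoint_url` and the payload shape are assumptions based on the standard ChatGLM `api.py` server):
```python
import requests

endpoint_url = "http://127.0.0.1:8000"  # assumption: the URL passed to ChatGLM(...)
resp = requests.post(endpoint_url, json={"prompt": "hello", "history": []})
print(resp.status_code)
print(repr(resp.text[:500]))  # an empty or HTML body explains "Expecting value: line 1 column 1"
```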
### Expected behavior
The output of ChatGLM's response is missing; `print(response)` only shows:
<Response [200]> | Error raised during decoding response from inference endpoint when using ChatGLM | https://api.github.com/repos/langchain-ai/langchain/issues/13896/comments | 2 | 2023-11-27T04:12:28Z | 2024-03-13T20:03:46Z | https://github.com/langchain-ai/langchain/issues/13896 | 2,011,448,434 | 13,896 |
[
"langchain-ai",
"langchain"
] | ### Feature request
We are working on a way to add a multi-input tool to LangChain for searching Reddit posts. Integrating the API as a tool will allow agents to search for posts using a specific search query plus query parameters such as sort, time_filter, and subreddit in order to respond to prompts. The tool will use the search functionality provided by [the `praw` package](https://praw.readthedocs.io/en/stable/code_overview/models/subreddit.html#praw.models.Subreddit.search), roughly as sketched below.
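Concretely, the tool wraps a `praw` search like the following (a sketch with placeholder credentials; not the code in the upcoming PR):
```python
import praw

reddit = praw.Reddit(client_id="...", client_secret="...", user_agent="langchain-reddit-tool")
# sort and time_filter take praw's documented values (e.g. "relevance", "month").
for post in reddit.subreddit("python").search("langchain", sort="relevance", time_filter="month", limit=5):
    print(post.title, post.score)
```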
### Motivation
Although LangChain currently has a document loader for Reddit (RedditPostsLoader), it is centred around loading posts by subreddit or username, and we want our tool to provide more functionality. Our tool will offer sorting and filtering by time, which RedditPostsLoader does not currently handle. With this tool, agents can respond to prompts by interacting with the API without the user having to load Reddit posts manually. The multi-input nature of the tool will make it useful for responding to more diverse prompts, and we hope users can use it to better leverage the [multi-input tool](https://python.langchain.com/docs/modules/agents/tools/multi_input_tool) and [shared memory](https://python.langchain.com/docs/modules/agents/how_to/sharedmemory_for_tools) functionalities already provided by LangChain.
### Your contribution
We have our code already prepared and we will be submitting a PR soon. As encouraged by contributing.md, we have added integration tests, a notebook example, and edits for documentation generation. `praw` has also been added as an optional dependency. | Adding a multi-input Reddit search tool | https://api.github.com/repos/langchain-ai/langchain/issues/13891/comments | 2 | 2023-11-27T02:16:19Z | 2023-12-11T03:21:33Z | https://github.com/langchain-ai/langchain/issues/13891 | 2,011,359,518 | 13,891 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version 0.0.340
Python version: 3.11.5
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Create an Obsidian template with a [template variable](https://help.obsidian.md/Plugins/Templates#Template+variables) in the [properties](https://help.obsidian.md/Editing+and+formatting/Properties#Property+format) section of the file.
2. Attempt to load a directory containing that template file using [ObsidianLoader](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/document_loaders/obsidian.py).
```shell
$ echo -e "---\nyear: {{date:YYYY}}\n---" > vault/template.md
$ python
>>> from langchain.document_loaders.obsidian import ObsidianLoader
>>> loader = ObsidianLoader('vault')
>>> loader.load()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/user/.conda/envs/localai/lib/python3.11/site-packages/langchain/document_loaders/obsidian.py", line 115, in load
front_matter = self._parse_front_matter(text)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.conda/envs/localai/lib/python3.11/site-packages/langchain/document_loaders/obsidian.py", line 48, in _parse_front_matter
front_matter = yaml.safe_load(match.group(1))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.conda/envs/localai/lib/python3.11/site-packages/yaml/__init__.py", line 125, in safe_load
return load(stream, SafeLoader)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.conda/envs/localai/lib/python3.11/site-packages/yaml/__init__.py", line 81, in load
return loader.get_single_data()
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.conda/envs/localai/lib/python3.11/site-packages/yaml/constructor.py", line 51, in get_single_data
return self.construct_document(node)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.conda/envs/localai/lib/python3.11/site-packages/yaml/constructor.py", line 60, in construct_document
for dummy in generator:
File "/home/user/.conda/envs/localai/lib/python3.11/site-packages/yaml/constructor.py", line 413, in construct_yaml_map
value = self.construct_mapping(node)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.conda/envs/localai/lib/python3.11/site-packages/yaml/constructor.py", line 218, in construct_mapping
return super().construct_mapping(node, deep=deep)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.conda/envs/localai/lib/python3.11/site-packages/yaml/constructor.py", line 141, in construct_mapping
raise ConstructorError("while constructing a mapping", node.start_mark,
yaml.constructor.ConstructorError: while constructing a mapping
in "<unicode string>", line 1, column 7:
year: {{date:YYYY}}
^
found unhashable key
in "<unicode string>", line 1, column 8:
year: {{date:YYYY}}
```
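For context on the failure mode: in YAML, `{{date:YYYY}}` parses as a flow mapping whose key is itself a mapping, and mappings are unhashable in Python — hence the `ConstructorError` above. A tolerant parse could fall back to empty front matter when this happens (a sketch, not the loader's current code):
```python
import yaml

def safe_front_matter(raw: str) -> dict:
    """Parse front matter, returning {} when it is not valid YAML."""
    try:
        data = yaml.safe_load(raw)
        return data if isinstance(data, dict) else {}
    except yaml.YAMLError:  # ConstructorError is a YAMLError subclass
        return {}

print(safe_front_matter("year: {{date:YYYY}}"))  # -> {}
```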
### Expected behavior
[Template variables](https://help.obsidian.md/Plugins/Templates#Template+variables) are a feature in Obsidian, and including them in the properties section of a file is perfectly valid, so [ObsidianLoader](https://github.com/langchain-ai/langchain/blob/master/libs/langchain/langchain/document_loaders/obsidian.py) should have no issue loading a directory that includes a file with a template variable in its properties. | ObsidianLoader fails when encountering template variables in the properties frontmatter of a file | https://api.github.com/repos/langchain-ai/langchain/issues/13887/comments | 1 | 2023-11-27T01:05:47Z | 2024-03-13T20:01:25Z | https://github.com/langchain-ai/langchain/issues/13887 | 2,011,308,854 | 13,887 |
[
"langchain-ai",
"langchain"
] | ### Feature request
If I want to use VectorStoreRetrieverMemory to store my users' chat memories, I need to search and store them by user_id and session_id. However, `memory.save_context` doesn't have a `metadata` option.
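For reference, the current call shape versus the hypothetical one being requested (the `metadata=` parameter below does not exist today — it is the ask; `vectorstore` is any VectorStore from your setup):
```python
from langchain.memory import VectorStoreRetrieverMemory

memory = VectorStoreRetrieverMemory(retriever=vectorstore.as_retriever(search_kwargs={"k": 4}))
memory.save_context({"input": "hi"}, {"output": "hello"})  # works today, but no metadata

# Requested (hypothetical):
# memory.save_context({"input": "hi"}, {"output": "hello"},
#                     metadata={"user_id": "u1", "session_id": "s1"})
```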
### Motivation
I want to associate chat memory with a single user
### Your contribution
I can't submit PR | storing metadata with the VectorStoreRetrieverMemory memory module | https://api.github.com/repos/langchain-ai/langchain/issues/13876/comments | 2 | 2023-11-26T15:14:26Z | 2024-03-13T19:55:51Z | https://github.com/langchain-ai/langchain/issues/13876 | 2,011,079,659 | 13,876 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python version: 3.11.5
Langchain version: 0.0.316
### Who can help?
@3coins
@hw
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am using Amazon Kendra as a vector store to retrieve relevant documents as part of a Q&A application. As `UserContext` I am using a user token:
```
def get_kendra_json_token(user_name: str, groups: List[str]):
    kendra_json_token = {
        'username': user_name,
        'groups': groups
    }
    return kendra_json_token
```
This output is subsequently converted to: `'user_context': {'Token': json.dumps(kendra_json_token)}`
Everything is fine when I build the Retriever:
```
def get_kendra_doc_retriever(inputs: KendraRequest) -> AmazonKendraRetriever:
    try:
        kendra_client = boto3.client("kendra", os.environ.get('AWS_REGION'))
        retriever = AmazonKendraRetriever(
            index_id=inputs.kendra_index_id,
            top_k=get_param(AIAssistantParam.NB_KENDRA_DOCS),
            client=kendra_client,
            attribute_filter=inputs.attribute_filter,
            user_context=inputs.user_context
        )
        logger.info(f'Kendra retriever successfully instantiated')
        return retriever
```
But then, when I call `get_relevant_documents`:
```
def ask_question(
    chain: Chain,
    retriever: AmazonKendraRetriever,
    question: str
) -> Response:
    try:
        context = retriever.get_relevant_documents(question)
```
I get this exception: `An error occurred (AccessDeniedException) when calling the Retrieve operation: The provided JSON token isn't valid. The username couldn't be parsed. Generate a new token with username as an array of strings and try your request again.` Of course, `username` should be a string.
If I change the code doing this (swapping the content of `user_name` and `groups` in the user token):
```
def get_kendra_json_token(user_name: List[str], groups: str):
    kendra_json_token = {
        'username': groups,
        'groups': user_name
    }
    return kendra_json_token
```
everything works fine. It is as if the `user_name` and `groups` input parameters are swapped somewhere.
### Expected behavior
No exception should be raised when creating the user token as explained in the description above. | KENDRA: issue with user_context parameter when using get_relevant_documents method (langchain.retrievers.kendra.AmazonKendraRetriever) | https://api.github.com/repos/langchain-ai/langchain/issues/13870/comments | 1 | 2023-11-26T09:55:33Z | 2023-12-11T15:22:58Z | https://github.com/langchain-ai/langchain/issues/13870 | 2,010,975,800 | 13,870 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Currently the `_search` function in `ElasticsearchStore` assumes that the `hit` object returned in the search has a `metadata` field under `_source`:
```python
hit["_source"]["metadata"][field] = hit["_source"][field]
```
However, this is not the case in the index I work with - it does not have a `metadata` field. Due to that, an exception is raised.
Note that the following code does not help -
```python
if "metadata" not in fields:
fields.append("metadata")
```
The index still does not return any `metadata`.
I assume that in indexes created by `ElasticsearchStore` the `metadata` field is forced, and therefor there is no such issue. However, when using indexes created by external tools, it is better not to assume that the field exists, and support the case where it doesn't.
### Motivation
I'd prefer to re-use the existing `ElasticsearchStore` instead of my own implementation of it.
### Your contribution
I think I can contribute a PR handling this issue, if the admins confirm the feature request. | Support for elastic index without metadata field | https://api.github.com/repos/langchain-ai/langchain/issues/13869/comments | 1 | 2023-11-26T09:33:34Z | 2024-03-13T19:56:05Z | https://github.com/langchain-ai/langchain/issues/13869 | 2,010,969,350 | 13,869 |
[
"langchain-ai",
"langchain"
] | ### System Info
#### Environment variable
```bash
BENTOML_DEBUG=''
BENTOML_QUIET=''
BENTOML_BUNDLE_LOCAL_BUILD=''
BENTOML_DO_NOT_TRACK=''
BENTOML_CONFIG=''
BENTOML_CONFIG_OPTIONS=''
BENTOML_PORT=''
BENTOML_HOST=''
BENTOML_API_WORKERS=''
```
#### System information
`bentoml`: 1.1.10
`python`: 3.11.5
`platform`: Linux-6.2.0-37-generic-x86_64-with-glibc2.35
`uid_gid`: 1000:1000
`conda`: 23.7.4
`in_conda_env`: True
<details><summary><code>conda_packages</code></summary>
<br>
```yaml
name: openllm
channels:
- defaults
dependencies:
- _libgcc_mutex=0.1=main
- _openmp_mutex=5.1=1_gnu
- bzip2=1.0.8=h7b6447c_0
- ca-certificates=2023.08.22=h06a4308_0
- ld_impl_linux-64=2.38=h1181459_1
- libffi=3.4.4=h6a678d5_0
- libgcc-ng=11.2.0=h1234567_1
- libgomp=11.2.0=h1234567_1
- libstdcxx-ng=11.2.0=h1234567_1
- libuuid=1.41.5=h5eee18b_0
- ncurses=6.4=h6a678d5_0
- openssl=3.0.12=h7f8727e_0
- pip=23.3.1=py311h06a4308_0
- python=3.11.5=h955ad1f_0
- readline=8.2=h5eee18b_0
- setuptools=68.0.0=py311h06a4308_0
- sqlite=3.41.2=h5eee18b_0
- tk=8.6.12=h1ccaba5_0
- wheel=0.41.2=py311h06a4308_0
- xz=5.4.2=h5eee18b_0
- zlib=1.2.13=h5eee18b_0
- pip:
- accelerate==0.24.1
- aiohttp==3.9.0
- aiosignal==1.3.1
- anyio==3.7.1
- appdirs==1.4.4
- asgiref==3.7.2
- attrs==23.1.0
- beautifulsoup4==4.12.2
- bentoml==1.1.10
- bitsandbytes==0.41.2.post2
- build==0.10.0
- cattrs==23.1.2
- certifi==2023.11.17
- charset-normalizer==3.3.2
- circus==0.18.0
- click==8.1.7
- click-option-group==0.5.6
- cloudpickle==3.0.0
- coloredlogs==15.0.1
- contextlib2==21.6.0
- cuda-python==12.3.0
- dataclasses-json==0.6.2
- datasets==2.15.0
- deepmerge==1.1.0
- deprecated==1.2.14
- dill==0.3.7
- distlib==0.3.7
- distro==1.8.0
- einops==0.7.0
- fastapi==0.104.1
- fastcore==1.5.29
- filelock==3.13.1
- filetype==1.2.0
- frozenlist==1.4.0
- fs==2.4.16
- fsspec==2023.10.0
- ghapi==1.0.4
- greenlet==3.0.1
- h11==0.14.0
- httpcore==1.0.2
- httptools==0.6.1
- httpx==0.25.2
- huggingface-hub==0.19.4
- humanfriendly==10.0
- idna==3.6
- importlib-metadata==6.8.0
- inflection==0.5.1
- jinja2==3.1.2
- jsonpatch==1.33
- jsonpointer==2.4
- jsonschema==4.20.0
- jsonschema-specifications==2023.11.1
- langchain==0.0.340
- langsmith==0.0.66
- markdown-it-py==3.0.0
- markupsafe==2.1.3
- marshmallow==3.20.1
- mdurl==0.1.2
- mpmath==1.3.0
- msgpack==1.0.7
- multidict==6.0.4
- multiprocess==0.70.15
- mypy-extensions==1.0.0
- networkx==3.2.1
- ninja==1.11.1.1
- numpy==1.26.2
- nvidia-cublas-cu12==12.1.3.1
- nvidia-cuda-cupti-cu12==12.1.105
- nvidia-cuda-nvrtc-cu12==12.1.105
- nvidia-cuda-runtime-cu12==12.1.105
- nvidia-cudnn-cu12==8.9.2.26
- nvidia-cufft-cu12==11.0.2.54
- nvidia-curand-cu12==10.3.2.106
- nvidia-cusolver-cu12==11.4.5.107
- nvidia-cusparse-cu12==12.1.0.106
- nvidia-ml-py==11.525.150
- nvidia-nccl-cu12==2.18.1
- nvidia-nvjitlink-cu12==12.3.101
- nvidia-nvtx-cu12==12.1.105
- openllm==0.4.28
- openllm-client==0.4.28
- openllm-core==0.4.28
- opentelemetry-api==1.20.0
- opentelemetry-instrumentation==0.41b0
- opentelemetry-instrumentation-aiohttp-client==0.41b0
- opentelemetry-instrumentation-asgi==0.41b0
- opentelemetry-sdk==1.20.0
- opentelemetry-semantic-conventions==0.41b0
- opentelemetry-util-http==0.41b0
- optimum==1.14.1
- orjson==3.9.10
- packaging==23.2
- pandas==2.1.3
- pathspec==0.11.2
- pillow==10.1.0
- pip-requirements-parser==32.0.1
- pip-tools==7.3.0
- platformdirs==4.0.0
- prometheus-client==0.19.0
- protobuf==4.25.1
- psutil==5.9.6
- pyarrow==14.0.1
- pyarrow-hotfix==0.6
- pydantic==1.10.13
- pygments==2.17.2
- pyparsing==3.1.1
- pyproject-hooks==1.0.0
- python-dateutil==2.8.2
- python-dotenv==1.0.0
- python-json-logger==2.0.7
- python-multipart==0.0.6
- pytz==2023.3.post1
- pyyaml==6.0.1
- pyzmq==25.1.1
- ray==2.8.0
- referencing==0.31.0
- regex==2023.10.3
- requests==2.31.0
- rich==13.7.0
- rpds-py==0.13.1
- safetensors==0.4.0
- schema==0.7.5
- scipy==1.11.4
- sentencepiece==0.1.99
- simple-di==0.1.5
- six==1.16.0
- sniffio==1.3.0
- soupsieve==2.5
- sqlalchemy==2.0.23
- starlette==0.27.0
- sympy==1.12
- tenacity==8.2.3
- tokenizers==0.15.0
- torch==2.1.0
- tornado==6.3.3
- tqdm==4.66.1
- transformers==4.35.2
- triton==2.1.0
- typing-extensions==4.8.0
- typing-inspect==0.9.0
- tzdata==2023.3
- urllib3==2.1.0
- uvicorn==0.24.0.post1
- uvloop==0.19.0
- virtualenv==20.24.7
- vllm==0.2.2
- watchfiles==0.21.0
- websockets==12.0
- wrapt==1.16.0
- xformers==0.0.22.post7
- xxhash==3.4.1
- yarl==1.9.3
- zipp==3.17.0
prefix: /home/lolevsky/anaconda3/envs/openllm
```
</details>
<details><summary><code>pip_packages</code></summary>
<br>
```
accelerate==0.24.1
aiohttp==3.9.0
aiosignal==1.3.1
anyio==3.7.1
appdirs==1.4.4
asgiref==3.7.2
attrs==23.1.0
beautifulsoup4==4.12.2
bentoml==1.1.10
bitsandbytes==0.41.2.post2
build==0.10.0
cattrs==23.1.2
certifi==2023.11.17
charset-normalizer==3.3.2
circus==0.18.0
click==8.1.7
click-option-group==0.5.6
cloudpickle==3.0.0
coloredlogs==15.0.1
contextlib2==21.6.0
cuda-python==12.3.0
dataclasses-json==0.6.2
datasets==2.15.0
deepmerge==1.1.0
Deprecated==1.2.14
dill==0.3.7
distlib==0.3.7
distro==1.8.0
einops==0.7.0
fastapi==0.104.1
fastcore==1.5.29
filelock==3.13.1
filetype==1.2.0
frozenlist==1.4.0
fs==2.4.16
fsspec==2023.10.0
ghapi==1.0.4
greenlet==3.0.1
h11==0.14.0
httpcore==1.0.2
httptools==0.6.1
httpx==0.25.2
huggingface-hub==0.19.4
humanfriendly==10.0
idna==3.6
importlib-metadata==6.8.0
inflection==0.5.1
Jinja2==3.1.2
jsonpatch==1.33
jsonpointer==2.4
jsonschema==4.20.0
jsonschema-specifications==2023.11.1
langchain==0.0.340
langsmith==0.0.66
markdown-it-py==3.0.0
MarkupSafe==2.1.3
marshmallow==3.20.1
mdurl==0.1.2
mpmath==1.3.0
msgpack==1.0.7
multidict==6.0.4
multiprocess==0.70.15
mypy-extensions==1.0.0
networkx==3.2.1
ninja==1.11.1.1
numpy==1.26.2
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-ml-py==11.525.150
nvidia-nccl-cu12==2.18.1
nvidia-nvjitlink-cu12==12.3.101
nvidia-nvtx-cu12==12.1.105
openllm==0.4.28
openllm-client==0.4.28
openllm-core==0.4.28
opentelemetry-api==1.20.0
opentelemetry-instrumentation==0.41b0
opentelemetry-instrumentation-aiohttp-client==0.41b0
opentelemetry-instrumentation-asgi==0.41b0
opentelemetry-sdk==1.20.0
opentelemetry-semantic-conventions==0.41b0
opentelemetry-util-http==0.41b0
optimum==1.14.1
orjson==3.9.10
packaging==23.2
pandas==2.1.3
pathspec==0.11.2
Pillow==10.1.0
pip-requirements-parser==32.0.1
pip-tools==7.3.0
platformdirs==4.0.0
prometheus-client==0.19.0
protobuf==4.25.1
psutil==5.9.6
pyarrow==14.0.1
pyarrow-hotfix==0.6
pydantic==1.10.13
Pygments==2.17.2
pyparsing==3.1.1
pyproject_hooks==1.0.0
python-dateutil==2.8.2
python-dotenv==1.0.0
python-json-logger==2.0.7
python-multipart==0.0.6
pytz==2023.3.post1
PyYAML==6.0.1
pyzmq==25.1.1
ray==2.8.0
referencing==0.31.0
regex==2023.10.3
requests==2.31.0
rich==13.7.0
rpds-py==0.13.1
safetensors==0.4.0
schema==0.7.5
scipy==1.11.4
sentencepiece==0.1.99
simple-di==0.1.5
six==1.16.0
sniffio==1.3.0
soupsieve==2.5
SQLAlchemy==2.0.23
starlette==0.27.0
sympy==1.12
tenacity==8.2.3
tokenizers==0.15.0
torch==2.1.0
tornado==6.3.3
tqdm==4.66.1
transformers==4.35.2
triton==2.1.0
typing-inspect==0.9.0
typing_extensions==4.8.0
tzdata==2023.3
urllib3==2.1.0
uvicorn==0.24.0.post1
uvloop==0.19.0
virtualenv==20.24.7
vllm==0.2.2
watchfiles==0.21.0
websockets==12.0
wrapt==1.16.0
xformers==0.0.22.post7
xxhash==3.4.1
yarl==1.9.3
zipp==3.17.0
```
</details>
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I am following the example and wrote the code:
```
llm = OpenLLM(server_url=server_url, server_type='http')
llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?")
```
It seems like the first request is hitting the server:
```
(scheme=http,method=POST,path=/v1/metadata,type=application/json,length=2) (status=200
```
Up to this point it looks promising, but then I get the error `TypeError: 'dict' object is not callable`, as shown in the trace:
```
Traceback (most recent call last):
File "/home/lolevsky/Github/Zodiac/main.py", line 24, in <module>
run_zodiac()
File "/home/lolevsky/Github/Zodiac/main.py", line 9, in run_zodiac
resA = llm("What is the difference between a duck and a goose? And why there are so many Goose in Canada?")
File "/usr/local/lib/python3.10/dist-packages/langchain/llms/base.py", line 876, in __call__
self.generate(
File "/usr/local/lib/python3.10/dist-packages/langchain/llms/base.py", line 626, in generate
params = self.dict()
File "/usr/local/lib/python3.10/dist-packages/langchain/llms/base.py", line 974, in dict
starter_dict = dict(self._identifying_params)
File "/usr/local/lib/python3.10/dist-packages/langchain/llms/openllm.py", line 220, in _identifying_params
self.llm_kwargs.update(self._client._config())
TypeError: 'dict' object is not callable
```
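Reading the last frame, `self._client._config` is evidently a plain dict in openllm-client 0.4.28, while langchain 0.0.340's wrapper still calls it as a method — which suggests a version mismatch between the two packages. A quick check (a sketch; the client class and URL are assumptions, not taken from the trace):
```python
from openllm_client import HTTPClient  # assumption: the client class the wrapper uses

client = HTTPClient("http://localhost:3000")  # assumption: your server_url
print(callable(client._config))  # False would confirm why _config() raises "'dict' object is not callable"
```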
### To reproduce
This is how I set up the environment:
- conda create --name openllm python=3.11
- conda activate openllm
- pip install openllm
- pip install langchain
### Expected behavior
There should be no errors; the call should hit the server for prompting. | bug: When running by example getting error: TypeError: 'dict' object is not callable | https://api.github.com/repos/langchain-ai/langchain/issues/13867/comments | 4 | 2023-11-26T08:25:23Z | 2024-04-15T16:07:35Z | https://github.com/langchain-ai/langchain/issues/13867 | 2,010,950,164 | 13,867 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Hi, can we get more documentation on `langchain_experimental.rl_chain`? I'm having trouble wrapping my head around how it works, and the documentation is sparse.
From the notebook intro, I originally thought it was going to tune the human-written prompt template and then output a new, improved prompt template that it found performed better. However, it seems to be doing something else.
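For anyone else landing here, my current reading (which better docs could confirm or correct): the chain does not rewrite the template; it learns which of several injected candidate values works best for a given context, roughly like this sketch in the notebook's style (`llm` and `prompt` come from your own setup):
```python
import langchain_experimental.rl_chain as rl_chain

chain = rl_chain.PickBest.from_llm(llm=llm, prompt=prompt)
response = chain.run(
    meal=rl_chain.ToSelectFrom(["pizza", "sushi", "tacos"]),  # candidates the learned policy picks among
    user=rl_chain.BasedOn("Tom"),  # context features the choice is conditioned on
)
```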
### Idea or request for content:
_No response_ | DOC: How langchain_experimental.rl_chain works | https://api.github.com/repos/langchain-ai/langchain/issues/13865/comments | 3 | 2023-11-26T06:33:13Z | 2024-03-13T19:55:36Z | https://github.com/langchain-ai/langchain/issues/13865 | 2,010,911,639 | 13,865 |
[
"langchain-ai",
"langchain"
] | ### System Info
RTX 3090
Here is a notebook for reference: https://colab.research.google.com/drive/1Rwdrji34CV4QJofVl9jAT7-EwodvphA4?usp=sharing
### Who can help?
@agola11 @ey
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
!wget -O /content/models/ggml-model-f16.gguf https://huggingface.co/mys/ggml_llava-v1.5-7b/resolve/main/ggml-model-f16.gguf
!wget -O /content/models/ggml-model-q5_k.gguf https://huggingface.co/mys/ggml_llava-v1.5-7b/resolve/main/ggml-model-q5_k.gguf
```
```
%%bash
# Define the directory containing the images
IMG_DIR=/content/LLAVA/
# Loop through each image in the directory
for img in "${IMG_DIR}"*.jpg; do
    # Extract the base name of the image without extension
    base_name=$(basename "$img" .jpg)

    # Define the output file name based on the image name
    output_file="${IMG_DIR}${base_name}.txt"

    # Execute the command and save the output to the defined output file
    /content/llama.cpp/bin/llava -m /content/models/ggml-model-q5_k.gguf --mmproj /content/models//mmproj-model-f16.gguf --temp 0.1 -p "Describe the image in detail. Be specific about graphs, such as bar plots." --image "$img" > "$output_file"
done
```
gives error:
```
bash: line 14: /content/llama.cpp/bin/llava: No such file or directory
bash: line 14: /content/llama.cpp/bin/llava: No such file or directory
bash: line 14: /content/llama.cpp/bin/llava: No such file or directory
bash: line 14: /content/llama.cpp/bin/llava: No such file or directory
bash: line 14: /content/llama.cpp/bin/llava: No such file or directory
bash: line 14: /content/llama.cpp/bin/llava: No such file or directory
bash: line 14: /content/llama.cpp/bin/llava: No such file or directory
bash: line 14: /content/llama.cpp/bin/llava: No such file or directory
bash: line 14: /content/llama.cpp/bin/llava: No such file or directory
bash: line 14: /content/llama.cpp/bin/llava: No such file or directory
---------------------------------------------------------------------------
CalledProcessError Traceback (most recent call last)
[<ipython-input-51-e049cdfbb7ce>](https://localhost:8080/#) in <cell line: 1>()
----> 1 get_ipython().run_cell_magic('bash', '', '\n# Define the directory containing the images\nIMG_DIR=/content/LLAVA/\n\n# Loop through each image in the directory\nfor img in "${IMG_DIR}"*.jpg; do\n # Extract the base name of the image without extension\n base_name=$(basename "$img" .jpg)\n\n # Define the output file name based on the image name\n output_file="${IMG_DIR}${base_name}.txt"\n\n # Execute the command and save the output to the defined output file\n /content/llama.cpp/bin/llava -m /content/models/ggml-model-q5_k.gguf --mmproj /content/models//mmproj-model-f16.gguf --temp 0.1 -p "Describe the image in detail. Be specific about graphs, such as bar plots." --image "$img" > "$output_file"\n\ndone\n')
4 frames
<decorator-gen-103> in shebang(self, line, cell)
[/usr/local/lib/python3.10/dist-packages/IPython/core/magics/script.py](https://localhost:8080/#) in shebang(self, line, cell)
243 sys.stderr.flush()
244 if args.raise_error and p.returncode!=0:
--> 245 raise CalledProcessError(p.returncode, cell, output=out, stderr=err)
246
247 def _run_script(self, p, cell, to_close):
CalledProcessError: Command 'b'\n# Define the directory containing the images\nIMG_DIR=/content/LLAVA/\n\n# Loop through each image in the directory\nfor img in "${IMG_DIR}"*.jpg; do\n # Extract the base name of the image without extension\n base_name=$(basename "$img" .jpg)\n\n # Define the output file name based on the image name\n output_file="${IMG_DIR}${base_name}.txt"\n\n # Execute the command and save the output to the defined output file\n /content/llama.cpp/bin/llava -m /content/models/ggml-model-q5_k.gguf --mmproj /content/models//mmproj-model-f16.gguf --temp 0.1 -p "Describe the image in detail. Be specific about graphs, such as bar plots." --image "$img" > "$output_file"\n\ndone\n'' returned non-zero exit status 127.
```
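What I suspect (unverified): exit status 127 means `/content/llama.cpp/bin/llava` simply doesn't exist — either the build step didn't produce it, or newer llama.cpp builds name it differently (I've seen `llava-cli`). Also, the wget step above fetches the model weights but never downloads the `mmproj-model-f16.gguf` file the command references. Sanity checks:
```bash
# Paths and names below are my assumptions about this Colab layout.
ls /content/llama.cpp/bin/ | grep -i llava   # is the binary there, and what is it called?
ls /content/models/                          # was mmproj-model-f16.gguf ever downloaded?
```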
### Expected behavior
It should run, but I do not understand how the bash step above relates to the LangChain usage:
```
from langchain.llms import LlamaCpp
llm = LlamaCpp(model_path="/path/to/llama/model")
``` | CalledProcessError: bash command for LLAVA in Multimodal giving error | https://api.github.com/repos/langchain-ai/langchain/issues/13863/comments | 3 | 2023-11-26T02:43:06Z | 2024-03-13T20:01:26Z | https://github.com/langchain-ai/langchain/issues/13863 | 2,010,867,450 | 13,863 |
[
"langchain-ai",
"langchain"
] | ### System Info
Hello,
I am trying to use a Baseten base LLM in a RAG pipeline.
```
from operator import itemgetter
from langchain.llms import Baseten
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnableMap, RunnablePassthrough
llm = Baseten(model="MODEL_ID", verbose=True)
rag_chain_from_docs = (
{
"context": lambda input: format_docs(input["documents"]),
"question": itemgetter("question"),
}
| rag_prompt_custom
| llm
| StrOutputParser()
)
rag_chain_with_source = RunnableMap(
{"documents": retriever, "question": RunnablePassthrough()}
) | {
"documents": lambda input: [doc.metadata for doc in input["documents"]],
"answer": rag_chain_from_docs,
}
rag_chain_with_source.invoke("What is Task Decomposition")
```
I am using a FAISS retriever and I am getting the following error on the `.invoke()` method:
```
File "/Users/usr/miniconda3/envs/langchain/lib/python3.11/site-packages/langchain/llms/baseten.py", line 69, in _call
response = model.predict({"prompt": prompt, **kwargs})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/usr/miniconda3/envs/langchain/lib/python3.11/site-packages/baseten/common/core.py", line 67, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/usr/miniconda3/envs/langchain/lib/python3.11/site-packages/baseten/baseten_deployed_model.py", line 124, in predict
raise TypeError('predict can be called with either a list, a pandas DataFrame, or a numpy array.')
TypeError: predict can be called with either a list, a pandas DataFrame, or a numpy array.
```
It seems the `model.predict()` method expects a list (or a DataFrame / numpy array), not a dict. Has anyone else encountered this error?
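For reference, the constraint can be checked directly against the baseten client (sketch; the model ID is a placeholder, and I haven't confirmed the exact handle function for every client version):
```python
import baseten

model = baseten.deployed_model_id("MODEL_ID")  # hypothetical ID
# Wrapping the payload in a list satisfies predict()'s accepted input types:
print(model.predict([{"prompt": "Hello"}]))
```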
Thank you in advance !
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from operator import itemgetter
from langchain.llms import Baseten
from langchain.schema.output_parser import StrOutputParser
from langchain.schema.runnable import RunnableMap, RunnablePassthrough
llm = Baseten(model="MODEL_ID", verbose=True)
rag_chain_from_docs = (
{
"context": lambda input: format_docs(input["documents"]),
"question": itemgetter("question"),
}
| rag_prompt_custom
| llm
| StrOutputParser()
)
rag_chain_with_source = RunnableMap(
{"documents": retriever, "question": RunnablePassthrough()}
) | {
"documents": lambda input: [doc.metadata for doc in input["documents"]],
"answer": rag_chain_from_docs,
}
rag_chain_with_source.invoke("What is Task Decomposition")
```
### Expected behavior
It seems the `model.predict()` method expects a list rather than a dict. Could you fix this issue? | TypeError using Baseten in a RAG | https://api.github.com/repos/langchain-ai/langchain/issues/13861/comments | 1 | 2023-11-25T23:39:12Z | 2024-03-13T20:02:45Z | https://github.com/langchain-ai/langchain/issues/13861 | 2,010,829,835 | 13,861
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
# Load Tools
tools = load_tools(["serpapi","langchain_experimental_python_repl"], llm=llm)
```
Error:
```
Exception has occurred: ImportError
This tool has been moved to langchain experiment. This tool has access to a python REPL. For best practices make sure to sandbox this tool. Read https://github.com/langchain-ai/langchain/blob/master/SECURITY.md To keep using this code as is, install langchain experimental and update relevant imports replacing 'langchain' with 'langchain_experimental'
File "/home/isayahc/projects/buy-bot/react_agent.py", line 49, in create_agent_executor
tools = load_tools(["serpapi","python_repl"], llm=llm)
File "/home/isayahc/projects/buy-bot/react_agent.py", line 88, in <module>
agent_executor = create_agent_executor()
ImportError: This tool has been moved to langchain experiment. This tool has access to a python REPL. For best practices make sure to sandbox this tool. Read https://github.com/langchain-ai/langchain/blob/master/SECURITY.md To keep using this code as is, install langchain experimental and update relevant imports replacing 'langchain' with 'langchain_experimental'
```
How do I fix this?
### Suggestion:
_No response_ | Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/13859/comments | 1 | 2023-11-25T23:18:35Z | 2023-11-26T01:05:46Z | https://github.com/langchain-ai/langchain/issues/13859 | 2,010,826,017 | 13,859 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```
Exception has occurred: ImportError
This tool has been moved to langchain experiment. This tool has access to a python REPL. For best practices make sure to sandbox this tool. Read https://github.com/langchain-ai/langchain/blob/master/SECURITY.md To keep using this code as is, install langchain experimental and update relevant imports replacing 'langchain' with 'langchain_experimental'
  File "/home/isayahc/projects/buy-bot/react_agent.py", line 43, in create_agent_executor
    tools = load_tools(["serpapi","python_repl"], llm=llm)
  File "/home/isayahc/projects/buy-bot/react_agent.py", line 81, in <module>
    agent_executor = create_agent_executor()
ImportError: This tool has been moved to langchain experiment. This tool has access to a python REPL. For best practices make sure to sandbox this tool. Read https://github.com/langchain-ai/langchain/blob/master/SECURITY.md To keep using this code as is, install langchain experimental and update relevant imports replacing 'langchain' with 'langchain_experimental'
```
I am trying to use the Python REPL tool and load it into my tools.
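For reference, this is the direction I plan to try, based on the error message's hint (sketch; assumes `pip install langchain_experimental`, and that the tool class is exported from there):
```python
from langchain_experimental.tools.python.tool import PythonREPLTool

# load_tools no longer accepts "python_repl"; append the experimental tool
# manually instead (sandbox it before running it on untrusted input).
tools = load_tools(["serpapi"], llm=llm)
tools.append(PythonREPLTool())
```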
### Suggestion:
_No response_ | Issue: Load tools from experimental langchain module | https://api.github.com/repos/langchain-ai/langchain/issues/13858/comments | 1 | 2023-11-25T22:57:56Z | 2024-03-13T19:57:32Z | https://github.com/langchain-ai/langchain/issues/13858 | 2,010,821,824 | 13,858 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```
Exception has occurred: ImportError
This tool has been moved to langchain experiment. This tool has access to a python REPL. For best practices make sure to sandbox this tool. Read https://github.com/langchain-ai/langchain/blob/master/SECURITY.md To keep using this code as is, install langchain experimental and update relevant imports replacing 'langchain' with 'langchain_experimental'
  File "/home/isayahc/projects/buy-bot/react_agent.py", line 43, in create_agent_executor
    tools = load_tools(["serpapi","python_repl"], llm=llm)
  File "/home/isayahc/projects/buy-bot/react_agent.py", line 81, in <module>
    agent_executor = create_agent_executor()
ImportError: This tool has been moved to langchain experiment. This tool has access to a python REPL. For best practices make sure to sandbox this tool. Read https://github.com/langchain-ai/langchain/blob/master/SECURITY.md To keep using this code as is, install langchain experimental and update relevant imports replacing 'langchain' with 'langchain_experimental'
```
### Suggestion:
_No response_ | Issue: what string works for experimental tool | https://api.github.com/repos/langchain-ai/langchain/issues/13856/comments | 3 | 2023-11-25T22:34:55Z | 2023-11-25T22:56:58Z | https://github.com/langchain-ai/langchain/issues/13856 | 2,010,817,039 | 13,856 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.9.18
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Hi, I tried this example, but it doesn't work:
```
from llamaapi import LlamaAPI
from langchain.chains import create_extraction_chain
llama = LlamaAPI("My Api KEy")
from langchain_experimental.llms import ChatLlamaAPI
model = ChatLlamaAPI(client=llama)
schema = {
"properties": {
"name": {"type": "string"},
"height": {"type": "integer"},
"hair_color": {"type": "string"},
},
"required": ["name", "height"],
}
inp = """
Alex is 5 feet tall. Claudia is 1 feet taller Alex and jumps higher than him. Claudia is a brunette and Alex is blonde.
"""
chain = create_extraction_chain(schema, model)
chain.run(inp)
```
File "C:\mini\envs\py39\lib\site-packages\langchain\output_parsers\openai_func
tions.py", line 136, in parse_result
return res.get(self.key_name) if partial else res[self.key_name]
TypeError: string indices must be integers
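My working theory (please correct me): `create_extraction_chain` builds an OpenAI function-calling request, so its output parser expects structured `function_call` arguments. A backend that answers with plain text makes `parse_result` receive a string, hence "string indices must be integers". As a control, the same schema works when I swap in an OpenAI function-calling model:
```python
# Control experiment (assumes an OPENAI_API_KEY is configured):
from langchain.chat_models import ChatOpenAI

model = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
chain = create_extraction_chain(schema, model)
print(chain.run(inp))
```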
### Expected behavior
Extraction from text | create_extraction_chain does not work with other LLMs?i try with llama_api | https://api.github.com/repos/langchain-ai/langchain/issues/13847/comments | 4 | 2023-11-25T13:01:51Z | 2024-03-17T16:06:32Z | https://github.com/langchain-ai/langchain/issues/13847 | 2,010,605,909 | 13,847 |
[
"langchain-ai",
"langchain"
] | ### System Info
I am writing code and I want to add history to my LangChain agent. The history is present in the `chats` list.
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [x] My own modified scripts
### Related Components
- [x] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [x] Memory
- [x] Agents / Agent Executors
- [x] Tools / Toolkits
- [x] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
async def chat_with_agent(user_input, formatting_data, chats: list):
    """
    Initiates a chat with the agent based on the user input.
    """
    try:
        # Initialize the chat model
        llm_model = "gpt-4-1106-preview"
        llm = ChatOpenAI(temperature=0.3, model=llm_model)
        # Load necessary tools
        tool = StructuredTool.from_function(get_human_input)
        tools = load_tools(["serpapi"], llm=llm)
        tools_list = [tool, exposure, get_user_profile, get_user_risk_profile, get_stock_technical_analysis,
                      get_stock_fundamental_analysis, get_mutual_fund_exposure, get_stock_based_news,
                      user_agent_chat_history]
        # Initialize the agent
        agent = initialize_agent(
            tools + tools_list,
            llm,
            agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
            handle_parsing_errors=True,
            verbose=True,
            max_execution_time=1800,
            max_iterations=300,
            agent_kwargs={
                'prefix': "Answer the questions as best you can. Use tools when needed. First task is check user "
                          "If the query requires and domain specific experts and mention it in response there are other"
                          "experts in system like stock expert, tax expert, mutual fund expert "
                          "First: you task is to answer only financial question only"
            },
            return_intermediate_steps=True
        )
        # Add additional prompt
        extra_prompt = ("You are Relationship Manager. All values are in Indian Rupees. Answers or tasks always lie "
                        "in the capacity of the tools. So ensure you are not expecting anything outside of it.")
        final_input = "This is user input " + user_input + " This is helping prompt " + extra_prompt
        try:
            logger.info(f"User input + extra prompt: {user_input + extra_prompt}")
            # Run the agent
            result = agent(final_input)
        except Exception as e:
            logger.exception(f"Error while running the agent: {e}")
            result = str(e)
        logger.info(f"Agent chat result: {result['output']}")
        response = personalised_response_from_ai(final_input, str(result['output']), str(result["intermediate_steps"]),
                                                 formatting_data)
        """report = report_writing_tool(user_input, str(result['output']), str(result["intermediate_steps"]))"""
        logger.info(f"Response from GPT: {response}")
        # return f" {response}, Report: {report}"
        if response:
            return response
        else:
            return str(result['output'])
    except Exception as e:
        logger.error(f"Error while talking with RM Agent: {str(e)}")
        raise HTTPException(status_code=500, detail=str(e))
```
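What I think is missing (sketch based on patterns I've seen for the structured chat agent; the shape of `chats` below is my assumption):
```python
from langchain.memory import ConversationBufferMemory
from langchain.prompts import MessagesPlaceholder

memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
for turn in chats:  # assuming chats is a list of {"user": ..., "ai": ...} dicts
    memory.chat_memory.add_user_message(turn["user"])
    memory.chat_memory.add_ai_message(turn["ai"])

agent = initialize_agent(
    tools + tools_list,
    llm,
    agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
    memory=memory,
    agent_kwargs={
        "memory_prompts": [MessagesPlaceholder(variable_name="chat_history")],
        "input_variables": ["input", "agent_scratchpad", "chat_history"],
    },
)
```
Is this the right way to wire `chats` in, or is there a cleaner approach?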
### Expected behavior
I want to add chat history to my agent. | Add memory to langchain AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION agent | https://api.github.com/repos/langchain-ai/langchain/issues/13845/comments | 2 | 2023-11-25T12:05:33Z | 2024-03-13T19:55:41Z | https://github.com/langchain-ai/langchain/issues/13845 | 2,010,590,203 | 13,845
[
"langchain-ai",
"langchain"
] | ### System Info
None
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
My chat model is ERNIE-Bot; running the following code reports an error:

But after I remove the SystemMessage it works fine. So I want to know: do all models support SystemMessage?

### Expected behavior
None | Does only openai support SystemMessage? | https://api.github.com/repos/langchain-ai/langchain/issues/13842/comments | 1 | 2023-11-25T08:57:09Z | 2024-03-13T20:00:32Z | https://github.com/langchain-ai/langchain/issues/13842 | 2,010,536,775 | 13,842 |
[
"langchain-ai",
"langchain"
] | ### System Info
I'm running it on Google Colab.
### Who can help?
I am trying the multi-modal RAG example. No matter what I do, I keep getting this error.
Please tell me if there is an alternative way, or how we can install it. @bas
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Just run it on Colab and you will not be able to get output from the partitions:
https://github.com/langchain-ai/langchain/blob/master/cookbook/Multi_modal_RAG.ipynb

### Expected behavior
It should work normally, without errors, on Colab. | Unable to get page count. Is poppler installed and in PATH | https://api.github.com/repos/langchain-ai/langchain/issues/13838/comments | 3 | 2023-11-25T06:31:04Z | 2024-07-03T17:15:16Z | https://github.com/langchain-ai/langchain/issues/13838 | 2,010,498,080 | 13,838
[
"langchain-ai",
"langchain"
] | Please help, I have been blocked on this for many days.
**I am trying to filter question answering over PDF docs based on an email filter: only if the email matches should the question be answered from that document; otherwise the result should be empty.
I tried the following code and it is not working — even for a completely different email ID it still answers from the document uploaded under my email, which is wrong. It seems the filtering is not applied.**
First I tried putting the email in the metadata; that did not work. Then I added it as an independent email field; it is not working there either.
I am instantiating the vector store as below, and it has an email field. I am able to create the index and successfully upload the PDF doc with all fields, including email, populated. I checked the created index: the email field is filterable and searchable.
- Creating the Azure instance as below:
```python
self.vector_store = AzureSearch(azure_search_endpoint=endpoint, azure_search_key=admin_key, index_name=index_name, embedding_function=embedding_function, fields=fields)
```
- After the doc is uploaded, as shown below, it has the email in it:
```
"@odata.context": "https://ops-documents.search.windows.net/indexes('index-new-5')/$metadata#docs(*)",
"value": [
{
"@search.score": 1,
"id": "NjQ1YWViNWQtNDJkNy00NTcxLTlkMTktMDIzZTc0NTZlNDhm",
"content": "ICAO TRIP 2023 – .",
"metadata": "{\"source\": \"7-pl\", \"page\": 7, \"file_id\": \"65612fd773a9aa51a0939c96\", \"upload_document_name\": \"SITA Lab Furhat Backoffice IMM officer Dialog - v1.pdf\", \"email\": \"savita.raghuvanshi@st.com\", \"company\": \"sita\"}",
"email": "savita.raghuvanshi@st.com"
```
- Using the following way to search:
```
my_retriever = self.vector_store.as_retriever(search_kwargs={'filter': { 'email': email }})
qa = RetrievalQA.from_chain_type(llm=self.llm,chain_type="stuff",retriever=my_retriever,chain_type_kwargs={"prompt": self.PROMPT},
return_source_documents=True,
)
results = qa({"query": question})
```
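One thing I now suspect (untested sketch — please confirm): LangChain's `AzureSearch` wrapper passes filters through as an OData `$filter` string under the key `filters`, whereas the dict-under-`filter` style above is the Pinecone/Chroma convention and may be silently ignored:
```python
# Hypothetical fix: OData filter string, key name "filters"
my_retriever = self.vector_store.as_retriever(
    search_kwargs={"filters": f"email eq '{email}'"}
)
```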
Installations:

**I am using OPENAI_API_VERSION="2023-10-01-preview" with a GPT-4 model.**

```
azure-common==1.1.28
azure-core==1.29.5
azure-identity==1.15.0
azure-search==1.0.0b2
azure-search-documents==11.4.0b8
langchain==0.0.326
```
Kindly let me know if anything is needed from me. Thanks so much for your help. | lang chain Azure vector search not working neither on its direct fields nor on its metadata fields | https://api.github.com/repos/langchain-ai/langchain/issues/13833/comments | 4 | 2023-11-25T00:45:36Z | 2024-05-08T22:59:31Z | https://github.com/langchain-ai/langchain/issues/13833 | 2,010,391,830 | 13,833
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi, I am trying to implement memory in a RAG agent and am following the documentation, but I get the following error:
```
ValueError: variable chat_history should be a list of base messages, got
```
It seems that I should be passing in a chat_history, but all the notebook examples I have seen only pass the question. I have seen some implementations use initialize_agent(), while others use AgentExecutor(). Any help on how to implement memory with the agent would be greatly appreciated.
This is my implementation:
```python
from langchain.vectorstores import Pinecone
from langchain.llms import Cohere
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain.prompts import PromptTemplate
import logging
from langchain.chains import LLMChain, ConversationChain
from langchain.chains.conversation.memory import ConversationBufferWindowMemory
from langchain.chains import RetrievalQA
from langchain.agents import Tool
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import CohereRerank
index = pinecone.Index(index_name)
vectorstore = Pinecone(index, embeddings, "text")
llm = Cohere(cohere_api_key=cohere_api_key)
retriever_from_llm = MultiQueryRetriever.from_llm(
retriever=vectorstore.as_retriever(search_kwargs={"k": 10,
'filter': {'user_id_str': '42',
'internal_mydocai_id_str': {"$in":["4", "5"]}}}), llm=llm
)
from langchain.prompts import PromptTemplate
QUERY_PROMPT = PromptTemplate(
input_variables=["question"],
template="""You are an AI language model assistant. Your task is to output the original query and four
different versions of the given user query to retrieve relevant documents from a vector
database. By generating multiple perspectives on the user question, your goal is to help
the user overcome some of the limitations of the distance-based similarity search, while staying in the scope of the original question.
Provide the original query and the alternative questions separated by newlines. Do not output anything else.
Original question: {question}""",
)
logging.basicConfig()
logging.getLogger("langchain.retrievers.multi_query").setLevel(logging.INFO)
compressor = CohereRerank(model= "rerank-multilingual-v2.0", cohere_api_key=cohere_api_key, client=co, user_agent="mydocument", top_n=5
)
compression_retriever = ContextualCompressionRetriever(
base_compressor=compressor, base_retriever=retriever_from_llm
)
compressed_docs = compression_retriever.get_relevant_documents(
    question
)
memory = ConversationBufferWindowMemory(k=2, memory_key="chat_history", input_key='input', output_key="output")
# retrieval qa chain
qa = RetrievalQA.from_chain_type(
llm=llm,
chain_type="stuff",
retriever=compression_retriever,
)
tools = [
Tool(
name='Knowledge Base',
func=qa.run,
description=(
'use this tool when answering general knowledge queries to get '
'more information about the topic'
)
)
]
agent = initialize_agent(
agent='chat-conversational-react-description',
tools=tools,
llm=llm,
verbose=True,
max_iterations=3,
early_stopping_method='generate',
memory=memory
)
agent(question)
```
Error:
```
ValueError: variable chat_history should be a list of base messages, got
```
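From other threads I suspect the fix is simply that the memory must return message objects rather than a formatted string (sketch; everything else unchanged):
```python
memory = ConversationBufferWindowMemory(
    k=2,
    memory_key="chat_history",
    input_key="input",
    output_key="output",
    return_messages=True,  # chat agents expect BaseMessage objects here
)
```
Can someone confirm this is the intended way for `chat-conversational-react-description`?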
### Suggestion:
_No response_ | Unable to implement memory in RAG agent, asking for chat_history | https://api.github.com/repos/langchain-ai/langchain/issues/13830/comments | 1 | 2023-11-25T00:31:32Z | 2024-03-13T20:02:47Z | https://github.com/langchain-ai/langchain/issues/13830 | 2,010,379,993 | 13,830 |
[
"langchain-ai",
"langchain"
] | I am using a local LLM with LangChain: openhermes-2.5-mistral-7b.Q8_0.gguf.
When using the database agent, this is how I am initializing things:
```python
db = SQLDatabase.from_uri(sql_uri)
model_path = "./openhermes-2.5-mistral-7b.Q8_0.gguf"
n_gpu_layers = 1 # Change this value based on your model and your GPU VRAM pool.
n_batch = 512 # Should be between 1 and n_ctx, consider the amount of VRAM in your GPU.
n_ctx=50000
callback_manager = CallbackManager([StreamingStdOutCallbackHandler()])
llm = LlamaCpp(
model_path=model_path,
#temperature=0,
n_gpu_layers=n_gpu_layers, n_batch=n_batch,
n_ctx=n_ctx,
callback_manager=callback_manager,
verbose=True,
)
#toolkit = CustomSQLDatabaseToolkit(db=db, llm=llm)
toolkit = SQLDatabaseToolkit(db=db, llm=llm)
toolkit.get_tools()
PREFIX = '''You are a SQL expert. You have access to a Microsoft SQL Server database.
Identify which tables can be used to answer the user's question and write and execute a SQL query accordingly.
'''
FORMAT_INSTRUCTIONS = """RESPONSE FORMAT INSTRUCTIONS
----------------------------
When responding please, please output a response in this format:
thought: Reason about what action to take next, and whether to use a tool.
action: The tool to use. Must be one of: {tool_names}
action_input: The input to the tool
For example:
thought: I need to get all tables from database
action: sql_db_list_tables
action_input: Empty string
"""
agent_executor = create_sql_agent(
llm=llm,
toolkit=toolkit,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
handle_parsing_errors=True,
agent_kwargs={
'prefix': PREFIX,
'format_instructions': FORMAT_INSTRUCTIONS,
}
)
now = datetime.datetime.now()
print("Starting executor : ")
print(now.strftime("%Y-%m-%d %H:%M:%S"))
agent_executor.run("Who is oldest user")
```
When entering the chain, I usually get the error "Could not parse LLM output": despite the instructions, the Action Input part is not produced by the LLM.
```
> Entering new AgentExecutor chain...
Action: sql_db_list_tables
Traceback (most recent call last):
  File "/Users/dino/Codings/python/LLM_test1/.venv/lib/python3.9/site-packages/langchain/agents/agent.py", line 1032, in _take_next_step
    output = self.agent.plan(
  File "/Users/dino/Codings/python/LLM_test1/.venv/lib/python3.9/site-packages/langchain/agents/agent.py", line 636, in plan
    return self.output_parser.parse(full_output)
  File "/Users/dino/Codings/python/LLM_test1/.venv/lib/python3.9/site-packages/langchain/agents/mrkl/output_parser.py", line 70, in parse
    raise OutputParserException(
langchain.schema.output_parser.OutputParserException: Could not parse LLM output: `Action: sql_db_list_tables`
```
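One thing I notice re-reading my own setup (take with a grain of salt): my custom FORMAT_INSTRUCTIONS ask the model for lowercase `action:` / `action_input:` lines, but as far as I can tell the ZERO_SHOT_REACT MRKL parser only recognizes the capitalized form, so even a model that follows my instructions perfectly would fail to parse. The parser wants output shaped like:
```
Thought: I need to get all tables from the database
Action: sql_db_list_tables
Action Input: ""
```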
Any idea how to fix this? | Langchain Database Agent with local LLM | https://api.github.com/repos/langchain-ai/langchain/issues/13826/comments | 5 | 2023-11-24T21:49:25Z | 2024-03-04T11:57:39Z | https://github.com/langchain-ai/langchain/issues/13826 | 2,010,243,906 | 13,826
[
"langchain-ai",
"langchain"
] | ### System Info
Hello! I got this error while trying to run code from [docs](https://python.langchain.com/docs/integrations/tools/dalle_image_generator).
I have Python 3.11.3, openai 1.3.5, and langchain 0.0.340.
```
You tried to access openai.Image, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.
You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface.
Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`
A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742
```
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Write code from [docs](https://python.langchain.com/docs/integrations/tools/dalle_image_generator) and run it using python3.
```python
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate
from langchain.utilities.dalle_image_generator import DallEAPIWrapper
llm = OpenAI(temperature=0.9)
prompt = PromptTemplate(
input_variables=["image_desc"],
template="Generate a detailed prompt to generate an image based on the following description: {image_desc}",
)
chain = LLMChain(llm=llm, prompt=prompt)
image_url = DallEAPIWrapper().run(chain.run("halloween night at a haunted museum"))
print(image_url)
```
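For now, the only workaround I've found is the one the error message itself suggests — pinning the old client (at the cost of staying off openai 1.x):
```bash
pip install "openai==0.28"
```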
### Expected behavior
Get an image URL. | getting an error with DallEAPIWrapper | https://api.github.com/repos/langchain-ai/langchain/issues/13825/comments | 3 | 2023-11-24T20:46:33Z | 2024-03-13T20:03:04Z | https://github.com/langchain-ai/langchain/issues/13825 | 2,010,209,501 | 13,825
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
My ChatOpenAI usually takes a response time of 1000 ms. I want the model to switch to either GooglePalm or some other language model when the response time of ChatOpenAI is large. Is it possible?
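Something like this is what I have in mind (sketch — I haven't verified that a timeout on the primary model triggers the fallback the way I want):
```python
from langchain.chat_models import ChatOpenAI
from langchain.llms import GooglePalm

primary = ChatOpenAI(request_timeout=2)  # hypothetical latency budget, in seconds
fallback = GooglePalm()
model = primary.with_fallbacks([fallback])  # falls back when the primary errors out
```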
### Suggestion:
_No response_ | Is it possible to switch language models if the ms in the first model is large? | https://api.github.com/repos/langchain-ai/langchain/issues/13821/comments | 2 | 2023-11-24T17:25:34Z | 2024-03-13T19:56:31Z | https://github.com/langchain-ai/langchain/issues/13821 | 2,010,029,887 | 13,821 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have the following piece of code:
```python
openai_llm = ChatOpenAI(model_name='gpt-3.5-turbo-1106', streaming=True, callbacks=[StreamingStdOutCallbackHandler()],
                        temperature=0.5, max_retries=0)
```
I still keep getting this error:
```
urllib3.util.retry:Converted retries value: 2 -> Retry(total=2, connect=None, read=None, redirect=None, status=None)
```
I have tried everything; nothing is working at this point. I don't want the retries to happen. I have fallback models, but they aren't being utilized.
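One detail I'm unsure about (correct me if wrong): this line looks like a DEBUG message from urllib3's HTTP connection pool, not from the OpenAI client's `max_retries`, so it may be harmless noise rather than the actual failure. If so, it can at least be silenced:
```python
import logging

logging.getLogger("urllib3").setLevel(logging.WARNING)  # hide urllib3 DEBUG noise
```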
### Suggestion:
_No response_ | URL Lib error doesn't get resolved. | https://api.github.com/repos/langchain-ai/langchain/issues/13816/comments | 1 | 2023-11-24T14:18:22Z | 2024-03-13T20:00:26Z | https://github.com/langchain-ai/langchain/issues/13816 | 2,009,801,860 | 13,816 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I've written this code to make sure that the retry count is 0. Even though I set `max_retries=0` on my LLM instance, it doesn't work, so I've decided to create a custom WebBaseLoader. But I'm stuck on how to mount it and make sure that it works.
Would appreciate any help!
```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

from langchain.document_loaders import WebBaseLoader


class CustomWebBaseLoader(WebBaseLoader):
def __init__(
self,
# ...
) -> None:
# ...
if session:
self.session = session
else:
session = requests.Session()
# ...
# Set the retry configuration for the session
retries = Retry(total=0, backoff_factor=1, status_forcelist=[500, 502, 503, 504])
session.mount('http://', HTTPAdapter(max_retries=retries))
session.mount('https://', HTTPAdapter(max_retries=retries))
# ...
self.session = session
# ...
```
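And this is how I intend to use it (assuming the constructor otherwise matches `WebBaseLoader`'s signature; the URL is a placeholder):
```python
loader = CustomWebBaseLoader("https://example.com")  # hypothetical URL
docs = loader.load()
```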
### Suggestion:
_No response_ | How to mount the custom Retry class with 0 retries to improve Fallbacks? | https://api.github.com/repos/langchain-ai/langchain/issues/13814/comments | 1 | 2023-11-24T13:58:05Z | 2023-11-24T15:15:47Z | https://github.com/langchain-ai/langchain/issues/13814 | 2,009,773,400 | 13,814 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi, I was recently trying to implement fallbacks in my system, but even though there are other models that can be hit on failure, I've noticed that this isn't happening. I tried to debug it, and this is the error I see. I keep getting it even though the max retries value is 0:
```python
openai_llm = ChatOpenAI(model_name='gpt-3.5-turbo-1106', streaming=True, callbacks=[StreamingStdOutCallbackHandler()],
                        temperature=0.5, max_retries=0)
```
Error:
```
DEBUG:urllib3.util.retry:Converted retries value: 2 -> Retry(total=2, connect=None, read=None, redirect=None, status=None)
```
The model takes forever to respond. Is there a fix for this?
### Suggestion:
_No response_ | Urllib3 retry error fix with fallbacks. | https://api.github.com/repos/langchain-ai/langchain/issues/13811/comments | 3 | 2023-11-24T11:20:31Z | 2023-11-24T13:57:24Z | https://github.com/langchain-ai/langchain/issues/13811 | 2,009,558,007 | 13,811 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi, I was recently trying to implement fallbacks in my system, but even though there are other models that can be hit on failure, I've noticed that this isn't happening. I tried to debug it, and this is the error I see. I keep getting it even though the max retries value is 0:
```python
openai_llm = ChatOpenAI(model_name='gpt-3.5-turbo-1106', streaming=True, callbacks=[StreamingStdOutCallbackHandler()],
                        temperature=0.5, max_retries=0)
```
Error:
```
DEBUG:urllib3.util.retry:Converted retries value: 2 -> Retry(total=2, connect=None, read=None, redirect=None, status=None)
```
The model takes forever to respond. Is there a fix for this?
### Suggestion:
_No response_ | Urllib Retry error help. | https://api.github.com/repos/langchain-ai/langchain/issues/13809/comments | 1 | 2023-11-24T10:21:56Z | 2023-11-24T11:19:25Z | https://github.com/langchain-ai/langchain/issues/13809 | 2,009,473,024 | 13,809 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi, sorry if I missed it, but I couldn't find an answer to this question in the documentation, here in the issues, or through Google: how can one compute a perplexity score (per generated token and/or a mean perplexity for the whole output) during inference?
In our use case, we use LLMs only for inference, but we have to be able to give some kind of confidence score along with the models' answers. We use various integration backends in our stack, HF transformers, vLLM, llama.cpp to name a few.
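For reference, the definition I'm working from — mean perplexity is just the exponentiated negative average of the token log-probabilities — so the missing piece is getting per-token logprobs out of each backend:
```python
import math

def perplexity(token_logprobs: list[float]) -> float:
    """exp of the negative mean log-probability of the generated tokens."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))
```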
Any help would be greatly appreciated. Thanks!
### Suggestion:
_No response_ | Issue: how to compute perplexity score during inference? | https://api.github.com/repos/langchain-ai/langchain/issues/13808/comments | 3 | 2023-11-24T09:56:16Z | 2024-03-17T16:06:26Z | https://github.com/langchain-ai/langchain/issues/13808 | 2,009,434,445 | 13,808 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
_No response_
### Suggestion:
_No response_ | It’s so damn hard to use. | https://api.github.com/repos/langchain-ai/langchain/issues/13807/comments | 3 | 2023-11-24T09:19:01Z | 2024-03-13T19:55:50Z | https://github.com/langchain-ai/langchain/issues/13807 | 2,009,376,766 | 13,807 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello
I have to configure LangChain with PDF data, and the PDFs contain a lot of unstructured tables.
We have both plain text and tables, so how do you recommend handling them?
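One approach I'm considering (sketch; needs `unstructured[all-docs]`, and the filename is a placeholder) is to partition the PDF so tables come out as their own elements with structure preserved, then treat text and tables separately downstream:
```python
from unstructured.partition.pdf import partition_pdf

elements = partition_pdf(
    filename="report.pdf",        # hypothetical file
    infer_table_structure=True,   # keeps table HTML in element metadata
    strategy="hi_res",
)
tables = [el for el in elements if el.category == "Table"]
texts = [el for el in elements if el.category != "Table"]
```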
### Suggestion:
_No response_ | how to handle a PDF(including tables..) | https://api.github.com/repos/langchain-ai/langchain/issues/13805/comments | 7 | 2023-11-24T08:54:28Z | 2024-04-18T16:34:48Z | https://github.com/langchain-ai/langchain/issues/13805 | 2,009,341,776 | 13,805 |
[
"langchain-ai",
"langchain"
] | CypherQueryCorrector does not handle some query types.
Given a query like this (with a comma between pattern parts in the MATCH clause):
```
MATCH (a:APPLE {apple_id: 123})-[:IN]->(b:BUCKET), (ba:BANANA {name: banana1})
```
In the corresponding code section:
- It extracts a relation between BUCKET and BANANA; however, there is none.
- The ELSE case should be split into cases (INCOMING relation, OUTGOING relation, BIDIRECTIONAL relation, NO relation, etc.).
- If there is no relation and only a comma between pattern parts, it should not attempt validation.
https://github.com/langchain-ai/langchain/blob/751226e067bc54a70910763c0eebb34544aaf47c/libs/langchain/langchain/chains/graph_qa/cypher_utils.py#L228 | CypherQueryCorrector cannot validate a correct cypher, some query types are not handled | https://api.github.com/repos/langchain-ai/langchain/issues/13803/comments | 4 | 2023-11-24T08:39:00Z | 2023-11-27T03:30:12Z | https://github.com/langchain-ai/langchain/issues/13803 | 2,009,321,641 | 13,803 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Entering new SQLDatabaseChain chain...
what is Gesh D desigation
SQLQuery:SELECT [Designation]
FROM [EGV_emp_departments_ChatGPT]
WHERE [EmployeeName] = 'Gesh D'
SQLResult:
Answer:Final answer here
> Finished chain.
Final answer here
below is my code
# import os
# import re
# from langchain.llms import OpenAI
# from langchain_experimental.sql import SQLDatabaseChain
# from langchain.sql_database import SQLDatabase
# # from secret_key import openapi_key
# openapi_key = "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"
# os.environ['OPENAI_API_KEY'] = openapi_key
# def chat(question):
# # llm = OpenAI(temperature=0)
# # tools = load_tools(["llm-math"], llm=llm)
# # agent = initialize_agent(tools, llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION)
# driver = 'ODBC Driver 17 for SQL Server'
# from urllib.parse import quote_plus
# driver = 'ODBC Driver 17 for SQL Server'
# host = '####'
# user = '###'
# database = '#####'
# password = '#####'
# encoded_password = quote_plus(password)
# db = SQLDatabase.from_uri(f"mssql+pyodbc://{user}:{encoded_password}@{host}/{database}?driver={quote_plus(driver)}", include_tables = ['HRMSGPTAutomation'], sample_rows_in_table_info=2)
# llm = OpenAI(temperature=0, verbose=True)
# token_limit = 16_000
# model_name="gpt-3.5-turbo-16k"
# db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
# # agent_executor = create_sql_agent(
# # llm=llm,
# # toolkit=toolkit,
# # verbose=True,
# # reduce_k_below_max_tokens=True,
# # )
# # mrkl = initialize_agent(
# # tools,
# # ChatOpenAI(temperature=0),
# # agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION,
# # verbose=True,
# # handle_parsing_errors=True,
# # )
# return db_chain.run(question)
# # print(chat("what is Vijayalakshmi B department"))
import sqlalchemy as sal
import os, sys, openai
import constants
from langchain.sql_database import SQLDatabase
from langchain.llms.openai import OpenAI
from langchain_experimental.sql import SQLDatabaseChain
from sqlalchemy import create_engine
from langchain.chat_models import ChatOpenAI
from typing import List, Optional
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.callbacks.manager import CallbackManagerForToolRun
from langchain.chat_models import ChatOpenAI
from langchain_experimental.plan_and_execute import (
PlanAndExecute,
load_agent_executor,
load_chat_planner,
)
from langchain.sql_database import SQLDatabase
from langchain.text_splitter import TokenTextSplitter
from langchain.tools import BaseTool
from langchain.tools.sql_database.tool import QuerySQLDataBaseTool
from secret_key import openapi_key
os.environ['OPENAI_API_KEY'] = openapi_key
def chat(question):
from urllib.parse import quote_plus
server_name = constants.server_name
database_name = constants.database_name
username = constants.username
password = constants.password
encoded_password = quote_plus(password)
connection_uri = f"mssql+pyodbc://{username}:{encoded_password}@{server_name}/{database_name}?driver=ODBC+Driver+17+for+SQL+Server"
engine = create_engine(connection_uri)
model_name="gpt-3.5-turbo-16k"
db = SQLDatabase(engine, view_support=True, include_tables=['EGV_emp_departments_ChatGPT'])
llm = ChatOpenAI(temperature=0, verbose=False, model=model_name)
db_chain = SQLDatabaseChain.from_llm(llm, db, verbose=True)
PROMPT = """
Given an input question, first create a syntactically correct mssql query to run,
then look at the results of the query and return the answer.
The question: {db_chain.run}
"""
return db_chain.run(question)
answer=chat("what is Gesh D desigation")
print(answer)
### Suggestion:
_No response_ | How to modify the code, If the SQLResult is empty, the Answer should be "No results found". DO NOT hallucinate an answer if there is no result. | https://api.github.com/repos/langchain-ai/langchain/issues/13802/comments | 9 | 2023-11-24T08:18:06Z | 2024-04-22T16:30:35Z | https://github.com/langchain-ai/langchain/issues/13802 | 2,009,296,023 | 13,802 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Encountered IndexError in the code due to an attempt to access a list index that is out of range. This issue occurs when there are no documents available, resulting in the deletion of all embeddings. To handle this case gracefully, consider implementing a mechanism to display a default value or a meaningful message when there are no documents present. The specific line causing the error is:
```python
source = relevant_document[0].metadata['source']
### Suggestion:
_No response_ | Issue:Handle IndexError for Empty Document Lists - Display Default Value or Message | https://api.github.com/repos/langchain-ai/langchain/issues/13799/comments | 3 | 2023-11-24T06:41:19Z | 2024-03-13T19:56:37Z | https://github.com/langchain-ai/langchain/issues/13799 | 2,009,189,632 | 13,799 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Currently when initilizing the `MlflowCallbackHandler`, we can pass in name, experiment, tracking_uri. But couldn't pass in the `nested` param.
### Motivation
It would be nice if we can do that, so that when there are multiple chains running, we can nest those runs under one parent run, making it easier to group them for later on monitor/debug.
### Your contribution
I tried to edit the mlflow_callback.py and add the nested option, but it doesn't seem to honor the value.
Please let me know if you know how to make this work and I'm happy to put up a PR. Thanks! | Feat: MLFlow callback allow passing `nested` param | https://api.github.com/repos/langchain-ai/langchain/issues/13795/comments | 1 | 2023-11-24T05:05:03Z | 2024-03-13T20:04:39Z | https://github.com/langchain-ai/langchain/issues/13795 | 2,009,104,256 | 13,795 |
[
"langchain-ai",
"langchain"
] | ### System Info
master branch
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
https://github.com/langchain-ai/langchain/blob/751226e067bc54a70910763c0eebb34544aaf47c/libs/core/langchain_core/prompts/chat.py#L653-L659
ChatPromptTemplate override save method as NotImplementedError, while the base class `BasePromptTemplate` has a default implementation of `save` method https://github.com/langchain-ai/langchain/blob/751226e067bc54a70910763c0eebb34544aaf47c/libs/core/langchain_core/prompts/base.py#L157-L192. It should just depend on the implementation of `_prompt_type` property, where ChatPromptTemplate already implemented here https://github.com/langchain-ai/langchain/blob/751226e067bc54a70910763c0eebb34544aaf47c/libs/core/langchain_core/prompts/chat.py#L648-L651.
If removing the override save function, ChatPromptTemplate could be saved correctly.
### Expected behavior
We should be able to save ChatPromptTemplate object into a file. | ChatPromptTemplate save method not implemented | https://api.github.com/repos/langchain-ai/langchain/issues/13794/comments | 1 | 2023-11-24T04:52:26Z | 2024-03-13T20:04:59Z | https://github.com/langchain-ai/langchain/issues/13794 | 2,009,093,017 | 13,794 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
it seems my discord account has been hacked, so it auto send advertisement like below, and i thought it was official ads just like telegram and the ads only show to me, so I don't pay attention to it.
<img width="503" alt="image" src="https://github.com/langchain-ai/langchain/assets/1664952/8a048285-bd18-408e-9a31-0d2c57e1ef17">
Now i can't join discord server as expect.
I have enable 2FA and change my password, so could your please assist me in removing myself from the Discord blacklist. my account id is "h3l1221", thanks a low .
### Suggestion:
_No response_ | Issue: please assist me in removing myself from the Discord blacklist. | https://api.github.com/repos/langchain-ai/langchain/issues/13793/comments | 1 | 2023-11-24T03:13:29Z | 2024-03-16T16:06:51Z | https://github.com/langchain-ai/langchain/issues/13793 | 2,009,030,550 | 13,793 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
can i define an agent using a llm chain or a conversationcain as a too?
### Suggestion:
_No response_ | Issue: can i define an agent using a llm chain or a conversationcain as a too? | https://api.github.com/repos/langchain-ai/langchain/issues/13792/comments | 9 | 2023-11-24T02:15:35Z | 2024-03-13T20:03:36Z | https://github.com/langchain-ai/langchain/issues/13792 | 2,008,978,562 | 13,792 |
[
"langchain-ai",
"langchain"
] | ### System Info
$ python3 --version
Python 3.11.6
$ pip show openai | grep Version
Version: 1.3.5
$ pip show langchain | grep Version
Version: 0.0.340
### Who can help?
Who wants to use AzureOpenai deployments with langchain, enabling last openai package versione 1.x.x.
### Information
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
### Reproduction
Hi @hwchase17, @agola11, all!
Thanks in advance for your huge work.
I so far successfully used langchain with the openai module 0.28.x.
Today I upgraded the openai package to the latest version (1.x.x), and I installed also the latest langchain package version.
I configured the environment variables (as described above) and I run the following simple program, as described in langchain documentation: https://python.langchain.com/docs/integrations/chat/azure_chat_openai
```
$ source path/.langchain_azure.env
$ cat path/.langchain_azure.env
# https://python.langchain.com/docs/integrations/llms/azure_openai
export OPENAI_API_TYPE=azure
export OPENAI_API_VERSION=2023-09-01-preview
export AZURE_OPENAI_ENDPOINT="https://xxxxxxxxxxx.openai.azure.com/"
export AZURE_OPENAI_API_KEY="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
```
Program that generate a run-time exception:
```python
# langchain_simple.py
# this example follows the langchain documentation example at: https://python.langchain.com/docs/integrations/chat/azure_chat_openai
import os
from langchain.chat_models import AzureChatOpenAI
from langchain.schema import HumanMessage
model = AzureChatOpenAI(
openai_api_version="2023-05-15",
azure_deployment="gpt-35-turbo" # my existing deployment name
)
message = HumanMessage(content="Translate this sentence from English to French. I love programming.")
model([message])
```
I got the following exception:
```
$ py langchain_simple.py
/home/giorgio/.local/lib/python3.11/site-packages/langchain/chat_models/azure_openai.py:162: UserWarning: As of openai>=1.0.0, if `deployment_name` (or alias `azure_deployment`) is specified then `openai_api_base` (or alias `base_url`) should not be. Instead use `deployment_name` (or alias `azure_deployment`) and `azure_endpoint`.
warnings.warn(
/home/giorgio/.local/lib/python3.11/site-packages/langchain/chat_models/azure_openai.py:170: UserWarning: As of openai>=1.0.0, if `openai_api_base` (or alias `base_url`) is specified it is expected to be of the form https://example-resource.azure.openai.com/openai/deployments/example-deployment. Updating https://openai-convai.openai.azure.com/ to https://openai-convai.openai.azure.com/.
warnings.warn(
Traceback (most recent call last):
File "/home/giorgio/gpt/langchain/langchain_simple.py", line 9, in <module>
model = AzureChatOpenAI(
^^^^^^^^^^^^^^^^
File "/home/giorgio/.local/lib/python3.11/site-packages/langchain/load/serializable.py", line 97, in __init__
super().__init__(**kwargs)
File "/home/giorgio/.local/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for AzureChatOpenAI
__root__
base_url and azure_endpoint are mutually exclusive (type=value_error)
```
BTW, I do not understand exactly the error sentence:
```base_url and azure_endpoint are mutually exclusive (type=value_error)```
Where am I wrong?
Thanks
Giorgio
### Expected behavior
I didn't expect to have run-time errors | Azure OpenAI (with openai module 1.x.x) seems not working anymore | https://api.github.com/repos/langchain-ai/langchain/issues/13785/comments | 11 | 2023-11-23T18:06:59Z | 2024-07-01T16:04:09Z | https://github.com/langchain-ai/langchain/issues/13785 | 2,008,648,479 | 13,785 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
How can I add the new parameter **JSON MODE** ( https://platform.openai.com/docs/guides/text-generation/json-mode )
to this snippet of code?
```python
from langchain.chat_models import ChatOpenAI
llm = ChatOpenAI(
model_name="gpt-3.5-turbo-1106",
temperature=1,
max_tokens=None
)
```
I see in openai it should be used in this way:
```python
response = client.chat.completions.create(
model="gpt-3.5-turbo-1106",
response_format={ "type": "json_object" },
messages=[
{"role": "system", "content": "You are a helpful assistant designed to output JSON."},
{"role": "user", "content": "Who won the world series in 2020?"}
]
)
```
Thanks!!
### Suggestion:
_No response_ | How to add json_object to ChatOpenAI class? | https://api.github.com/repos/langchain-ai/langchain/issues/13783/comments | 6 | 2023-11-23T17:26:32Z | 2024-05-16T16:07:54Z | https://github.com/langchain-ai/langchain/issues/13783 | 2,008,609,445 | 13,783 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Support auth token headers in langchain, when underlying model supports it.
### Motivation
I'd like to use ChatAnthropic, but I'm using a custom proxy backend, where I need to provide Authorization header.
This is simple using AnthropicClient directly as it accepts auth_token parameter.
Problem is, langchain ChatAnthropic abstraction, doesn't accept this param.
It would be great to have an option to pass auth_token to model, when model supports it.
### Your contribution
I don't feel comfortable in this codebase to create a PR | Support auth header, when underlying client support it. | https://api.github.com/repos/langchain-ai/langchain/issues/13782/comments | 1 | 2023-11-23T16:46:27Z | 2024-03-13T20:02:24Z | https://github.com/langchain-ai/langchain/issues/13782 | 2,008,561,158 | 13,782 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
InMemoryCache works, but it doesn't print to stderr like all other LLM responses. I checked logs and response is stored. I don't know how to access to this stored values...
### Suggestion:
option to view object or option to return values. | Issue: how to access to cached question and answer in InMemoryCache | https://api.github.com/repos/langchain-ai/langchain/issues/13778/comments | 8 | 2023-11-23T13:14:19Z | 2024-02-12T14:22:48Z | https://github.com/langchain-ai/langchain/issues/13778 | 2,008,213,482 | 13,778 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
if "our goal is to have the simplest developer setup possible", we shouldn't encourage mixing poetry with the use of conda and pip in [CONTRIBUTING.md](https://github.com/langchain-ai/langchain/blob/5ae51a8a85d1a37ea98afeaf639a72ac74a50523/.github/CONTRIBUTING.md). Mixing different tools for the same job is error prone and leads to confusion for new contributors.
Specifically, the following lines should be reconsidered:
❗Note: Before installing Poetry, if you use Conda, create and activate a new Conda env (e.g. conda create -n langchain python=3.9) ([link](https://github.com/langchain-ai/langchain/blob/5ae51a8a85d1a37ea98afeaf639a72ac74a50523/.github/CONTRIBUTING.md#:~:text=%E2%9D%97Note%3A%20Before%20installing%20Poetry%2C%20if%20you%20use%20Conda%2C%20create%20and%20activate%20a%20new%20Conda%20env%20(e.g.%20conda%20create%20%2Dn%20langchain%20python%3D3.9)))
❗Note: If you use Conda or Pyenv as your environment/package manager, after installing Poetry, tell Poetry to use the virtualenv python environment (poetry config virtualenvs.prefer-active-python true) ([link](https://github.com/langchain-ai/langchain/blob/5ae51a8a85d1a37ea98afeaf639a72ac74a50523/.github/CONTRIBUTING.md#:~:text=%E2%9D%97Note%3A%20If%20you%20use%20Conda%20or%20Pyenv%20as%20your%20environment/package%20manager%2C%20after%20installing%20Poetry%2C%20tell%20Poetry%20to%20use%20the%20virtualenv%20python%20environment%20(poetry%20config%20virtualenvs.prefer%2Dactive%2Dpython%20true)))
If the tests don't pass, you may need to pip install additional dependencies, such as numexpr and openapi_schema_pydantic. ([link](https://github.com/langchain-ai/langchain/blob/5ae51a8a85d1a37ea98afeaf639a72ac74a50523/.github/CONTRIBUTING.md#:~:text=If%20the%20tests%20don%27t%20pass%2C%20you%20may%20need%20to%20pip%20install%20additional%20dependencies%2C%20such%20as%20numexpr%20and%20openapi_schema_pydantic.))
### Idea or request for content:
_No response_ | DOC: Simplify CONTRIBUTING.md by removing conda and pip references | https://api.github.com/repos/langchain-ai/langchain/issues/13776/comments | 1 | 2023-11-23T11:09:26Z | 2024-03-13T19:56:04Z | https://github.com/langchain-ai/langchain/issues/13776 | 2,008,002,786 | 13,776 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
(import pandas as pd
import json
from IPython.display import Markdown, display
from langchain.agents import create_csv_agent
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
import os
os.environ["OPENAI_API_KEY"] = ""
# Load the dataset
df = pd.read_csv('Loan Collections - Sheet1.csv')
def convert_columns_to_float_by_keywords(df):
# Convert date and time columns to float
for column in df.select_dtypes(include=['object']).columns:
# Check if the column name contains 'date' or 'time'
if 'date' in column.lower() or 'time' in column.lower() or 'dob' in column.lower() or 'date of birth' in column.lower():
try:
# Convert the column to datetime
df[column] = pd.to_datetime(df[column], errors='coerce')
# Convert datetime to numerical representation (e.g., days since a reference date)
reference_date = pd.to_datetime('1900-01-01')
df[column] = (df[column] - reference_date).dt.total_seconds() / (24 * 60 * 60)
except ValueError:
# Handle errors during conversion
print(f"Error converting column '{column}' to float.")
# Convert columns with specific keywords to float
keywords_to_convert = ["unique id", "reference id", "account id"]
for column in df.columns:
# Check if the column name contains any of the specified keywords
if any(keyword in column.lower() for keyword in keywords_to_convert):
try:
# Convert the column to float
df[column] = pd.to_numeric(df[column], errors='coerce')
except ValueError:
# Handle errors during conversion
print(f"Error converting column '{column}' to float.")
# Convert 'date' and 'time' columns to float
convert_columns_to_float_by_keywords(df)
# Extract unique values for each column
unique_values_per_column = {}
for column in df.select_dtypes(include=['object']).columns:
unique_values_per_column[column] = df[column].unique().tolist()
# Convert the dictionary to JSON
json_data_train = json.dumps(unique_values_per_column, indent=4)
testData_fname = "Mutual Funds 2023 - Mutual Funds Data Final File (1).csv"
# Load the dataset
df2 = pd.read_csv(testData_fname)
convert_columns_to_float_by_keywords(df2)
# Extract unique values for each column
unique_values_per_column = {}
for column in df2.select_dtypes(include=['object']).columns:
unique_values_per_column[column] = df2[column].unique().tolist()
# Convert the dictionary to JSON
json_data_test = json.dumps(unique_values_per_column, indent=4)
# Define user's question
user_question = "monthly growth analysis of Broker Commission ?"
# Define the prompt template
prompt_template = f'''If the dataset has the following columns: {json_data_train}'''+''' Understand user questions with different column names and convert them to a JSON format.
The question might not mention the column name at all; it will probably mention a value of the column, so the system has to figure out the column name based on that value.
Example1:
User Question1: top zone in the year 2019 with Loan Amt between 10k and 20k and tenure > 12 excluding Texas region?
{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": [],
"trend": "null",
"to_start_date": "null",
"to_end_date": "null",
"growth": "null",
"variables_grpby": ["Zone"],
"filters": {},
"not_in": {"Region": ["Texas"]},
"num_filter": {
"gt": [
["Loan Tenure", 12],
["Loan Amount", 10000]
],
"lt": [
["Loan Amount", 20000]
]
},
"percent": "false",
"top": "1",
"bottom": "null"
}
Note the following in the above example
- The word "top" in the User Question made the "top" key have the value as "1". If "highest" is mentioned in the User Question, even then "top" would have the value as "1". If "top" is not mentioned or not implied in the User Question, then it takes on the value "null". Similarly for "bottom" key in the System Response.
- The word "zone" in the User Question refers to a column "Zone" in the dataset and since it is a non-numeric column and we have to group by that column, the system response has it as one of the values of the list of the key "variables_grpby"
- The key "start_date" and "end_date" Since it is mentioned 2019 in the User Question as the timeframe, the "start_date" assumes the beginning of the year 2019 and "end_date" assumes the end of the year 2019. If no date related words are mentioned in the question, "start_date" would be "null" and "end_date" would be "null".
- The key "time_stamp_col" in the System Response should mention the relevant time related column name from the dataset according to the question if the question mentions a time related word.
- The key "agg_columns" in the System Response is a list of columns to be aggregated which should mention the numeric column names on which the question wants us to aggregate on.
- The key "trend" in the System Response, "trend" is set to "null" since the user question doesn't imply any trend analysis . If the question were about trends over time, this key would contain information about the trend, such as "upward," "downward," or "null" if no trend is specified.
- The key "filters" An empty dictionary in this case, as there are no explicit filters mentioned in the user question. If the user asked to filter data based on certain conditions (e.g. excluding a specific region), this key would contain the relevant filters.
- The key "to_start_date" and "to_end_date" Both set to "null" in this example because the user question specifies a single timeframe (2019). If the question mentioned a range (e.g. "from January 2019 to March 2019"), these keys would capture the specified range.
- The key "growth" Set to "null" in this example as there is no mention of growth in the user question. If the user inquired about growth or change over time, this key would provide information about the type of growth (e.g."monthly","yearly"," "absolute") or be set to "null" if not applicable.
- The key "not_in" Contains information about exclusion criteria based on the user's question. In this example, it excludes the "Texas" region. If the user question doesn't involve exclusions, this key would be an empty dictionary.
- The key "num_filter" Specifies numerical filters based on conditions in the user question. In this example, it filters loans with a tenure greater than 12 and loan amounts between 10k and 20k. If the user question doesn't involve numerical filters, this key would be an empty dictionary.
- The key "percent" Set to "false" in this example as there is no mention of percentage in the user question. If the user inquired about percentages, this key would contain information about the use of percentages in the response.
Similarly, below are more examples of user questions and their corresponding expected System Responses.
Example 2:
User Question: What is the Highest Loan Amount and Loan Outstanding by RM Name James in January 2020
{
"start_date": "01-01-2020",
"end_date": "31-01-2020",
"time_stamp_col": "Due Date",
"agg_columns": ["Loan Amount", "Loan Outstanding"],
"trend": "null",
"to_start_date": "null",
"to_end_date": "null",
"growth": "null",
"variables_grpby": [],
"filters": {"RM Name": ["James"]},
"not_in": {},
"num_filter": {},
"percent": "false",
"top": "1",
"bottom": "null"
}
Example 3:
User Question: Which RM Name with respect to Region has the Highest Interest Outstanding and Principal Outstanding in the year 2019
{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Interest Outstanding", "Principal Outstanding"],
"trend": "null",
"to_start_date": "null",
"to_end_date": "null",
"growth": "null",
"variables_grpby": ["RM Name", "Region"],
"filters": {},
"not_in": {},
"num_filter": {},
"percent": "false",
"top": "1",
"bottom": "null"
}
Example 4:
User Question: Which Branch in North Carolina with respect to Cibil Score Bucket has the Highest Cibil Score in 2019
{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Cibil Score", "DPD Bucket"],
"trend": "null",
"to_start_date": "null",
"to_end_date": "null",
"growth": "null",
"variables_grpby": ["Branch"],
"filters": {"Region": ["North Carolina"]},
"not_in": {},
"num_filter": {},
"percent": "false",
"top": "1",
"bottom": "null"
}
''
Example 5:
User Question: With respect to Zone, Region, Branch, RM Name what is the Highest Loan Amount, Loan Tenure, Loan Outstanding, EMI Pending, Principal Outstanding
{
"start_date": "null",
"end_date": "null",
"time_stamp_col": "null",
"agg_columns": ["Loan Amount", "Loan Tenure", "Loan Outstanding", "EMI Pending", "Principal Outstanding"],
"trend": "null",
"to_start_date": "null",
"to_end_date": "null",
"growth": "null",
"variables_grpby": ["Zone", "Region", "Branch", "RM Name"],
"filters": {},
"not_in": {},
"num_filter": {},
"percent": "false",
"top": "1",
"bottom": "null"
}
Example 6:
User Question: Top 2 zones by Housing Loan in the year 2019
{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Housing Loan"],
"trend": "null",
"to_start_date": "null",
"to_end_date": "null",
"growth": "null",
"variables_grpby": ["Zone"],
"filters": {"Product": ["Home Loan"]},
"not_in": {},
"num_filter": {},
"percent": "false",
"top": "2",
"bottom": "null"
}
Example 7:
User Question: yearly growth analysis by Due Date of Loan Amount?
{
"start_date": "null",
"end_date": "null",
"time_stamp_col": "Due Date",
"agg_columns": ["Loan Amount"],
"trend": "null",
"to_start_date": "null",
"to_end_date": "null",
"growth": "yearly",
"variables_grpby": ["Due Date"],
"filters": {},
"not_in": {},
"num_filter": {},
"percent": "false",
"top": "null",
"bottom": "null"
}
'''+ f'''Our test dataset has the following columns: {json_data_test}
User Question (to be converted): {user_question}'''
# Set the context length
context_length = 3 # Set your desired context length here
# Load the agent with GPT-4 and the specified context length
gpt4_agent = create_csv_agent(ChatOpenAI(temperature=0, model_name="gpt-4"), testData_fname, context_length=context_length)
# Use the formatted question as the input to your agent
response = gpt4_agent.run(prompt_template)
# Print the response
print(user_question)
print(response)
```
and the error is: `RateLimitError: Rate limit reached for gpt-4 in organization org-bJurmVX4HBor6BJfUF9Q6miB on tokens per min (TPM): Limit 10000, Used 1269, Requested 9015. Please try again in 1.704s. Visit https://platform.openai.com/account/rate-limits to learn more.` So I want to run the gpt-4 model while limiting the length of the context.
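As a stopgap, I've considered a simple retry with exponential backoff around the agent call (an untested sketch, using the pre-1.0 `openai` error class):

```python
import time
import openai

def run_with_retry(agent, prompt, retries=3):
    for attempt in range(retries):
        try:
            return agent.run(prompt)
        except openai.error.RateLimitError:
            time.sleep(2 ** attempt)  # back off before retrying
    raise RuntimeError("rate limit retries exhausted")
```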
### Idea or request for content:
_No response_ | How to limit the length of the context when using the gpt-4 model, since the increased token count causes rate-limit errors | https://api.github.com/repos/langchain-ai/langchain/issues/13772/comments | 1 | 2023-11-23T08:57:22Z | 2024-03-13T20:02:21Z | https://github.com/langchain-ai/langchain/issues/13772 | 2,007,748,476 | 13,772 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
The docs state that there is a parameter called `response_if_no_docs_found`: if specified, the chain will return a fixed response when no docs are found for the question.
I have tried it, and another person I asked says they tried it too and it doesn't work.
How do I get it to work? It's a very important feature for my use case.
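For reference, this is how I understood it should be wired up (a minimal sketch, assuming `response_if_no_docs_found` is accepted as a keyword argument by `ConversationalRetrievalChain.from_llm`):

```python
from langchain.chains import ConversationalRetrievalChain
from langchain.chat_models import ChatOpenAI

chain = ConversationalRetrievalChain.from_llm(
    llm=ChatOpenAI(temperature=0),
    retriever=vectorstore.as_retriever(),  # assumes an existing vector store
    response_if_no_docs_found="Sorry, I couldn't find anything relevant.",
)
result = chain({"question": "What is foo?", "chat_history": []})
```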
### Suggestion:
_No response_ | Issue: Conversational retrieval chain, response_if_no_docs_found not working | https://api.github.com/repos/langchain-ai/langchain/issues/13771/comments | 2 | 2023-11-23T08:17:41Z | 2024-03-13T19:57:37Z | https://github.com/langchain-ai/langchain/issues/13771 | 2,007,681,270 | 13,771 |
[
"langchain-ai",
"langchain"
] | ### Feature request
JS LangChain (https://js.langchain.com/docs/integrations/chat/ollama_functions) supports Ollama Functions and allows returning JSON output from Ollama. It would be good to have the same functionality in Python LangChain.
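For illustration, the Python API might mirror the JS one roughly like this (purely hypothetical; this class does not exist in Python LangChain yet, which is the point of the request):

```python
# Hypothetical Python mirror of the JS OllamaFunctions wrapper
from langchain_experimental.llms.ollama_functions import OllamaFunctions  # does not exist yet

model = OllamaFunctions(model="mistral").bind(
    functions=[
        {
            "name": "get_current_weather",
            "description": "Get the current weather in a given location",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"],
            },
        }
    ],
    function_call={"name": "get_current_weather"},
)
response = model.invoke("What's the weather in Paris?")
```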
### Motivation
A JSON response from LangChain's Ollama integration would be very useful when integrating LangChain into API pipelines running in local environments.
### Your contribution
I could test the implementation. | JSON response support for Ollama | https://api.github.com/repos/langchain-ai/langchain/issues/13770/comments | 1 | 2023-11-23T08:13:02Z | 2024-03-13T19:55:41Z | https://github.com/langchain-ai/langchain/issues/13770 | 2,007,675,246 | 13,770 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am trying to create a Vertex AI and LangChain based entity-extraction program. I have stored my documents as vector embeddings in ChromaDB. I have been trying to extract attributes/features like name, price, etc., but whenever I run chains I get this error.
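A minimal sketch of the kind of setup that triggers this for me (paraphrased; my guess is that a functions-calling extraction chain passes OpenAI-style `functions` kwargs down to the Vertex AI text model, which doesn't accept them):

```python
from langchain.chains import create_extraction_chain
from langchain.llms import VertexAI

llm = VertexAI(model_name="text-bison")  # a text model, not a chat model
schema = {"properties": {"name": {"type": "string"}, "price": {"type": "string"}}}

# create_extraction_chain relies on OpenAI-style function calling under the hood,
# so the `functions` kwarg reaches _TextGenerationModel.predict() and fails.
chain = create_extraction_chain(schema, llm)
chain.run("The Acme widget costs $19.99.")
```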
### Suggestion:
_No response_ | _TextGenerationModel.predict() got an unexpected keyword argument 'functions' | https://api.github.com/repos/langchain-ai/langchain/issues/13766/comments | 4 | 2023-11-23T06:51:29Z | 2024-03-17T16:06:22Z | https://github.com/langchain-ai/langchain/issues/13766 | 2,007,583,055 | 13,766 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Introduce a more flexible method to specify model parameters directly within the LLMChain or VLLM method, allowing users to modify these parameters without creating a new model instance each time.
### Motivation
Currently, when utilizing the Langchain library's VLLM class, modifying model parameters (such as temperature, top_k, top_p, etc.) requires creating a new instance of the VLLM class. This process becomes cumbersome, especially when frequent adjustments to model parameters are necessary.
Current Situation:
```python
from langchain.chains import LLMChain
from langchain.llms import VLLM
from langchain.prompts import PromptTemplate

llm = VLLM(
    model="mosaicml/mpt-7b",
    trust_remote_code=True,  # mandatory for hf models
    max_new_tokens=128,
    top_k=10,
    top_p=0.95,
    temperature=0.8,
)

template = """Question: {question}
Answer: Let's think step by step."""
prompt = PromptTemplate(template=template, input_variables=["question"])
llm_chain = LLMChain(prompt=prompt, llm=llm)

question = "Who was the US president in the year the first Pokemon game was released?"
print(llm_chain.run(question))
```
It is very tedious to keep creating a new model whenever I want to change model params (temperature, etc.).
In naive VLLM, this can be done so easily:
```python
llm = LLM(model="qwen/Qwen-7B-Chat", revision="v1.1.8", trust_remote_code=True)
outputs = llm.generate(prompts, model_params)
outputs = llm.generate(prompts, model_params)  # params can differ per call, no rebuild needed
```
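Something along these lines (a hypothetical API; it does not exist on the LangChain wrapper today) would remove the need to rebuild the model:

```python
# Hypothetical: pass sampling params per call instead of at construction time
print(llm_chain.run(question, temperature=0.2, top_k=40))
print(llm_chain.run(question, temperature=1.0, top_p=0.9))
```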
### Your contribution
NA | Changing Model Param after Initialization VLLM Model | https://api.github.com/repos/langchain-ai/langchain/issues/13762/comments | 2 | 2023-11-23T03:45:30Z | 2024-04-22T16:52:00Z | https://github.com/langchain-ai/langchain/issues/13762 | 2,007,439,169 | 13,762 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I have two functions and two related tools:

```python
from langchain.agents import Tool

def Func_Tool_Extractor(parameters):
    print("\n", parameters)
    print("Func_Tool_Extractor tool triggered")
    pdf_id, extractor_id_position = parameters.split(",")
    print(f"Got pdf id {pdf_id}; the user wants molecule #{extractor_id_position} from this document")
    test_smiles = "C1C2=C3CCC3=C2CC1"
    extractor_id = 1876
    print(f"Molecule #{extractor_id_position} has SMILES {test_smiles} and extractor_id {extractor_id}")
    answer = f"Found molecule with SMILES {test_smiles}, extractor_id {extractor_id}, pdf_id {pdf_id}"
    return answer

def Func_Tool_ADMET(parameters):
    print("\n", parameters)
    print("Func_Tool_ADMET tool triggered")
    print("........parsing ADMET properties..........")
    data = {
        "id": "2567",
        "smiles": "C1=CC=CC=C1C1=C(C2=CC=CC=C2)C=CC=C1",
        "humanIntestinalAbsorption": "HIA+|0.73",
        "caco2Permeability": "None",
        "caco2PermeabilityIi": "Caco2+|0.70",
        "pGlycoproteinInhibitorI": "Pgp_nonInhibitor|0.51",
        "pGlycoproteinInhibitorIi": "Pgp_Inhibitor|0.68",
        "pGlycoproteinSubstrate": "substrate|0.56",
        "bloodBrainBarrier": "BBB+|0.73",
        "cyp4501a2Inhibitor": "Inhibitor|0.73",
        "cyp4502c19Inhibitor": "Inhibitor|0.68",
        "cyp4502c9Inhibitor": "Non_Inhibitor|0.53",
        "cyp4502c9Substrate": "non-substrate|0.59",
        "cyp4502d6Inhibitor": "Non_Inhibitor|0.65",
        "cyp4502d6Substrate": "substrate|0.55",
        "cyp4503a4Inhibitor": "Non_Inhibitor|0.71",
        "cyp4503a4Substrate": "non_substrate|0.52",
        "cypInhibitorPromiscuity": "High CYP Inhibitory Promiscuity|0.61",
        "biodegradation": "Not ready biodegradable|0.64",
        "renalOrganicCationTransporter": "inhibitor|0.64",
        "amesToxicity": "Non AMES toxic|0.66",
        "carcinogens": "non_carcinogens|0.71",
        "humanEtherAGoGoRelatedGeneInhibitionI": "Weak inhibitor|0.60",
        "humanEtherAGoGoRelatedGeneInhibitionIi": "Weak inhibitor|0.53",
        "honeyBeeToxicity": "highAT|0.56",
        "tetrahymenaPyriformisToxicity": "None",
        "tetrahymenaPyriformisToxicityIi": "non-TPT|0.73",
        "fishToxicity": "None",
        "fishToxicityIi": "High FHMT|0.73",
        "aqueousSolubility": "None",
        "savePath": "/profile/chemicalAppsResult/admetResult/2023/11/10/2567",
        "status": "1",
        "favoriteFlag": "0"
    }
    return data

# Define the tools
tool1 = Tool(
    name="Tool_Extractor",
    func=Func_Tool_Extractor,
    description="""
    useful when you want to get a molecule from a document.
    like: get the 3rd molecule of the document.
    The input of this tool should be a comma-separated string of two values, representing the pdf_id and the number of the molecule to be fetched.
    """,
)
tool2 = Tool(
    name="Tool_ADMET",
    func=Func_Tool_ADMET,
    description="""
    useful when you want to obtain the ADMET data for a molecule.
    like: get the ADMET data for molecule X
    The input to this tool should be a string representing the SMILES of the molecule.
    """,
)
```
Give me a CONVERSATIONAL_REACT_DESCRIPTION code example that uses these two tools and meets the following requirements (my own rough sketch follows the list):
1. Use the initialize_agent method to initialize the agent as needed;
2. The output of tool2 should not be modified by the LLM or further processed by the agent chain, to avoid data being dropped by the LLM's intermediate reasoning;
3. Use memory to keep the chat history;
4. Use prompt templates to customize the outputs, especially for tool2.
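Here is my rough, untested sketch; in particular, `return_direct=True` on `tool2` is my guess for requirement 2, and `agent_kwargs={"prefix": ...}` for requirement 4:

```python
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory

llm = ChatOpenAI(temperature=0)
# Requirement 3: keep the conversation history
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)

# Requirement 4: customize the agent's prompt prefix
custom_prefix = "You are a chemistry assistant. Return tool outputs verbatim."

agent = initialize_agent(
    [tool1, tool2],  # declare tool2 with return_direct=True for requirement 2
    llm,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    memory=memory,
    agent_kwargs={"prefix": custom_prefix},
    verbose=True,
)
agent.run("Get the 3rd molecule of pdf 42 and then its ADMET data")
```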
### Suggestion:
_No response_ | Combine Tools Together in a Chain | https://api.github.com/repos/langchain-ai/langchain/issues/13760/comments | 1 | 2023-11-23T03:12:58Z | 2024-03-13T19:55:50Z | https://github.com/langchain-ai/langchain/issues/13760 | 2,007,415,465 | 13,760 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi there, I am trying to use MultiQueryRetriever on a Pinecone vector DB with multiple filters. I specifically need to use an OR operator.
For example, I would like to retrieve documents whose metadata field "category" is 'value1' OR 'value2' OR 'value3'.
This is my current implementation:
```
import pinecone
from langchain.llms import Cohere
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain.vectorstores import Pinecone
pinecone.init(api_key=pine_api_key, environment=pinecone_env)
index_name = "index_name"
index = pinecone.Index(index_name)
vectorstore = Pinecone(index, embeddings, "text")
llm = Cohere(cohere_api_key=cohere_api_key)
retriever_from_llm = MultiQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(
        search_kwargs={"k": 10, "filter": {"user_id": "42", "category": "c1"}}
    ),
    llm=llm,
)
```
At this point I am able to successfully filter by user_id and category when there is only one value, but when I add more values, such as
`'category': ["c1", "c2", "c3"]`,
it does not work when retrieving documents:
```
question = "what is foo?"
unique_docs = retriever_from_llm.get_relevant_documents(query=question)
```
Error:
```
ApiException: (400)
Reason: Bad Request
HTTP response headers: HTTPHeaderDict({'content-type': 'application/json', 'date': 'Thu, 23 Nov 2023 00:24:58 GMT', 'x-envoy-upstream-service-time': '0', 'content-length': '108', 'server': 'envoy'})
HTTP response body: {"code":3,"message":"illegal condition for field category, got ["c1","c2"]","details":[]}.
```
How can I add a filter that returns documents with category c1 OR c2 OR c3, using Pinecone and MultiQueryRetriever?
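For what it's worth, my best guess is that Pinecone wants its `$in` operator for OR-style matching over a list of values; something like this (untested):

```python
retriever_from_llm = MultiQueryRetriever.from_llm(
    retriever=vectorstore.as_retriever(
        search_kwargs={
            "k": 10,
            "filter": {
                "user_id": "42",
                "category": {"$in": ["c1", "c2", "c3"]},  # OR over the listed values
            },
        }
    ),
    llm=llm,
)
```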
### Suggestion:
_No response_ | Pinecone MultiqueryRetriver with multiple OR metadata filters | https://api.github.com/repos/langchain-ai/langchain/issues/13758/comments | 2 | 2023-11-23T00:55:10Z | 2024-03-13T19:56:26Z | https://github.com/langchain-ai/langchain/issues/13758 | 2,007,320,637 | 13,758 |
[
"langchain-ai",
"langchain"
] | ### Feature request
When I receive empty text from the Azure content filter, I need to temporarily switch to OpenAI and try again to get an answer.
How can I achieve this functionality?
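A sketch of the shape I'm imagining (I believe runnables expose `with_fallbacks`, but as far as I can tell it only triggers on exceptions, not on an empty/filtered response, which is exactly my problem):

```python
from langchain.chat_models import AzureChatOpenAI, ChatOpenAI

azure_llm = AzureChatOpenAI(deployment_name="my-deployment")  # hypothetical deployment name
openai_llm = ChatOpenAI(model="gpt-3.5-turbo")

# Falls back to OpenAI when the Azure call raises an exception; an empty string
# returned by the content filter would still need a custom check on top of this.
llm_with_fallback = azure_llm.with_fallbacks([openai_llm])
```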
### Motivation
Error handling for production; high availability.
### Your contribution
PR | How to use a different OpenAI endpoint when the Azure content filter applies? | https://api.github.com/repos/langchain-ai/langchain/issues/13757/comments | 3 | 2023-11-23T00:47:25Z | 2024-03-13T19:56:29Z | https://github.com/langchain-ai/langchain/issues/13757 | 2,007,316,049 | 13,757 |
[
"langchain-ai",
"langchain"
] | ```python
from langchain.memory import RedisChatMessageHistory
history = RedisChatMessageHistory("foo")
history.add_user_message("hi!")
history.add_ai_message("whats up?")
```
{<ins>**"type": "human"**</ins>, "data": {"content": "hi!", "additional_kwargs": {}, <ins>**"type": "human"**</ins>, "example": false}}
I suspect it's a bug in the serialization; or does the duplicate key/value in that structure serve a purpose?
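For reference, I believe the duplication comes from something like this in the linked code (paraphrased):

```python
# Paraphrase of message_to_dict in langchain_core.messages.base:
def message_to_dict(message):
    # message.dict() already includes a "type" field, so "type" ends up
    # both at the top level and inside "data".
    return {"type": message.type, "data": message.dict()}
```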
https://github.com/langchain-ai/langchain/blob/163bf165ed2d6ae453aad1bca4ed56814d81bf5b/libs/core/langchain_core/messages/base.py#L113C1-L115C1 | MessageHistory type field serialized twice | https://api.github.com/repos/langchain-ai/langchain/issues/13755/comments | 1 | 2023-11-22T23:55:16Z | 2024-02-28T16:06:45Z | https://github.com/langchain-ai/langchain/issues/13755 | 2,007,263,917 | 13,755 |
[
"langchain-ai",
"langchain"
] | ### System Info
Python 3.9.6, Langchain 0.0.334
### Who can help?
@eyurtsev
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
I'm experimenting with some simple code to load a local repository to test CodeLlama, but the "exclude" in GenericLoader.from_filesystem seems not to work:
```python
from langchain.document_loaders.generic import GenericLoader
from langchain.document_loaders.parsers import LanguageParser
from langchain.text_splitter import Language

repo_path = "../../my/laravel/project/"
# Load
loader = GenericLoader.from_filesystem(
    repo_path,
    glob="**/*",
    suffixes=[".php"],
    parser=LanguageParser(
        parser_threshold=2000,
    ),
    exclude=["../../my/laravel/project/vendor/", "../../my/laravel/project/node_modules/", "../../my/laravel/project/storage/", "../../my/laravel/project/public/", "../../my/laravel/project/tests/", "../../my/laravel/project/resources/"],
)
documents = loader.load()
len(documents)
```
Am I missing something obvious? I cannot find any example. With or without the exclude, the number of docs is the same (and if I just print "documents" I see files from the folders I excluded).
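One thing I wondered: maybe `exclude` expects glob patterns rather than directory paths? Something like this (untested):

```python
loader = GenericLoader.from_filesystem(
    repo_path,
    glob="**/*",
    suffixes=[".php"],
    parser=LanguageParser(parser_threshold=2000),
    exclude=["**/vendor/**", "**/node_modules/**", "**/storage/**",
             "**/public/**", "**/tests/**", "**/resources/**"],
)
```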
### Expected behavior
I would expect that listing subpaths of the main path would exclude them. | GenericLoader.from_filesystem "exclude" not working | https://api.github.com/repos/langchain-ai/langchain/issues/13751/comments | 4 | 2023-11-22T23:08:00Z | 2024-03-13T19:55:44Z | https://github.com/langchain-ai/langchain/issues/13751 | 2,007,226,855 | 13,751 |
[
"langchain-ai",
"langchain"
] | ### System Info
**Operating system/architecture:**
Linux/X86_64
**CPU | Memory**
8 vCPU | 16 GB
**Platform version**
1.4.0
**Launch type**
FARGATE
**Project libraries:**
snowflake-sqlalchemy==1.4.6
python-dotenv==0.21.0
openai==0.27.2
langchain==0.0.336
pandas==2.0.2
boto3==1.26.144
colorama==0.4.6
fastapi==0.100.1
pydantic~=1.10.8
pytest~=7.1.2
uvicorn~=0.17.6
cassio==0.1.3
sentry-sdk==1.29.2
langsmith==0.0.66
numpy==1.24.3
SQLAlchemy==1.4.46
psycopg2-binary==2.9.7
tiktoken==0.4.0
httpx==0.24.1
unidecode==1.3.7
transformers==4.28.0
transformers[torch]
tensorflow==2.12.1
keras==2.12.0
**Python version of the project:**
python:3.10-slim-bullseye
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [X] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
It's quite challenging to replicate the error as it appears to be rather random. After a few requests, FastAPI stops responding following the OPTIONS query of the endpoint. This issue seems to be attributable to one of the libraries in use. I observed this error after a code refactoring in the project, moving from legacy chains to chains with LCEL. Since this refactoring, the ECS system has exhibited peculiar behavior. Extensive debugging has been conducted throughout the codebase, yet there are no indications of the error's origin. It's worth noting that everything functions flawlessly in local emulation, with no occurrence of any unusual errors. The problem arises when the code is deployed to the ECS Fargate instance, and I want to emphasize that this issue did not exist before the aforementioned changes were made.
<img width="996" alt="Captura de pantalla 2023-11-22 a la(s) 5 51 24 p m" src="https://github.com/langchain-ai/langchain/assets/122487744/15c12aeb-6016-468d-81de-b31a4204ca78">
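One debugging avenue I plan to try is dumping all thread stacks on demand via `faulthandler`, so that when the app hangs after the OPTIONS request I can see where it is stuck (a sketch):

```python
import faulthandler
import signal

# Send SIGUSR1 to the hung process (kill -USR1 <pid>) to dump every
# thread's Python stack to stderr without stopping the app.
faulthandler.register(signal.SIGUSR1)
```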
### Expected behavior
I need help finding new ways to debug this elusive bug: ideas on what to try, what information to capture from my machine, or whether this is an incompatibility between the libraries. I haven't been able to pinpoint where the program stops, and it's proving very challenging. | Random Application Lockdown on ECS Fargate with Langchain and FastAPI | https://api.github.com/repos/langchain-ai/langchain/issues/13750/comments | 5 | 2023-11-22T22:56:07Z | 2024-05-14T16:07:15Z | https://github.com/langchain-ai/langchain/issues/13750 | 2,007,218,514 | 13,750 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: 0.0.339
Python version: 3.11.5
Running on Ubuntu 22.04.3 LTS via WSL 2
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
1. Running the steps described in the guide for RecursiveUrlLoader: <https://python.langchain.com/docs/integrations/document_loaders/recursive_url>
2. Loading some content with UTF-8 encoding, the Python docs for example: <https://docs.python.org/3/>
Exact code used:
```python
from pprint import pprint
from langchain.document_loaders import RecursiveUrlLoader
from bs4 import BeautifulSoup
def load_python_docs():
    url = "https://docs.python.org/3/"
    loader = RecursiveUrlLoader(
        url=url, max_depth=2, extractor=lambda x: BeautifulSoup(x, "html.parser").text
    )
    return loader.load()
pprint([doc.metadata for doc in load_python_docs()])
```
3. If you print the loaded documents, you should be able to see this kind of encoding issue:
```
{'description': 'Editor, Adam Turner,. This article explains the new features '
'in Python 3.12, compared to 3.11. Python 3.12 was released '
'on October 2, 2023. For full details, see the changelog. '
'Summary â\x80\x93 Release hi...',
'language': None,
'source': 'https://docs.python.org/3/whatsnew/3.12.html',
'title': 'Whatâ\x80\x99s New In Python 3.12 — Python 3.12.0 documentation'}
```
### Expected behavior
Should load content correctly, using the right encoding to parse the document. I suppose the issue is due to the fact that the `_get_child_links_recursive` method is calling `requests.get` and not specifying the encoding for the response. A quick fix that worked for me was to include the following line, just after the GET request: `response.encoding = response.apparent_encoding` | UTF-8 content is not loaded correctly with RecursiveUrlLoader | https://api.github.com/repos/langchain-ai/langchain/issues/13749/comments | 1 | 2023-11-22T22:47:26Z | 2024-02-28T16:06:55Z | https://github.com/langchain-ai/langchain/issues/13749 | 2,007,212,254 | 13,749 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain version: 0.0.339
Langserve version: 0.0.30
Langchain-cli version: 0.0.19
Running on Apple silicon (MacBook Pro M3 Max) using a zsh shell.
### Who can help?
@erick
### Information
- [x] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [x] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Running the installation steps in the guide with
`pip3 install -U langchain-cli`
And then trying to run `langchain app` just results in zsh responding with `zsh: command not found: langchain`.
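To rule out a PATH problem on my side, I checked where pip places console scripts (a quick sketch; note that a `--user` install lands in a different directory, e.g. `~/Library/Python/3.x/bin` on macOS):

```python
import os
import sysconfig

scripts_dir = sysconfig.get_path("scripts")  # default console-script location
print(scripts_dir)
print(scripts_dir in os.environ["PATH"].split(os.pathsep))
```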
### Expected behavior
Running the installation steps in the guide with
`pip3 install -U langchain-cli`
Should allow me to run
`langchain app new my-app` | Langchain-cli Does not install a usable binary. | https://api.github.com/repos/langchain-ai/langchain/issues/13743/comments | 6 | 2023-11-22T19:54:42Z | 2023-12-04T02:56:26Z | https://github.com/langchain-ai/langchain/issues/13743 | 2,007,028,867 | 13,743 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Some langchain objects are used but not imported in the [Types of MessagePromptTemplate](https://python.langchain.com/docs/modules/model_io/prompts/prompt_templates/msg_prompt_templates) documentation.
Specifically, it lacks the following:
```python
from langchain.prompts import HumanMessagePromptTemplate
from langchain.prompts import ChatPromptTemplate
```
and
```python
from langchain.schema.messages import HumanMessage, AIMessage
```
### Idea or request for content:
_No response_ | DOC: missing imports in "Types of MessagePromptTemplate" | https://api.github.com/repos/langchain-ai/langchain/issues/13736/comments | 1 | 2023-11-22T18:08:15Z | 2024-02-28T16:07:00Z | https://github.com/langchain-ai/langchain/issues/13736 | 2,006,867,499 | 13,736 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
**Description:**
Hi. I am currently developing an Agent for scientific tasks using LangChain. The first tool I intend to integrate is a function for generating a mesh, implemented through the external API `dolfinx`. However, I've encountered a compatibility issue between my LangChain packages and the `dolfinx` package. This seems to be a broader problem related to designing an Agent that leverages diverse external APIs.
The concern here is that as I continue to expand the Agent's capabilities with more tools utilizing various external APIs, I might face additional conflicts. This issue could be a common challenge in the development process of creating Agents with multiple, diverse external integrations. Here's a snippet of the code for reference:
```python
from mpi4py import MPI
from dolfinx import mesh, io
from langchain.agents import tool  # needed for the @tool decorator

@tool("create_mesh")
def create_mesh_tool(text: str):
    """Returns a dolfinx.mesh.Mesh object, useful for generating meshes for PDEs.
    The input should always be a string."""
    domain = mesh.create_unit_square(MPI.COMM_WORLD, 8, 8, mesh.CellType.quadrilateral)
    return domain

tools = [make_random_num, create_mesh_tool]
```
I am seeking advice or solutions on how to effectively manage and resolve these package conflicts within the Langchain framework, particularly when integrating tools that depend on external APIs like dolfinx.
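One workaround I'm considering (a sketch, not a settled design): isolate the conflicting dependency in its own virtual environment and invoke it from the tool via a subprocess, passing results back as JSON. The interpreter path and helper script here are hypothetical:

```python
import json
import subprocess

DOLFINX_PYTHON = "/path/to/dolfinx-venv/bin/python"  # hypothetical separate env

def create_mesh_via_subprocess(text: str) -> str:
    """Run the dolfinx mesh generation in its own environment to avoid clashes."""
    result = subprocess.run(
        [DOLFINX_PYTHON, "make_mesh.py", text],  # make_mesh.py is a hypothetical helper
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)["summary"]
```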
### Suggestion:
_No response_ | Issue: <Handling Python Package Conflicts in Langchain When Integrating External APIs for Agent Tools> | https://api.github.com/repos/langchain-ai/langchain/issues/13734/comments | 1 | 2023-11-22T17:59:25Z | 2024-02-28T16:07:06Z | https://github.com/langchain-ai/langchain/issues/13734 | 2,006,847,641 | 13,734 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain==0.0.339
windows11
python==3.10
### Who can help?
A simple chain with a tool, but the full input is not passed to the tool because it is cut at '\n':
```python
from langchain.agents import AgentType, initialize_agent
from langchain.chat_models import ChatOpenAI
from langchain.memory import ConversationBufferMemory
from langchain_experimental.tools import PythonREPLTool

model = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0, verbose=False, streaming=True)
custom_tool_list = [PythonREPLTool()]  # CustomPythonExec()
memory = ConversationBufferMemory(memory_key="chat_history")
agent_executor = initialize_agent(
    custom_tool_list,
    llm=model,
    agent=AgentType.CONVERSATIONAL_REACT_DESCRIPTION,
    verbose=True,
    max_iterations=1,
    memory=memory,
    early_stopping_method="generate",
    agent_kwargs={"prefix": custom_prefix},  # custom_prefix is defined elsewhere
    handle_parsing_errors="Check your output and make sure it conforms",
)
agent_executor.run("some prompt")
```
Logs:
```
[chain/end] [1:chain:AgentExecutor > 2:chain:LLMChain] [4.94s] Exiting Chain run with output:
{
"text": "Thought: Do I need to use a tool? Yes\nAction: Python_REPL\nAction Input: import pandas as pd\ndf = pd.read_csv('./statics/20231123_000614_test.csv')\nline_count = len(df
)\nline_count"
}
[tool/start] [1:chain:AgentExecutor > 4:tool:Python_REPL] Entering Tool run with input:
"import pandas as pd"
2023-11-23 00:06:19 - Python REPL can execute arbitrary code. Use with caution.
[tool/end] [1:chain:AgentExecutor > 4:tool:Python_REPL] [4ms] Exiting Tool run with output:
""
```
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
The same logs as shown above: the LLM emits the full multi-line Action Input, but only `import pandas as pd` reaches the tool.
### Expected behavior
The full input string should be passed to the tool. | chain input not passed into Python_REPL tool input because of '\n' | https://api.github.com/repos/langchain-ai/langchain/issues/13730/comments | 2 | 2023-11-22T16:15:52Z | 2024-03-13T19:57:11Z | https://github.com/langchain-ai/langchain/issues/13730 | 2,006,686,650 | 13,730 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I followed Harrison's LangChain lecture on deeplearning.ai, and when I follow it I get the error `Got invalid JSON object. Error: Extra data:` when using `SelfQueryRetriever` to `get_relevant_documents`.
Using the following code:
```python
document_content_description = "Lecture notes"
retriever = SelfQueryRetriever.from_llm(
    llm=model,
    vectorstore=vectordb,
    metadata_field_info=metadata_field_info,
    document_contents=document_content_description,
    verbose=True,
    handle_parsing_errors=True,
)
docs = retriever.get_relevant_documents(question)
```
### Suggestion:
_No response_ | Issue: Error `Got invalid JSON object. Error: Extra data:`when using `SelfQueryRetriever` | https://api.github.com/repos/langchain-ai/langchain/issues/13728/comments | 3 | 2023-11-22T16:07:12Z | 2023-11-24T17:09:20Z | https://github.com/langchain-ai/langchain/issues/13728 | 2,006,669,669 | 13,728 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain Version: 0.0.339
Platform: Windows 10
Python Version: 3.10.10
### Who can help?
@agol
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [X] Callbacks/Tracing
- [X] Async
### Reproduction
I've implemented this `RunnableBranch` example: https://python.langchain.com/docs/expression_language/how_to/routing
using this `MyCallbackHandler` example: https://python.langchain.com/docs/modules/agents/how_to/streaming_stdout_final_only
For streaming I've used this `streaming` example: https://python.langchain.com/docs/modules/model_io/chat/streaming
the result becomes something like this:
```
##
#Other#
##
(Verse 1)... <the output of the question as a whole, not separated by tokens, generated all at once>
```
### Expected behavior
every single token be surrounded by ##, like this:
```
#token1#
#token2#
...
```
| RunnableBranch doesn't stream correctly | https://api.github.com/repos/langchain-ai/langchain/issues/13723/comments | 7 | 2023-11-22T14:57:38Z | 2024-03-23T16:06:11Z | https://github.com/langchain-ai/langchain/issues/13723 | 2,006,525,842 | 13,723 |
[
"langchain-ai",
"langchain"
] | ### Feature request
It would be better if I could add a callback to get the total tokens spent on LLMs other than OpenAI.
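For reference, this is the OpenAI-only pattern I'd like to see generalized to other providers:

```python
from langchain.callbacks import get_openai_callback
from langchain.llms import OpenAI

llm = OpenAI()
with get_openai_callback() as cb:
    llm("Tell me a joke")
    print(cb.total_tokens, cb.total_cost)  # works for OpenAI only today
```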
### Motivation
Since there is a callback for OpenAI, why don't we have one for other models too, just to see how many tokens are consumed?
### Your contribution
Happy to contribute a PR for this later. | get token count for other LLM Models | https://api.github.com/repos/langchain-ai/langchain/issues/13719/comments | 5 | 2023-11-22T12:50:02Z | 2024-05-13T16:08:52Z | https://github.com/langchain-ai/langchain/issues/13719 | 2,006,274,368 | 13,719 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hi, I am running the create_pandas_dataframe_agent agent, and the issue I'm facing is with the FinalStreamingStdOutCallbackHandler callback: for some reason, when I only have one CSV it works fine, but as soon as I pass multiple, it doesn't stream the data; it just prints it all in one go.
here is my code:
```
import pandas as pd
from langchain.agents import AgentType
from langchain.callbacks.streaming_stdout_final_only import FinalStreamingStdOutCallbackHandler
from langchain.llms import OpenAI
from langchain_experimental.agents import create_pandas_dataframe_agent  # langchain.agents in older versions

def _get_llm(callback):
    return OpenAI(
        callbacks=[callback],
        streaming=True,
        temperature=0,
    )
file_path = ['people-100.csv','organizations-100.csv']
dataframes = [pd.read_csv(path) for path in file_path]
agent = create_pandas_dataframe_agent(
    llm=_get_llm(FinalStreamingStdOutCallbackHandler()),
    df=dataframes,
    # agent_executor_kwargs={"memory": memory},
    verbose=False,
    agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
)
print(agent.run('go in deatil on the explination of the data and give me a statistical list aswell'))
```
### Suggestion:
_No response_ | Issue: <Please write a comprehensive title after the 'Issue: ' prefix> | https://api.github.com/repos/langchain-ai/langchain/issues/13717/comments | 3 | 2023-11-22T11:54:27Z | 2024-03-13T19:55:38Z | https://github.com/langchain-ai/langchain/issues/13717 | 2,006,180,319 | 13,717 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'm using
```python
from langchain.agents import create_pandas_dataframe_agent
from langchain.llms import OpenAI

pd_agent = create_pandas_dataframe_agent(
    OpenAI(temperature=0),
    df,
    verbose=True,
)
pd_agent.run('some question on dataframe')
```
```python
from langchain.agents import Tool

english_tools = [
    Tool(
        name="SomeNAME_1",
        func=lambda q: app.finance_chain.run(q),
        description=" Some app related description ",
        return_direct=True,
        coroutine=lambda q: app.finance_chain.arun(q),
    ),
    Tool(
        name="SomeNAME_2",
        func=lambda q: app.rqa(q),
        description=" Some app related description ",
        coroutine=lambda q: app.rqa_english.arun(q),
        return_direct=True,
    ),
    Tool.from_function(
        name="SomeNAME_3",
        func=lambda q: app.pd_agent(q),
        description=" Some app related description",
        coroutine=lambda q: app.pd_agent.arun(q),
    ),
]
```
When we call this agent separately, it gives a good result.
But when I use this agent as a tool along with other tools, it doesn't give consistent results; the answers are mostly wrong.
### Suggestion:
_No response_ | Agent as tool gives wrong result ( pandas agent) | https://api.github.com/repos/langchain-ai/langchain/issues/13711/comments | 4 | 2023-11-22T08:56:25Z | 2024-03-24T11:39:43Z | https://github.com/langchain-ai/langchain/issues/13711 | 2,005,863,753 | 13,711 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
Traceback (most recent call last):
  File "/Users/xxx/anaconda3/envs/kaoyan-chat/lib/python3.10/site-packages/langchain/output_parsers/xml.py", line 2, in <module>
    import xml.etree.ElementTree as ET
  File "/Users/xxx/anaconda3/envs/kaoyan-chat/lib/python3.10/site-packages/langchain/output_parsers/xml.py", line 2, in <module>
    import xml.etree.ElementTree as ET
ModuleNotFoundError: No module named 'xml.etree'; 'xml' is not a package
```
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [X] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.agents import load_tools
from langchain.chat_models import AzureChatOpenAI

llm = AzureChatOpenAI(model_name="gpt-35-turbo",
                      deployment_name="gpt-35-turbo", temperature=0.3)
tools = load_tools(["serpapi", "llm-math"], llm=llm)
```
### Expected behavior
I ran this code hoping it would run normally, but it throws the exception `'xml' is not a package`. I think this is because a Python file is named xml.py, which conflicts with `import xml.etree.ElementTree as ET`.
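A quick way to confirm the shadowing hypothesis (sketch):

```python
import xml
# If this prints a path to a local xml.py instead of the standard library's
# xml package directory, that file is shadowing the stdlib package.
print(xml.__file__)
```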
| ModuleNotFoundError: No module named 'xml.etree'; 'xml' is not a package | https://api.github.com/repos/langchain-ai/langchain/issues/13709/comments | 6 | 2023-11-22T08:36:24Z | 2024-02-28T16:07:20Z | https://github.com/langchain-ai/langchain/issues/13709 | 2,005,831,934 | 13,709 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Given that the seed is already in its beta phase, the system_fingerprint would also be highly useful in tracking deterministic responses. I suggest adding a `system_fingerprint` field to the callbacks: https://api.python.langchain.com/en/latest/callbacks/langchain.callbacks.openai_info.OpenAICallbackHandler.html#langchain.callbacks.openai_info.OpenAICallbackHandler
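Roughly the shape I have in mind (the `system_fingerprint` attribute is hypothetical; it does not exist on the handler today):

```python
from langchain.callbacks import get_openai_callback

with get_openai_callback() as cb:
    llm.invoke("hello")
    print(cb.total_tokens)        # exists today
    print(cb.system_fingerprint)  # proposed: surface the API's system_fingerprint
```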
### Motivation
Given that the seed is already in its beta phase, the system_fingerprint would also be highly useful for tracking deterministic responses.
### Your contribution
Proposed suggestion | Add system_fingerprint in OpenAI callbacks | https://api.github.com/repos/langchain-ai/langchain/issues/13707/comments | 4 | 2023-11-22T08:24:18Z | 2024-03-13T19:58:46Z | https://github.com/langchain-ai/langchain/issues/13707 | 2,005,812,732 | 13,707 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
This question may be stupid, but I found nothing to refer to.
I'm using langchain.chains.LLMChain in a Flask API, but I found that the chain is reloaded and reconstructed on every API call. So the first question I want to ask is how to keep my LLMChain alive so that I can use it directly on the next call.
I tried to use session in Flask, but got `Object of type LLMChain is not JSON serializable`.
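A sketch of what I mean by keeping it around (a module-level singleton built once at import time, instead of per-request construction; Flask sessions won't work since the chain isn't JSON-serializable):

```python
from flask import Flask, request
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

app = Flask(__name__)

# Built once when the module is imported, then reused across API calls
chain = LLMChain(llm=OpenAI(), prompt=PromptTemplate.from_template("{question}"))

@app.route("/ask")
def ask():
    return chain.run(request.args["question"])
```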
Thanks.
### Suggestion:
_No response_ | Issue: how to retain the chain across different API calls? | https://api.github.com/repos/langchain-ai/langchain/issues/13697/comments | 12 | 2023-11-22T03:33:38Z | 2024-04-19T10:14:36Z | https://github.com/langchain-ai/langchain/issues/13697 | 2,005,499,791 | 13,697 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
There is a lack of comprehensive documentation on how to use load_qa_chain with memory.
I've found this: https://cheatsheet.md/langchain-tutorials/load-qa-chain-langchain.en but it does not cover other memory classes, for example ConversationBufferWindowMemory.
I have the following memory:
```
memory = {ConversationBufferWindowMemory} chat_memory=ChatMessageHistory(messages=[HumanMessage(content='What is the main material? Keept he answer short'), AIMessage(content='The main material is R 2 Fe 14 B.'), HumanMessage(content='What are its properties? '), AIMessage(content='The properties
Config = {type} <class 'langchain.schema.memory.BaseMemory.Config'>
ai_prefix = {str} 'AI'
buffer = {str} 'Human: What is the main material? Keept he answer short\nAI: The main material is R 2 Fe 14 B.\nHuman: What are its properties? \nAI: The properties of the material being discussed are not explicitly mentioned in the given context. However, some information
buffer_as_messages = {list: 4} [content='What is the main material? Keept he answer short', content='The main material is R 2 Fe 14 B.', content='What are its properties? ', content='The properties of the material being discussed are not explicitly mentioned in the given context. Howeve
buffer_as_str = {str} 'Human: What is the main material? Keept he answer short\nAI: The main material is R 2 Fe 14 B.\nHuman: What are its properties? \nAI: The properties of the material being discussed are not explicitly mentioned in the given context. However, some information
chat_memory = {ChatMessageHistory} messages=[HumanMessage(content='What is the main material? Keept he answer short'), AIMessage(content='The main material is R 2 Fe 14 B.'), HumanMessage(content='What are its properties? '), AIMessage(content='The properties of the material being discussed
human_prefix = {str} 'Human'
input_key = {NoneType} None
k = {int} 4
lc_attributes = {dict: 0} {}
lc_secrets = {dict: 0} {}
memory_key = {str} 'history'
memory_variables = {list: 1} ['history']
output_key = {NoneType} None
return_messages = {bool} False
```
And the messages are the following:
```
0 = {HumanMessage} content='What is the main material? Keept he answer short'
1 = {AIMessage} content='The main material is R 2 Fe 14 B.'
2 = {HumanMessage} content='What are its properties? '
3 = {AIMessage} content='The properties of the material being discussed are not explicitly mentioned in the given context. However, some information about the magnetic properties and microstructure of the material is provided. It is mentioned that the magnetic properties of the specimens were measured using a BH tracer or a vibrating sample magnetometer (VSM). The microstructure and crystal structure were analyzed using various techniques such as focused ion beam-scanning electron microscopy (FIB-SEM), electron probe microanalyses (EPMA), scanning transmission electron microscope-energy dispersive spectroscopy (STEM-EDS), and X-ray diffraction (XRD). The area ratio of each phase and the coverage ratio of the grains by the grain-boundary phase were calculated. Additionally, it is mentioned that the material contains rare-earth elements (Nd, Y, and Ce) and Fe, and that the coercivity can be improved by controlling the grain boundaries using certain phases (R6Fe13Ga and RFe2).'
```
And then I pass it to a load_qa_chain, which is instantiated as follows (I'm summarising):
```
self.chain = load_qa_chain(llm, chain_type=qa_chain_type)
[...]
self.chain.run(input_documents=relevant_documents,
question=query,
memory=memory)
```
However, in the prompt there seems to be no memory information.
I would like to avoid having to rewrite the prompt with a custom one. Maybe the memory does not follow the required conventions (e.g. how the human/assistant messages should be characterized), which are not clear to me; I haven't found any documentation in this regard.
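For anyone landing here, my current understanding (I may be wrong) is that the memory has to be attached when the chain is constructed, with a prompt that actually contains the memory variable, rather than passing memory= to run(). A sketch, reusing the names from my snippet above:

```python
from langchain.chains.question_answering import load_qa_chain
from langchain.memory import ConversationBufferWindowMemory
from langchain.prompts import PromptTemplate

template = """Answer the question using the context below.

{context}

{history}
Human: {question}
Assistant:"""
prompt = PromptTemplate(
    input_variables=["context", "history", "question"], template=template
)
memory = ConversationBufferWindowMemory(
    k=4, memory_key="history", input_key="question"
)
chain = load_qa_chain(llm, chain_type="stuff", prompt=prompt, memory=memory)
chain({"input_documents": relevant_documents, "question": query})
```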
### Idea or request for content:
_No response_ | DOC: how to use load_qa_chain with memory | https://api.github.com/repos/langchain-ai/langchain/issues/13696/comments | 6 | 2023-11-22T03:29:40Z | 2023-11-22T06:30:33Z | https://github.com/langchain-ai/langchain/issues/13696 | 2,005,494,274 | 13,696 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Hey guys! Below is the code which I'm using to get the output:
```
import pandas as pd
from IPython.display import Markdown, display
from langchain.agents import create_csv_agent
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
import os
os.environ["OPENAI_API_KEY"] = ""
# Load the dataset
df = pd.read_csv('https://gist.githubusercontent.com/armgilles/194bcff35001e7eb53a2a8b441e8b2c6/raw/92200bc0a673d5ce2110aaad4544ed6c4010f687/pokemon.csv')
df.to_csv('pokemon.csv', index=False)
# Extract columns for the prompt template
columns = list(df.columns)
json_data = ', '.join(columns)
# Define the prompt template
prompt_template = f'''If the dataset has the following columns: {json_data}
Understand user questions with different column names and convert them to a JSON format. Here's an example:
Example 1:
User Question: Top 2 zones by Housing Loan in the West and South in the year 2019 excluding small loans and with Discount > 5 and Term Deposit between 10k and 15k
{{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Month",
"agg_columns": ["Housing Loan"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["Zone"],
"filters": {{"Zone": ["West", "South"]}},
"not_in": {{"Loan Type": ["Small"]}},
"num_filter": {{
"gt": [
["Discount", 5],
["Term Deposit", 10000]
],
"lt": [
["Term Deposit", 15000]
]
}},
"percent": false,
"top": "2",
"bottom": null
}}
Example 2:
User Question: What is the Highest Loan Amount and Loan Outstanding by RM Name James in January 2020
{{
"start_date": "01-01-2020",
"end_date": "31-01-2020",
"time_stamp_col": "Due Date",
"agg_columns": ["Loan Amount", "Loan Outstanding"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": [],
"filters": {{"RM Name": ["James"]}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "1",
"bottom": null
}}
Example 3:
User Question: Which RM Name with respect to Region has the Highest Interest Outstanding and Principal Outstanding in the year 2019
{{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Interest Outstanding", "Principal Outstanding"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["RM Name", "Region"],
"filters": {{}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "1",
"bottom": null
}}
Example 4:
User Question: Which Branch in North Carolina with respect to Cibil Score Bucket has the Highest Cibil Score in 2019
{{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Cibil Score", "DPD Bucket"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["Branch"],
"filters": {{"Region": ["North Carolina"]}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "1",
"bottom": null
}}
Example 5:
User Question: With respect to Zone, Region, Branch, RM Name what is the Highest Loan Amount, Loan Tenure, Loan Outstanding, EMI Pending, Principal Outstanding
{{
"start_date": null,
"end_date": null,
"time_stamp_col": null,
"agg_columns": ["Loan Amount", "Loan Tenure", "Loan Outstanding", "EMI Pending", "Principal Outstanding"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["Zone", "Region", "Branch", "RM Name"],
"filters": {{}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "1",
"bottom": null
}}
Example 6:
User Question: Top 2 zones by Housing Loan in the year 2019
{{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Housing Loan"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["Zone"],
"filters": {{"Product": ["Home Loan"]}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "2",
"bottom": null
}}
Our test dataset has the following columns: {json_data}
User Question (to be converted): {{{{user_question}}}}'''
prompt_template = prompt_template.replace("{", "{{").replace("}", "}}").replace("{{{{", "{").replace("}}}}", "}")
# Define user's question
user_question = "Which pokemon has the highest attack and which has lowest defense? I need the output in the json format"
# Format the user's question within the template
formatted_question = prompt_template.format(user_question=user_question)
# Load the agent
agent = create_csv_agent(OpenAI(temperature=0), "pokemon.csv", verbose=True)
gpt4_agent = create_csv_agent(ChatOpenAI(temperature=0, model_name="gpt-4"), "pokemon.csv", verbose=True)
# Use the formatted question as the input to your agent
response = agent.run(formatted_question)
# Print the response
print(response)
```
Below is the output which i got
```
> Entering new chain...
Thought: I need to find the highest attack and lowest defense values in the dataset
Action: python_repl_ast
Action Input: df[['Name', 'Attack', 'Defense']].sort_values(by=['Attack', 'Defense'], ascending=[False, True]).head(1).to_json()
Observation: {"Name":{"163":"MewtwoMega Mewtwo X"},"Attack":{"163":190},"Defense":{"163":100}}
Thought: I now know the final answer
Final Answer: The pokemon with the highest attack is MewtwoMega Mewtwo X with an attack of 190 and the pokemon with the lowest defense is MewtwoMega Mewtwo X with a defense of 100.
> Finished chain.
The pokemon with the highest attack is MewtwoMega Mewtwo X with an attack of 190 and the pokemon with the lowest defense is MewtwoMega Mewtwo X with a defense of 100.
```
But I need the output in the JSON format I've specified in the prompt_template. Can anyone assist me?
### Idea or request for content:
_No response_ | While utilizing the create_csv_agent function, the output is not returned in the JSON format | https://api.github.com/repos/langchain-ai/langchain/issues/13686/comments | 1 | 2023-11-21T21:51:45Z | 2024-02-14T03:35:23Z | https://github.com/langchain-ai/langchain/issues/13686 | 2,005,201,744 | 13,686 |
[
"langchain-ai",
"langchain"
] | ### System Info
It appears that OpenAI's SDK v1.0.0 update introduced some needed migrations.
Running the Langchain OpenAIModerationChain with OpenAI SDK >= v1.0.0 provides the following error:
```
You tried to access openai.Moderation, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.
You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface.
Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`
A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742
```
After briefly reading the mitigation steps I believe the suggested migration is from `openai.Moderation.create()` -> `client.moderations.create()`.
I believe the `validate_environment` of `OpenAIModerationChain` will want updating from `values["client"] = openai.Moderation` to using the recommended `client.moderations` syntax. (https://api.python.langchain.com/en/latest/_modules/langchain/chains/moderation.html#OpenAIModerationChain)
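A sketch of the kind of change I mean (paraphrased, not a tested patch):

```python
# Inside OpenAIModerationChain.validate_environment (paraphrased):
import openai

if hasattr(openai, "OpenAI"):  # openai >= 1.0
    values["client"] = openai.OpenAI().moderations
else:  # openai < 1.0
    values["client"] = openai.Moderation
```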
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
With OpenAI SDK >= v1.0.0 try to use `OpenAIModerationChain` to moderate a piece of content.
Error appears:
```
You tried to access openai.Moderation, but this is no longer supported in openai>=1.0.0 - see the README at https://github.com/openai/openai-python for the API.
You can run `openai migrate` to automatically upgrade your codebase to use the 1.0.0 interface.
Alternatively, you can pin your installation to the old version, e.g. `pip install openai==0.28`
A detailed migration guide is available here: https://github.com/openai/openai-python/discussions/742
```
### Expected behavior
When using `OpenAIModerationChain` with OpenAI SDK >= v1.0.0 I expect the chain to properly moderate content and not fail with an error. | OpenAIModerationChain with OpenAI SDK >= v1.0.0 Broken | https://api.github.com/repos/langchain-ai/langchain/issues/13685/comments | 9 | 2023-11-21T21:45:08Z | 2024-05-10T22:20:32Z | https://github.com/langchain-ai/langchain/issues/13685 | 2,005,192,238 | 13,685 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain version: 0.0.339
python: 3.9.17
### Who can help?
@hwchase17 @agola11
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When creating a Bedrock model without `region_name`
```py
llm_bedrock = Bedrock(model_id='anthropic.claude-instant-v1')
```
It errors out
```py
ValidationError: 1 validation error for Bedrock
__root__
Could not load credentials to authenticate with AWS client. Please check that credentials in the specified profile name are valid. (type=value_error)
```
This works
```py
llm_bedrock = Bedrock(model_id='anthropic.claude-instant-v1', region_name='us-west-2')
```
The problem is from https://github.com/langchain-ai/langchain/blob/bfb980b96800020d90c9362aaad40d8817636711/libs/langchain/langchain/llms/bedrock.py#L203-L207
where `get_from_dict_or_env` raises an exception if region is neither specified nor in env and default is not set.
To fix it, change default from `None` to `session.region_name`.
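A sketch of that fix (paraphrasing the linked code; the surrounding validator may differ):

```python
# Proposed change in llms/bedrock.py:
session = boto3.Session()
values["region_name"] = get_from_dict_or_env(
    values,
    "region_name",
    "AWS_DEFAULT_REGION",
    default=session.region_name,  # was: default=None
)
```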
### Expected behavior
No error is raised. | [Issue] Error with Bedrock when AWS region is not specified or not in environment variable | https://api.github.com/repos/langchain-ai/langchain/issues/13683/comments | 2 | 2023-11-21T21:28:42Z | 2023-11-30T03:55:47Z | https://github.com/langchain-ai/langchain/issues/13683 | 2,005,167,112 | 13,683 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Hey guys! Below is the code which i'm working on
```
import pandas as pd
from IPython.display import Markdown, display
from langchain.agents import create_csv_agent
from langchain.chat_models import ChatOpenAI
from langchain.llms import OpenAI
import os
os.environ["OPENAI_API_KEY"] = ""
df = pd.read_csv('https://gist.githubusercontent.com/armgilles/194bcff35001e7eb53a2a8b441e8b2c6/raw/92200bc0a673d5ce2110aaad4544ed6c4010f687/pokemon.csv')
df.to_csv('pokemon.csv', index=False)
# Load the agent
agent = create_csv_agent(OpenAI(temperature=0), "pokemon.csv", verbose=True)
gpt4_agent = create_csv_agent(
    ChatOpenAI(temperature=0, model_name="gpt-4"), "pokemon.csv", verbose=True
)
agent.run("Which pokemon has the highest attack and which has lowest defense?")
```
In the above code, you can see the query is just a plain question. How do I pass the prompt template below as the question?
```
# Now you can use the json_data in your prompt
prompt_template = f'''If the dataset has the following columns: {json_data}
Understand user questions with different column names and convert them to a JSON format. Here's an example:
Example 1:
User Question: Top 2 zones by Housing Loan in the West and South in the year 2019 excluding small loans and with Discount > 5 and Term Deposit between 10k and 15k
{{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Month",
"agg_columns": ["Housing Loan"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["Zone"],
"filters": {{"Zone": ["West", "South"]}},
"not_in": {{"Loan Type": ["Small"]}},
"num_filter": {{
"gt": [
["Discount", 5],
["Term Deposit", 10000]
],
"lt": [
["Term Deposit", 15000]
]
}},
"percent": false,
"top": "2",
"bottom": null
}}
Example 2:
User Question: What is the Highest Loan Amount and Loan Outstanding by RM Name James in January 2020
{{
"start_date": "01-01-2020",
"end_date": "31-01-2020",
"time_stamp_col": "Due Date",
"agg_columns": ["Loan Amount", "Loan Outstanding"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": [],
"filters": {{"RM Name": ["James"]}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "1",
"bottom": null
}}
Example 3:
User Question: Which RM Name with respect to Region has the Highest Interest Outstanding and Principal Outstanding in the year 2019
{{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Interest Outstanding", "Principal Outstanding"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["RM Name", "Region"],
"filters": {{}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "1",
"bottom": null
}}
Example 4:
User Question: Which Branch in North Carolina with respect to Cibil Score Bucket has the Highest Cibil Score in 2019
{{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Cibil Score", "DPD Bucket"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["Branch"],
"filters": {{"Region": ["North Carolina"]}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "1",
"bottom": null
}}
Example 5:
User Question: With respect to Zone, Region, Branch, RM Name what is the Highest Loan Amount, Loan Tenure, Loan Outstanding, EMI Pending, Principal Outstanding
{{
"start_date": null,
"end_date": null,
"time_stamp_col": null,
"agg_columns": ["Loan Amount", "Loan Tenure", "Loan Outstanding", "EMI Pending", "Principal Outstanding"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["Zone", "Region", "Branch", "RM Name"],
"filters": {{}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "1",
"bottom": null
}}
Example 6:
User Question: Top 2 zones by Housing Loan in the year 2019
{{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Housing Loan"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["Zone"],
"filters": {{"Product": ["Home Loan"]}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "2",
"bottom": null
}}
Our test dataset has the following columns: {json_data}
User Question (to be converted): {{user_question}}'''
user_question = "Which pokemon has the highest attack and which has lowest defense?"
```
Can anyone assist with this?
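For reference, the workaround I'm currently considering is rendering the template to a plain string and handing that to the agent; a sketch (note `.replace` rather than `.format`, since the literal `{...}` braces left over from the JSON examples would trip `str.format`):
```
# sketch: substitute the question manually, then pass the rendered string
question = prompt_template.replace("{user_question}", user_question)
agent.run(question)
```
Is this the intended way, or is there a cleaner mechanism?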
### Idea or request for content:
_No response_ | How to add prompt template to create_csv_agent? | https://api.github.com/repos/langchain-ai/langchain/issues/13682/comments | 4 | 2023-11-21T21:26:28Z | 2024-02-14T03:35:23Z | https://github.com/langchain-ai/langchain/issues/13682 | 2,005,164,418 | 13,682 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
I have run the code below, which tries to go through the CSV file and return the answer:
```
import pandas as pd
import json
from langchain.llms import OpenAI
from langchain.chains import LLMChain
openai_api_key = ''
# Read CSV file into a DataFrame
df = pd.read_csv('https://gist.githubusercontent.com/armgilles/194bcff35001e7eb53a2a8b441e8b2c6/raw/92200bc0a673d5ce2110aaad4544ed6c4010f687/pokemon.csv')
# del df['Due Date']
# del df['Closing Date']
# Create a dictionary of unique values per column
unique_values_per_column = {}
for column in df.select_dtypes(include=['object']).columns:
# Convert ndarray to list
unique_values_per_column[column] = df[column].unique().tolist()
# Convert the dictionary to JSON
json_data = json.dumps(unique_values_per_column, indent=4)
# Now you can use the json_data in your prompt
prompt_template = f'''If the dataset has the following columns: {json_data}
Understand user questions with different column names and convert them to a JSON format. Here's an example:
Example 1:
User Question: Top 2 zones by Housing Loan in the West and South in the year 2019 excluding small loans and with Discount > 5 and Term Deposit between 10k and 15k
{{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Month",
"agg_columns": ["Housing Loan"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["Zone"],
"filters": {{"Zone": ["West", "South"]}},
"not_in": {{"Loan Type": ["Small"]}},
"num_filter": {{
"gt": [
["Discount", 5],
["Term Deposit", 10000]
],
"lt": [
["Term Deposit", 15000]
]
}},
"percent": false,
"top": "2",
"bottom": null
}}
Example 2:
User Question: What is the Highest Loan Amount and Loan Outstanding by RM Name James in January 2020
{{
"start_date": "01-01-2020",
"end_date": "31-01-2020",
"time_stamp_col": "Due Date",
"agg_columns": ["Loan Amount", "Loan Outstanding"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": [],
"filters": {{"RM Name": ["James"]}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "1",
"bottom": null
}}
Example 3:
User Question: Which RM Name with respect to Region has the Highest Interest Outstanding and Principal Outstanding in the year 2019
{{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Interest Outstanding", "Principal Outstanding"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["RM Name", "Region"],
"filters": {{}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "1",
"bottom": null
}}
Example 4:
User Question: Which Branch in North Carolina with respect to Cibil Score Bucket has the Highest Cibil Score in 2019
{{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Cibil Score", "DPD Bucket"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["Branch"],
"filters": {{"Region": ["North Carolina"]}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "1",
"bottom": null
}}
Example 5:
User Question: With respect to Zone, Region, Branch, RM Name what is the Highest Loan Amount, Loan Tenure, Loan Outstanding, EMI Pending, Principal Outstanding
{{
"start_date": null,
"end_date": null,
"time_stamp_col": null,
"agg_columns": ["Loan Amount", "Loan Tenure", "Loan Outstanding", "EMI Pending", "Principal Outstanding"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["Zone", "Region", "Branch", "RM Name"],
"filters": {{}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "1",
"bottom": null
}}
Example 6:
User Question: Top 2 zones by Housing Loan in the year 2019
{{
"start_date": "01-01-2019",
"end_date": "31-12-2019",
"time_stamp_col": "Due Date",
"agg_columns": ["Housing Loan"],
"trend": null,
"to_start_date": null,
"to_end_date": null,
"growth": null,
"variables_grpby": ["Zone"],
"filters": {{"Product": ["Home Loan"]}},
"not_in": {{}},
"num_filter": {{}},
"percent": false,
"top": "2",
"bottom": null
}}
Our test dataset has the following columns: {json_data}
User Question (to be converted): {{user_question}}'''
# Use langchain to process the prompt
llm = OpenAI(api_key=openai_api_key, temperature=0.9)
chain = LLMChain(llm=llm, prompt=prompt_template)
user_question = "What is the Highest Loan Amount and Loan Outstanding by RM Name James in January 2020"
response = chain.run(user_question)
print(response)
```
Below is the error it returns:
```
WARNING! api_key is not default parameter.
api_key was transferred to model_kwargs.
Please confirm that api_key is what you intended.
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_22224\1265770286.py in <cell line: 169>()
167 # Use langchain to process the prompt
168 llm = OpenAI(api_key=openai_api_key, temperature=0.9)
--> 169 chain = LLMChain(llm=llm, prompt=prompt_template)
170
171
~\anaconda3\lib\site-packages\langchain\load\serializable.py in __init__(self, **kwargs)
72
73 def __init__(self, **kwargs: Any) -> None:
---> 74 super().__init__(**kwargs)
75 self._lc_kwargs = kwargs
76
~\anaconda3\lib\site-packages\pydantic\main.cp310-win_amd64.pyd in pydantic.main.BaseModel.__init__()
ValidationError: 1 validation error for LLMChain
prompt
value is not a valid dict (type=type_error.dict)
```
Can anyone help me understand what exactly this error means?
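From reading the traceback, it looks like `LLMChain` wants a `PromptTemplate` object rather than a raw string; a sketch of what I could try (untested; because the f-string above already collapsed `{{...}}` to `{...}`, the JSON braces would confuse the default f-string template parsing, so I substitute the question manually and use a trivial pass-through template):
```
from langchain.prompts import PromptTemplate

prompt = PromptTemplate(template="{final_prompt}", input_variables=["final_prompt"])
chain = LLMChain(llm=llm, prompt=prompt)
response = chain.run(final_prompt=prompt_template.replace("{user_question}", user_question))
```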
### Idea or request for content:
_No response_ | Returning "ValidationError: 1 validation error for LLMChain prompt value is not a valid dict (type=type_error.dict)" while trying to run the LLMChain | https://api.github.com/repos/langchain-ai/langchain/issues/13681/comments | 1 | 2023-11-21T21:15:55Z | 2024-02-14T03:35:23Z | https://github.com/langchain-ai/langchain/issues/13681 | 2,005,150,371 | 13,681 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am using `ConversationalRetrievalChain.from_llm()`. If I print the final prompts being sent to the LLM, I see the LLM is called 2 times in total. I will paste the prompt from both calls.
Below is the code I am using:
`qa_chain = ConversationalRetrievalChain.from_llm(llm=llm, chain_type=chain_type,
retriever=vector_database.as_retriever(),
return_source_documents=True)
qa_chain({'question': 'summarize this document', "chat_history": [('what is this', 'this is something'),('who you are', 'i am nothing')]})`
Below are the prompts sent to the LLM in the 2 calls.
**1st call to LLM:**
Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.
Chat History:
human: what is this
assistant: this is something
human: who you are
assistant: i am nothing
Follow Up Input: summarize this document
Standalone question:
**2nd call to LLM:**
Use the following pieces of context to answer the question at the end. If you don't know the answer, just say that you don't know, don't try to make up an answer.
Title: Details
Title:Details
Title:Details
Question: what is the summary of the document?
Helpful Answers:
In the above 2 calls I can clearly see that what I passed in **chat_history** as question-and-answer tuple pairs is used to frame my question in the **1st call prompt**, and I can clearly see that my original question **summarize this document** has been rephrased to **what is the summary of the document?**.
My question, or what I am trying to understand, is: in place of the context I see **Title:Details** 3 times. What is this? Can you please explain? Is this the output of `vectordb.as_retriever()`? If yes, why am I seeing **Title:Details** 3 times? Please help me understand what **Title:Details** is in the **2nd prompt call**.
Let me know if you need additional context to understand my questions.
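For reference, this is how I'm inspecting what the retriever returns on its own (sketch):
```
docs = vector_database.as_retriever().get_relevant_documents("summarize this document")
for d in docs:
    print(d.metadata, d.page_content[:200])
```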
### Suggestion:
_No response_ | Issue: Queries on conversational retreival chain prompt | https://api.github.com/repos/langchain-ai/langchain/issues/13675/comments | 4 | 2023-11-21T20:28:07Z | 2024-02-27T16:05:54Z | https://github.com/langchain-ai/langchain/issues/13675 | 2,005,073,074 | 13,675 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain version: `0.0.339`
Python version: `3.10`
### Who can help?
@hwchase17
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
When serializing the `ChatPromptTemplate`, it saves it in a JSON/YAML format like this:
```
{'input_variables': ['question', 'context'],
'output_parser': None,
'partial_variables': {},
'messages': [{'prompt': {'input_variables': ['context', 'question'],
'output_parser': None,
'partial_variables': {},
'template': "...",
'template_format': 'f-string',
'validate_template': True,
'_type': 'prompt'},
'additional_kwargs': {}}],
'_type': 'chat'}
```
Note that the `_type` is "chat".
However, LangChain's `load_prompt_from_config` [does not recognize "chat" as the supported prompt type](https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/prompts/loading.py#L19).
Here is a minimal example to reproduce the issue:
```python
from langchain.prompts import ChatPromptTemplate
from langchain.chains import RetrievalQA
from langchain.llms import OpenAI
from langchain.vectorstores import FAISS
from langchain.embeddings import OpenAIEmbeddings
from langchain.chains.loading import load_chain
TEMPLATE = """Answer the question based on the context:
{context}
Question: {question}
Answer:
"""
chat_prompt = ChatPromptTemplate.from_template(TEMPLATE)
llm = OpenAI()
def get_retriever(persist_dir = None):
vectorstore = FAISS.from_texts(
["harrison worked at kensho"], embedding=OpenAIEmbeddings()
)
return vectorstore.as_retriever()
chain_with_chat_prompt = RetrievalQA.from_chain_type(
llm=llm,
chain_type="stuff",
retriever=get_retriever(),
chain_type_kwargs={"prompt": chat_prompt},
)
chain_with_prompt_saved_path = "./chain_with_prompt.yaml"
chain_with_chat_prompt.save(chain_with_prompt_saved_path)
loaded_chain = load_chain(chain_with_prompt_saved_path, retriever=get_retriever())
```
The above script failed with the error:
`ValueError: Loading chat prompt not supported`
### Expected behavior
Load a chain that contains `ChatPromptTemplate` should work. | Can not load chain with ChatPromptTemplate | https://api.github.com/repos/langchain-ai/langchain/issues/13667/comments | 1 | 2023-11-21T17:40:19Z | 2023-11-27T16:39:51Z | https://github.com/langchain-ai/langchain/issues/13667 | 2,004,820,202 | 13,667 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I am creating a tool that takes multiple input arguments (say input1, input2). I would like to validate their types and also make sure that the tool only receives input1 and input2. How do I validate this without breaking the LLM chain? Instead of raising, I would like to return a warning to the LLM agent, something like: "The inputs passed were incorrect, please try again".
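A sketch of the kind of thing I mean (names are illustrative; the arguments are typed as `Any` so pydantic doesn't raise, then validated manually so a warning string goes back to the agent):
```python
from typing import Any
from langchain.tools import StructuredTool

def my_tool(input1: Any = None, input2: Any = None) -> str:
    # validate manually instead of letting a ValidationError bubble up
    if not isinstance(input1, str) or not isinstance(input2, int):
        return "The inputs passed were incorrect, please try again"
    return f"processed {input1!r} with {input2}"

tool = StructuredTool.from_function(
    func=my_tool,
    name="my_tool",
    description="Takes input1 (string) and input2 (integer).",
)
```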
### Suggestion:
_No response_ | Issue: How to validate Tool input arguments without raising ValidationError | https://api.github.com/repos/langchain-ai/langchain/issues/13662/comments | 19 | 2023-11-21T16:23:53Z | 2024-07-29T16:06:18Z | https://github.com/langchain-ai/langchain/issues/13662 | 2,004,693,724 | 13,662 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain Version: 0.0.339
Python Version: 3.10.12
### Who can help?
@hwchase17
When trying to execute the example from the GraphSparqlQAChain docs, a ValueError arises; in my understanding, this is because the chain cannot parse the generated query correctly and execute it against the graph.
```
> Entering new GraphSparqlQAChain chain...
Llama.generate: prefix-match hit
The URI of Tim Berners-Lee's work homepage is <https://www.w3.org/People/Berners-Lee/>.
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-13-432f483fbf1b>](https://localhost:8080/#) in <cell line: 2>()
1 llm_chain = GraphSparqlQAChain.from_llm(llm=llm, graph=graph, verbose=True)
----> 2 llm_chain.run("What is Tim Berners-Lee's work homepage?")
3 frames
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in run(self, callbacks, tags, metadata, *args, **kwargs)
503 if len(args) != 1:
504 raise ValueError("`run` supports only one positional argument.")
--> 505 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
506 _output_key
507 ]
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in __call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
308 except BaseException as e:
309 run_manager.on_chain_error(e)
--> 310 raise e
311 run_manager.on_chain_end(outputs)
312 final_outputs: Dict[str, Any] = self.prep_outputs(
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in __call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
302 try:
303 outputs = (
--> 304 self._call(inputs, run_manager=run_manager)
305 if new_arg_supported
306 else self._call(inputs)
[/usr/local/lib/python3.10/dist-packages/langchain/chains/graph_qa/sparql.py](https://localhost:8080/#) in _call(self, inputs, run_manager)
101 intent = "UPDATE"
102 else:
--> 103 raise ValueError(
104 "I am sorry, but this prompt seems to fit none of the currently "
105 "supported SPARQL query types, i.e., SELECT and UPDATE."
ValueError: I am sorry, but this prompt seems to fit none of the currently supported SPARQL query types, i.e., SELECT and UPDATE.
```
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to reproduce the behaviour:
1. Follow the example as per the documentation: https://python.langchain.com/docs/use_cases/graph/graph_sparql_qa
2. Instead of the OpenAI model, use a custom llamacpp deployment, i.e.
```
llm = LlamaCpp(
model_path=model_path,
n_gpu_layers=32,
n_batch=512,
callback_manager=callback_manager,
n_ctx=2048,
verbose=True, # Verbose is required to pass to the callback manager
)
```
3. Run the chain:
```
llm_chain = GraphSparqlQAChain.from_llm(llm=llm, graph=graph, verbose=True)
llm_chain.run("What is Tim Berners-Lee's work homepage?")
```
### Expected behavior
Apart from giving the right result (which it does), the SPARQL query should be shown and run against the graph. As per the documentation, this should be the reply:
```
> Entering new GraphSparqlQAChain chain...
Identified intent:
SELECT
Generated SPARQL:
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?homepage
WHERE {
?person foaf:name "Tim Berners-Lee" .
?person foaf:workplaceHomepage ?homepage .
}
Full Context:
[]
> Finished chain.
"Tim Berners-Lee's work homepage is http://www.w3.org/People/Berners-Lee/."
``` | Error in executing SPARQL query with GraphSparqlQAChain and custom LLM (llamacpp) | https://api.github.com/repos/langchain-ai/langchain/issues/13656/comments | 5 | 2023-11-21T14:51:15Z | 2024-03-13T19:55:45Z | https://github.com/langchain-ai/langchain/issues/13656 | 2,004,456,391 | 13,656 |
[
"langchain-ai",
"langchain"
] | ### System Info
Langchain 0.0.326
Windows
Python 3.11.5
Nvidia P6-16Q GPU
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Loading all available GPU layers appears to cause LlamaCppEmbeddings to fail.
- Edit: that was my original wording because I was using -1 for n_gpu_layers, but further testing showed the failure only occurs when using 34 or 35 layers; with 33 it works.
For example this fails:
`llama = LlamaCppEmbeddings(model_path=r"mistral-7b-instruct-v0.1.Q5_K_M.gguf", n_ctx=512, n_gpu_layers=34)`
```
llm_load_tensors: ggml ctx size = 0.09 MB
llm_load_tensors: using CUDA for GPU acceleration
llm_load_tensors: mem required = 86.05 MB (+ 128.00 MB per state)
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloading v cache to GPU
llm_load_tensors: offloaded 34/35 layers to GPU
llm_load_tensors: VRAM used: 4840 MB
```
With this error message:
```
Traceback (most recent call last):
File "c:\Users\x\Desktop\Embedding\EmbeddingScript.py", line 11, in <module>
query_result = llama.embed_query("Hello World")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\x\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\embeddings\llamacpp.py", line 125, in embed_query
embedding = self.client.embed(text)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\x\AppData\Local\Programs\Python\Python311\Lib\site-packages\llama_cpp\llama.py", line 860, in embed
return list(map(float, self.create_embedding(input)["data"][0]["embedding"]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\x\AppData\Local\Programs\Python\Python311\Lib\site-packages\llama_cpp\llama.py", line 824, in create_embedding
self.eval(tokens)
File "C:\Users\x\AppData\Local\Programs\Python\Python311\Lib\site-packages\llama_cpp\llama.py", line 491, in eval
return_code = llama_cpp.llama_eval(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\x\AppData\Local\Programs\Python\Python311\Lib\site-packages\llama_cpp\llama_cpp.py", line 808, in llama_eval
return _lib.llama_eval(ctx, tokens, n_tokens, n_past, n_threads)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: exception: access violation writing 0x0000000000000000
```
However, this does not fail:
`llama = LlamaCppEmbeddings(model_path=r"mistral-7b-instruct-v0.1.Q5_K_M.gguf", n_ctx=512, n_gpu_layers=33)`
```
llm_load_tensors: ggml ctx size = 0.09 MB
llm_load_tensors: using CUDA for GPU acceleration
llm_load_tensors: mem required = 86.05 MB (+ 128.00 MB per state)
llm_load_tensors: offloading 32 repeating layers to GPU
llm_load_tensors: offloading non-repeating layers to GPU
llm_load_tensors: offloaded 33/35 layers to GPU
llm_load_tensors: VRAM used: 4808 MB
```
This may not be a LangChain problem (I am not knowledgeable enough to know), but I am not sure why loading all the available GPU layers causes it to fail.
### Expected behavior
I would have thought that using all available layers of the GPU would not cause it to fail. | LlamaCppEmbeddings - fails if all available GPU layers are used. | https://api.github.com/repos/langchain-ai/langchain/issues/13655/comments | 3 | 2023-11-21T14:42:36Z | 2024-06-01T00:16:36Z | https://github.com/langchain-ai/langchain/issues/13655 | 2,004,439,132 | 13,655 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
To develop a scalable solution, I hoped to store the AgentExecutor object in centralized memory, such as Redis. However, due to its non-serializable nature, this has not been possible. Is there a workaround to achieve this?
### Suggestion:
_No response_ | Issue: Unable to Serialize AgentExecutor Object for Centralized Storage | https://api.github.com/repos/langchain-ai/langchain/issues/13653/comments | 6 | 2023-11-21T14:03:40Z | 2024-08-07T22:38:49Z | https://github.com/langchain-ai/langchain/issues/13653 | 2,004,360,284 | 13,653 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
```python
def delete_embeddings(file_path, persist_directory):
    chroma_db = chromadb.PersistentClient(path=persist_directory)
    collection = chroma_db.get_or_create_collection(name="langchain")
    ids = collection.get(where={"source": file_path})['ids']
    collection.delete(where={"source": file_path}, ids=ids)
    print("delete successfully")
```
Below is the error I am getting:
```
File "/home/aaditya/DBChat/CustomBot/user_projects/views.py", line 1028, in project_pages
delete_embeddings(names, persist_directory)
File "/home/aaditya/DBChat/CustomBot/accounts/common_langcain_qa.py", line 134, in delete_embeddings
chroma_db = chromadb.PersistentClient(path=persist_directory)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/aaditya/DBChat/chat/lib/python3.11/site-packages/chromadb/__init__.py", line 106, in PersistentClient
return Client(settings)
^^^^^^^^^^^^^^^^
File "/home/aaditya/DBChat/chat/lib/python3.11/site-packages/chromadb/__init__.py", line 143, in Client
api = system.instance(API)
^^^^^^^^^^^^^^^^^^^^
File "/home/aaditya/DBChat/chat/lib/python3.11/site-packages/chromadb/config.py", line 248, in instance
impl = type(self)
^^^^^^^^^^
File "/home/aaditya/DBChat/chat/lib/python3.11/site-packages/chromadb/api/segment.py", line 81, in __init__
self._sysdb = self.require(SysDB)
^^^^^^^^^^^^^^^^^^^
File "/home/aaditya/DBChat/chat/lib/python3.11/site-packages/chromadb/config.py", line 189, in require
inst = self._system.instance(type)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/aaditya/DBChat/chat/lib/python3.11/site-packages/chromadb/config.py", line 248, in instance
impl = type(self)
```
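As an aside, a sketch of doing the same deletion through LangChain's Chroma wrapper instead of constructing the client manually, in case that sidesteps the client construction error (assuming the wrapper in your version exposes `get`/`delete`):
```python
from langchain.vectorstores import Chroma

db = Chroma(collection_name="langchain", persist_directory=persist_directory)
ids = db.get(where={"source": file_path})["ids"]
if ids:
    db.delete(ids=ids)
```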
### Suggestion:
_No response_ | Issue: getting error in deletion of embeddings for a file | https://api.github.com/repos/langchain-ai/langchain/issues/13651/comments | 2 | 2023-11-21T13:25:26Z | 2024-02-27T16:06:04Z | https://github.com/langchain-ai/langchain/issues/13651 | 2,004,285,505 | 13,651 |
[
"langchain-ai",
"langchain"
] | ### System Info
Ubuntu 22
python 3.10.12
Langchain==0.0.339
duckduckgo-search==3.9.6
### Who can help?
@timonpalm who created [the PR](https://github.com/langchain-ai/langchain/pull/8292)
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps are straight from [this help page](https://python.langchain.com/docs/integrations/tools/ddg) which is [this notebook](https://github.com/langchain-ai/langchain/blob/611e1e0ca45343b86debc0d24db45703ee63643b/docs/docs/integrations/tools/ddg.ipynb#L69)
```
from langchain.tools import DuckDuckGoSearchResults
search = DuckDuckGoSearchResults(backend="news")
search.run("Obama")
```
This results in error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/USER/.local/lib/python3.10/site-packages/langchain/tools/base.py", line 365, in run
raise e
File "/home/USER/.local/lib/python3.10/site-packages/langchain/tools/base.py", line 337, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
File "/home/USER/.local/lib/python3.10/site-packages/langchain/tools/ddg_search/tool.py", line 61, in _run
res = self.api_wrapper.results(query, self.num_results, backend=self.backend)
File "/home/USER/.local/lib/python3.10/site-packages/langchain/utilities/duckduckgo_search.py", line 108, in results
for i, res in enumerate(results, 1):
File "/home/USER/.local/lib/python3.10/site-packages/duckduckgo_search/duckduckgo_search.py", line 96, in text
for i, result in enumerate(results, start=1):
UnboundLocalError: local variable 'results' referenced before assignment
```
Looking at the source code of the duckduckgo-search package, I see this function:
```
def text(
self,
keywords: str,
region: str = "wt-wt",
safesearch: str = "moderate",
timelimit: Optional[str] = None,
backend: str = "api",
max_results: Optional[int] = None,
) -> Iterator[Dict[str, Optional[str]]]:
"""DuckDuckGo text search generator. Query params: https://duckduckgo.com/params
Args:
keywords: keywords for query.
region: wt-wt, us-en, uk-en, ru-ru, etc. Defaults to "wt-wt".
safesearch: on, moderate, off. Defaults to "moderate".
timelimit: d, w, m, y. Defaults to None.
backend: api, html, lite. Defaults to api.
api - collect data from https://duckduckgo.com,
html - collect data from https://html.duckduckgo.com,
lite - collect data from https://lite.duckduckgo.com.
max_results: max number of results. If None, returns results only from the first response. Defaults to None.
Yields:
dict with search results.
"""
if backend == "api":
results = self._text_api(keywords, region, safesearch, timelimit, max_results)
elif backend == "html":
results = self._text_html(keywords, region, safesearch, timelimit, max_results)
elif backend == "lite":
results = self._text_lite(keywords, region, timelimit, max_results)
for i, result in enumerate(results, start=1):
yield result
if max_results and i >= max_results:
break
```
My assessment is that langchain should raise an exception if backend is not part of ["api", "html", "lite"] and the notebook should not mention this "news" feature anymore.
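Something along these lines, purely as an illustration of the check (not an actual patch):
```
VALID_BACKENDS = ("api", "html", "lite")

def _check_backend(backend: str) -> str:
    if backend not in VALID_BACKENDS:
        raise ValueError(f"backend must be one of {VALID_BACKENDS}, got {backend!r}")
    return backend
```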
### Expected behavior
Raising an exception when creating an instance of `DuckDuckGoSearchResults` with an invalid `backend` argument, and updating the documentation. | ddg error: backend "news" is obsolete so should raise an error and example should be updated | https://api.github.com/repos/langchain-ai/langchain/issues/13648/comments | 6 | 2023-11-21T12:10:21Z | 2024-03-16T15:41:57Z | https://github.com/langchain-ai/langchain/issues/13648 | 2,004,139,873 | 13,648 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
I'm trying to develop an SQL agent chatbot that will generate and execute SQL queries based on the user's free-text questions.
So far I have developed a generic and basic agent with ChatGPT-4 and it worked pretty well (generated complex SQL and returned an answer).
When I tried to switch the LLM to Claude v2 (hosted on AWS Bedrock), things started to break.
Sometimes the agent gets stuck in a loop, repeating the same input (for example):
```
> Entering new AgentExecutor chain...
Here is my thought process and actions to answer the question "how many alerts?":
Thought: I need to first see what tables are available in the database.
Action: sql_db_list_tables
Action Input:
Observation: alerts, detectors
Thought: Here is my response to the question "how many alerts?":
Thought: I need to first see what tables are available in the database.
Action: sql_db_list_tables
Action Input:
Observation: alerts, detectors
Thought: Here is my response to the question "how many alerts?":
Thought: I need to first see what tables are available in the database.
Action: sql_db_list_tables
Action Input:
Observation: alerts, detectors
Thought: Here is my response to the question "how many alerts?":
Thought: I need to first see what tables are available in the database.
Action: sql_db_list_tables
Action Input:
Observation: alerts, detectors
Thought: Here is my response to the question "how many alerts?":
```
Sometimes it gets to the point where it actually queries the DB and gets the result, but it doesn't stop the run and return the answer.
Here is how I configured the LLM and agent:
```
def aws_bedrock():
config = AwsBedrockConfig()
client = boto3.client(
'bedrock-runtime',
region_name=config.region_name,
aws_access_key_id=config.aws_access_key_id,
aws_secret_access_key=config.aws_secret_access_key,
)
model_kwargs = {
"prompt": "\n\nHuman: Hello world\n\nAssistant:",
"max_tokens_to_sample": 100000,
"temperature": 0,
"top_k": 10,
"top_p": 1,
"stop_sequences": [
"\n\nHuman:"
],
"anthropic_version": "bedrock-2023-05-31"
}
return Bedrock(
client=client,
model_id=config.model_id,
model_kwargs=model_kwargs
)
```
(I also tried changing the model arguments, with no success.)
```
memory = ConversationBufferMemory(memory_key="history", chat_memory=history)
toolkit = SQLDatabaseToolkit(db=db, llm=self.llm)
self.agent = create_sql_agent(
llm=self.llm,
toolkit=toolkit,
extra_tools=[TableCommentsTool(db=db)],
prefix=self.format_sql_prefix(filters),
suffix=HISTORY_SQL_SUFFIX,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
agent_executor_kwargs={"memory": memory},
verbose=True,
input_variables=["input", "history", "agent_scratchpad"]
)
```
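As a stopgap I'm considering capping the loop so the run at least terminates; a sketch (untested, the iteration limit is arbitrary):
```
self.agent = create_sql_agent(
    llm=self.llm,
    toolkit=toolkit,
    agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
    agent_executor_kwargs={
        "memory": memory,
        "max_iterations": 8,                  # arbitrary cap
        "early_stopping_method": "generate",  # ask the LLM for a final answer on stop
    },
    verbose=True,
)
```
This obviously doesn't fix the root cause of the looping.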
### Suggestion:
_No response_ | SQL Agent not working well with Claude v2 model | https://api.github.com/repos/langchain-ai/langchain/issues/13647/comments | 3 | 2023-11-21T12:00:56Z | 2024-04-09T08:50:03Z | https://github.com/langchain-ai/langchain/issues/13647 | 2,004,123,075 | 13,647 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Hello,
I'm not sure if this is already supported or not, I couldn't find anything in the documentation.
Is there a way to make chains support streaming ? It would be nice if we can get it working with something like the `load_summarize_chain`.
Or something like this:
```
doc_prompt = PromptTemplate.from_template("{page_content}")
chain = (
{
"content": lambda docs: "\n\n".join(
format_document(doc, doc_prompt) for doc in docs
)
}
| PromptTemplate.from_template("Summarize the following content:\n\n{content}")
| OpenAI(
temperature=1,
model_name=llm_model,
stream=True,
)
| StrOutputParser()
)
docs = [
Document(
page_content=split,
metadata={"source": "https://en.wikipedia.org/wiki/Nuclear_power_in_space"},
)
for split in text.split()
]
for partial_result in chain.invoke(docs):
print(partial_result)
```
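(If LCEL runnables already cover this through their `.stream()` method, then a sketch like the following would be enough for my use case, reusing the chain above:)
```
for chunk in chain.stream(docs):
    print(chunk, end="", flush=True)
```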
### Motivation
I have long documents to summarize, so I would like to show the partial results in streaming mode and not make the user wait so long to get the final result.
### Your contribution
No. If it's not possible, I'm willing to implement the summarization chain from scratch and use the OpenAI lib. | Support for streaming in the langchain chains (eg., load_summarize_chain) | https://api.github.com/repos/langchain-ai/langchain/issues/13644/comments | 3 | 2023-11-21T11:20:41Z | 2024-03-13T19:55:39Z | https://github.com/langchain-ai/langchain/issues/13644 | 2,004,052,880 | 13,644 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
Regarding:
https://github.com/langchain-ai/langchain/blob/master/templates/rag-gpt-crawler/README.md
I found 2 issues:
1. no such file as "server.py" (there is "rag_gpt_crawler.ipynb" instead)
2. the import line shown in the docs,
`from rag_chroma import chain as rag_gpt_crawler`
is not the same as the one shown in the terminal:
`from rag_gpt_crawler import chain as rag_gpt_crawler_chain`
I think the first is incorrect.
### Idea or request for content:
_No response_ | DOC: Template "rag-gpt-crawler" doc is incorrect | https://api.github.com/repos/langchain-ai/langchain/issues/13640/comments | 2 | 2023-11-21T09:39:28Z | 2023-11-22T22:06:11Z | https://github.com/langchain-ai/langchain/issues/13640 | 2,003,853,667 | 13,640 |
[
"langchain-ai",
"langchain"
] | ### Issue with current documentation:
The documentation on [creating documents](https://js.langchain.com/docs/modules/data_connection/document_loaders/how_to/creating_documents) covers optional document metadata but doesn't mention that it's possible to create text metadata in `page_content`. For example if only a filename is given to `CSVLoader` it will assume the header is metadata and [delimit each key-value pair with a newline](https://github.com/aws-samples/multi-tenant-chatbot-using-rag-with-amazon-bedrock/pull/14).
Chat LangChain will talk about this source of metadata but I can't find any additional information in the provided references.
> In the context of the RAG model, if the CSV data is used as a source for retrieval or generation, the fieldnames can be utilized in several ways:
>
> • Retrieval: The fieldnames can be used as query terms to retrieve relevant documents from the CSV dataset. The RAG model can leverage the fieldnames to understand the user's query and retrieve documents that match the specified criteria.
>
> • Generation: The fieldnames can provide context and constraints for generating responses. The RAG model can use the fieldnames as prompts or conditioning information to generate responses that are specific to the content of the corresponding columns in the CSV file.
>
> By incorporating the CSV fieldnames into the retrieval and generation processes, the RAG model can produce more accurate and contextually relevant results based on the specific attributes and structure of the CSV dataset.
### Idea or request for content:
A description of the consequences of including text metadata for different scenarios, models, data stores.
Strategies for including text metadata. | DOC: Text metadata | https://api.github.com/repos/langchain-ai/langchain/issues/13639/comments | 2 | 2023-11-21T09:39:12Z | 2023-11-24T10:28:28Z | https://github.com/langchain-ai/langchain/issues/13639 | 2,003,853,198 | 13,639 |
[
"langchain-ai",
"langchain"
] | ### Feature request
Add a progress bar to `GooglePalmEmbeddings.embed_documents()` function. [tqdm](https://github.com/tqdm/tqdm) would work just fine.
In my opinion, all embedders should have a progress bar.
### Motivation
When processing embeddings the user should have an idea of how much time is going to take to embed the data. While using GooglePalmEmbeddings, which are not the fastest, I couldn't see a progress bar and that was frustrating me because I had no idea if it was even accessing GooglePalm correctly or how much time it was going to take.
### Your contribution
```python
from __future__ import annotations
import logging
from typing import Any, Callable, Dict, List, Optional
from tqdm import tqdm
from langchain_core.pydantic_v1 import BaseModel, root_validator
from langchain_core.schema.embeddings import Embeddings
from tenacity import (
before_sleep_log,
retry,
retry_if_exception_type,
stop_after_attempt,
wait_exponential,
)
from langchain.utils import get_from_dict_or_env
logger = logging.getLogger(__name__)
...
class GooglePalmEmbeddings(BaseModel, Embeddings):
"""Google's PaLM Embeddings APIs."""
client: Any
google_api_key: Optional[str]
model_name: str = "models/embedding-gecko-001"
"""Model name to use."""
@root_validator()
def validate_environment(cls, values: Dict) -> Dict:
"""Validate api key, python package exists."""
google_api_key = get_from_dict_or_env(
values, "google_api_key", "GOOGLE_API_KEY"
)
try:
import google.generativeai as genai
genai.configure(api_key=google_api_key)
except ImportError:
raise ImportError("Could not import google.generativeai python package.")
values["client"] = genai
return values
def embed_documents(self, texts: List[str]) -> List[List[float]]:
return [self.embed_query(text) for text in tqdm(texts)]
def embed_query(self, text: str) -> List[float]:
"""Embed query text."""
embedding = embed_with_retry(self, self.model_name, text)
return embedding["embedding"]
``` | Add progress bar to GooglePalmEmbeddings | https://api.github.com/repos/langchain-ai/langchain/issues/13637/comments | 3 | 2023-11-21T09:16:16Z | 2024-02-27T16:06:19Z | https://github.com/langchain-ai/langchain/issues/13637 | 2,003,810,406 | 13,637 |
[
"langchain-ai",
"langchain"
] | ### System Info
When using the Jira wrapper in LangChain to parse data from Jira tickets, the application encounters a TypeError if the ticket information is empty. This issue occurs specifically when the priority field of a ticket is not set, leading to a 'NoneType' object is not subscriptable error.
### Environment Details
LangChain version: [specify version]
Jira Wrapper version: [specify version]
Python version: 3.10
Operating System: [specify OS]
### Error Logs/Stack Traces
```
Traceback (most recent call last):
...
File "/path/to/langchain/tools/jira/tool.py", line 53, in _run
return self.api_wrapper.run(self.mode, instructions)
...
File "/path/to/langchain/utilities/jira.py", line 72, in parse_issues
priority = issue["fields"]["priority"]["name"]
TypeError: 'NoneType' object is not subscriptable
```
### Proposed Solution
I propose adding a check before parsing the ticket information. If the information is empty, return an empty string instead of 'None'. This modification successfully prevented the application from breaking in my tests.
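A sketch of what that guard might look like in `parse_issues` (illustrative; the field names come from the traceback, the rest is assumption):
```python
# hypothetical guard for langchain/utilities/jira.py::parse_issues
priority_field = issue["fields"].get("priority")
priority = priority_field["name"] if priority_field else ""
```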
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [X] Agents / Agent Executors
- [X] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Steps to Reproduce
1- Execute a JQL query to fetch issues from a Jira project (e.g., project = SN and sprint = 'SN Sprint 8 + 9' and labels = 'fe').
2- Ensure that one of the fetched issues has an empty priority field.
3- Observe the application breaking with a TypeError.
### Expected behavior
The Jira wrapper should handle cases where ticket information, such as the priority field, is empty, without causing the application to break. | TypeError in Jira Wrapper When Parsing Empty Ticket Information | https://api.github.com/repos/langchain-ai/langchain/issues/13636/comments | 3 | 2023-11-21T08:52:06Z | 2024-02-27T16:06:24Z | https://github.com/langchain-ai/langchain/issues/13636 | 2,003,767,185 | 13,636 |
[
"langchain-ai",
"langchain"
] | ### System Info
langchain 0.0.326
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.chains import RetrievalQA
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
# This text splitter is used to create the parent documents - The big chunks
parent_splitter = RecursiveCharacterTextSplitter(chunk_size=2000, chunk_overlap=400)
# This text splitter is used to create the child documents - The small chunks
# It should create documents smaller than the parent
child_splitter = RecursiveCharacterTextSplitter(chunk_size=400)
# The vectorstore to use to index the child chunks
from chromadb.errors import InvalidDimensionException
try:
vectorstore = Chroma(collection_name="split_parents", embedding_function=bge_embeddings, persist_directory="chroma_db")
except InvalidDimensionException:
Chroma().delete_collection()
vectorstore = Chroma(collection_name="split_parents", embedding_function=bge_embeddings, persist_directory="chroma_db")
#vectorstore = Chroma(collection_name="split_parents", embedding_function=bge_embeddings)
# The storage layer for the parent documents
store = InMemoryStore()
big_chunks_retriever = ParentDocumentRetriever(
vectorstore=vectorstore,
docstore=store,
child_splitter=child_splitter,
parent_splitter=parent_splitter,
)
big_chunks_retriever.add_documents(documents)
qa_template = """
Nutze die folgenden Informationen aus dem Kontext (getrennt mit <ctx></ctx>), um die Frage zu beantworten.
Antworte nur auf Deutsch, weil der Nutzer kein Englisch versteht! \
Falls du die Antwort nicht weißt, antworte mit "Leider fehlen mir dazu die Informationen." \
Wenn du nicht genügend Informationen unten findest, antworte ebenfalls mit "Leider fehlen mir dazu die Informationen." \
------
<ctx>
{context}
</ctx>
------
{query}
Answer:
"""
prompt = PromptTemplate(template=qa_template,
input_variables=['context','history', 'question'])
chain_type_kwargs={
"verbose": True,
"prompt": prompt,
"memory": ConversationSummaryMemory(
llm=build_llm(),
memory_key="history",
input_key="question",
return_messages=True)}
refine = RetrievalQA.from_chain_type(llm=build_llm(),
chain_type="refine",
return_source_documents=True,
chain_type_kwargs=chain_type_kwargs,
retriever=big_chunks_retriever,
verbose=True)
query = "Hi, I am Max, can you help me??"
refine(query)
```
### Expected behavior
Hi,
in the code above you can see how I built my RAG model with the ParentDocumentRetriever from LangChain and with memory. At the moment I am using the RetrievalQA chain with the default `chain_type="stuff"`. However, I want to try different chain types like "map_reduce" or "refine". But when I set `chain_type="refine"` and create the RetrievalQA chain, I get the following error:
```
ValidationError: 1 validation error for RefineDocumentsChain
prompt
extra fields not permitted (type=value_error.extra)
```
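From a quick look at `load_qa_chain`, the "refine" variant doesn't seem to accept a single `prompt`; it expects `question_prompt` and `refine_prompt`, whose default variants use `{context_str}`, `{question}` and `{existing_answer}`. A sketch of what I might try instead (untested; the prompt wording is mine, and I've left the memory out because I'm not sure the refine chain accepts it the same way):
```
question_prompt = PromptTemplate.from_template(
    "Beantworte die Frage anhand des folgenden Kontexts.\n------\n{context_str}\n------\n{question}\nAnswer:"
)
refine_prompt = PromptTemplate.from_template(
    "Frage: {question}\nBisherige Antwort: {existing_answer}\n"
    "Verfeinere die Antwort mit dem folgenden Kontext, falls nötig.\n------\n{context_str}\n------"
)
refine = RetrievalQA.from_chain_type(
    llm=build_llm(),
    chain_type="refine",
    return_source_documents=True,
    chain_type_kwargs={
        "verbose": True,
        "question_prompt": question_prompt,
        "refine_prompt": refine_prompt,
    },
    retriever=big_chunks_retriever,
    verbose=True,
)
```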
How can I solve this? | Chain Type Refine Error: 1 validation error for RefineDocumentsChain prompt extra fields not permitted | https://api.github.com/repos/langchain-ai/langchain/issues/13635/comments | 3 | 2023-11-21T08:36:32Z | 2024-02-27T16:06:29Z | https://github.com/langchain-ai/langchain/issues/13635 | 2,003,734,187 | 13,635 |
[
"langchain-ai",
"langchain"
] | ### System Info
Lanchain V: 0.339
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [X] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [X] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
Following any example that uses a `langchain.schema.runnable` object. For example the "[Adding memory](https://python.langchain.com/docs/expression_language/cookbook/memory)" tutorial uses `RunnableLambda` and `RunnablePassthrough`.
- I no longer see `langchain.schema` in the API docs (see image below).
- Searching in the API docs also doesn't return any results when searching for `RunnablePassthrough`
<img width="363" alt="image" src="https://github.com/langchain-ai/langchain/assets/94480542/1b3ded17-c669-406d-8309-4f953f42c1f6">
### Expected behavior
- I don't see anything in the release notes about the `langchain.schema.runnables` being removed or relocated.
- I would have expected to see them in the API docs, or at least for them to still be returned when searching for them.
- Not sure if this is a documentation build issue and the modules are still importable, as I have not updated my LangChain version yet. I was just using the docs as a reference and then started getting 404 errors upon page refresh (e.g. [this](https://api.python.langchain.com/en/latest/schema/langchain.schema.messages.AIMessageChunk.html#langchain.schema.messages.AIMessageChunk) page for `AIMessageChunk` also no longer exists) | Langchain.schema.runnable now missing from docs? | https://api.github.com/repos/langchain-ai/langchain/issues/13631/comments | 4 | 2023-11-20T23:53:15Z | 2024-03-13T19:55:48Z | https://github.com/langchain-ai/langchain/issues/13631 | 2,003,206,976 | 13,631 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain Version: 0.0.339
Python version: 3.10.8
Windows 10 Enterprise 21H2
When creating a ConversationalRetrievalChain as follows:
```python
CONVERSATION_RAG_CHAIN_WITH_SUMMARY_BUFFER = ConversationalRetrievalChain(
    combine_docs_chain=combine_docs_chain,
    memory=summary_memory,
    retriever=rag_retriever,
    question_generator=question_generator_chain
)
```
With:
```python
LLM = AzureChatOpenAI(...)
```
The following error occurs:
"
Traceback (most recent call last):
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\om403f\Documents\Applied_Research\Deep_Learning\web_app\app.py", line 1456, in llm_task
history_rag_buffer_result = CONVERSATION_RAG_CHAIN_WITH_SUMMARY_BUFFER.invoke({'question':user_query, 'chat_history':summary_memory})
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\base.py", line 87, in invoke
return self(
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\base.py", line 310, in __call__
raise e
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\conversational_retrieval\base.py", line 135, in _call
chat_history_str = get_chat_history(inputs["chat_history"])
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\conversational_retrieval\base.py", line 44, in _get_chat_history
ai = "Assistant: " + dialogue_turn[1]
TypeError: can only concatenate str (not "AzureChatOpenAI") to str
"
Alternatively, when using CTransformers as follows:
```python
LLM = CTransformers(model=llm_model, model_type="llama", config=config, streaming=True, callbacks=[StreamingStdOutCallbackHandler()])
```
The following error occurs:
"
Traceback (most recent call last):
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\om403f\Documents\Applied_Research\Deep_Learning\web_app\app.py", line 1456, in llm_task
history_rag_buffer_result = CONVERSATION_RAG_CHAIN_WITH_SUMMARY_BUFFER.invoke({'question':user_query, 'chat_history':summary_memory})
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\base.py", line 87, in invoke
return self(
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\base.py", line 310, in __call__
raise e
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\base.py", line 304, in __call__
self._call(inputs, run_manager=run_manager)
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\conversational_retrieval\base.py", line 135, in _call
chat_history_str = get_chat_history(inputs["chat_history"])
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\chains\conversational_retrieval\base.py", line 44, in _get_chat_history
ai = "Assistant: " + dialogue_turn[1]
TypeError: can only concatenate str (not "CTransformers") to str
"
Hope this is an accurate bug report and it helps! Apologies if this is in fact a dumb report and actually an error at my end.
### Who can help?
@hwchase17
@agola11
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [X] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [X] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.schema.vectorstore import VectorStoreRetriever
from langchain.memory import ConversationSummaryBufferMemory
from langchain.embeddings import HuggingFaceEmbeddings
from langchain.prompts import PromptTemplate
from langchain.vectorstores import Chroma
from langchain.chains import LLMChain
from langchain.chains import StuffDocumentsChain
from langchain.chains import ConversationalRetrievalChain
from langchain.llms import CTransformers

VECTOR_STORE = Chroma(persist_directory=VECTORDB_SBERT_FOLDER, embedding_function=HuggingFaceEmbeddings())
LLM = CTransformers(model=llm_model, model_type="llama", config=config, streaming=True, callbacks=[StreamingStdOutCallbackHandler()])

document_prompt = PromptTemplate(
    input_variables=["page_content"],
    template="{page_content}"
)
document_variable_name = "context"
temp_StuffDocumentsChain_prompt = PromptTemplate.from_template(
    "Summarize this content: {context}"
)
llm_chain_for_StuffDocumentsChain = LLMChain(llm=LLM, prompt=temp_StuffDocumentsChain_prompt)
combine_docs_chain = StuffDocumentsChain(
    llm_chain=llm_chain_for_StuffDocumentsChain,
    document_prompt=document_prompt,
    document_variable_name=document_variable_name
)

summary_memory = ConversationSummaryBufferMemory(llm=LLM, max_token_limit=100)

retriever = VECTOR_STORE.as_retriever()
rag_retriever = VectorStoreRetriever(vectorstore=VECTOR_STORE)

temp_template = (
    """
    Combine the chat history and question into a standalone question:
    Chat history: {chat_history}
    question: {user_query}
    """
)
temp_prompt = PromptTemplate.from_template(temp_template)
question_generator_chain = LLMChain(llm=LLM, prompt=temp_prompt)

CONVERSATION_RAG_CHAIN_WITH_SUMMARY_BUFFER = ConversationalRetrievalChain(
    combine_docs_chain=combine_docs_chain,
    memory=summary_memory,
    retriever=rag_retriever,
    question_generator=question_generator_chain
)

CONVERSATION_RAG_CHAIN_WITH_SUMMARY_BUFFER.invoke({'question': user_query, 'chat_history': summary_memory})
```
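(A note on the repro: `_get_chat_history` in the traceback appears to expect `chat_history` to be a list of `(human, ai)` string tuples, so passing the memory object itself may be what triggers the concatenation error. A sketch of the shape it seems to expect, untested:)
```python
chat_history = [("what is this", "this is something")]  # (human, ai) string pairs
CONVERSATION_RAG_CHAIN_WITH_SUMMARY_BUFFER.invoke(
    {"question": user_query, "chat_history": chat_history}
)
```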
### Expected behavior
Should work according to code example and API specs as described in the official LangChain API docs:
https://api.python.langchain.com/en/latest/chains/langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain.html#langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain | Potential Bug in ConversationalRetrievalChain - TypeError: can only concatenate str (not "CTransformers") to str | TypeError: can only concatenate str (not "AzureChatOpenAI") to str | https://api.github.com/repos/langchain-ai/langchain/issues/13628/comments | 3 | 2023-11-20T23:14:44Z | 2024-03-17T16:06:11Z | https://github.com/langchain-ai/langchain/issues/13628 | 2,003,171,128 | 13,628 |
[
"langchain-ai",
"langchain"
] | ### System Info
LangChain Version: 0.0.339
Python version: 3.10.8
Windows 10 Enterprise 21H2
When creating a ConversationalRetrievalChain as follows:
```python
CONVERSATION_RAG_CHAIN_WITH_SUMMARY_BUFFER = ConversationalRetrievalChain(
    combine_docs_chain=combine_docs_chain,
    memory=summary_memory,
    retriever=rag_retriever,
    question_generator=question_generator_chain
)
```
With `rag_retriever = VectorStoreRetrieverMemory(retriever=VECTOR_STORE.as_retriever())`, the following error occurs:
```
Traceback (most recent call last):
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
self.run()
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "C:\Users\om403f\Documents\Applied_Research\Deep_Learning\web_app\app.py", line 1438, in llm_task
CONVERSATION_RAG_CHAIN_WITH_SUMMARY_BUFFER = ConversationalRetrievalChain(
File "C:\Users\om403f\AppData\Local\Programs\Python\Python310\lib\site-packages\langchain\load\serializable.py", line 97, in __init__
super().__init__(**kwargs)
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for ConversationalRetrievalChain
retriever
Can't instantiate abstract class BaseRetriever with abstract method _get_relevant_documents (type=type_error)
"
Name mangling may be occurring as described here: https://stackoverflow.com/questions/31457855/cant-instantiate-abstract-class-with-abstract-methods
retriever.py implements the abstract method _get_relevant_documents: https://github.com/langchain-ai/langchain/blob/4eec47b19128fa168e58b9a218a9da049275f6ce/libs/langchain/langchain/schema/retriever.py#L136
Hope this is an accurate bug report and it helps! Apologies if this is in fact a dumb report and actually an error at my end.
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```python
from langchain.memory import ConversationSummaryBufferMemory
from langchain.memory import VectorStoreRetrieverMemory
from langchain.prompts import PromptTemplate
from langchain.vectorstores import Chroma
from langchain.chains import LLMChain
from langchain.chains import RetrievalQA
from langchain.chains import ConversationChain
from langchain.chains import StuffDocumentsChain
from langchain.chains import ConversationalRetrievalChain

VECTOR_STORE = Chroma(persist_directory=VECTORDB_SBERT_FOLDER, embedding_function=HuggingFaceEmbeddings())
LLM = AzureChatOpenAI()

document_prompt = PromptTemplate(
    input_variables=["page_content"],
    template="{page_content}"
)
document_variable_name = "context"
temp_StuffDocumentsChain_prompt = PromptTemplate.from_template(
    "Summarize this content: {context}"
)
llm_chain_for_StuffDocumentsChain = LLMChain(llm=LLM, prompt=temp_StuffDocumentsChain_prompt)
combine_docs_chain = StuffDocumentsChain(
    llm_chain=llm_chain_for_StuffDocumentsChain,
    document_prompt=document_prompt,
    document_variable_name=document_variable_name
)

summary_memory = ConversationSummaryBufferMemory(llm=LLM, max_token_limit=100)

retriever = VECTOR_STORE.as_retriever()
rag_retriever = VectorStoreRetrieverMemory(retriever=retriever)

temp_template = (
    """
    Combine the chat history and question into a standalone question:
    Chat history: {chat_history}
    question: {user_query}
    """
)
temp_prompt = PromptTemplate.from_template(temp_template)
question_generator_chain = LLMChain(llm=LLM, prompt=temp_prompt)

CONVERSATION_RAG_CHAIN_WITH_SUMMARY_BUFFER = ConversationalRetrievalChain(
    combine_docs_chain=combine_docs_chain,
    memory=summary_memory,
    retriever=rag_retriever,
    question_generator=question_generator_chain
)
```
### Expected behavior
Example code here works: https://api.python.langchain.com/en/latest/chains/langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain.html#langchain.chains.conversational_retrieval.base.ConversationalRetrievalChain | Potential Bug in Retriever.py: Can't instantiate abstract class BaseRetriever with abstract method _get_relevant_documents | https://api.github.com/repos/langchain-ai/langchain/issues/13624/comments | 9 | 2023-11-20T21:27:06Z | 2024-07-01T04:57:23Z | https://github.com/langchain-ai/langchain/issues/13624 | 2,003,033,612 | 13,624 |
[
"langchain-ai",
"langchain"
] | ### Feature request
I have been using Ollama with LangChain for various tasks, but sometimes Ollama takes too long to respond depending on my local hardware. Would it be possible to add a configurable timeout to the Ollama base class, so that I can adjust this setting to avoid timeouts when using agents? Currently, I am getting an `httpx` timeout error when using Ollama.
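A hypothetical sketch of what the requested knob could look like — the `timeout` parameter below is the proposed addition, not an existing argument at the time of writing:
```
from langchain.llms import Ollama

# Hypothetical API: `timeout` (in seconds) would be forwarded to the HTTP
# client that talks to the local Ollama server.
llm = Ollama(model="llama2", timeout=300)
print(llm("Why is the sky blue?"))
```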
### Motivation
This feature would help leverage local LLMs on a variety of hardware and would let us experiment and build with local LLMs before using any third-party APIs.
### Your contribution
If this is something that would be considered, I am happy to open a PR for this feature. | Configurable timeout for Ollama | https://api.github.com/repos/langchain-ai/langchain/issues/13622/comments | 3 | 2023-11-20T21:10:54Z | 2023-11-20T21:36:40Z | https://github.com/langchain-ai/langchain/issues/13622 | 2,003,012,832 | 13,622 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
The compatible versions for the `pgvector` library listed in `poetry.lock` are really old: `"pgvector (>=0.1.6,<0.2.0)"`.
Are we able to update them to more recent ones?
### Suggestion:
Update versions to recent ones. | Issue: `pgvector` versions in `poetry.lock` are really old | https://api.github.com/repos/langchain-ai/langchain/issues/13617/comments | 3 | 2023-11-20T19:32:27Z | 2024-03-17T16:06:06Z | https://github.com/langchain-ai/langchain/issues/13617 | 2,002,864,501 | 13,617 |
[
"langchain-ai",
"langchain"
] | ### System Info
```
# Create and load Redis with documents
vectorstore = RedisVectorStore.from_texts(
    texts=texts,
    metadatas=metadatas,
    embedding=embedding,
    index_name=index_name,
    redis_url=redis_url
)
```
The error I faced:
```
Redis cannot be used as a vector database without RediSearch >=2.4Please head to https://redis.io/docs/stack/search/quick_start/to know more about installing the RediSearch module within Redis Stack.
```
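For anyone hitting the same message: it usually means the client connected to a plain Redis server that does not have the RediSearch module loaded. The quick start linked in the error boils down to running Redis Stack instead — a minimal sketch, assuming a local Docker setup:
```
# Assumes Redis Stack is running locally (it bundles RediSearch >= 2.4), e.g.:
#   docker run -d -p 6379:6379 redis/redis-stack-server:latest
redis_url = "redis://localhost:6379"
vectorstore = RedisVectorStore.from_texts(
    texts=texts,
    metadatas=metadatas,
    embedding=embedding,
    index_name=index_name,
    redis_url=redis_url,
)
```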
### Who can help?
_No response_
### Information
- [ ] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [ ] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [ ] Document Loaders
- [ ] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
.
### Expected behavior
. | ValueError: Redis failed to connect: | https://api.github.com/repos/langchain-ai/langchain/issues/13611/comments | 4 | 2023-11-20T16:10:48Z | 2024-06-19T11:39:45Z | https://github.com/langchain-ai/langchain/issues/13611 | 2,002,541,905 | 13,611 |
[
"langchain-ai",
"langchain"
] | ### System Info
I encountered an exception and a type checking notice in PyCharm while working with the following code snippet:
```
split_documents = text_splitter.split_documents(raw_documents)
cached_embedder.embed_documents(split_documents)
```
The type checking notice indicates that there is a mismatch in the expected type for a Document. According to the type definition, a Document should have properties such as page_content, type, and metadata. However, the function embed_documents seems to be designed to handle a list of strings instead of documents.
To align with the expected type for a Document, it is suggested to consider renaming the function from embed_documents to something like embed_strings or embed_texts. This change would accurately reflect the input type expected by the function and help avoid type-related issues during development.
Thank you for your attention to this matter.
```
cached_embedder.embed_documents(split_documents)
...
File "venv/lib/python3.11/site-packages/langchain/embeddings/cache.py", line 26, in _hash_string_to_uuid
hash_value = hashlib.sha1(input_string.encode("utf-8")).hexdigest()
^^^^^^^^^^^^^^^^^^^
AttributeError: 'Document' object has no attribute 'encode'
```
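A workaround sketch, assuming the goal is to embed the page text of each split document — `embed_documents` is typed to take a list of strings:
```
# Extract the raw text; Embeddings.embed_documents expects List[str],
# not List[Document].
texts = [doc.page_content for doc in split_documents]
vectors = cached_embedder.embed_documents(texts)
```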
### Who can help?
_No response_
### Information
- [X] The official example notebooks/scripts
- [ ] My own modified scripts
### Related Components
- [ ] LLMs/Chat Models
- [X] Embedding Models
- [ ] Prompts / Prompt Templates / Prompt Selectors
- [ ] Output Parsers
- [X] Document Loaders
- [X] Vector Stores / Retrievers
- [ ] Memory
- [ ] Agents / Agent Executors
- [ ] Tools / Toolkits
- [ ] Chains
- [ ] Callbacks/Tracing
- [ ] Async
### Reproduction
```
from langchain.document_loaders import DirectoryLoader, UnstructuredMarkdownLoader
from langchain.text_splitter import CharacterTextSplitter

loader = DirectoryLoader(
    "./data",
    glob="**/*.md",
    show_progress=True,
    use_multithreading=True,
    loader_cls=UnstructuredMarkdownLoader,
    loader_kwargs={"mode": "elements"},
)
raw_documents = loader.load()

text_splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=0)
split_documents = text_splitter.split_documents(raw_documents)

# cached_embedder is a CacheBackedEmbeddings instance created earlier
cached_embedder.embed_documents(split_documents)
```
### Expected behavior
The method should either accept a list of Documents or be renamed to embed_strings. | Type Checking issue: CacheBackedEmbeddings.split_documents does not take a list of Documents | https://api.github.com/repos/langchain-ai/langchain/issues/13610/comments | 3 | 2023-11-20T15:27:25Z | 2024-02-26T16:05:48Z | https://github.com/langchain-ai/langchain/issues/13610 | 2,002,454,320 | 13,610 |
[
"langchain-ai",
"langchain"
] | ### Issue you'd like to raise.
Hello, I am working on a conversational chatbot. Here is a snippet of the code:
```
general_system_template = """You are a chatbot...
---
{summaries}"""
general_user_template = "Question: {question}"
messages = [
SystemMessagePromptTemplate.from_template(general_system_template),
HumanMessagePromptTemplate.from_template(general_user_template)
]
qa_prompt = ChatPromptTemplate(
messages=messages,
input_variables=['question', 'summaries']
)
q = Queue()
llm_chat = ChatVertexAI(
temperature=0,
model_name="chat-bison",
streaming=True,
callbacks=[QueueCallback(q)],
verbose=False
)
retriever = docsearch.as_retriever(
search_type="similarity",
search_kwargs={
'k': 2,
'filter': {'source': {'$in': sources}}
}
)
llm_text = VertexAI(
temperature=0,
model_name="text-bison"
)
combine_docs_chain = load_qa_with_sources_chain(
llm=llm_chat,
chain_type="stuff",
prompt=qa_prompt
)
condense_question_template = (
"""Given the following conversation and a follow up question, rephrase the follow up question to be a standalone question, in its original language.
Chat History: {chat_history}
Follow Up Input: {question}"""
)
condense_question_prompt = PromptTemplate.from_template(condense_question_template)
condense_chain = LLMChain(
llm=llm_text,
prompt=condense_question_prompt,
verbose=True
)
chain = ConversationalRetrievalChain(
combine_docs_chain_=combine_docs_chain,
retriever=retriever,
question_generator=condense_chain
)
```
When running the code, I get the following error:
```
pydantic.error_wrappers.ValidationError: 2 validation errors for ConversationalRetrievalChain
combine_docs_chain
field required (type=value_error.missing)
combine_docs_chain_
extra fields not permitted (type=value_error.extra)
```
How could I solve this? Is there any way to get a more detailed error?
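For what it's worth, the validation error itself seems to name the culprit: the constructor call uses `combine_docs_chain_` (with a trailing underscore), so pydantic reports the real field as missing and the misspelled one as extra. A sketch of the corrected call, keeping everything else the same:
```
chain = ConversationalRetrievalChain(
    combine_docs_chain=combine_docs_chain,  # no trailing underscore
    retriever=retriever,
    question_generator=condense_chain
)
```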
### Suggestion:
_No response_ | Issue: Validation errors for ConversationalRetrievalChain (combine_docs_chain) | https://api.github.com/repos/langchain-ai/langchain/issues/13607/comments | 3 | 2023-11-20T13:30:37Z | 2024-02-26T16:05:53Z | https://github.com/langchain-ai/langchain/issues/13607 | 2,002,214,770 | 13,607 |