| issue_owner_repo | issue_body | issue_title | issue_comments_url | issue_comments_count | issue_created_at | issue_updated_at | issue_html_url | issue_github_id | issue_number |
|---|---|---|---|---|---|---|---|---|---|
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I only replaced ChatAnthropic with BedrockChat and got the error below.
My code:
```python
import os

import boto3
from crewai import Agent, Crew, Task
from langchain.tools import DuckDuckGoSearchRun
from langchain_community.chat_models import BedrockChat

# from langchain_anthropic import ChatAnthropic
# llm = ChatAnthropic(temperature=0, model_name="anthropic.claude-3-sonnet-20240229-v1:0")

bedrock_runtime = boto3.client(
    service_name="bedrock-runtime",
    region_name="us-west-2",
)
model_id = "anthropic.claude-3-sonnet-20240229-v1:0"
model_kwargs = {
    "max_tokens": 2048,
    "temperature": 0.0,
    "top_k": 250,
    "top_p": 1,
    "stop_sequences": ["\n\nHuman"],
}
llm = BedrockChat(
    client=bedrock_runtime,
    model_id=model_id,
    model_kwargs=model_kwargs,
)

search_tool = DuckDuckGoSearchRun()

# Define your agents with roles and goals
researcher = Agent(
    role="Tech Research",
    goal="Uncover cutting-edge developments in AI and data science",
    backstory="""You work at a leading tech think tank.
    Your expertise lies in identifying emerging trends.
    You have a knack for dissecting complex data and presenting
    actionable insights.""",
    verbose=True,
    allow_delegation=False,
    llm=llm,
    tools=[search_tool],
)
writer = Agent(
    role="Tech Content Summarizer and Writer",
    goal="Craft compelling short-form content on AI advancements based on long-form text passed to you",
    backstory="""You are a renowned Content Creator, known for your insightful and engaging articles.
    You transform complex concepts into compelling narratives.""",
    verbose=True,
    allow_delegation=True,
    llm=llm,
)

# Create tasks for your agents
task1 = Task(
    description="""Conduct a comprehensive analysis of the latest advancements in AI in 2024.
    Identify key trends, breakthrough technologies, and potential industry impacts.
    Your final answer MUST be a full analysis report""",
    agent=researcher,
)
task2 = Task(
    description="""Using the text provided by the researcher agent, develop a short and compelling
    short-form summary of the text provided to you about AI.""",
    agent=writer,
)

# Instantiate your crew with a sequential process
NewsletterCrew = Crew(
    agents=[researcher, writer],
    tasks=[task1, task2],
    verbose=2,  # You can set it to 1 or 2 for different logging levels
)

result = NewsletterCrew.kickoff()
print("Welcome to newsletter writer")
print("----------------------------")
print(result)
```
### Error Message and Stack Trace (if applicable)
"Failed to convert text into a pydantic model due to the following error: System message must be at beginning of message list"
### Description
I am using CrewAI together with LangChain.
### System Info
langchain==0.1.11
langchain-anthropic==0.1.4
langchain-community==0.0.27
langchain-core==0.1.30
langchain-openai==0.0.5
langchain-text-splitters==0.0.1 | crewai use langchain_community.chat_models.BedrockChat as llm get error "System message must be at beginning of message list" | https://api.github.com/repos/langchain-ai/langchain/issues/18909/comments | 1 | 2024-03-11T11:48:18Z | 2024-06-17T16:09:33Z | https://github.com/langchain-ai/langchain/issues/18909 | 2,178,933,916 | 18,909 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import whisper
from langchain.agents import AgentType, initialize_agent
from langchain_community.agent_toolkits import ZapierToolkit
from langchain_community.utilities.zapier import ZapierNLAWrapper
from langchain_openai import OpenAI

llm = OpenAI(temperature=0)
zapier = ZapierNLAWrapper()
toolkit = ZapierToolkit.from_zapier_nla_wrapper(zapier)
agent = initialize_agent(
    toolkit.get_tools(), llm, agent=AgentType.ZERO_SHOT_REACT_DESCRIPTION, verbose=True
)
model = whisper.load_model("base")
```
### Error Message and Stack Trace (if applicable)
The error is raised at `agent = initialize_agent(toolkit.get_tools(), llm, agent="zero-shot-react-description", verbose=True)`:

```
  line 53, in get_tools
    warn_deprecated(
  line 337, in warn_deprecated
    raise NotImplementedError(
NotImplementedError: Need to determine which default deprecation schedule to use. within ?? minor releases
```
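The traceback fragment points at LangChain's deprecation helper: when `warn_deprecated` is invoked without a `removal` version (and `pending` is false), it raises `NotImplementedError` instead of emitting a warning. A simplified, hypothetical reconstruction of that code path — not the actual `langchain_core` source, which takes many more parameters:

```python
def warn_deprecated(since, *, removal="", pending=False):
    """Simplified sketch of the failing branch in langchain_core's
    deprecation helper (illustrative reconstruction, not real source)."""
    if not pending and not removal:
        raise NotImplementedError(
            "Need to determine which default deprecation schedule to use. "
            "within ?? minor releases"
        )
    # ...otherwise a DeprecationWarning would be emitted here.


try:
    warn_deprecated("0.0.319")  # caller supplied no removal schedule
except NotImplementedError as err:
    print(err)
```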
### Description
I'm trying to use LangChain to send an email with the help of Zapier.
### System Info
absl-py==2.1.0
aiobotocore @ file:///C:/b/abs_3cwz1w13nn/croot/aiobotocore_1701291550158/work
aiohttp @ file:///C:/b/abs_27h_1rpxgd/croot/aiohttp_1707342354614/work
aioitertools @ file:///tmp/build/80754af9/aioitertools_1607109665762/work
aiosignal @ file:///tmp/build/80754af9/aiosignal_1637843061372/work
alabaster @ file:///home/ktietz/src/ci/alabaster_1611921544520/work
altair @ file:///C:/b/abs_27reu1igbg/croot/altair_1687526066495/work
anaconda-anon-usage @ file:///C:/b/abs_95v3x0wy8p/croot/anaconda-anon-usage_1697038984188/work
anaconda-catalogs @ file:///C:/b/abs_8btyy0o8s8/croot/anaconda-catalogs_1685727315626/work
anaconda-client @ file:///C:/b/abs_34txutm0ue/croot/anaconda-client_1708640705294/work
anaconda-cloud-auth @ file:///C:/b/abs_410afndtyf/croot/anaconda-cloud-auth_1697462767853/work
anaconda-navigator @ file:///C:/b/abs_fetmwtxkqo/croot/anaconda-navigator_1709540481120/work
anaconda-project @ file:///C:/ci_311/anaconda-project_1676458365912/work
anyio @ file:///C:/b/abs_847uobe7ea/croot/anyio_1706220224037/work
appdirs==1.4.4
archspec @ file:///croot/archspec_1697725767277/work
argon2-cffi @ file:///opt/conda/conda-bld/argon2-cffi_1645000214183/work
argon2-cffi-bindings @ file:///C:/ci_311/argon2-cffi-bindings_1676424443321/work
arrow @ file:///C:/ci_311/arrow_1678249767083/work
astroid @ file:///C:/ci_311/astroid_1678740610167/work
astropy @ file:///C:/b/abs_2fb3x_tapx/croot/astropy_1697468987983/work
asttokens @ file:///opt/conda/conda-bld/asttokens_1646925590279/work
async-lru @ file:///C:/b/abs_e0hjkvwwb5/croot/async-lru_1699554572212/work
atomicwrites==1.4.0
attrs @ file:///C:/b/abs_35n0jusce8/croot/attrs_1695717880170/work
Automat @ file:///tmp/build/80754af9/automat_1600298431173/work
autopep8 @ file:///opt/conda/conda-bld/autopep8_1650463822033/work
Babel @ file:///C:/ci_311/babel_1676427169844/work
backports.functools-lru-cache @ file:///tmp/build/80754af9/backports.functools_lru_cache_1618170165463/work
backports.tempfile @ file:///home/linux1/recipes/ci/backports.tempfile_1610991236607/work
backports.weakref==1.0.post1
bcrypt @ file:///C:/ci_311/bcrypt_1676435170049/work
beautifulsoup4 @ file:///C:/b/abs_0agyz1wsr4/croot/beautifulsoup4-split_1681493048687/work
binaryornot @ file:///tmp/build/80754af9/binaryornot_1617751525010/work
black @ file:///C:/b/abs_29gqa9a44y/croot/black_1701097690150/work
bleach @ file:///opt/conda/conda-bld/bleach_1641577558959/work
blinker @ file:///C:/b/abs_d9y2dm7cw2/croot/blinker_1696539752170/work
bokeh @ file:///C:/b/abs_74ungdyhwc/croot/bokeh_1706912192007/work
boltons @ file:///C:/ci_311/boltons_1677729932371/work
botocore @ file:///C:/b/abs_5a285dtc94/croot/botocore_1701286504141/work
Bottleneck @ file:///C:/b/abs_f05kqh7yvj/croot/bottleneck_1707864273291/work
Brotli @ file:///C:/ci_311/brotli-split_1676435766766/work
cachetools @ file:///tmp/build/80754af9/cachetools_1619597386817/work
certifi @ file:///C:/b/abs_35d7n66oz9/croot/certifi_1707229248467/work/certifi
cffi @ file:///C:/b/abs_924gv1kxzj/croot/cffi_1700254355075/work
chardet @ file:///C:/ci_311/chardet_1676436134885/work
charset-normalizer @ file:///tmp/build/80754af9/charset-normalizer_1630003229654/work
click @ file:///C:/b/abs_f9ihnt72pu/croot/click_1698129847492/work
cloudpickle @ file:///C:/b/abs_3796yxesic/croot/cloudpickle_1683040098851/work
clyent==1.2.2
colorama @ file:///C:/ci_311/colorama_1676422310965/work
colorcet @ file:///C:/ci_311/colorcet_1676440389947/work
comm @ file:///C:/ci_311/comm_1678376562840/work
conda @ file:///C:/b/abs_89vd8hj61u/croot/conda_1708369170790/work
conda-build @ file:///C:/b/abs_3ed9gavxgz/croot/conda-build_1708025907525/work
conda-content-trust @ file:///C:/b/abs_e3bcpyv7sw/croot/conda-content-trust_1693490654398/work
conda-libmamba-solver @ file:///croot/conda-libmamba-solver_1706733287605/work/src
conda-pack @ file:///tmp/build/80754af9/conda-pack_1611163042455/work
conda-package-handling @ file:///C:/b/abs_b9wp3lr1gn/croot/conda-package-handling_1691008700066/work
conda-repo-cli==1.0.75
conda-token @ file:///Users/paulyim/miniconda3/envs/c3i/conda-bld/conda-token_1662660369760/work
conda-verify==3.4.2
conda_index @ file:///croot/conda-index_1706633791028/work
conda_package_streaming @ file:///C:/b/abs_6c28n38aaj/croot/conda-package-streaming_1690988019210/work
constantly @ file:///C:/b/abs_cbuavw4443/croot/constantly_1703165617403/work
contourpy @ file:///C:/b/abs_853rfy8zse/croot/contourpy_1700583617587/work
cookiecutter @ file:///C:/b/abs_3d1730toam/croot/cookiecutter_1700677089156/work
cryptography @ file:///C:/b/abs_531eqmhgsd/croot/cryptography_1707523768330/work
cssselect @ file:///C:/b/abs_71gnjab7b0/croot/cssselect_1707339955530/work
cycler @ file:///tmp/build/80754af9/cycler_1637851556182/work
cytoolz @ file:///C:/b/abs_d43s8lnb60/croot/cytoolz_1701723636699/work
dask @ file:///C:/b/abs_1899k8plyj/croot/dask-core_1701396135885/work
dataclasses-json==0.6.4
datashader @ file:///C:/b/abs_cb5s63ty8z/croot/datashader_1699544282143/work
debugpy @ file:///C:/b/abs_c0y1fjipt2/croot/debugpy_1690906864587/work
decorator @ file:///opt/conda/conda-bld/decorator_1643638310831/work
defusedxml @ file:///tmp/build/80754af9/defusedxml_1615228127516/work
diff-match-patch @ file:///Users/ktietz/demo/mc3/conda-bld/diff-match-patch_1630511840874/work
dill @ file:///C:/b/abs_084unuus3z/croot/dill_1692271268687/work
distributed @ file:///C:/b/abs_5eren88ku4/croot/distributed_1701398076011/work
distro @ file:///C:/b/abs_a3uni_yez3/croot/distro_1701455052240/work
docstring-to-markdown @ file:///C:/ci_311/docstring-to-markdown_1677742566583/work
docutils @ file:///C:/ci_311/docutils_1676428078664/work
entrypoints @ file:///C:/ci_311/entrypoints_1676423328987/work
et-xmlfile==1.1.0
executing @ file:///opt/conda/conda-bld/executing_1646925071911/work
fastjsonschema @ file:///C:/ci_311/python-fastjsonschema_1679500568724/work
ffmpeg-python==0.2.0
filelock @ file:///C:/b/abs_f2gie28u58/croot/filelock_1700591233643/work
flake8 @ file:///C:/ci_311/flake8_1678376624746/work
Flask @ file:///C:/b/abs_efc024w7fv/croot/flask_1702980041157/work
flatbuffers==24.3.7
fonttools==4.25.0
frozenlist @ file:///C:/b/abs_d8e__s1ys3/croot/frozenlist_1698702612014/work
fsspec @ file:///C:/b/abs_97mpfsesn0/croot/fsspec_1701286534629/work
future @ file:///C:/ci_311_rebuilds/future_1678998246262/work
gensim @ file:///C:/ci_311/gensim_1677743037820/work
gitdb @ file:///tmp/build/80754af9/gitdb_1617117951232/work
GitPython @ file:///C:/b/abs_e1lwow9h41/croot/gitpython_1696937027832/work
gmpy2 @ file:///C:/ci_311/gmpy2_1677743390134/work
greenlet @ file:///C:/b/abs_a6c75ie0bc/croot/greenlet_1702060012174/work
h11==0.14.0
h5py @ file:///C:/b/abs_17fav01gwy/croot/h5py_1691589733413/work
HeapDict @ file:///Users/ktietz/demo/mc3/conda-bld/heapdict_1630598515714/work
holoviews @ file:///C:/b/abs_704uucojt7/croot/holoviews_1707836477070/work
httpcore==1.0.4
httpx==0.27.0
hvplot @ file:///C:/b/abs_3627uzd5h0/croot/hvplot_1706712443782/work
hyperlink @ file:///tmp/build/80754af9/hyperlink_1610130746837/work
idna @ file:///C:/ci_311/idna_1676424932545/work
imagecodecs @ file:///C:/b/abs_e2g5zbs1q0/croot/imagecodecs_1695065012000/work
imageio @ file:///C:/b/abs_aeqerw_nps/croot/imageio_1707247365204/work
imagesize @ file:///C:/ci_311/imagesize_1676431905616/work
imbalanced-learn @ file:///C:/b/abs_87es3kd5fi/croot/imbalanced-learn_1700648276799/work
importlib-metadata @ file:///C:/b/abs_c1egths604/croot/importlib_metadata-suite_1704813568388/work
incremental @ file:///croot/incremental_1708639938299/work
inflection==0.5.1
iniconfig @ file:///home/linux1/recipes/ci/iniconfig_1610983019677/work
intake @ file:///C:/ci_311_rebuilds/intake_1678999914269/work
intervaltree @ file:///Users/ktietz/demo/mc3/conda-bld/intervaltree_1630511889664/work
ipykernel @ file:///C:/b/abs_c2u94kxcy6/croot/ipykernel_1705933907920/work
ipython @ file:///C:/b/abs_b6pfgmrqnd/croot/ipython_1704833422163/work
ipython-genutils @ file:///tmp/build/80754af9/ipython_genutils_1606773439826/work
ipywidgets @ file:///croot/ipywidgets_1701289330913/work
isort @ file:///tmp/build/80754af9/isort_1628603791788/work
itemadapter @ file:///tmp/build/80754af9/itemadapter_1626442940632/work
itemloaders @ file:///C:/b/abs_5e3azgv25z/croot/itemloaders_1708639993442/work
itsdangerous @ file:///tmp/build/80754af9/itsdangerous_1621432558163/work
jaraco.classes @ file:///tmp/build/80754af9/jaraco.classes_1620983179379/work
jax==0.4.25
jedi @ file:///C:/ci_311/jedi_1679427407646/work
jellyfish @ file:///C:/b/abs_50kgvtnrbj/croot/jellyfish_1695193564091/work
Jinja2 @ file:///C:/b/abs_f7x5a8op2h/croot/jinja2_1706733672594/work
jmespath @ file:///C:/b/abs_59jpuaows7/croot/jmespath_1700144635019/work
joblib @ file:///C:/b/abs_1anqjntpan/croot/joblib_1685113317150/work
json5 @ file:///tmp/build/80754af9/json5_1624432770122/work
jsonpatch==1.33
jsonpointer==2.1
jsonschema @ file:///C:/b/abs_d1c4sm8drk/croot/jsonschema_1699041668863/work
jsonschema-specifications @ file:///C:/b/abs_0brvm6vryw/croot/jsonschema-specifications_1699032417323/work
jupyter @ file:///C:/b/abs_4e102rc6e5/croot/jupyter_1707947170513/work
jupyter-console @ file:///C:/b/abs_82xaa6i2y4/croot/jupyter_console_1680000189372/work
jupyter-events @ file:///C:/b/abs_17ajfqnlz0/croot/jupyter_events_1699282519713/work
jupyter-lsp @ file:///C:/b/abs_ecle3em9d4/croot/jupyter-lsp-meta_1699978291372/work
jupyter_client @ file:///C:/b/abs_a6h3c8hfdq/croot/jupyter_client_1699455939372/work
jupyter_core @ file:///C:/b/abs_c769pbqg9b/croot/jupyter_core_1698937367513/work
jupyter_server @ file:///C:/b/abs_7esjvdakg9/croot/jupyter_server_1699466495151/work
jupyter_server_terminals @ file:///C:/b/abs_ec0dq4b50j/croot/jupyter_server_terminals_1686870763512/work
jupyterlab @ file:///C:/b/abs_43venm28fu/croot/jupyterlab_1706802651134/work
jupyterlab-pygments @ file:///tmp/build/80754af9/jupyterlab_pygments_1601490720602/work
jupyterlab-widgets @ file:///C:/b/abs_adrrqr26no/croot/jupyterlab_widgets_1700169018974/work
jupyterlab_server @ file:///C:/b/abs_e08i7qn9m8/croot/jupyterlab_server_1699555481806/work
keyring @ file:///C:/b/abs_dbjc7g0dh2/croot/keyring_1678999228878/work
kiwisolver @ file:///C:/ci_311/kiwisolver_1676431979301/work
langchain==0.1.11
langchain-community==0.0.27
langchain-core==0.1.30
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
langsmith==0.1.23
lazy-object-proxy @ file:///C:/ci_311/lazy-object-proxy_1676432050939/work
lazy_loader @ file:///C:/b/abs_3bn4_r4g42/croot/lazy_loader_1695850158046/work
lckr_jupyterlab_variableinspector @ file:///C:/b/abs_b5yb2mprx2/croot/jupyterlab-variableinspector_1701096592545/work
libarchive-c @ file:///tmp/build/80754af9/python-libarchive-c_1617780486945/work
libmambapy @ file:///C:/b/abs_2euls_1a38/croot/mamba-split_1704219444888/work/libmambapy
linkify-it-py @ file:///C:/ci_311/linkify-it-py_1676474436187/work
llvmlite @ file:///C:/b/abs_da15r8vkf8/croot/llvmlite_1706910779994/work
lmdb @ file:///C:/b/abs_556ronuvb2/croot/python-lmdb_1682522366268/work
locket @ file:///C:/ci_311/locket_1676428325082/work
lxml @ file:///C:/b/abs_9e7tpg2vv9/croot/lxml_1695058219431/work
lz4 @ file:///C:/b/abs_064u6aszy3/croot/lz4_1686057967376/work
Markdown @ file:///C:/ci_311/markdown_1676437912393/work
markdown-it-py @ file:///C:/b/abs_a5bfngz6fu/croot/markdown-it-py_1684279915556/work
MarkupSafe @ file:///C:/b/abs_ecfdqh67b_/croot/markupsafe_1704206030535/work
marshmallow==3.21.1
matplotlib @ file:///C:/b/abs_e26vnvd5s1/croot/matplotlib-suite_1698692153288/work
matplotlib-inline @ file:///C:/ci_311/matplotlib-inline_1676425798036/work
mccabe @ file:///opt/conda/conda-bld/mccabe_1644221741721/work
mdit-py-plugins @ file:///C:/ci_311/mdit-py-plugins_1676481827414/work
mdurl @ file:///C:/ci_311/mdurl_1676442676678/work
mediapipe==0.10.11
menuinst @ file:///C:/b/abs_099kybla52/croot/menuinst_1706732987063/work
mistune @ file:///C:/ci_311/mistune_1676425111783/work
mkl-fft @ file:///C:/b/abs_19i1y8ykas/croot/mkl_fft_1695058226480/work
mkl-random @ file:///C:/b/abs_edwkj1_o69/croot/mkl_random_1695059866750/work
mkl-service==2.4.0
ml-dtypes==0.3.2
more-itertools @ file:///C:/b/abs_36p38zj5jx/croot/more-itertools_1700662194485/work
mpmath @ file:///C:/b/abs_7833jrbiox/croot/mpmath_1690848321154/work
msgpack @ file:///C:/ci_311/msgpack-python_1676427482892/work
multidict @ file:///C:/b/abs_44ido987fv/croot/multidict_1701097803486/work
multipledispatch @ file:///C:/ci_311/multipledispatch_1676442767760/work
munkres==1.1.4
mypy @ file:///C:/b/abs_3880czibje/croot/mypy-split_1708366584048/work
mypy-extensions @ file:///C:/b/abs_8f7xiidjya/croot/mypy_extensions_1695131051147/work
navigator-updater @ file:///C:/b/abs_895otdwmo9/croot/navigator-updater_1695210220239/work
nbclient @ file:///C:/b/abs_cal0q5fyju/croot/nbclient_1698934263135/work
nbconvert @ file:///C:/b/abs_17p29f_rx4/croot/nbconvert_1699022793097/work
nbformat @ file:///C:/b/abs_5a2nea1iu2/croot/nbformat_1694616866197/work
nest-asyncio @ file:///C:/b/abs_65d6lblmoi/croot/nest-asyncio_1708532721305/work
networkx @ file:///C:/b/abs_e6gi1go5op/croot/networkx_1690562046966/work
nltk @ file:///C:/b/abs_a638z6l1z0/croot/nltk_1688114186909/work
notebook @ file:///C:/b/abs_65xjlnf9q4/croot/notebook_1708029957105/work
notebook_shim @ file:///C:/b/abs_a5xysln3lb/croot/notebook-shim_1699455926920/work
numba @ file:///C:/b/abs_3e3co1qfvo/croot/numba_1707085143481/work
numexpr @ file:///C:/b/abs_5fucrty5dc/croot/numexpr_1696515448831/work
numpy @ file:///C:/b/abs_c1ywpu18ar/croot/numpy_and_numpy_base_1708638681471/work/dist/numpy-1.26.4-cp311-cp311-win_amd64.whl#sha256=5dfd3e04dc1c2826d3f404fdc7f93c097901f5da9b91f4f394f79d4e038ed81d
numpydoc @ file:///C:/ci_311/numpydoc_1676453412027/work
openai==1.13.3
openai-whisper==20231117
opencv-contrib-python==4.9.0.80
openpyxl==3.0.10
opt-einsum==3.3.0
orjson==3.9.15
overrides @ file:///C:/b/abs_cfh89c8yf4/croot/overrides_1699371165349/work
packaging==23.2
pandas @ file:///C:/b/abs_fej9bi0gew/croot/pandas_1702318041921/work/dist/pandas-2.1.4-cp311-cp311-win_amd64.whl#sha256=d3609b7cc3e3c4d99ad640a4b8e710ba93ccf967ab8e5245b91033e0200f9286
pandocfilters @ file:///opt/conda/conda-bld/pandocfilters_1643405455980/work
panel @ file:///C:/b/abs_abnm_ot327/croot/panel_1706539613212/work
param @ file:///C:/b/abs_39ncjvb7lu/croot/param_1705937833389/work
paramiko @ file:///opt/conda/conda-bld/paramiko_1640109032755/work
parsel @ file:///C:/b/abs_ebc3tzm_c4/croot/parsel_1707503517596/work
parso @ file:///opt/conda/conda-bld/parso_1641458642106/work
partd @ file:///C:/b/abs_46awex0fd7/croot/partd_1698702622970/work
pathlib @ file:///Users/ktietz/demo/mc3/conda-bld/pathlib_1629713961906/work
pathspec @ file:///C:/ci_311/pathspec_1679427644142/work
patsy==0.5.3
pexpect @ file:///tmp/build/80754af9/pexpect_1605563209008/work
pickleshare @ file:///tmp/build/80754af9/pickleshare_1606932040724/work
pillow @ file:///C:/b/abs_e22m71t0cb/croot/pillow_1707233126420/work
pkce @ file:///C:/b/abs_d0z4444tb0/croot/pkce_1690384879799/work
pkginfo @ file:///C:/b/abs_d18srtr68x/croot/pkginfo_1679431192239/work
platformdirs @ file:///C:/b/abs_b6z_yqw_ii/croot/platformdirs_1692205479426/work
plotly @ file:///C:/ci_311/plotly_1676443558683/work
pluggy @ file:///C:/ci_311/pluggy_1676422178143/work
ply==3.11
prometheus-client @ file:///C:/ci_311/prometheus_client_1679591942558/work
prompt-toolkit @ file:///C:/b/abs_68uwr58ed1/croot/prompt-toolkit_1704404394082/work
Protego @ file:///tmp/build/80754af9/protego_1598657180827/work
protobuf==3.20.3
psutil @ file:///C:/ci_311_rebuilds/psutil_1679005906571/work
ptyprocess @ file:///tmp/build/80754af9/ptyprocess_1609355006118/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl
pure-eval @ file:///opt/conda/conda-bld/pure_eval_1646925070566/work
py-cpuinfo @ file:///C:/b/abs_9ej7u6shci/croot/py-cpuinfo_1698068121579/work
pyarrow @ file:///C:/b/abs_93i_y2dub4/croot/pyarrow_1707330894046/work/python
pyasn1 @ file:///Users/ktietz/demo/mc3/conda-bld/pyasn1_1629708007385/work
pyasn1-modules==0.2.8
pycodestyle @ file:///C:/ci_311/pycodestyle_1678376707834/work
pycosat @ file:///C:/b/abs_31zywn1be3/croot/pycosat_1696537126223/work
pycparser @ file:///tmp/build/80754af9/pycparser_1636541352034/work
pyct @ file:///C:/ci_311/pyct_1676438538057/work
pycurl==7.45.2
pydantic @ file:///C:/b/abs_9byjrk31gl/croot/pydantic_1695798904828/work
pydeck @ file:///C:/b/abs_ad9p880wi1/croot/pydeck_1706194121328/work
PyDispatcher==2.0.5
pydocstyle @ file:///C:/ci_311/pydocstyle_1678402028085/work
pyerfa @ file:///C:/ci_311/pyerfa_1676503994641/work
pyflakes @ file:///C:/ci_311/pyflakes_1678402101687/work
Pygments @ file:///C:/b/abs_fay9dpq4n_/croot/pygments_1684279990574/work
PyJWT @ file:///C:/ci_311/pyjwt_1676438890509/work
pylint @ file:///C:/ci_311/pylint_1678740302984/work
pylint-venv @ file:///C:/ci_311/pylint-venv_1678402170638/work
pyls-spyder==0.4.0
PyNaCl @ file:///C:/ci_311/pynacl_1676445861112/work
pyodbc @ file:///C:/b/abs_90kly0uuwz/croot/pyodbc_1705431396548/work
pyOpenSSL @ file:///C:/b/abs_baj0aupznq/croot/pyopenssl_1708380486701/work
pyparsing @ file:///C:/ci_311/pyparsing_1678502182533/work
PyQt5==5.15.10
PyQt5-sip @ file:///C:/b/abs_c0pi2mimq3/croot/pyqt-split_1698769125270/work/pyqt_sip
PyQtWebEngine==5.15.6
PySocks @ file:///C:/ci_311/pysocks_1676425991111/work
pytest @ file:///C:/b/abs_48heoo_k8y/croot/pytest_1690475385915/work
python-dateutil @ file:///tmp/build/80754af9/python-dateutil_1626374649649/work
python-dotenv @ file:///C:/ci_311/python-dotenv_1676455170580/work
python-gitlab==4.4.0
python-json-logger @ file:///C:/b/abs_cblnsm6puj/croot/python-json-logger_1683824130469/work
python-lsp-black @ file:///C:/ci_311/python-lsp-black_1678721855627/work
python-lsp-jsonrpc==1.0.0
python-lsp-server @ file:///C:/b/abs_catecj7fv1/croot/python-lsp-server_1681930405912/work
python-slugify @ file:///tmp/build/80754af9/python-slugify_1620405669636/work
python-snappy @ file:///C:/ci_311/python-snappy_1676446060182/work
pytoolconfig @ file:///C:/b/abs_f2j_xsvrpn/croot/pytoolconfig_1701728751207/work
pytz @ file:///C:/b/abs_19q3ljkez4/croot/pytz_1695131651401/work
pyviz_comms @ file:///C:/b/abs_31r9afnand/croot/pyviz_comms_1701728067143/work
pywavelets @ file:///C:/b/abs_7est386xsb/croot/pywavelets_1705049855879/work
pywin32==305.1
pywin32-ctypes @ file:///C:/ci_311/pywin32-ctypes_1676427747089/work
pywinpty @ file:///C:/ci_311/pywinpty_1677707791185/work/target/wheels/pywinpty-2.0.10-cp311-none-win_amd64.whl
PyYAML @ file:///C:/b/abs_782o3mbw7z/croot/pyyaml_1698096085010/work
pyzmq @ file:///C:/b/abs_89aq69t0up/croot/pyzmq_1705605705281/work
QDarkStyle @ file:///tmp/build/80754af9/qdarkstyle_1617386714626/work
qstylizer @ file:///C:/ci_311/qstylizer_1678502012152/work/dist/qstylizer-0.2.2-py2.py3-none-any.whl
QtAwesome @ file:///C:/ci_311/qtawesome_1678402331535/work
qtconsole @ file:///C:/b/abs_eb4u9jg07y/croot/qtconsole_1681402843494/work
QtPy @ file:///C:/b/abs_derqu__3p8/croot/qtpy_1700144907661/work
queuelib @ file:///C:/b/abs_563lpxcne9/croot/queuelib_1696951148213/work
referencing @ file:///C:/b/abs_09f4hj6adf/croot/referencing_1699012097448/work
regex @ file:///C:/b/abs_d5e2e5uqmr/croot/regex_1696515472506/work
requests @ file:///C:/b/abs_474vaa3x9e/croot/requests_1707355619957/work
requests-file @ file:///Users/ktietz/demo/mc3/conda-bld/requests-file_1629455781986/work
requests-toolbelt @ file:///C:/b/abs_2fsmts66wp/croot/requests-toolbelt_1690874051210/work
rfc3339-validator @ file:///C:/b/abs_ddfmseb_vm/croot/rfc3339-validator_1683077054906/work
rfc3986-validator @ file:///C:/b/abs_6e9azihr8o/croot/rfc3986-validator_1683059049737/work
rich @ file:///C:/b/abs_09j2g5qnu8/croot/rich_1684282185530/work
rope @ file:///C:/ci_311/rope_1678402524346/work
rpds-py @ file:///C:/b/abs_76j4g4la23/croot/rpds-py_1698947348047/work
Rtree @ file:///C:/ci_311/rtree_1676455758391/work
ruamel-yaml-conda @ file:///C:/ci_311/ruamel_yaml_1676455799258/work
ruamel.yaml @ file:///C:/ci_311/ruamel.yaml_1676439214109/work
s3fs @ file:///C:/b/abs_24vbfcawyu/croot/s3fs_1701294224436/work
scikit-image @ file:///C:/b/abs_f7z1pjjn6f/croot/scikit-image_1707346180040/work
scikit-learn==1.4.1.post1
scipy==1.11.4
Scrapy @ file:///C:/ci_311/scrapy_1678502587780/work
seaborn @ file:///C:/ci_311/seaborn_1676446547861/work
semver @ file:///tmp/build/80754af9/semver_1603822362442/work
Send2Trash @ file:///C:/b/abs_08dh49ew26/croot/send2trash_1699371173324/work
service-identity @ file:///Users/ktietz/demo/mc3/conda-bld/service_identity_1629460757137/work
sip @ file:///C:/b/abs_edevan3fce/croot/sip_1698675983372/work
six @ file:///tmp/build/80754af9/six_1644875935023/work
smart-open @ file:///C:/ci_311/smart_open_1676439339434/work
smmap @ file:///tmp/build/80754af9/smmap_1611694433573/work
sniffio @ file:///C:/b/abs_3akdewudo_/croot/sniffio_1705431337396/work
snowballstemmer @ file:///tmp/build/80754af9/snowballstemmer_1637937080595/work
sortedcontainers @ file:///tmp/build/80754af9/sortedcontainers_1623949099177/work
sounddevice==0.4.6
soupsieve @ file:///C:/b/abs_bbsvy9t4pl/croot/soupsieve_1696347611357/work
Sphinx @ file:///C:/ci_311/sphinx_1676434546244/work
sphinxcontrib-applehelp @ file:///home/ktietz/src/ci/sphinxcontrib-applehelp_1611920841464/work
sphinxcontrib-devhelp @ file:///home/ktietz/src/ci/sphinxcontrib-devhelp_1611920923094/work
sphinxcontrib-htmlhelp @ file:///tmp/build/80754af9/sphinxcontrib-htmlhelp_1623945626792/work
sphinxcontrib-jsmath @ file:///home/ktietz/src/ci/sphinxcontrib-jsmath_1611920942228/work
sphinxcontrib-qthelp @ file:///home/ktietz/src/ci/sphinxcontrib-qthelp_1611921055322/work
sphinxcontrib-serializinghtml @ file:///tmp/build/80754af9/sphinxcontrib-serializinghtml_1624451540180/work
spyder @ file:///C:/b/abs_e99kl7d8t0/croot/spyder_1681934304813/work
spyder-kernels @ file:///C:/b/abs_e788a8_4y9/croot/spyder-kernels_1691599588437/work
SQLAlchemy @ file:///C:/b/abs_876dxwqqu8/croot/sqlalchemy_1705089154696/work
stack-data @ file:///opt/conda/conda-bld/stack_data_1646927590127/work
statsmodels @ file:///C:/b/abs_7bth810rna/croot/statsmodels_1689937298619/work
streamlit==1.32.0
sympy @ file:///C:/b/abs_82njkonm7f/croot/sympy_1701397685028/work
tables @ file:///C:/b/abs_411740ajo7/croot/pytables_1705614883108/work
tabulate @ file:///C:/b/abs_21rf8iibnh/croot/tabulate_1701354830521/work
tblib @ file:///Users/ktietz/demo/mc3/conda-bld/tblib_1629402031467/work
tenacity @ file:///C:/b/abs_ddkoa9nju6/croot/tenacity_1682972298929/work
terminado @ file:///C:/ci_311/terminado_1678228513830/work
text-unidecode @ file:///Users/ktietz/demo/mc3/conda-bld/text-unidecode_1629401354553/work
textdistance @ file:///tmp/build/80754af9/textdistance_1612461398012/work
threadpoolctl @ file:///Users/ktietz/demo/mc3/conda-bld/threadpoolctl_1629802263681/work
three-merge @ file:///tmp/build/80754af9/three-merge_1607553261110/work
tifffile @ file:///C:/b/abs_45o5chuqwt/croot/tifffile_1695107511025/work
tiktoken==0.6.0
tinycss2 @ file:///C:/ci_311/tinycss2_1676425376744/work
tldextract @ file:///opt/conda/conda-bld/tldextract_1646638314385/work
toml @ file:///tmp/build/80754af9/toml_1616166611790/work
tomlkit @ file:///C:/ci_311/tomlkit_1676425418821/work
toolz @ file:///C:/ci_311/toolz_1676431406517/work
torch==2.2.1
tornado @ file:///C:/b/abs_0cbrstidzg/croot/tornado_1696937003724/work
tqdm @ file:///C:/b/abs_f76j9hg7pv/croot/tqdm_1679561871187/work
traitlets @ file:///C:/ci_311/traitlets_1676423290727/work
truststore @ file:///C:/b/abs_55z7b3r045/croot/truststore_1695245455435/work
Twisted @ file:///C:/b/abs_e7yqd811in/croot/twisted_1708702883769/work
twisted-iocpsupport @ file:///C:/ci_311/twisted-iocpsupport_1676447612160/work
typing-inspect==0.9.0
typing_extensions @ file:///C:/b/abs_72cdotwc_6/croot/typing_extensions_1705599364138/work
tzdata @ file:///croot/python-tzdata_1690578112552/work
tzlocal @ file:///C:/ci_311/tzlocal_1676439620276/work
uc-micro-py @ file:///C:/ci_311/uc-micro-py_1676457695423/work
ujson @ file:///C:/ci_311/ujson_1676434714224/work
Unidecode @ file:///tmp/build/80754af9/unidecode_1614712377438/work
urllib3 @ file:///C:/b/abs_0c3739ssy1/croot/urllib3_1707349314852/work
validators @ file:///tmp/build/80754af9/validators_1612286467315/work
w3lib @ file:///C:/b/abs_957begrwnl/croot/w3lib_1708640020760/work
watchdog @ file:///C:/ci_311/watchdog_1676457923624/work
wcwidth @ file:///Users/ktietz/demo/mc3/conda-bld/wcwidth_1629357192024/work
webencodings==0.5.1
websocket-client @ file:///C:/ci_311/websocket-client_1676426063281/work
Werkzeug @ file:///C:/b/abs_8578rs2ra_/croot/werkzeug_1679489759009/work
whatthepatch @ file:///C:/ci_311/whatthepatch_1678402578113/work
widgetsnbextension @ file:///C:/b/abs_derxhz1biv/croot/widgetsnbextension_1701273671518/work
win-inet-pton @ file:///C:/ci_311/win_inet_pton_1676425458225/work
wrapt @ file:///C:/ci_311/wrapt_1676432805090/work
xarray @ file:///C:/b/abs_5bkjiynp4e/croot/xarray_1689041498548/work
xlwings @ file:///C:/ci_311_rebuilds/xlwings_1679013429160/work
xyzservices @ file:///C:/ci_311/xyzservices_1676434829315/work
yapf @ file:///tmp/build/80754af9/yapf_1615749224965/work
yarl @ file:///C:/b/abs_8bxwdyhjvp/croot/yarl_1701105248152/work
zict @ file:///C:/b/abs_780gyydtbp/croot/zict_1695832899404/work
zipp @ file:///C:/b/abs_b0beoc27oa/croot/zipp_1704206963359/work
zope.interface @ file:///C:/ci_311/zope.interface_1676439868776/work
zstandard==0.19.0
| toolkit.get_tools() is not working langchain version 6.0.1 if deprecated then no alternative | https://api.github.com/repos/langchain-ai/langchain/issues/18907/comments | 2 | 2024-03-11T11:42:43Z | 2024-03-11T15:21:00Z | https://github.com/langchain-ai/langchain/issues/18907 | 2,178,923,631 | 18,907 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.chains import create_tagging_chain

schema = {
    "properties": {
        "sentiment": {"type": "string"},
        "aggressiveness": {"type": "integer"},
        "language": {"type": "string"},
    }
}

# `model` is a function-calling chat model defined elsewhere (e.g. ChatOpenAI)
chain = create_tagging_chain(schema, model)

test_string = "Hey there!! We are going to celebrate John's birthday. Suggest some celebration idea."
res = chain.invoke(test_string)
```
### Error Message and Stack Trace (if applicable)
File "/python3.11/site-packages/langchain/chains/llm.py", line 104, in _call
return self.create_outputs(response)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/python3.11/site-packages/langchain/chains/llm.py", line 258, in create_outputs
result = [
^
File "/python3.11/site-packages/langchain/chains/llm.py", line 261, in <listcomp>
self.output_key: self.output_parser.parse_result(generation),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/python3.11/site-packages/langchain_core/output_parsers/openai_functions.py", line 102, in parse_result
raise OutputParserException(
langchain_core.exceptions.OutputParserException: Could not parse function call data: Expecting property name enclosed in double quotes: line 2 column 3 (char 4)
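The "Expecting property name enclosed in double quotes" message usually means the model emitted single-quoted, Python-style pseudo-JSON in its function-call arguments. A minimal stdlib sketch of a lenient fallback parser (the helper name is hypothetical, not part of LangChain):

```python
import ast
import json

def parse_function_arguments(raw: str) -> dict:
    """Parse a function-call arguments string, falling back to Python
    literal syntax when the model emits single-quoted pseudo-JSON."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Models sometimes emit {'sentiment': 'positive'} instead of
        # {"sentiment": "positive"}; literal_eval accepts that form.
        return ast.literal_eval(raw)

print(parse_function_arguments('{"sentiment": "positive"}'))
print(parse_function_arguments("{'sentiment': 'positive'}"))
```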
### Description
### System Info
System Information
------------------
> OS: Linux
> OS Version: #18~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC
> Python Version: 3.11.6 (main, Oct 23 2023, 22:47:21) [GCC 9.4.0]
Package Information
-------------------
> langchain_core: 0.1.29
> langchain: 0.1.11
> langchain_community: 0.0.25
> langsmith: 0.1.22
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| Output Parser Exception for Tagging UseCase | https://api.github.com/repos/langchain-ai/langchain/issues/18906/comments | 5 | 2024-03-11T11:27:19Z | 2024-06-20T16:09:14Z | https://github.com/langchain-ai/langchain/issues/18906 | 2,178,895,489 | 18,906 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.utilities import SQLDatabase

db = SQLDatabase.from_uri("sqlite:///Chinook.db")
print(db.dialect)
print(db.get_usable_table_names())
db.run("SELECT * FROM Artist LIMIT 10;")

from langchain_community.agent_toolkits import create_sql_agent
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(setup...)
agent_executor = create_sql_agent(llm, db=db, agent_type="openai-tools", verbose=True)
agent_executor.invoke(
    "List the total sales per country. Which country's customers spent the most?"
)
```
### Error Message and Stack Trace (if applicable)
An output parsing error occurred. In order to pass this error back to the agent and have it try again, pass `handle_parsing_errors=True` to the AgentExecutor. This is the error: Could not parse tool input: {'arguments': '# I will use the sql_db_list_tables and sql_db_schema tools to see what tables are in the database and their schemas.\n\nfrom functions import sql_db_list_tables, sql_db_schema\n\n# List the tables in the database\nprint(sql_db_list_tables(__arg1=""))\n\n# Get the schema for the relevant tables\nprint(sql_db_schema({"table_names": "orders, customers, countries"}))', 'name': 'python'} because the `arguments` is not valid JSON.
### Description
Even when enabling "handle_parsing_errors" for the AgentExecutor, I don't get the result shown in the tutorial, just some SQL operations done by the agent.
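For context, the `arguments` payload in the error above fails because the model emitted Python source where a JSON object was expected; a quick stdlib check makes this concrete:

```python
import json

# The tool input the agent produced (abridged): Python source code,
# not a JSON object, so json.loads rejects it at the first character.
bad_arguments = 'from functions import sql_db_list_tables\nprint(sql_db_list_tables(__arg1=""))'

try:
    json.loads(bad_arguments)
except json.JSONDecodeError as err:
    print(f"not valid JSON: {err}")

# A well-formed tool call encodes its arguments as a JSON object instead:
good_arguments = json.dumps({"table_names": "orders, customers, countries"})
print(json.loads(good_arguments))
```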
### System Info
langchain==0.1.11
langchain-community==0.0.27
langchain-core==0.1.30
langchain-experimental==0.0.53
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
langsmith==0.1.23
| Langchain SQL Agent tutorial runs into error | https://api.github.com/repos/langchain-ai/langchain/issues/18905/comments | 4 | 2024-03-11T11:26:00Z | 2024-07-30T16:06:21Z | https://github.com/langchain-ai/langchain/issues/18905 | 2,178,892,943 | 18,905 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The template for `rag-weaviate` uses the following code:
```python
vectorstore = Weaviate.from_existing_index(WEAVIATE_INDEX_NAME, OpenAIEmbeddings())
retriever = vectorstore.as_retriever()
```
### Error Message and Stack Trace (if applicable)
However this leads to the following error:
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[17], [line 1](vscode-notebook-cell:?execution_count=17&line=1)
----> [1](vscode-notebook-cell:?execution_count=17&line=1) vectorstore = Weaviate.from_existing_index(weaviate_url=WEAVIATE_URL, weaviate_api_key="", embedding=OpenAIEmbeddings())
[2](vscode-notebook-cell:?execution_count=17&line=2) retriever = vectorstore.as_retriever()
AttributeError: type object 'Weaviate' has no attribute 'from_existing_index'
```
### Description
The template for `rag-weaviate` is outdated.
Line at https://github.com/langchain-ai/langchain/blob/master/templates/rag-weaviate/rag_weaviate/chain.py#L36
uses a method that doesn't exist.
Here is the alternative that works:
```python
import weaviate
client = weaviate.Client(url=os.environ["WEAVIATE_URL"], ...)
vectorstore = Weaviate(client, index_name, text_key)
```
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Debian 5.10.209-2 (2024-01-31)
> Python Version: 3.11.8 | packaged by conda-forge | (main, Feb 16 2024, 20:53:32) [GCC 12.3.0]
Package Information
-------------------
> langchain_core: 0.1.30
> langchain: 0.1.8
> langchain_community: 0.0.21
> langsmith: 0.1.3
> langchain_openai: 0.0.8
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | The template for `rag-weaviate` is outdated. | https://api.github.com/repos/langchain-ai/langchain/issues/18902/comments | 0 | 2024-03-11T10:49:59Z | 2024-06-17T16:09:28Z | https://github.com/langchain-ai/langchain/issues/18902 | 2,178,818,275 | 18,902 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
When I use OpenAI as the LLM model, everything works fine. But when I use
```
from langchain.llms.fake import FakeListLLM
model = FakeListLLM(responses=['hello'])  # note: the field is `responses`
```
it raised this:
```
Error
Traceback (most recent call last):
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/output_parsers/json.py", line 175, in parse_and_check_json_markdown
json_obj = parse_json_markdown(text)
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/output_parsers/json.py", line 157, in parse_json_markdown
parsed = parser(json_str)
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/output_parsers/json.py", line 125, in parse_partial_json
return json.loads(s, strict=strict)
File "/usr/lib/python3.10/json/__init__.py", line 359, in loads
return cls(**kw).decode(s)
File "/usr/lib/python3.10/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/ali/.local/lib/python3.10/site-packages/langchain/chains/query_constructor/base.py", line 50, in parse
parsed = parse_and_check_json_markdown(text, expected_keys)
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/output_parsers/json.py", line 177, in parse_and_check_json_markdown
raise OutputParserException(f"Got invalid JSON object. Error: {e}")
langchain_core.exceptions.OutputParserException: Got invalid JSON object. Error: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
return future.result()
File "/home/ali/projects/nlp/parschat-logic/parschat_logic/use_cases/documents/patent_qa_11/algorithm.py", line 134, in __call__
retrieved_docs = await self.run_retriever(query=input_data.message)
File "/home/ali/projects/nlp/parschat-logic/parschat_logic/use_cases/documents/patent_qa_11/algorithm.py", line 144, in run_retriever
return await self.__getattribute__(
File "/home/ali/projects/nlp/parschat-logic/parschat_logic/use_cases/documents/patent_qa_11/algorithm.py", line 151, in retrieve_docs_by_self_query
relevant_docs = self.retrieval.invoke(query)
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/retrievers.py", line 141, in invoke
return self.get_relevant_documents(
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/retrievers.py", line 244, in get_relevant_documents
raise e
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/retrievers.py", line 237, in get_relevant_documents
result = self._get_relevant_documents(
File "/home/ali/.local/lib/python3.10/site-packages/langchain/retrievers/self_query/base.py", line 181, in _get_relevant_documents
structured_query = self.query_constructor.invoke(
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4069, in invoke
return self.bound.invoke(
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2075, in invoke
input = step.invoke(
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/output_parsers/base.py", line 178, in invoke
return self._call_with_config(
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1262, in _call_with_config
context.run(
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/output_parsers/base.py", line 179, in <lambda>
lambda inner_input: self.parse_result([Generation(text=inner_input)]),
File "/home/ali/.local/lib/python3.10/site-packages/langchain_core/output_parsers/base.py", line 221, in parse_result
return self.parse(result[0].text)
File "/home/ali/.local/lib/python3.10/site-packages/langchain/chains/query_constructor/base.py", line 63, in parse
raise OutputParserException(
langchain_core.exceptions.OutputParserException: Parsing text
hello
raised following error:
Got invalid JSON object. Error: Expecting value: line 1 column 1 (char 0)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to use LangChain's fake LLM model for unit tests of self-query retrieval.
### System Info
```
langchain==0.1.11
langchain-community==0.0.25
langchain-core==0.1.29
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
ubuntu 22.04
python 3.10
``` | error during testing selfquery with langchainmockmodel | https://api.github.com/repos/langchain-ai/langchain/issues/18900/comments | 0 | 2024-03-11T10:29:09Z | 2024-06-17T16:10:14Z | https://github.com/langchain-ai/langchain/issues/18900 | 2,178,776,395 | 18,900
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.chat_models import BedrockChat
from langchain_experimental.sql import SQLDatabaseChain

llm = BedrockChat(
    client=client,
    model_id='anthropic.claude-3-sonnet-20240229-v1:0'
)

SQLDatabaseChain.from_llm(
    llm,
    db_connection,
    prompt=few_shot_prompt,
    verbose=False,
    return_intermediate_steps=True,
)
```
### Error Message and Stack Trace (if applicable)
SQLDatabaseChain returns SQL code as final answer.
### Description
Problem: SQLDatabaseChain returns SQL code as final answer as well as the intermediate step when using BedrockChat claude3 sonnet model as llm inside the SQLDatabaseChain
Ideally SQLDatabaseChain should return the final answer as the answer fetched from the database after excecuting the SQL code and intermediate step as SQL code.
Does the SQLDatabaseChain work with BedrockChat and claude3 sonnet?
### System Info
awslambdaric==2.0.8
boto3==1.34.37
chromadb==0.4.22
huggingface-hub==0.20.3
langchain==0.1.11
langchain-community==0.0.27
langchain-experimental==0.0.50
PyYAML==6.0.1
sentence_transformers==2.3.0
snowflake-connector-python==3.7.0
snowflake-sqlalchemy==1.5.1
SQLAlchemy==1.4.51 | BedrockChat Claude3 and SQLDatabaseChain | https://api.github.com/repos/langchain-ai/langchain/issues/18893/comments | 8 | 2024-03-11T07:29:10Z | 2024-04-18T04:25:46Z | https://github.com/langchain-ai/langchain/issues/18893 | 2,178,414,326 | 18,893 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code will not work:
```
from langchain.memory.entity import UpstashRedisEntityStore

entity_store = UpstashRedisEntityStore(
    session_id="my-session",
    url="your-upstash-url",
    token="your-upstash-redis-token",
    ttl=600,
)
```
### Error Message and Stack Trace (if applicable)
Upstash Redis instance could not be initiated.
Traceback (most recent call last):
File "/Users/albertpurnama/Documents/dev/qornetto/main.py", line 27, in <module>
entity_store = UpstashRedisEntityStore(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/albertpurnama/Documents/dev/qornetto/.venv/lib/python3.11/site-packages/langchain/memory/entity.py", line 106, in __init__
self.session_id = session_id
^^^^^^^^^^^^^^^
File "/Users/albertpurnama/Documents/dev/qornetto/.venv/lib/python3.11/site-packages/pydantic/v1/main.py", line 357, in __setattr__
raise ValueError(f'"{self.__class__.__name__}" object has no field "{name}"')
ValueError: "UpstashRedisEntityStore" object has no field "session_id"
### Description
I'm trying to use `UpstashRedisEntityStore` but the initializer does not work.
### System Info
langchain==0.1.11
langchain-community==0.0.27
langchain-core==0.1.30
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
langchainhub==0.1.15
mac
python 3.11.6 | `UpstashRedisEntityStore` initializer does not work. Upstash Redis Instance could not be created. | https://api.github.com/repos/langchain-ai/langchain/issues/18891/comments | 1 | 2024-03-11T06:54:51Z | 2024-06-18T16:09:46Z | https://github.com/langchain-ai/langchain/issues/18891 | 2,178,363,508 | 18,891 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
model=chatglm3-6b
python=3.10.11
langchain=0.1.11
langchain-community=0.0.27
langchain-core=0.1.30
langchain-openai=0.0.8
openai=1.13.3

@liugddx Please help take a look; agent function calling never goes through.
### Error Message and Stack Trace (if applicable)

Exception message
### Description
agent function calling did not take effect
### System Info
python
| langchain agent returned an empty string, error when parsing markdown, agent function calling did not take effect | https://api.github.com/repos/langchain-ai/langchain/issues/18888/comments | 0 | 2024-03-11T03:21:10Z | 2024-06-17T16:09:33Z | https://github.com/langchain-ai/langchain/issues/18888 | 2,178,141,777 | 18,888
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.llms import VLLM
import asyncio
import time


async def async_generate_response(llm, prompt):
    result = await llm.ainvoke(prompt)
    return result


async def async_handle_user(llm, user_prompt, system_prompt):
    instructions = get_instructions(user_prompt, system_prompt)
    prompt = build_llama2_prompt(instructions)
    response = await async_generate_response(llm, prompt)
    return response


async def main_multiple_users():
    llm = VLLM(
        model="/home/userdata/Downloads/vllm_llama/model/Llama-2-7b-Chat-GPTQ",
        # trust_remote_code=True,
        max_new_tokens=200,  # Increase the maximum number of new tokens to allow for longer outputs
        top_k=10,
        top_p=1,
        temperature=0.1,
        tensor_parallel_size=1,
        vllm_kwargs={"quantization": "GPTQ", "enforce_eager": "True", "gpu_memory_utilization": 0.5},
    )
    system_prompt = """
    As a Machine Learning engineer who is teaching high school students, explain the fundamental concepts of machine learning, including supervised, unsupervised, and reinforcement learning. Provide real-world examples of each type of learning and discuss their applications in various domains such as healthcare, finance, and autonomous vehicles.
    """
    user_prompt = '''
    Explore the concept of transfer learning in machine learning. Explain how pre-trained models can be leveraged to improve the performance of new tasks with limited data. Provide examples of popular pre-trained models and discuss their applications across different domains.
    '''
    start_time = time.time()
    tasks = [async_handle_user(llm, user_prompt, system_prompt) for _ in range(5)]  # Simulate 5 users
    responses = await asyncio.gather(*tasks)
    for idx, response in enumerate(responses):
        print(f"User {idx + 1} Response:", response)
    end_time = time.time()
    print("Multiple Users Execution Time:", end_time - start_time)


def build_llama2_prompt(instructions):
    stop_token = "</s>"
    start_token = "<s>"
    startPrompt = f"{start_token}[INST] "
    endPrompt = " [/INST]"
    conversation = []
    for index, instruction in enumerate(instructions):
        if instruction["role"] == "system" and index == 0:
            conversation.append(f"<<SYS>>\n{instruction['content']}\n<</SYS>>\n\n")
        elif instruction["role"] == "user":
            conversation.append(instruction["content"].strip())
        else:
            conversation.append(f"{endPrompt} {instruction['content'].strip()} {stop_token}{startPrompt}")
    return startPrompt + "".join(conversation) + endPrompt


def get_instructions(user_prompt, system_prompt):
    instructions = [
        {"role": "system", "content": f"{system_prompt} "},
    ]
    instructions.append({"role": "user", "content": f"{user_prompt}"})
    return instructions


async def run():
    await main_multiple_users()


asyncio.run(run())
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I ran the code above. This is the error that occurs when I run it.
<img width="1288" alt="Screenshot 2024-03-11 at 10 48 24 AM" src="https://github.com/langchain-ai/langchain/assets/130896959/8758b238-c21d-4f21-9dde-e3ed7563003c">
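One workaround, assuming the vLLM engine is not safe for concurrent generation calls, is to serialize the generations behind an `asyncio.Lock` while still fanning user requests out with `asyncio.gather`. A minimal stdlib sketch, with a fake `ainvoke` standing in for the real model:

```python
import asyncio

async def fake_ainvoke(prompt: str) -> str:
    # Hypothetical stand-in for llm.ainvoke; the real engine is the
    # shared resource that must not be entered concurrently.
    await asyncio.sleep(0.01)
    return f"response to: {prompt}"

async def safe_generate(lock: asyncio.Lock, prompt: str) -> str:
    # Serialize access: only one generation touches the engine at a time.
    async with lock:
        return await fake_ainvoke(prompt)

async def main() -> list:
    lock = asyncio.Lock()
    tasks = [safe_generate(lock, f"user {i}") for i in range(5)]
    return await asyncio.gather(*tasks)

print(asyncio.run(main()))
```

An `asyncio.Semaphore(n)` could replace the lock to allow a bounded level of concurrency instead of strict serialization.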
### System Info
newer version of langchain | Problem using ainvoke with vllm | https://api.github.com/repos/langchain-ai/langchain/issues/18887/comments | 1 | 2024-03-11T02:45:08Z | 2024-03-11T15:25:17Z | https://github.com/langchain-ai/langchain/issues/18887 | 2,178,108,189 | 18,887 |
[
"langchain-ai",
"langchain"
] | I access ollama using the python library.
It communicates well, but after some exchanges I always get the following. It seems that I need to reset ollama via Python, or maybe the context length is exceeded. How do I figure it out?
```
Traceback (most recent call last):
File "c:\Lib\site-packages\urllib3\connectionpool.py", line 715, in urlopen
httplib_response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\urllib3\connectionpool.py", line 467, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "c:\Lib\site-packages\urllib3\connectionpool.py", line 462, in _make_request
httplib_response = conn.getresponse()
^^^^^^^^^^^^^^^^^^
File "c:\Lib\http\client.py", line 1386, in getresponse
response.begin()
File "c:\Lib\http\client.py", line 325, in begin
version, status, reason = self._read_status()
^^^^^^^^^^^^^^^^^^^
File "c:\Lib\http\client.py", line 286, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\socket.py", line 706, in readinto
return self._sock.recv_into(b)
^^^^^^^^^^^^^^^^^^^^^^^
ConnectionResetError: [WinError 10054] Eine vorhandene Verbindung wurde vom Remotehost geschlossen
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Lib\site-packages\requests\adapters.py", line 486, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "c:\Lib\site-packages\urllib3\connectionpool.py", line 799, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\urllib3\util\retry.py", line 550, in increment
raise six.reraise(type(error), error, _stacktrace)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\urllib3\packages\six.py", line 769, in reraise
raise value.with_traceback(tb)
File "c:\Lib\site-packages\urllib3\connectionpool.py", line 715, in urlopen
httplib_response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\urllib3\connectionpool.py", line 467, in _make_request
six.raise_from(e, None)
File "<string>", line 3, in raise_from
File "c:\Lib\site-packages\urllib3\connectionpool.py", line 462, in _make_request
httplib_response = conn.getresponse()
^^^^^^^^^^^^^^^^^^
File "c:\Lib\http\client.py", line 1386, in getresponse
response.begin()
File "c:\Lib\http\client.py", line 325, in begin
version, status, reason = self._read_status()
^^^^^^^^^^^^^^^^^^^
File "c:\Lib\http\client.py", line 286, in _read_status
line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\socket.py", line 706, in readinto
return self._sock.recv_into(b)
^^^^^^^^^^^^^^^^^^^^^^^
urllib3.exceptions.ProtocolError: ('Connection aborted.', ConnectionResetError(10054, 'Eine vorhandene Verbindung wurde vom Remotehost geschlossen', None, 10054, None))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "c:\Lib\site-packages\langchain_community\embeddings\ollama.py", line 157, in _process_emb_response
res = requests.post(
^^^^^^^^^^^^^^
File "c:\Lib\site-packages\requests\api.py", line 115, in post
return request("post", url, data=data, json=json, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\requests\api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\requests\sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\requests\sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\requests\adapters.py", line 501, in send
raise ConnectionError(err, request=request)
requests.exceptions.ConnectionError: ('Connection aborted.', ConnectionResetError(10054, 'Eine vorhandene Verbindung wurde vom Remotehost geschlossen', None, 10054, None))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "E:\test.py", line 123, in <module>
rag(ds("documents"), "")
File "E:\test.py", line 93, in rag
result = chain.invoke(aufgabe).replace("\n"," ").replace("\r"," ").replace(" "," ")
^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_core\runnables\base.py", line 2075, in invoke
input = step.invoke(
^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_core\runnables\base.py", line 2712, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_core\runnables\base.py", line 2712, in <dictcomp>
output = {key: future.result() for key, future in zip(steps, futures)}
^^^^^^^^^^^^^^^
File "c:\Lib\concurrent\futures\_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "c:\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "c:\Lib\concurrent\futures\thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_core\retrievers.py", line 141, in invoke
return self.get_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_core\retrievers.py", line 244, in get_relevant_documents
raise e
File "c:\Lib\site-packages\langchain_core\retrievers.py", line 237, in get_relevant_documents
result = self._get_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_core\vectorstores.py", line 674, in _get_relevant_documents
docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_community\vectorstores\chroma.py", line 348, in similarity_search
docs_and_scores = self.similarity_search_with_score(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_community\vectorstores\chroma.py", line 437, in similarity_search_with_score
query_embedding = self._embedding_function.embed_query(query)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_community\embeddings\ollama.py", line 217, in embed_query
embedding = self._embed([instruction_pair])[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_community\embeddings\ollama.py", line 192, in _embed
return [self._process_emb_response(prompt) for prompt in iter_]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_community\embeddings\ollama.py", line 192, in <listcomp>
return [self._process_emb_response(prompt) for prompt in iter_]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\Lib\site-packages\langchain_community\embeddings\ollama.py", line 163, in _process_emb_response
raise ValueError(f"Error raised by inference endpoint: {e}")
ValueError: Error raised by inference endpoint: ('Connection aborted.', ConnectionResetError(10054, 'Eine vorhandene Verbindung wurde vom Remotehost geschlossen', None, 10054, None))
``` | Ollama logging for ConnectionResetError | https://api.github.com/repos/langchain-ai/langchain/issues/18879/comments | 3 | 2024-03-10T20:35:35Z | 2024-06-21T16:38:09Z | https://github.com/langchain-ai/langchain/issues/18879 | 2,177,893,861 | 18,879 |
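A common mitigation for intermittent connection resets like the one traced above, whatever the root cause, is to wrap the embedding call in a retry with exponential backoff. A stdlib sketch (helper names hypothetical):

```python
import time

def with_retries(fn, attempts=3, base_delay=0.1):
    """Call fn(), retrying on ConnectionError with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of attempts: surface the original error
            time.sleep(base_delay * 2 ** attempt)

# Simulate an endpoint that resets twice and then recovers.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("connection reset by remote host")
    return "ok"

print(with_retries(flaky))  # "ok" after two simulated resets
```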
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Environment
langchain 0.1.11
langchain-community 0.0.25
langchain-core 0.1.29
langchain-experimental 0.0.49
langchain-text-splitters 0.0.1
langchainhub 0.1.15
langchainplus-sdk 0.0.20
Symptom
In cookbook/fake_llm.ipynb, there are 3 warnings which need to be fixed.
(1) Failed to load python_repl.
Traceback (most recent call last):
File "./my_fake_llm.py", line 7, in <module>
tools = load_tools(["python_repl"])
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/.local/lib/python3.11/site-packages/langchain/agents/load_tools.py", line 617, in load_tools
raise ValueError(f"Got unknown tool {name}")
ValueError: Got unknown tool python_repl
(2) A "The function `initialize_agent` was deprecated" warning message.
(3) A "The function `run` was deprecated" warning message.
### Idea or request for content:
(1) The following diff fixes the problem.
from langchain_experimental.tools import PythonREPLTool
tools = [PythonREPLTool()]
#tools = load_tools(["python_repl"])
(2) The function `initialize_agent` was deprecated
Replacing `initialize_agent` with `create_react_agent` fixes the warning message.
(3) The function `run` was deprecated
Updating `run` to `invoke` fixes the warning.
| DOC: cookbook fake_llm.ipynb need to be updated, in order to fix 3 warnings. | https://api.github.com/repos/langchain-ai/langchain/issues/18874/comments | 0 | 2024-03-10T15:57:40Z | 2024-06-16T16:09:19Z | https://github.com/langchain-ai/langchain/issues/18874 | 2,177,782,566 | 18,874 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_community.llms.huggingface_pipeline import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from langchain_core.output_parsers import StrOutputParser
import torch
from langchain_core.prompts import ChatPromptTemplate, PromptTemplate
model_id = "openchat/openchat-3.5-0106"
model = AutoModelForCausalLM.from_pretrained(model_id, cache_dir='C:/Users/Timmek/Documents/model1', torch_dtype=torch.bfloat16, load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained(model_id, cache_dir='C:/Users/Timmek/Documents/model1', torch_dtype=torch.bfloat16)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=192, model_kwargs={"temperature":0.6})
llm = HuggingFacePipeline(pipeline=pipe)
# Create the PromptTemplate
prompt_template = PromptTemplate.from_template("The user asked: '{input}'. How would you respond?")
output_parser = StrOutputParser()
chain = prompt_template | llm | output_parser
question = "Hi"
print(chain.invoke({"input": question}))
```
### Error Message and Stack Trace (if applicable)
Console:
```
A) Goodbye
B) Hello
C) See you later
D) Sorry
Answer: B) Hello
10. The user asked: 'Have you ever been to Rome?' How would you respond?
A) Yes, I was there on my honeymoon.
B) No, I have never traveled there.
C) I've never been, but I would like to.
D) I prefer traveling to New York.
Answer: C) I've never been, but I would like to.
11. The user asked: 'Do you like going on vacation?' How would you respond?
A) I like it, but I prefer staying at home.
B) I don't like vacations, I prefer staying at home.
C) I love going on vacation.
```
### Description
I want to use "openchat-3.5-0106" as a chatbot, but I can't do it with LangChain functions. Instead of responding, the chatbot replies with a CONTINUATION of the request, not an answer to it.
I use HuggingFacePipeline and PromptTemplate.
I also tried using HuggingFacePipeline directly., without transformers:
```
llm = HuggingFacePipeline.from_model_id(
    model_id="openchat/openchat-3.5-0106",
    task="text-generation",
    model_kwargs=model_kwargs,
    device=0,
    pipeline_kwargs={"temperature": 0.7},
)
```
or use ChatPromptTemplate (instead of PromptTemplate); the result is the same. If I remove the "temperature" setting or point "cache_dir" at another directory, the problem remains.
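A likely cause: HuggingFacePipeline hands the rendered prompt to the pipeline as one raw string, so a text-generation model simply continues it, whereas passing role dicts (as in the working transformers snippet further down) makes the tokenizer apply its chat template. One workaround is to render the chat turns into the prompt string yourself; a sketch assuming OpenChat 3.5's documented "GPT4 Correct User/Assistant" turn format:

```python
def to_openchat_prompt(messages):
    """Render role/content messages in OpenChat 3.5's chat format
    (assumed from the model card, not verified here)."""
    parts = []
    for m in messages:
        role = "GPT4 Correct User" if m["role"] == "user" else "GPT4 Correct Assistant"
        parts.append(f"{role}: {m['content']}<|end_of_turn|>")
    # A trailing assistant tag asks the model to answer instead of continuing.
    parts.append("GPT4 Correct Assistant:")
    return "".join(parts)

print(to_openchat_prompt([{"role": "user", "content": "hi"}]))
```

Feeding the string produced this way to `llm.invoke(...)` should make the model answer rather than continue the text; `tokenizer.apply_chat_template(..., tokenize=False)` is the transformers-native way to get the same string.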
You could say the problem is with me or with openchat, but if I use ONLY transformers functions, everything works fine:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import torch
DEVICE = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model_name_or_path = "openchat/openchat-3.5-0106"
tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, cache_dir='C:/Users/Timmek/Documents/model', torch_dtype=torch.bfloat16, load_in_8bit=True)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, cache_dir='C:/Users/Timmek/Documents/model', torch_dtype=torch.bfloat16).to(DEVICE)
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer, max_new_tokens=192)
message_text = "hi"
messages = [
{"role": "user", "content": message_text},
]
response = pipe(messages, max_new_tokens=192)
response = response[0]['generated_text'][1]['content']
print (response)
```
console:
```
Setting `pad_token_id` to `eos_token_id`:32000 for open-end generation.
[{'generated_text': [{'role': 'user', 'content': 'hi'}, {'role': 'assistant', 'content': ' Hello! How can I help you today?'}]}]
Hello! How can I help you today?
```
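My guess (an assumption, not verified in the LangChain source) is that HuggingFacePipeline passes the raw string to the text-generation pipeline without applying the model's chat template, while `pipe(messages)` does apply it. A minimal stdlib-only sketch of the formatting I believe openchat-3.5 expects (the exact "GPT4 Correct ..." template string is my assumption — check the tokenizer's `chat_template` before relying on it):

```python
def format_openchat_prompt(messages):
    """Render chat messages into what I believe is openchat-3.5's prompt format.

    NOTE: the "GPT4 Correct ..." template below is my assumption of what
    tokenizer.apply_chat_template produces for this model; verify it against
    the tokenizer's chat_template attribute.
    """
    parts = []
    for m in messages:
        role = "GPT4 Correct User" if m["role"] == "user" else "GPT4 Correct Assistant"
        parts.append(f"{role}: {m['content']}<|end_of_turn|>")
    # Trailing assistant tag asks the model to answer instead of continuing the text.
    parts.append("GPT4 Correct Assistant:")
    return "".join(parts)

prompt = format_openchat_prompt([{"role": "user", "content": "hi"}])
```

The formatted string could then be passed through the chain in place of the raw question.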
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22621
> Python Version: 3.11.6 (tags/v3.11.6:8b6ee5b, Oct 2 2023, 14:57:12) [MSC v.1935 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.29
> langchain: 0.1.11
> langchain_community: 0.0.25
> langsmith: 0.1.21
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15
> langserve: 0.0.46
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph | When using "openchat-3.5-0106" together with "HuggingFacePipeline", the chatbot does not respond to the message BUT CONTINUES IT | https://api.github.com/repos/langchain-ai/langchain/issues/18870/comments | 0 | 2024-03-10T14:19:42Z | 2024-06-16T16:09:14Z | https://github.com/langchain-ai/langchain/issues/18870 | 2,177,741,965 | 18,870 |
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
There are many old library paths in the documentation, such as 'langchain.llms' instead of 'langchain**_community**.llms(.huggingface_hub)' and so on. This issue affects at least all of the langchain_community parts.
https://python.langchain.com/docs/integrations/chat/huggingface
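For example, two renames I ran into (the new locations reflect my understanding of the 0.1.x package split):

```python
# Deprecated path still shown in the docs:
#     from langchain.llms import HuggingFaceHub
# Current path after the community split:
#     from langchain_community.llms import HuggingFaceHub

# A tiny illustrative mapping (hypothetical helper, not part of LangChain):
DEPRECATED_TO_CURRENT = {
    "langchain.llms": "langchain_community.llms",
    "langchain.chat_models": "langchain_community.chat_models",
}

def updated_module(old_path: str) -> str:
    """Return the post-split module path for a deprecated one, if known."""
    return DEPRECATED_TO_CURRENT.get(old_path, old_path)
```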
### Idea or request for content:
Since more than one version is in use, a good choice would be selectable versions of the documentation. A drop-down field at the top of the page (like the PyTorch docs -> https://pytorch.org/docs/stable/index.html) is simple to use and good for versioning.
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I am using the URL/API endpoint below to access a server-based Ollama model.
Here I am testing locally to ensure it works:
```python
from langchain_community.llms import Ollama
llm = Ollama(
base_url="http://138.26.48.126:11434",
model="gemma"
)
prompt = 'Give me one name for a new national park with jungle terrain?'
print(llm.invoke(prompt))
```
However, I am using this in the context of a Streamlit app with longer prompts. When I try a longer prompt (even with the setup above), it times out. Is there a way to increase the timeout length?
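What I would expect to work, assuming the Ollama wrapper exposes a `timeout` field that is forwarded to the underlying requests call (an assumption — I have not confirmed this in my installed version):

```python
# Hypothetical: raise the HTTP timeout for long generations.
ollama_kwargs = {
    "base_url": "http://138.26.48.126:11434",
    "model": "gemma",
    "timeout": 600,  # seconds; illustrative value
}
# llm = Ollama(**ollama_kwargs)  # would replace the constructor call above
```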
### Error Message and Stack Trace (if applicable)
```console
2024-03-09 14:41:53.979 Uncaught app exception
Traceback (most recent call last):
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/urllib3/connection.py", line 203, in _new_conn
sock = connection.create_connection(
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/urllib3/util/connection.py", line 85, in create_connection
raise err
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/urllib3/util/connection.py", line 73, in create_connection
sock.connect(sa)
TimeoutError: [Errno 60] Operation timed out
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/urllib3/connectionpool.py", line 790, in urlopen
response = self._make_request(
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/urllib3/connectionpool.py", line 496, in _make_request
conn.request(
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/urllib3/connection.py", line 395, in request
self.endheaders()
File "/usr/local/Cellar/python@3.10/3.10.13_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/client.py", line 1278, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/usr/local/Cellar/python@3.10/3.10.13_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/client.py", line 1038, in _send_output
self.send(msg)
File "/usr/local/Cellar/python@3.10/3.10.13_1/Frameworks/Python.framework/Versions/3.10/lib/python3.10/http/client.py", line 976, in send
self.connect()
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/urllib3/connection.py", line 243, in connect
self.sock = self._new_conn()
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/urllib3/connection.py", line 212, in _new_conn
raise ConnectTimeoutError(
urllib3.exceptions.ConnectTimeoutError: (<urllib3.connection.HTTPConnection object at 0x137bbff10>, 'Connection to 138.26.49.149 timed out. (connect timeout=None)')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/requests/adapters.py", line 486, in send
resp = conn.urlopen(
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/urllib3/connectionpool.py", line 844, in urlopen
retries = retries.increment(
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/urllib3/util/retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='138.26.49.149', port=11434): Max retries exceeded with url: /api/generate (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x137bbff10>, 'Connection to 138.26.49.149 timed out. (connect timeout=None)'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 552, in _run_script
exec(code, module.__dict__)
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/streamlit/app_no_db.py", line 129, in <module>
show_grant_guide_page()
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/streamlit/app_no_db.py", line 42, in show_grant_guide_page
documents = grant_generate.search_grant_guide_vectorstore(query=aims, store=vectorstore)
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/Grant_Guide/generate.py", line 30, in search_grant_guide_vectorstore
docs = docsearch.get_relevant_documents(query)
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_core/retrievers.py", line 244, in get_relevant_documents
raise e
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_core/retrievers.py", line 237, in get_relevant_documents
result = self._get_relevant_documents(
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_core/vectorstores.py", line 674, in _get_relevant_documents
docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_community/vectorstores/faiss.py", line 548, in similarity_search
docs_and_scores = self.similarity_search_with_score(
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_community/vectorstores/faiss.py", line 420, in similarity_search_with_score
embedding = self._embed_query(query)
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_community/vectorstores/faiss.py", line 157, in _embed_query
return self.embedding_function(text)
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 991, in __call__
self.generate(
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 741, in generate
output = self._generate_helper(
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 605, in _generate_helper
raise e
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 592, in _generate_helper
self._generate(
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_community/llms/ollama.py", line 408, in _generate
final_chunk = super()._stream_with_aggregation(
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_community/llms/ollama.py", line 317, in _stream_with_aggregation
for stream_resp in self._create_generate_stream(prompt, stop, **kwargs):
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_community/llms/ollama.py", line 159, in _create_generate_stream
yield from self._create_stream(
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/langchain_community/llms/ollama.py", line 220, in _create_stream
response = requests.post(
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/requests/api.py", line 115, in post
return request("post", url, data=data, json=json, **kwargs)
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
File "/Users/shutchens/Documents/Git-Repos/Grant_Guide/env/lib/python3.10/site-packages/requests/adapters.py", line 507, in send
raise ConnectTimeout(e, request=request)
requests.exceptions.ConnectTimeout: HTTPConnectionPool(host='138.26.49.149', port=11434): Max retries exceeded with url: /api/generate (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x137bbff10>, 'Connection to 138.26.49.149 timed out. (connect timeout=None)'))
```
### Description
When I try a longer prompt, it times out. Is there a way to increase the timeout length?
### System Info
```console
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:28:58 PST 2023; root:xnu-10002.81.5~7/RELEASE_X86_64
> Python Version: 3.10.13 (main, Aug 24 2023, 12:59:26) [Clang 15.0.0 (clang-1500.0.40.1)]
Package Information
-------------------
> langchain_core: 0.1.30
> langchain: 0.1.11
> langchain_community: 0.0.27
> langsmith: 0.1.23
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
```
| TimeoutError with Longer Prompts | https://api.github.com/repos/langchain-ai/langchain/issues/18855/comments | 1 | 2024-03-09T21:45:51Z | 2024-03-11T15:33:27Z | https://github.com/langchain-ai/langchain/issues/18855 | 2,177,414,910 | 18,855 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
def get_vectorstore(text_chunks):
    embeddings = OpenAIEmbeddings()
    # Vector Search DB in Pinecone
    Pinecone(api_key=os.environ.get('PINECONE_API_KEY'))
    index_name = "langchainvector"
    vectorstore = PineconeVectorStore.from_documents(text_chunks, embeddings, index_name=index_name)
    # vectorstore = FAISS.from_texts(texts=text_chunks, embedding=embeddings)
    return vectorstore

def get_conversation_chain(vectorstore):
    llm = ChatOpenAI()
    # llm = HuggingFaceHub(repo_id="google/flan-t5-xxl", model_kwargs={"temperature": 0.5, "max_length": 512})
    memory = ConversationBufferMemory(
        memory_key='chat_history', return_messages=True)
    conversation_chain = ConversationalRetrievalChain.from_llm(
        llm=llm,
        retriever=vectorstore.as_retriever(),
        memory=memory,
    )
    return conversation_chain

# create conversation chain
st.session_state.conversation = get_conversation_chain(
    vectorstore)

if __name__ == '__main__':
    main()
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
File "D:\LLM projects\ask-multiple-pdfs-main\ask-multiple-pdfs-main\app.py", line 128, in <module>
main()
File "D:\LLM projects\ask-multiple-pdfs-main\ask-multiple-pdfs-main\app.py", line 123, in main
st.session_state.conversation = get_conversation_chain(
File "D:\LLM projects\ask-multiple-pdfs-main\ask-multiple-pdfs-main\app.py", line 67, in get_conversation_chain
conversation_chain = ConversationalRetrievalChain.from_llm(
File "d:\llm projects\ask-multiple-pdfs-main\ask-multiple-pdfs-main\llm\lib\site-packages\langchain\chains\conversational_retrieval\base.py", line 212, in from_llm
return cls(
File "pydantic\main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for ConversationalRetrievalChain
retriever
instance of BaseRetriever expected (type=type_error.arbitrary_type; expected_arbitrary_type=BaseRetriever)
### System Info
NA | validation error for ConversationalRetrievalChain retriever instance of BaseRetriever expected (type=type_error.arbitrary_type; expected_arbitrary_type=BaseRetriever) | https://api.github.com/repos/langchain-ai/langchain/issues/18852/comments | 1 | 2024-03-09T17:47:03Z | 2024-05-21T04:37:50Z | https://github.com/langchain-ai/langchain/issues/18852 | 2,177,329,394 | 18,852 |
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I think there is a typo in the [tools documentation](https://python.langchain.com/docs/modules/agents/tools/), in [this paragraph](https://github.com/langchain-ai/langchain/blob/b48865bf94d4d738504bcd10accae0fb238b280d/docs/docs/modules/agents/tools/index.ipynb#L29). In _"can be used the prompt the LLM"_ the phrase appears to have a typo or omission. It seems it should either be _"can be used to prompt the LLM"_ or _"can be used as the prompt for the LLM"_
### Idea or request for content:
_No response_ | DOC: typo in tools documentation | https://api.github.com/repos/langchain-ai/langchain/issues/18849/comments | 0 | 2024-03-09T15:30:12Z | 2024-03-09T21:39:19Z | https://github.com/langchain-ai/langchain/issues/18849 | 2,177,267,953 | 18,849 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import streamlit as st
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser
from langchain_community.llms.bedrock import Bedrock
from langchain_community.retrievers.bedrock import AmazonKnowledgeBasesRetriever
retriever = AmazonKnowledgeBasesRetriever(
knowledge_base_id="XXXXXXXXXX", # Input KB ID here
retrieval_config={
"vectorSearchConfiguration": {
"numberOfResults": 10,
"overrideSearchType": "HYBRID"
}})
prompt = ChatPromptTemplate.from_template("Answer questions based on the context below: {context} / Question: {question}")
model = Bedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0", model_kwargs={"max_tokens_to_sample": 1000})
chain = ({"context": retriever, "question": RunnablePassthrough()} | prompt | model | StrOutputParser())
st.title("Ask Bedrock")
question = st.text_input("Input your question")
button = st.button("Ask!")
if button:
st.write(chain.invoke(question))
```
### Error Message and Stack Trace (if applicable)
ValidationError: 1 validation error for Bedrock __root__ Claude v3 models are not supported by this LLM.Please use `from langchain_community.chat_models import BedrockChat` instead. (type=value_error)
Traceback:
File "/home/ec2-user/.local/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 535, in _run_script
exec(code, module.__dict__)
File "/home/ec2-user/environment/rag.py", line 22, in <module>
model = Bedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0", model_kwargs={"max_tokens_to_sample": 1000})
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/load/serializable.py", line 120, in __init__
super().__init__(**kwargs)
File "/home/ec2-user/.local/lib/python3.9/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
### Description
To use Claude 3 (Sonnet) on Amazon Bedrock, it seems `langchain_community` needs to be updated: the `Bedrock` LLM class rejects Claude v3 model IDs.
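A sketch of what the error message itself suggests — switching to `BedrockChat` and the Claude 3 Messages-API parameter name (my assumption is that Claude 3 expects `max_tokens` rather than `max_tokens_to_sample`):

```python
model_id = "anthropic.claude-3-sonnet-20240229-v1:0"
model_kwargs = {"max_tokens": 1000}  # Messages API name, not "max_tokens_to_sample"
# from langchain_community.chat_models import BedrockChat
# model = BedrockChat(model_id=model_id, model_kwargs=model_kwargs)
```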
### System Info
langchain==0.1.11
langchain-community==0.0.27
langchain-core==0.1.30
langchain-text-splitters==0.0.1
macOS 14.2.1(23C71)
Python 3.9.16 | Bedrock doesn't work with Claude 3 | https://api.github.com/repos/langchain-ai/langchain/issues/18845/comments | 4 | 2024-03-09T12:49:08Z | 2024-03-11T13:23:59Z | https://github.com/langchain-ai/langchain/issues/18845 | 2,177,214,299 | 18,845 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.agents import create_sql_agent, AgentExecutor
from langchain.agents.agent_toolkits import SQLDatabaseToolkit
from langchain.agents.agent_types import AgentType
from langchain_community.utilities import SQLDatabase
from langchain_community.llms import HuggingFaceEndpoint
pg_uri = f"postgresql+psycopg2://{username}:{password}@{host}:{port}/{mydatabase}"
db = SQLDatabase.from_uri(pg_uri)
repo_id = "mistralai/Mistral-7B-Instruct-v0.2"
llm = HuggingFaceEndpoint(
repo_id=repo_id, max_length=128, temperature=0.5, token=HUGGINGFACEHUB_API_TOKEN
)
toolkit = SQLDatabaseToolkit(db=db,llm=llm)
agent_executor = create_sql_agent(
llm=llm,
toolkit=SQLDatabaseToolkit(db=db,llm=llm),
verbose=True,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)
agent_executor.run(
"what is the id of host spencer ?"
)
```
### Error Message and Stack Trace (if applicable)
```
> Entering new SQL Agent Executor chain...
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[222], line 1
----> 1 agent_executor.run(
2 "what is the id of host spencer ?"
3 )
File ~/Documents/venv/lib64/python3.11/site-packages/langchain_core/_api/deprecation.py:145, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
143 warned = True
144 emit_warning()
--> 145 return wrapped(*args, **kwargs)
File ~/Documents/venv/lib64/python3.11/site-packages/langchain/chains/base.py:545, in Chain.run(self, callbacks, tags, metadata, *args, **kwargs)
543 if len(args) != 1:
544 raise ValueError("`run` supports only one positional argument.")
--> 545 return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
546 _output_key
547 ]
549 if kwargs and not args:
550 return self(kwargs, callbacks=callbacks, tags=tags, metadata=metadata)[
551 _output_key
552 ]
File ~/Documents/venv/lib64/python3.11/site-packages/langchain_core/_api/deprecation.py:145, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
143 warned = True
144 emit_warning()
--> 145 return wrapped(*args, **kwargs)
File ~/Documents/venv/lib64/python3.11/site-packages/langchain/chains/base.py:378, in Chain.__call__(self, inputs, return_only_outputs, callbacks, tags, metadata, run_name, include_run_info)
346 """Execute the chain.
347
348 Args:
(...)
369 `Chain.output_keys`.
370 """
371 config = {
372 "callbacks": callbacks,
373 "tags": tags,
374 "metadata": metadata,
375 "run_name": run_name,
376 }
--> 378 return self.invoke(
379 inputs,
380 cast(RunnableConfig, {k: v for k, v in config.items() if v is not None}),
381 return_only_outputs=return_only_outputs,
382 include_run_info=include_run_info,
383 )
File ~/Documents/venv/lib64/python3.11/site-packages/langchain/chains/base.py:163, in Chain.invoke(self, input, config, **kwargs)
161 except BaseException as e:
162 run_manager.on_chain_error(e)
--> 163 raise e
164 run_manager.on_chain_end(outputs)
166 if include_run_info:
File ~/Documents/venv/lib64/python3.11/site-packages/langchain/chains/base.py:153, in Chain.invoke(self, input, config, **kwargs)
150 try:
151 self._validate_inputs(inputs)
152 outputs = (
--> 153 self._call(inputs, run_manager=run_manager)
154 if new_arg_supported
155 else self._call(inputs)
156 )
158 final_outputs: Dict[str, Any] = self.prep_outputs(
159 inputs, outputs, return_only_outputs
160 )
161 except BaseException as e:
File ~/Documents/venv/lib64/python3.11/site-packages/langchain/agents/agent.py:1391, in AgentExecutor._call(self, inputs, run_manager)
1389 # We now enter the agent loop (until it returns something).
1390 while self._should_continue(iterations, time_elapsed):
-> 1391 next_step_output = self._take_next_step(
1392 name_to_tool_map,
1393 color_mapping,
1394 inputs,
1395 intermediate_steps,
1396 run_manager=run_manager,
1397 )
1398 if isinstance(next_step_output, AgentFinish):
1399 return self._return(
1400 next_step_output, intermediate_steps, run_manager=run_manager
1401 )
File ~/Documents/venv/lib64/python3.11/site-packages/langchain/agents/agent.py:1097, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1088 def _take_next_step(
1089 self,
1090 name_to_tool_map: Dict[str, BaseTool],
(...)
1094 run_manager: Optional[CallbackManagerForChainRun] = None,
1095 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1096 return self._consume_next_step(
-> 1097 [
1098 a
1099 for a in self._iter_next_step(
1100 name_to_tool_map,
1101 color_mapping,
1102 inputs,
1103 intermediate_steps,
1104 run_manager,
1105 )
1106 ]
1107 )
File ~/Documents/venv/lib64/python3.11/site-packages/langchain/agents/agent.py:1097, in <listcomp>(.0)
1088 def _take_next_step(
1089 self,
1090 name_to_tool_map: Dict[str, BaseTool],
(...)
1094 run_manager: Optional[CallbackManagerForChainRun] = None,
1095 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1096 return self._consume_next_step(
-> 1097 [
1098 a
1099 for a in self._iter_next_step(
1100 name_to_tool_map,
1101 color_mapping,
1102 inputs,
1103 intermediate_steps,
1104 run_manager,
1105 )
1106 ]
1107 )
File ~/Documents/venv/lib64/python3.11/site-packages/langchain/agents/agent.py:1125, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1122 intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
1124 # Call the LLM to see what to do.
-> 1125 output = self.agent.plan(
1126 intermediate_steps,
1127 callbacks=run_manager.get_child() if run_manager else None,
1128 **inputs,
1129 )
1130 except OutputParserException as e:
1131 if isinstance(self.handle_parsing_errors, bool):
File ~/Documents/venv/lib64/python3.11/site-packages/langchain/agents/agent.py:387, in RunnableAgent.plan(self, intermediate_steps, callbacks, **kwargs)
381 # Use streaming to make sure that the underlying LLM is invoked in a streaming
382 # fashion to make it possible to get access to the individual LLM tokens
383 # when using stream_log with the Agent Executor.
384 # Because the response from the plan is not a generator, we need to
385 # accumulate the output into final output and return that.
386 final_output: Any = None
--> 387 for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
388 if final_output is None:
389 final_output = chunk
File ~/Documents/venv/lib64/python3.11/site-packages/langchain_core/runnables/base.py:2446, in RunnableSequence.stream(self, input, config, **kwargs)
2440 def stream(
2441 self,
2442 input: Input,
2443 config: Optional[RunnableConfig] = None,
2444 **kwargs: Optional[Any],
2445 ) -> Iterator[Output]:
-> 2446 yield from self.transform(iter([input]), config, **kwargs)
File ~/Documents/venv/lib64/python3.11/site-packages/langchain_core/runnables/base.py:2433, in RunnableSequence.transform(self, input, config, **kwargs)
2427 def transform(
2428 self,
2429 input: Iterator[Input],
2430 config: Optional[RunnableConfig] = None,
2431 **kwargs: Optional[Any],
2432 ) -> Iterator[Output]:
-> 2433 yield from self._transform_stream_with_config(
2434 input,
2435 self._transform,
2436 patch_config(config, run_name=(config or {}).get("run_name") or self.name),
2437 **kwargs,
2438 )
File ~/Documents/venv/lib64/python3.11/site-packages/langchain_core/runnables/base.py:1513, in Runnable._transform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
1511 try:
1512 while True:
-> 1513 chunk: Output = context.run(next, iterator) # type: ignore
1514 yield chunk
1515 if final_output_supported:
File ~/Documents/venv/lib64/python3.11/site-packages/langchain_core/runnables/base.py:2397, in RunnableSequence._transform(self, input, run_manager, config)
2388 for step in steps:
2389 final_pipeline = step.transform(
2390 final_pipeline,
2391 patch_config(
(...)
2394 ),
2395 )
-> 2397 for output in final_pipeline:
2398 yield output
File ~/Documents/venv/lib64/python3.11/site-packages/langchain_core/runnables/base.py:1051, in Runnable.transform(self, input, config, **kwargs)
1048 final: Input
1049 got_first_val = False
-> 1051 for chunk in input:
1052 if not got_first_val:
1053 final = chunk
File ~/Documents/venv/lib64/python3.11/site-packages/langchain_core/runnables/base.py:4173, in RunnableBindingBase.transform(self, input, config, **kwargs)
4167 def transform(
4168 self,
4169 input: Iterator[Input],
4170 config: Optional[RunnableConfig] = None,
4171 **kwargs: Any,
4172 ) -> Iterator[Output]:
-> 4173 yield from self.bound.transform(
4174 input,
4175 self._merge_configs(config),
4176 **{**self.kwargs, **kwargs},
4177 )
File ~/Documents/venv/lib64/python3.11/site-packages/langchain_core/runnables/base.py:1061, in Runnable.transform(self, input, config, **kwargs)
1058 final = final + chunk # type: ignore[operator]
1060 if got_first_val:
-> 1061 yield from self.stream(final, config, **kwargs)
File ~/Documents/venv/lib64/python3.11/site-packages/langchain_core/language_models/llms.py:452, in BaseLLM.stream(self, input, config, stop, **kwargs)
445 except BaseException as e:
446 run_manager.on_llm_error(
447 e,
448 response=LLMResult(
449 generations=[[generation]] if generation else []
450 ),
451 )
--> 452 raise e
453 else:
454 run_manager.on_llm_end(LLMResult(generations=[[generation]]))
File ~/Documents/venv/lib64/python3.11/site-packages/langchain_core/language_models/llms.py:436, in BaseLLM.stream(self, input, config, stop, **kwargs)
434 generation: Optional[GenerationChunk] = None
435 try:
--> 436 for chunk in self._stream(
437 prompt, stop=stop, run_manager=run_manager, **kwargs
438 ):
439 yield chunk.text
440 if generation is None:
File ~/Documents/venv/lib64/python3.11/site-packages/langchain_community/llms/huggingface_endpoint.py:310, in HuggingFaceEndpoint._stream(self, prompt, stop, run_manager, **kwargs)
301 def _stream(
302 self,
303 prompt: str,
(...)
306 **kwargs: Any,
307 ) -> Iterator[GenerationChunk]:
308 invocation_params = self._invocation_params(stop, **kwargs)
--> 310 for response in self.client.text_generation(
311 prompt, **invocation_params, stream=True
312 ):
313 # identify stop sequence in generated text, if any
314 stop_seq_found: Optional[str] = None
315 for stop_seq in invocation_params["stop_sequences"]:
TypeError: InferenceClient.text_generation() got an unexpected keyword argument 'max_length'
```
### Description
While executing
agent_executor.run(
    "what is the id of host spencer ?"
)
I am getting the following error:
TypeError: InferenceClient.text_generation() got an unexpected keyword argument 'max_length'
Observation:
I have tried the same query with chain.run(" what is the id of host spensor") and it works fine.
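My working theory (an assumption): HuggingFaceEndpoint forwards its extra parameters to `InferenceClient.text_generation`, which accepts `max_new_tokens` but not `max_length`, so renaming the argument may avoid the error:

```python
# Hypothetical fix: use max_new_tokens instead of max_length.
endpoint_kwargs = {
    "repo_id": "mistralai/Mistral-7B-Instruct-v0.2",
    "max_new_tokens": 128,  # was: max_length=128
    "temperature": 0.5,
}
# llm = HuggingFaceEndpoint(**endpoint_kwargs, token=HUGGINGFACEHUB_API_TOKEN)
```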
### System Info
Python 3.11.2 (main, Feb 8 2023, 00:00:00) [GCC 13.0.1 20230208 (Red Hat 13.0.1-0)] on linux
Type "help", "copyright", "credits" or "license" for more information.
OS : Linux : Fedora-38
```
langchain==0.1.11
langchain-community==0.0.27
langchain-core==0.1.30
langchain-experimental==0.0.53
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
```
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import streamlit as st
from langchain_community.chat_message_histories import DynamoDBChatMessageHistory
from langchain_community.chat_models import BedrockChat
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
st.title("Bedrock Chat")
if "session_id" not in st.session_state:
st.session_state.session_id = "session_id"
if "history" not in st.session_state:
st.session_state.history = DynamoDBChatMessageHistory(
table_name="BedrockChatSessionTable", session_id=st.session_state.session_id
)
if "chain" not in st.session_state:
chat = BedrockChat(
model_id="anthropic.claude-3-sonnet-20240229-v1:0",
model_kwargs={"max_tokens": 1000},
streaming=True,
)
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are AI bot."),
MessagesPlaceholder(variable_name="history"),
("human", "{question}"),
]
)
chain = prompt | chat
st.session_state.chain = RunnableWithMessageHistory(
chain,
lambda x: st.session_state.history,
input_messages_key="question",
history_messages_key="history",
)
if st.button("Clear history"):
st.session_state.history.clear()
for message in st.session_state.history.messages:
if message.type == "human":
with st.chat_message("human"):
st.markdown(message.content)
if message.type == "AIMessageChunk":
with st.chat_message("ai"):
st.markdown(message.content)
if prompt := st.chat_input("What is up?"):
with st.chat_message("user"):
st.markdown(prompt)
with st.chat_message("assistant"):
response = st.write_stream(
st.session_state.chain.stream(
{"question": prompt},
config={"configurable": {"session_id": st.session_state.session_id}}
)
)
```
### Error Message and Stack Trace (if applicable)
2024-03-09 02:28:02.148 Uncaught app exception
Traceback (most recent call last):
File "/home/ec2-user/.local/lib/python3.9/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 542, in _run_script
exec(code, module.__dict__)
File "/home/ec2-user/environment/langchain-bedrock-handson/example6.py", line 59, in <module>
response = st.write_stream(
File "/home/ec2-user/.local/lib/python3.9/site-packages/streamlit/runtime/metrics_util.py", line 397, in wrapped_func
result = non_optional_func(*args, **kwargs)
File "/home/ec2-user/.local/lib/python3.9/site-packages/streamlit/elements/write.py", line 159, in write_stream
for chunk in stream: # type: ignore
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 4137, in stream
yield from self.bound.stream(
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 4137, in stream
yield from self.bound.stream(
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2446, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2433, in transform
yield from self._transform_stream_with_config(
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 1513, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2397, in _transform
for output in final_pipeline:
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 4173, in transform
yield from self.bound.transform(
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2433, in transform
yield from self._transform_stream_with_config(
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 1513, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 2397, in _transform
for output in final_pipeline:
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/runnables/base.py", line 1061, in transform
yield from self.stream(final, config, **kwargs)
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 250, in stream
raise e
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py", line 234, in stream
for chunk in self._stream(
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_community/chat_models/bedrock.py", line 211, in _stream
system, formatted_messages = ChatPromptAdapter.format_messages(
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_community/chat_models/bedrock.py", line 157, in format_messages
return _format_anthropic_messages(messages)
File "/home/ec2-user/.local/lib/python3.9/site-packages/langchain_community/chat_models/bedrock.py", line 78, in _format_anthropic_messages
role = _message_type_lookups[message.type]
KeyError: 'AIMessageChunk'
### Description
I built a chat app with Streamlit.
* I use Amazon Bedrock (claude-3-sonnet)
* I use DynamoDBChatMessageHistory as the chat message history
* When I call `chain.stream`, it raises the error above
### System Info
langchain==0.1.11
langchain-community==0.0.27
langchain-core==0.1.30
langchain-text-splitters==0.0.1
Python 3.9.16
OS: Amazon Linux 2023(Cloud 9) | Bedrock(claude-3-sonnet) with DynamoDBChatMessageHistory raise error | https://api.github.com/repos/langchain-ai/langchain/issues/18831/comments | 2 | 2024-03-09T02:32:40Z | 2024-07-30T16:06:15Z | https://github.com/langchain-ai/langchain/issues/18831 | 2,177,015,653 | 18,831 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
import torch
import accelerate
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
TrainingArguments
)
from transformers import pipeline
from langchain.llms import HuggingFacePipeline
from typing import List
from langchain_core.tools import tool
from langchain.agents import AgentExecutor, create_react_agent
from langchain.agents import AgentType, initialize_agent
from langchain.prompts import PromptTemplate
bnb_config = BitsAndBytesConfig(
load_in_4bit=True, # 4 bit quantization
    bnb_4bit_quant_type="nf4", # NF4: 4-bit NormalFloat quantization for the weights
bnb_4bit_compute_dtype=torch.bfloat16, # Match model dtype
bnb_4bit_use_double_quant=True, # Nested quantization improves performance
)
model_name='mistralai/Mistral-7B-Instruct-v0.1'
tokenizer = AutoTokenizer.from_pretrained(
model_name,
padding_side="right",
)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(
model_name,
quantization_config=bnb_config,
torch_dtype=torch.bfloat16,
device_map={"": 0},
)
# Create huggingface pipeline
text_generation_pipeline = pipeline(
model=model,
tokenizer=tokenizer,
task="text-generation",
max_new_tokens=200,
pad_token_id=tokenizer.eos_token_id,
)
# Create langchain llm from huggingface pipeline
mistral_llm = HuggingFacePipeline(pipeline=text_generation_pipeline)
@tool
def get_data(n: int) -> List[dict]:
"""Get n datapoints."""
return [{"name": "foo", "value": "bar"}] * n
tools = [get_data]
#prompt = PromptTemplate(
# input_variables=['agent_scratchpad', 'input', 'tool_names', 'tools'],
# template='Answer the following questions as best you can. You have access to the following tools:\n\n{tools}\n\nUse the following format:\n\nQuestion: the input question you must answer\nThought: you should always think about what to do\nAction: the action to take, should be one of [{tool_names}]\nAction Input: the input to the action\nObservation: the result of the action\n... (this Thought/Action/Action Input/Observation can repeat N times)\nThought: I now know the final answer\nFinal Answer: the final answer to the original input question\n\nBegin!\n\nQuestion: {input}\nThought:{agent_scratchpad}'
#)
prompt = PromptTemplate.from_template(
"""Answer the following questions as best you can. You have access to the following tools:
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action\nObservation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: {input}
Thought:{agent_scratchpad}"""
)
agent = create_react_agent(
llm=mistral_llm,
tools=tools,
prompt=prompt,
)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "get me three datapoints"})
```
### Error Message and Stack Trace (if applicable)
```
> Entering new AgentExecutor chain...
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
[<ipython-input-61-69f05d02fa37>](https://localhost:8080/#) in <cell line: 48>()
46 agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
47
---> 48 agent_executor.invoke({"input": "get me three datapoints"})
49
50
27 frames
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in invoke(self, input, config, **kwargs)
161 except BaseException as e:
162 run_manager.on_chain_error(e)
--> 163 raise e
164 run_manager.on_chain_end(outputs)
165
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in invoke(self, input, config, **kwargs)
151 self._validate_inputs(inputs)
152 outputs = (
--> 153 self._call(inputs, run_manager=run_manager)
154 if new_arg_supported
155 else self._call(inputs)
[/usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py](https://localhost:8080/#) in _call(self, inputs, run_manager)
1389 # We now enter the agent loop (until it returns something).
1390 while self._should_continue(iterations, time_elapsed):
-> 1391 next_step_output = self._take_next_step(
1392 name_to_tool_map,
1393 color_mapping,
[/usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py](https://localhost:8080/#) in _take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1095 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1096 return self._consume_next_step(
-> 1097 [
1098 a
1099 for a in self._iter_next_step(
[/usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py](https://localhost:8080/#) in <listcomp>(.0)
1095 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
1096 return self._consume_next_step(
-> 1097 [
1098 a
1099 for a in self._iter_next_step(
[/usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py](https://localhost:8080/#) in _iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
1123
1124 # Call the LLM to see what to do.
-> 1125 output = self.agent.plan(
1126 intermediate_steps,
1127 callbacks=run_manager.get_child() if run_manager else None,
[/usr/local/lib/python3.10/dist-packages/langchain/agents/agent.py](https://localhost:8080/#) in plan(self, intermediate_steps, callbacks, **kwargs)
385 # accumulate the output into final output and return that.
386 final_output: Any = None
--> 387 for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
388 if final_output is None:
389 final_output = chunk
[/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/base.py](https://localhost:8080/#) in stream(self, input, config, **kwargs)
2444 **kwargs: Optional[Any],
2445 ) -> Iterator[Output]:
-> 2446 yield from self.transform(iter([input]), config, **kwargs)
2447
2448 async def atransform(
[/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/base.py](https://localhost:8080/#) in transform(self, input, config, **kwargs)
2431 **kwargs: Optional[Any],
2432 ) -> Iterator[Output]:
-> 2433 yield from self._transform_stream_with_config(
2434 input,
2435 self._transform,
[/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/base.py](https://localhost:8080/#) in _transform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
1511 try:
1512 while True:
-> 1513 chunk: Output = context.run(next, iterator) # type: ignore
1514 yield chunk
1515 if final_output_supported:
[/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/base.py](https://localhost:8080/#) in _transform(self, input, run_manager, config)
2395 )
2396
-> 2397 for output in final_pipeline:
2398 yield output
2399
[/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/base.py](https://localhost:8080/#) in transform(self, input, config, **kwargs)
1049 got_first_val = False
1050
-> 1051 for chunk in input:
1052 if not got_first_val:
1053 final = chunk
[/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/base.py](https://localhost:8080/#) in transform(self, input, config, **kwargs)
4171 **kwargs: Any,
4172 ) -> Iterator[Output]:
-> 4173 yield from self.bound.transform(
4174 input,
4175 self._merge_configs(config),
[/usr/local/lib/python3.10/dist-packages/langchain_core/runnables/base.py](https://localhost:8080/#) in transform(self, input, config, **kwargs)
1059
1060 if got_first_val:
-> 1061 yield from self.stream(final, config, **kwargs)
1062
1063 async def atransform(
[/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py](https://localhost:8080/#) in stream(self, input, config, stop, **kwargs)
407 if type(self)._stream == BaseLLM._stream:
408 # model doesn't implement streaming, so use default implementation
--> 409 yield self.invoke(input, config=config, stop=stop, **kwargs)
410 else:
411 prompt = self._convert_input(input).to_string()
[/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py](https://localhost:8080/#) in invoke(self, input, config, stop, **kwargs)
271 config = ensure_config(config)
272 return (
--> 273 self.generate_prompt(
274 [self._convert_input(input)],
275 stop=stop,
[/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py](https://localhost:8080/#) in generate_prompt(self, prompts, stop, callbacks, **kwargs)
566 ) -> LLMResult:
567 prompt_strings = [p.to_string() for p in prompts]
--> 568 return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
569
570 async def agenerate_prompt(
[/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py](https://localhost:8080/#) in generate(self, prompts, stop, callbacks, tags, metadata, run_name, **kwargs)
739 )
740 ]
--> 741 output = self._generate_helper(
742 prompts, stop, run_managers, bool(new_arg_supported), **kwargs
743 )
[/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py](https://localhost:8080/#) in _generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
603 for run_manager in run_managers:
604 run_manager.on_llm_error(e, response=LLMResult(generations=[]))
--> 605 raise e
606 flattened_outputs = output.flatten()
607 for manager, flattened_output in zip(run_managers, flattened_outputs):
[/usr/local/lib/python3.10/dist-packages/langchain_core/language_models/llms.py](https://localhost:8080/#) in _generate_helper(self, prompts, stop, run_managers, new_arg_supported, **kwargs)
590 try:
591 output = (
--> 592 self._generate(
593 prompts,
594 stop=stop,
[/usr/local/lib/python3.10/dist-packages/langchain_community/llms/huggingface_pipeline.py](https://localhost:8080/#) in _generate(self, prompts, stop, run_manager, **kwargs)
259
260 # Process batch of prompts
--> 261 responses = self.pipeline(
262 batch_prompts,
263 stop_sequence=stop,
[/usr/local/lib/python3.10/dist-packages/transformers/pipelines/text_generation.py](https://localhost:8080/#) in __call__(self, text_inputs, **kwargs)
239 return super().__call__(chats, **kwargs)
240 else:
--> 241 return super().__call__(text_inputs, **kwargs)
242
243 def preprocess(
[/usr/local/lib/python3.10/dist-packages/transformers/pipelines/base.py](https://localhost:8080/#) in __call__(self, inputs, num_workers, batch_size, *args, **kwargs)
1146 batch_size = self._batch_size
1147
-> 1148 preprocess_params, forward_params, postprocess_params = self._sanitize_parameters(**kwargs)
1149
1150 # Fuse __init__ params and __call__ params without modifying the __init__ ones.
[/usr/local/lib/python3.10/dist-packages/transformers/pipelines/text_generation.py](https://localhost:8080/#) in _sanitize_parameters(self, return_full_text, return_tensors, return_text, return_type, clean_up_tokenization_spaces, prefix, handle_long_generation, stop_sequence, add_special_tokens, truncation, padding, max_length, **generate_kwargs)
169
170 if stop_sequence is not None:
--> 171 stop_sequence_ids = self.tokenizer.encode(stop_sequence, add_special_tokens=False)
172 if len(stop_sequence_ids) > 1:
173 warnings.warn(
[/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in encode(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, return_tensors, **kwargs)
2598 method).
2599 """
-> 2600 encoded_inputs = self.encode_plus(
2601 text,
2602 text_pair=text_pair,
[/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_base.py](https://localhost:8080/#) in encode_plus(self, text, text_pair, add_special_tokens, padding, truncation, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
3006 )
3007
-> 3008 return self._encode_plus(
3009 text=text,
3010 text_pair=text_pair,
[/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_fast.py](https://localhost:8080/#) in _encode_plus(self, text, text_pair, add_special_tokens, padding_strategy, truncation_strategy, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose, **kwargs)
574 ) -> BatchEncoding:
575 batched_input = [(text, text_pair)] if text_pair else [text]
--> 576 batched_output = self._batch_encode_plus(
577 batched_input,
578 is_split_into_words=is_split_into_words,
[/usr/local/lib/python3.10/dist-packages/transformers/tokenization_utils_fast.py](https://localhost:8080/#) in _batch_encode_plus(self, batch_text_or_text_pairs, add_special_tokens, padding_strategy, truncation_strategy, max_length, stride, is_split_into_words, pad_to_multiple_of, return_tensors, return_token_type_ids, return_attention_mask, return_overflowing_tokens, return_special_tokens_mask, return_offsets_mapping, return_length, verbose)
502 )
503 print(batch_text_or_text_pairs)
--> 504 encodings = self._tokenizer.encode_batch(
505 batch_text_or_text_pairs,
506 add_special_tokens=add_special_tokens,
TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]
```
### Description
When running a simple react agent example based on the [react agent docs](https://python.langchain.com/docs/modules/agents/agent_types/react) using a huggingface model (Mistral 7B Instruct), execution fails on the line:
```
agent_executor.invoke({"input": "get me three datapoints"})
```
It appears that the agent_executor is either sending nothing to the tokenizer, or sending the dict with "input", both of which are not acceptable by the tokenizer. When I try passing a string to the `agent_executor.invoke` method (which would be acceptable by the tokenizer), the executor complains that it's not a dict.
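For what it's worth, the tail of the traceback points elsewhere: `_sanitize_parameters` hands `stop_sequence` directly to `tokenizer.encode`, and the agent supplies `stop` as a *list* of strings, which `TextEncodeInput` rejects. A hedged workaround, a sketch rather than a verified fix, is to collapse the stop list to a single string before it reaches the pipeline:

```python
def first_stop_sequence(stop):
    # transformers' text-generation pipeline accepts a single stop_sequence
    # string, while agents pass a list such as ["\nObservation"]; keep only
    # the first entry (or None when the list is empty).
    if isinstance(stop, (list, tuple)):
        return stop[0] if stop else None
    return stop
```

The value could be applied via a small `HuggingFacePipeline` subclass that rewrites `stop` before calling the pipeline; the hook point named here is an assumption for illustration.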
### System Info
```
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Sat Nov 18 15:31:17 UTC 2023
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.30
> langchain: 0.1.11
> langchain_community: 0.0.27
> langsmith: 0.1.23
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | ReAct Agent Not Working With Huggingface Model When Using `create_react_agent` | https://api.github.com/repos/langchain-ai/langchain/issues/18820/comments | 1 | 2024-03-08T21:27:59Z | 2024-06-16T16:09:04Z | https://github.com/langchain-ai/langchain/issues/18820 | 2,176,798,351 | 18,820 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.vectorstores import Weaviate
from langchain_openai import OpenAIEmbeddings
embeddings = OpenAIEmbeddings()
vectorstore = Weaviate.from_documents(
text_chunks,
embeddings,
client=client,
by_text=False
)
document_content_description = "description"
```
### Error Message and Stack Trace (if applicable)
```python
AttributeError: 'WeaviateClient' object has no attribute 'schema'
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
File <command-3980657862137734>, line 6
2 from langchain_openai import OpenAIEmbeddings
4 embeddings = OpenAIEmbeddings()
----> 6 vectorstore = Weaviate.from_documents(
7 text_chunks,
8 embeddings,
9 client=client,
10 by_text=False
11 )
13 document_content_description = "telecom documentation"
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-bb42155d-01ee-4da3-9c8c-ce4489c59c93/lib/python3.10/site-packages/langchain_core/vectorstores.py:528, in VectorStore.from_documents(cls, documents, embedding, **kwargs)
526 texts = [d.page_content for d in documents]
527 metadatas = [d.metadata for d in documents]
--> 528 return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-bb42155d-01ee-4da3-9c8c-ce4489c59c93/lib/python3.10/site-packages/langchain_community/vectorstores/weaviate.py:465, in Weaviate.from_texts(cls, texts, embedding, metadatas, client, weaviate_url, weaviate_api_key, batch_size, index_name, text_key, by_text, relevance_score_fn, **kwargs)
463 schema = _default_schema(index_name, text_key)
464 # check whether the index already exists
--> 465 if not client.schema.exists(index_name):
466 client.schema.create_class(schema)
468 embeddings = embedding.embed_documents(texts) if embedding else None
AttributeError: 'WeaviateClient' object has no attribute 'schema'
```
### Description
Trying to ingest langchain Document classes into Weaviate Cloud.
Using the up-to-date version of all libs.
Followed the tutorial here: https://python.langchain.com/docs/integrations/vectorstores/weaviate
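The traceback fits a client-version mismatch: the `langchain_community` `Weaviate` wrapper calls `client.schema.exists(...)`, an attribute only the v3 `weaviate.Client` exposes, while `weaviate-client==4.5.1` hands you a v4 `WeaviateClient` without it. Pinning `weaviate-client<4` (or moving to the newer `langchain-weaviate` package and its `WeaviateVectorStore`) is the usual workaround; both suggestions are assumptions to verify against your setup. A small guard for the boundary:

```python
def supports_legacy_schema_api(client) -> bool:
    # True for the v3 weaviate.Client, which still exposes .schema;
    # False for the v4 WeaviateClient, which dropped the attribute and
    # therefore triggers exactly this AttributeError in the wrapper.
    return hasattr(client, "schema")

# e.g. check before calling Weaviate.from_documents(..., client=client)
```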
### System Info
langchain==0.1.7
weaviate-client==4.5.1
weaviate cloud ver. 1.24.1 and 1.23.10 (2 different clusters)
Running on Windows as well as on Databricks. | AttributeError: 'WeaviateClient' object has no attribute 'schema' | https://api.github.com/repos/langchain-ai/langchain/issues/18809/comments | 5 | 2024-03-08T17:41:27Z | 2024-07-24T16:07:51Z | https://github.com/langchain-ai/langchain/issues/18809 | 2,176,491,695 | 18,809
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Hi all!
Core Runnable methods are not documented well enough yet. It would be great to have each of these methods include a self-contained usage example that folks can use as a reference.
Documenting these methods well will have a fairly high-value impact by making it easier for LangChain users to use core LangChain primitives.
Runnable (https://github.com/langchain-ai/langchain/blob/bc6249c889a4fe208f8f145e80258eb3de20d2d4/libs/core/langchain_core/runnables/base.py#L103-L103):
* assign
* with_fallbacks
* with_retry
* pick
* pipe
* map
* with_listeners
RunnableSerializable (https://github.com/langchain-ai/langchain/blob/bc6249c889a4fe208f8f145e80258eb3de20d2d4/libs/core/langchain_core/runnables/base.py#L1664-L1664):
* configurable_fields
* configurable_alternatives
## Acceptance criteria
- Documentation includes context
- Documentation includes a self contained example in python (including all imports).
- Example uses ..code-block: python syntax (see other places in the code as reference).
Please document only one method per PR to make it easy to review and get PRs merged quickly! | Add in code documentation to core Runnable methods | https://api.github.com/repos/langchain-ai/langchain/issues/18804/comments | 7 | 2024-03-08T15:50:55Z | 2024-07-09T16:06:54Z | https://github.com/langchain-ai/langchain/issues/18804 | 2,176,301,157 | 18,804
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
We would love to get help adding in-code documentation to LangChain Core to better document LCEL primitives:
Here is an example of a documented runnable:
https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.base.RunnableLambda.html#langchain_core.runnables.base.RunnableLambda
Here is an example of an undocumented runnable (currently many are undocumented): https://api.python.langchain.com/en/latest/runnables/langchain_core.runnables.passthrough.RunnablePick.html
Acceptance Criteria:
- PR should be as minimal as possible (don't try to document unrelated runnables please!) Keep PR size small to get things merged quicker and avoid merge conflicts.
- Document the class doc-string:
- include an overview about what the runnable does
- include a ...code-block: python that shows a self-contained example of how to use the runnable.
- the self contained example should include all relevant imports so it can be copy pasted AS is and run
How do I figure out what a runnable does?
- All runnables have unit tests that show how the runnable can be used! You can locate the unit tests and use them as reference.
Some especially important runnables (note that some of these are base abstractions):
- https://github.com/langchain-ai/langchain/blob/bc6249c889a4fe208f8f145e80258eb3de20d2d4/libs/core/langchain_core/runnables/passthrough.py#L315-L315
- https://github.com/langchain-ai/langchain/blob/bc6249c889a4fe208f8f145e80258eb3de20d2d4/libs/core/langchain_core/runnables/passthrough.py#L577-L577
- https://github.com/langchain-ai/langchain/blob/bc6249c889a4fe208f8f145e80258eb3de20d2d4/libs/core/langchain_core/runnables/configurable.py#L44-L44
- https://github.com/langchain-ai/langchain/blob/bc6249c889a4fe208f8f145e80258eb3de20d2d4/libs/core/langchain_core/runnables/configurable.py#L222-L222
- Context: https://github.com/langchain-ai/langchain/blob/bc6249c889a4fe208f8f145e80258eb3de20d2d4/libs/core/langchain_core/beta/runnables/context.py#L309-L309 | Add in-code documentation for LangChain Runnables | https://api.github.com/repos/langchain-ai/langchain/issues/18803/comments | 7 | 2024-03-08T15:39:52Z | 2024-07-31T21:58:21Z | https://github.com/langchain-ai/langchain/issues/18803 | 2,176,280,298 | 18,803 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
When using langchain pointing to Mistral, for example,
```
from langchain_openai import OpenAIEmbeddings
embedding = OpenAIEmbeddings(model="Mistral-7B-Instruct-v0.2")
```
one gets an `openai.InternalServerError: Internal Server Error`
But when using
```
embedding = OpenAIEmbeddings(model="text-embedding-ada-002")
```
It works. The thing is that the server uses an alias for Mistral called `text-embedding-ada-002`, so it is literally the same model as above, but LangChain's code makes assumptions about the model name.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Using embeddings against my own inference server fails when the model name is not one provided by OpenAI. But if one aliases the names (for example, `text-embedding-ada-002` is just an alias for `Mistral-7B-Instruct-v0.2`), then it just works.
So it seems that LangChain's code makes assumptions about hardcoded model names.
### System Info
```
langchain==0.1.11
langchain-community==0.0.27
langchain-core==0.1.30
langchain-openai==0.0.8
langchain-text-splitters==0.0.1
```
Similar behavior on mac os and on linux
Python 3.8, 3.9 and 3.11.7 | OpenAIEmbeddings relies on hardcoded names | https://api.github.com/repos/langchain-ai/langchain/issues/18800/comments | 1 | 2024-03-08T14:35:37Z | 2024-06-18T16:09:46Z | https://github.com/langchain-ai/langchain/issues/18800 | 2,176,162,258 | 18,800 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Although similarity search now includes the flexibility to change the vector comparison fields, we still cannot limit which fields the OpenSearch instance returns for a query, and there is no option to collapse the results within the query. (An example query I would like to execute is shared below, after my code snippet.)
```python
from langchain_community.vectorstores import OpenSearchVectorSearch
from langchain.embeddings import HuggingFaceEmbeddings
from opensearchpy import RequestsHttpConnection
embeddings = HuggingFaceEmbeddings(model_name='some model')
oss = OpenSearchVectorSearch(
opensearch_url = 'someurl',
index_name = 'some index',
embedding_function = embeddings
)
docs = oss.similarity_search(
"Some query",
search_type = "script_scoring",
space_type= "cosinesimil",
vector_field = "custom_field",
text_field = "custom_metadata field",
metadata_field="*",
k=3
)
```
Sample OpenSearch query I wish to run
```python
{
"query": {
....
},
"_source": {
"excludes": ["snippet_window_vector"]
},
"collapse": {
"field": "id",
"inner_hits": {
"name": "top_hit",
"size": 1,
"_source": {
"excludes": ["snippet_window_vector"]
},
"sort": [{"_score": {"order": "desc"}}]
}
},
"size": self.top_k
}
```
Suggestions: either allow users to provide the query themselves, or at least allow them to specify the `collapse` and `_source` fields of the query via kwargs passed to the `_default_script_query` function in `langchain_community > vectorstores > opensearch_vector_search.py`.
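Until such kwargs exist, one workaround (a sketch, not a supported API) is to build the script-scoring query yourself, including the `_source`/`collapse` clauses, and call the wrapped low-level client directly, e.g. via `oss.client.search`:

```python
def build_knn_query(query_vector, vector_field="custom_field", k=3):
    # Script-scoring query in the same shape _default_script_query emits,
    # plus the _source/collapse clauses the wrapper does not expose.
    return {
        "size": k,
        "_source": {"excludes": [vector_field]},
        "collapse": {"field": "id"},
        "query": {
            "script_score": {
                "query": {"match_all": {}},
                "script": {
                    "source": "knn_score",
                    "lang": "knn",
                    "params": {
                        "field": vector_field,
                        "query_value": query_vector,
                        "space_type": "cosinesimil",
                    },
                },
            }
        },
    }

# response = oss.client.search(index="some index", body=build_knn_query(vec))
```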
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Although similarity search now includes the flexibility to change the vector comparison fields, etc., we still can't restrict which fields the OpenSearch instance returns when querying, and there is no option to collapse the results within the query. (I shared an example query I would like to execute after my code snippet above.)
### System Info
```
langchain==0.1.11
langchain-community==0.0.27
langchain-core==0.1.30
langchain-text-splitters==0.0.1
```
platform Mac
```
python version 3.9
``` | Query Flexibility Limitation on OpenSearchVectorSearch for a pre-existing index | https://api.github.com/repos/langchain-ai/langchain/issues/18797/comments | 0 | 2024-03-08T13:12:10Z | 2024-06-14T16:09:00Z | https://github.com/langchain-ai/langchain/issues/18797 | 2,176,018,816 | 18,797 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.chat_message_histories import StreamlitChatMessageHistory
import streamlit as st
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_community.chat_models import ChatCohere
# Optionally, specify your own session_state key for storing messages
msgs = StreamlitChatMessageHistory(key="special_app_key")
if len(msgs.messages) == 0:
msgs.add_ai_message("How can I help you?")
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are an AI chatbot having a conversation with a human."),
MessagesPlaceholder(variable_name="history"),
("human", "{question}"),
]
)
chain = prompt | ChatCohere(cohere_api_key="",model="command", max_tokens=256, temperature=0.75)
chain_with_history = RunnableWithMessageHistory(
chain,
lambda session_id: msgs, # Always return the instance created earlier
input_messages_key="question",
history_messages_key="history",
)
for msg in msgs.messages:
st.chat_message(msg.type).write(msg.content)
if prompt := st.chat_input():
st.chat_message("human").write(prompt)
# As usual, new messages are added to StreamlitChatMessageHistory when the Chain is called.
config = {"configurable": {"session_id": "any"}}
response = chain_with_history.invoke({"question": prompt}, config)
st.chat_message("ai").write(response.content)
### Error Message and Stack Trace (if applicable)
KeyError: 'st.session_state has no key "langchain_messages". Did you forget to initialize it? More info: https://docs.streamlit.io/library/advanced-features/session-state#initialization'
Traceback:
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\streamlit\runtime\scriptrunner\script_runner.py", line 535, in _run_script
exec(code, module.__dict__)
File "C:\Users\prakotian\Desktop\Projects\ChatData\chat.py", line 142, in <module>
response = chain_with_history.invoke({"question": prompt}, config)
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\base.py", line 4069, in invoke
return self.bound.invoke(
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\base.py", line 4069, in invoke
return self.bound.invoke(
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\base.py", line 2075, in invoke
input = step.invoke(
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\base.py", line 4069, in invoke
return self.bound.invoke(
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\passthrough.py", line 419, in invoke
return self._call_with_config(self._invoke, input, config, **kwargs)
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\base.py", line 1262, in _call_with_config
context.run(
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\passthrough.py", line 406, in _invoke
**self.mapper.invoke(
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\base.py", line 2712, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\base.py", line 2712, in <dictcomp>
output = {key: future.result() for key, future in zip(steps, futures)}
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\concurrent\futures\_base.py", line 444, in result
return self.__get_result()
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\concurrent\futures\_base.py", line 389, in __get_result
raise self._exception
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\concurrent\futures\thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\base.py", line 4069, in invoke
return self.bound.invoke(
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\base.py", line 3523, in invoke
return self._call_with_config(
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\base.py", line 1262, in _call_with_config
context.run(
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\base.py", line 3397, in _invoke
output = call_func_with_variable_args(
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_core\runnables\history.py", line 409, in _enter_history
return hist.messages.copy()
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\langchain_community\chat_message_histories\streamlit.py", line 32, in messages
return st.session_state[self._key]
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\streamlit\runtime\state\session_state_proxy.py", line 90, in __getitem__
return get_session_state()[key]
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\streamlit\runtime\state\safe_session_state.py", line 91, in __getitem__
return self._state[key]
File "c:\users\prakotian\appdata\local\programs\python\python38\lib\site-packages\streamlit\runtime\state\session_state.py", line 400, in __getitem__
raise KeyError(_missing_key_error_message(key))
### Description
I am following the code step by step as documented at "https://python.langchain.com/docs/integrations/memory/streamlit_chat_message_history",
but I still get the KeyError.
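The failure mode can be shown with a plain dict standing in for `st.session_state` (this is only an illustration, not the Streamlit or LangChain code): reading a history key that was never written raises `KeyError`, so one common guard is to initialize the key before the first read:

```python
# Plain-dict stand-in for st.session_state, for illustration only.
session_state = {}

def read_history(state, key="langchain_messages"):
    return state[key]  # raises KeyError if the key was never initialized

try:
    read_history(session_state)
except KeyError:
    print("KeyError: history key was never initialized")

# Guard: initialize the key before the first read.
session_state.setdefault("langchain_messages", [])
print(read_history(session_state))  # []
```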
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.8.10 (tags/v3.8.10:3d8993a, May 3 2021, 11:48:03) [MSC v.1928 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.29
> langchain: 0.1.11
> langchain_community: 0.0.25
> langsmith: 0.1.19
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | StreamlitChatMessageHistory gives "KeyError: 'st.session_state has no key "langchain_messages" | https://api.github.com/repos/langchain-ai/langchain/issues/18790/comments | 2 | 2024-03-08T11:02:36Z | 2024-07-01T16:05:33Z | https://github.com/langchain-ai/langchain/issues/18790 | 2,175,803,143 | 18,790 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
# llms.py of opengpts (https://github.com/langchain-ai/opengpts)
@lru_cache(maxsize=1)
def get_tgi_llm():
huggingface_hub.login(os.getenv("HUGGINGFACE_TOKEN"))
llm = HuggingFaceTextGenInference(
inference_server_url="http://myinferenceserver.com/",
max_new_tokens=2048,
top_k=10,
top_p=0.95,
typical_p=0.95,
temperature=0.3,
repetition_penalty=1.03,
streaming=True,
callbacks=[StreamingStdOutCallbackHandler()],
server_kwargs={
"headers": {
"Content-Type": "application/json",
}
},
)
return ChatHuggingFace(llm=llm, model_id="HuggingFaceH4/zephyr-7b-beta") # setting model_id for using tokenizer
### Error Message and Stack Trace (if applicable)
There is no error, but the output is wrong.
# in my log
HumanMessage(content='Hello?')]
opengpts-backend | [HumanMessage(content='Hello?'), 'Hello']
opengpts-backend | [HumanMessage(content='Hello?'), 'Hello!']
opengpts-backend | [HumanMessage(content='Hello?'), 'Hello! How']
...
### Description
I'm trying to use the `langchain` library to run a TGI model in OpenGPTs.
I expect the model response to look like the following log:
HumanMessage(content='Hello?')]
opengpts-backend | [HumanMessage(content='Hello?'), AIMessageChunk(content='Hello')]
opengpts-backend | [HumanMessage(content='Hello?'), AIMessageChunk(content='Hello!')]
opengpts-backend | [HumanMessage(content='Hello?'), AIMessageChunk(content='Hello! How')]
However, messages that are not wrapped in `AIMessageChunk` are logged. As a result, the streamed tokens are not visible in the OpenGPTs chat window.
### System Info
(py312) eunhye1kim@eunhye1kim-400TEA-400SEA:~/git/forPR/opengpts/backend$ pip freeze | grep langchain
langchain==0.1.7
langchain-cli==0.0.21
langchain-community==0.0.20
langchain-core==0.1.27
langchain-experimental==0.0.37
langchain-google-vertexai==0.0.6
langchain-openai==0.0.7
langchain-robocorp==0.0.3 | Chat HuggingFace model does not send chunked replies when streaming=True. | https://api.github.com/repos/langchain-ai/langchain/issues/18782/comments | 0 | 2024-03-08T08:47:52Z | 2024-06-14T16:08:58Z | https://github.com/langchain-ai/langchain/issues/18782 | 2,175,567,686 | 18,782 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.chat_models import `ChatZhipuAI`
from langchain_core.messages import AIMessage, HumanMessage, SystemMessage
zhipuai_api_key = my_key
chat = ChatZhipuAI(
temperature=0.5,
api_key=zhipuai_api_key,
model="chatglm_turbo",
)
messages = [
AIMessage(content="Hi."),
SystemMessage(content="Your role is a poet."),
HumanMessage(content="Write a short poem about AI in four lines."),
]
response = chat(messages)
print(response.content) # Displays the AI-generated poem
```
### Error Message and Stack Trace (if applicable)
```
AttributeError: module 'zhipuai' has no attribute 'model_api'
```
### Description
I'm trying to run the code from the LangChain document "https://python.langchain.com/docs/integrations/chat/zhipuai", and I get this issue. Perhaps zhipuai updated their package, while langchain hasn't made the corresponding change.
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22635
> Python Version: 3.12.2 | packaged by conda-forge | (main, Feb 16 2024, 20:42:31) [MSC v.1937 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.30
> langchain: 0.1.11
> langchain_community: 0.0.25
> langsmith: 0.1.22
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.14 | langchain_community ChatZhipuAI API doesn't work | https://api.github.com/repos/langchain-ai/langchain/issues/18771/comments | 6 | 2024-03-08T02:03:46Z | 2024-03-11T11:57:19Z | https://github.com/langchain-ai/langchain/issues/18771 | 2,175,138,009 | 18,771 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
For the example sentence "天青色等烟雨,而我在等你。哈哈哈....."
when I use RecursiveCharacterTextSplitter to split it with the separators ["\n\n", "\n", "。"], the separator "。" ends up prepended to the beginning of the following sub-sentence, for example: [Document(page_content="。哈哈哈...")]
### Error Message and Stack Trace (if applicable)
_No response_
### Description
For the example sentence "天青色等烟雨,而我在等你。哈哈哈....."
when I use RecursiveCharacterTextSplitter to split it with the separators ["\n\n", "\n", "。"], the separator "。" ends up prepended to the beginning of the following sub-sentence, for example: [Document(page_content="。哈哈哈...")]
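The two behaviors can be illustrated with plain `re.split`: a lookahead split keeps the separator at the start of the next chunk (the behavior reported above), while a lookbehind split keeps it at the end of the preceding chunk (the usually expected sentence split). This is only an illustration, not LangChain's actual implementation:

```python
import re

text = "天青色等烟雨,而我在等你。哈哈哈....."

# Lookahead split: the "。" separator lands at the START of the next chunk,
# matching the behavior reported above.
lookahead = re.split(r"(?=。)", text)

# Lookbehind split: the "。" separator stays at the END of the preceding
# chunk, which is the usually expected sentence split.
lookbehind = re.split(r"(?<=。)", text)

print(lookahead)   # ['天青色等烟雨,而我在等你', '。哈哈哈.....']
print(lookbehind)  # ['天青色等烟雨,而我在等你。', '哈哈哈.....']
```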
### System Info
langchain 0.1.0 | RecursiveCharacterTextSplitter for Chinese sentence | https://api.github.com/repos/langchain-ai/langchain/issues/18770/comments | 1 | 2024-03-08T01:55:22Z | 2024-06-16T16:08:54Z | https://github.com/langchain-ai/langchain/issues/18770 | 2,175,131,360 | 18,770 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
N/A
### Error Message and Stack Trace (if applicable)
_No response_
### Description
There are several places using the following pattern to generate uuid for Chroma vector entries.
https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/chroma.py
ids = [str(uuid.uuid4()) for _ in uris]
ids = [str(uuid.uuid4()) for _ in texts]
ids = [str(uuid.uuid4()) for _ in texts]
However, it will create the same uuid for all entries in the output list.
Not sure whether this is intended or an issue.
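A quick standalone check of how that comprehension behaves, using only the standard library (independent of LangChain and Chroma):

```python
import uuid

texts = ["a", "b", "c"]
ids = [str(uuid.uuid4()) for _ in texts]

# uuid.uuid4() is evaluated once per loop iteration, so each element is a
# freshly generated UUID.
print(len(ids), len(set(ids)))
```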
Thanks
### System Info
N/A | Using same UUID for Chroma vector entries | https://api.github.com/repos/langchain-ai/langchain/issues/18767/comments | 1 | 2024-03-08T01:29:05Z | 2024-03-08T01:55:53Z | https://github.com/langchain-ai/langchain/issues/18767 | 2,175,110,395 | 18,767 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Reproducible issue. on_tool_end logs the string representation of the tool. All tool callbacks need to be updated to accommodate
```python
from langchain_core.tools import tool
from langchain_core.documents import Document
def foo(x):
return {
'x': 5
}
@tool
def get_docs(x: int):
"""Hello"""
return [Document(page_content='hello')]
chain = foo | get_docs
async for event in chain.astream_events({}, version='v1'):
if event['event'] == 'on_tool_end':
print(event)
``` | Update tool callbacks to send the actual response from the tool | https://api.github.com/repos/langchain-ai/langchain/issues/18760/comments | 12 | 2024-03-07T21:52:10Z | 2024-03-13T17:59:16Z | https://github.com/langchain-ai/langchain/issues/18760 | 2,174,866,952 | 18,760 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I am using a `PGVector` vector store and running the following code to retrieve relevant docs:
```python
retriever = pg_vs.as_retriever(
search_kwargs = {
"k" : 10,
"filter" : {
"year" : {"gte" : 2022}
}
}
)
```
When running `retriever.get_relevant_documents(query)` I get results that have a `year` field of less than 2022. I have tried different variations of the operator, such as `$gte` as well as other similar operators but they are all leading to the same result. The `year` column type is `int`.
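For reference, this is the behavior I expect from the filter, illustrated with a plain-Python sketch over metadata dicts (my own illustration, not PGVector's actual SQL translation):

```python
# Illustrative sketch of the intended comparison-operator semantics,
# NOT PGVector's actual implementation (which translates filters to SQL).
OPS = {
    "gte": lambda field, val: field >= val,
    "gt":  lambda field, val: field > val,
    "lte": lambda field, val: field <= val,
    "lt":  lambda field, val: field < val,
}

def matches(metadata, flt):
    for key, cond in flt.items():
        if isinstance(cond, dict):  # {"gte": 2022}-style condition
            if not all(OPS[op](metadata.get(key), val) for op, val in cond.items()):
                return False
        elif metadata.get(key) != cond:  # plain equality
            return False
    return True

docs = [{"year": 2021}, {"year": 2022}, {"year": 2023}]
kept = [d for d in docs if matches(d, {"year": {"gte": 2022}})]
print(kept)  # [{'year': 2022}, {'year': 2023}]
```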
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to do metadata filtering using a "greater-than-or-equal-to" operator `gte` with a `PGVector` vector store, using a metadata field that is of type `int`. The filter does not work and the retriever returns results that do not adhere to the filter restriction.
### System Info
```
langchain==0.1.7
langchain_community==0.0.2.0
pgvector==0.2.5
```
| PGVector advanced filtering operators (e.g. `gte`) do not seem to work | https://api.github.com/repos/langchain-ai/langchain/issues/18758/comments | 1 | 2024-03-07T21:33:27Z | 2024-07-04T16:08:03Z | https://github.com/langchain-ai/langchain/issues/18758 | 2,174,838,484 | 18,758 |
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
%pip install –upgrade –quiet langchain langchain-openai.
fails. This happens across just about every page.
Also, there are incomplete pip installs, especially in the cookbook. What is there should be dirt simple to get running.
https://python.langchain.com/docs/expression_language/cookbook/multiple_chains
### Idea or request for content:
Also, you mention Runnables all over the place. Not really any formal docs on it. Same for chains. | How about pip install instructions that are formed correctly | https://api.github.com/repos/langchain-ai/langchain/issues/18747/comments | 1 | 2024-03-07T18:23:10Z | 2024-03-07T21:29:33Z | https://github.com/langchain-ai/langchain/issues/18747 | 2,174,483,798 | 18,747 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Minimal reproduction
```python
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough, RunnableGenerator
from langchain_core.beta.runnables.context import Context
async def to_dict(input):
async for chunk in input:
yield {
'foo': chunk
}
# `model` is assumed to be any chat-model runnable, e.g. ChatOpenAI()
chain = Context.setter('input') | model | to_dict | Context.getter('input')
async for chunk in chain.astream('hello'):
print(chunk)
``` | Potential bug in Context Runnable | https://api.github.com/repos/langchain-ai/langchain/issues/18741/comments | 2 | 2024-03-07T15:38:27Z | 2024-03-08T02:23:14Z | https://github.com/langchain-ai/langchain/issues/18741 | 2,174,152,369 | 18,741 |
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
When using QAGenerateChain to generate question-answer pairs, can only one Q&A pair be generated per document?
### Idea or request for content:
When using QAGenerateChain to generate question-answer pairs, can only one Q&A pair be generated per document? If I want a document to produce a custom number of Q&A pairs, can this be achieved by modifying the code?
| When using QAGenerateChain to generate question-answer pairs, can only one Q&A pair be generated per document? | https://api.github.com/repos/langchain-ai/langchain/issues/18737/comments | 2 | 2024-03-07T14:49:48Z | 2024-03-10T14:38:09Z | https://github.com/langchain-ai/langchain/issues/18737 | 2,174,042,531 | 18,737 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
### Custom Retriever Code
```python
# Code from: https://redis.com/blog/build-ecommerce-chatbot-with-redis/
class UserRetriever(BaseRetriever):
"""
UserRetriever extends BaseRetriever and is designed for retrieving relevant documents
based on a user query using hybrid similarity search with a VectorStore.
Attributes:
- vectorstore (VectorStore): The VectorStore instance used for similarity search.
- username (str): The username associated with the documents, used for personalized retrieval.
Methods:
- clean_metadata(self, doc): Cleans the metadata of a document, extracting relevant information for display.
- get_relevant_documents(self, query): Retrieves relevant documents based on a user query using hybrid similarity search.
Example:
retriever = UserRetriever(vectorstore=vector_store, username="john_doe")
relevant_docs = retriever.get_relevant_documents("How does photosynthesis work?")
for doc in relevant_docs:
print(doc.metadata["Title"], doc.page_content)
"""
vectorstore: VectorStore
username: str
def clean_metadata(self, doc):
"""
Cleans the metadata of a document.
Parameters:
doc (object): The document object.
Returns:
dict: A dictionary containing the cleaned metadata.
"""
metadata = doc.metadata
return {
"file_id": metadata["title"],
"source": metadata["title"] + "_page=" + str(int(metadata["chunk_id"].split("_")[-1])+1),
"page_number": str(int(metadata["chunk_id"].split("_")[-1])+1),
"document_title": metadata["document_title_result"]
}
def get_relevant_documents(self, query):
"""
Retrieves relevant documents based on a given query.
Args:
query (str): The query to search for relevant documents.
Returns:
list: A list of relevant documents.
"""
docs = []
is_match_filter = ""
load_dotenv()
admins = os.getenv('ADMINS', '')
admins_list = admins.split(',')
is_admin = self.username.split('@')[0] in admins_list
os.environ["AZURESEARCH_FIELDS_ID"] = "chunk_id"
os.environ["AZURESEARCH_FIELDS_CONTENT"] = "chunk"
os.environ["AZURESEARCH_FIELDS_CONTENT_VECTOR"] = "vector"
#os.environ["AZURESEARCH_FIELDS_TAG"] = "metadata"
if not is_admin:
is_match_filter = f"search.ismatch('{self.username.split('@')[0]}', 'usernames_result')"
for doc in self.vectorstore.similarity_search(query, search_type="semantic_hybrid", k=NUMBER_OF_CHUNKS_TO_RETURN, filters=is_match_filter):
cleaned_metadata = self.clean_metadata(doc)
docs.append(Document(
page_content=doc.page_content,
metadata=cleaned_metadata))
print("\n\n----------------DOCUMENTS RETRIEVED------------------\n\n", docs)
return docs
```
### setup langchain chain,llm
```python
chat = AzureChatOpenAI(
azure_endpoint=SHD_AZURE_OPENAI_ENDPOINT,
openai_api_version="2023-03-15-preview",
deployment_name= POL_OPENAI_EMBEDDING_DEPLOYMENT_NAME,
openai_api_key=SHD_OPENAI_KEY ,
openai_api_type="Azure",
model_name=POL_OPENAI_GPT_MODEL_NAME,
streaming=True,
callbacks=[ChainStreamHandler(g)], # Set ChainStreamHandler as callback
temperature=0)
# Define system and human message prompts
messages = [
SystemMessagePromptTemplate.from_template(ANSWER_PROMPT),
HumanMessagePromptTemplate.from_template("{question} Please answer in html format"),
]
# Set up embeddings, vector store, chat prompt, retriever, memory, and chain
embeddings = setup_embeddings()
vector_store = setup_vector_store(embeddings)
chat_prompt = ChatPromptTemplate.from_messages(messages)
retriever = UserRetriever(vectorstore=vector_store, username=username)
memory = setup_memory()
#memory.save_context(chat_history)
chain = ConversationalRetrievalChain.from_llm(chat,
retriever=retriever,
memory=memory,
verbose=False,
combine_docs_chain_kwargs={
"prompt": chat_prompt,
"document_prompt": PromptTemplate(
template=DOCUMENT_PROMPT,
input_variables=["page_content", "source"]
)
}
)
```
### My fields

### Error Message and Stack Trace (if applicable)
```
Exception has occurred: KeyError
'metadata'
```
The error is thown in this line:
```python
for doc in self.vectorstore.similarity_search(query, search_type="semantic_hybrid", k=NUMBER_OF_CHUNKS_TO_RETURN, filters=is_match_filter):
```
When I dig deep in the langchain code, I found this code:
```python
docs = [
(
Document(
page_content=result.pop(FIELDS_CONTENT),
metadata={
**(
json.loads(result[FIELDS_METADATA])
if FIELDS_METADATA in result
else {
k: v
for k, v in result.items()
if k != FIELDS_CONTENT_VECTOR
}
),
**{
"captions": {
"text": result.get("@search.captions", [{}])[0].text,
"highlights": result.get("@search.captions", [{}])[
0
].highlights,
}
if result.get("@search.captions")
else {},
"answers": semantic_answers_dict.get(
json.loads(result["metadata"]).get("key"),
"",
),
},
},
),
```
As you can see in the last line, it's trying to read a `metadata` field from the search results, which we don't have, since our index is customized with our own fields.
I am blaming this line:
https://github.com/langchain-ai/langchain/blob/ced5e7bae790cd9ec4e5374f5d070d9f23d6457b/libs/community/langchain_community/vectorstores/azuresearch.py#L607
@Skar0, not sure if this is really a bug, or whether I missed something in the documentation.
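One possible direction for a fix, sketched here as plain-dict logic and not the actual LangChain implementation, would be to stop assuming a `metadata` field and fall back to the index's own key field when it is absent:

```python
import json

def answer_key(result, semantic_answers_dict, fields_metadata="metadata"):
    # Hypothetical fallback: only parse the metadata field when it exists;
    # otherwise use the document's own id/key field (here: chunk_id).
    if fields_metadata in result:
        key = json.loads(result[fields_metadata]).get("key")
    else:
        key = result.get("chunk_id") or result.get("key")
    return semantic_answers_dict.get(key, "")

result = {"chunk_id": "doc_0", "chunk": "some text", "title": "t"}
print(answer_key(result, {"doc_0": "an answer"}))  # an answer
```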
### Description
I am trying to use LangChain with Azure OpenAI and Azure AI Search as the vector store, together with a custom retriever. I don't have a `metadata` field in my index.
This was working in a previous project with azure-search-documents==11.4.b09,
but in a new project I am trying azure-search-documents==11.4.0
### System Info
langchain==0.1.7
langchain-community==0.0.20
langchain-core==0.1.23
langchain-openai==0.0.6
langchainhub==0.1.14 | Azure AI Search, metadata field is required and hardcoded in langchain community | https://api.github.com/repos/langchain-ai/langchain/issues/18731/comments | 6 | 2024-03-07T11:19:46Z | 2024-07-01T16:05:24Z | https://github.com/langchain-ai/langchain/issues/18731 | 2,173,626,443 | 18,731 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
import os

from langchain_core.documents import Document
from langchain_pinecone import PineconeVectorStore
from langchain_openai import AzureOpenAIEmbeddings, OpenAIEmbeddings
from langchain_text_splitters import CharacterTextSplitter
# os.environ["PINECONE_API_KEY"] = "xxxxxxxxxxxxx-xxxxx-4c9f-99a0-42b2e5922ba0"
# os.environ["PINECONE_INDEX_NAME"] = "zzzzzzzzz"
doc = Document("hello world")
raw_documents: list[Document] = [doc]
text_splitter = CharacterTextSplitter(
chunk_size=1000, chunk_overlap=0)
docs = text_splitter.split_documents(raw_documents)
embeddings = OpenAIEmbeddings()
docsearch = PineconeVectorStore.from_documents(
docs, embeddings, index_name=os.environ["PINECONE_INDEX_NAME"])
```
throws an exception saying:
> Exception has occurred: ValidationError
1 validation error for OpenAIEmbeddings
__root__
If you are using Azure, please use the `AzureOpenAIEmbeddings` class. (type=value_error)
File "D:\src\lang-serve-fastapi-v3-proxy-ai\code\test2_index.py", line 48, in <module>
embeddings = OpenAIEmbeddings()
pydantic.error_wrappers.ValidationError: 1 validation error for OpenAIEmbeddings
__root__
If you are using Azure, please use the `AzureOpenAIEmbeddings` class. (type=value_error)]
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm trying to index documents into a Pinecone vector store using OpenAI embeddings.
### System Info
langchain==0.1.11
langchain-community==0.0.27
langchain-core==0.1.30
langchain-openai==0.0.8
langchain-pinecone==0.0.3
langchain-text-splitters==0.0.1
langchainhub==0.1.14
langserve==0.0.41
langsmith==0.1.22 | using OpenAIEmbeddings popup exception say please use AzureOpenAIEmbeddings | https://api.github.com/repos/langchain-ai/langchain/issues/18727/comments | 2 | 2024-03-07T08:50:48Z | 2024-03-07T11:45:10Z | https://github.com/langchain-ai/langchain/issues/18727 | 2,173,316,212 | 18,727 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
my vLLM api_server is:
```py
python -m vllm.entrypoints.openai.api_server \
--model=/usr/local/models/Qwen/Qwen1.5-7B-Chat \
--trust-remote-code \
--served-model-name qwmiic \
--host 127.0.0.1 \
--port 9999 \
--dtype=half
```
This is my demo
```py
from langchain_community.tools import ArxivQueryRun
from langchain_core.utils.function_calling import convert_to_openai_function
from langchain_openai import ChatOpenAI

TOOLS = [ArxivQueryRun()]
functions = [convert_to_openai_function(t) for t in TOOLS]
print(functions)
inference_server_url = "http://127.0.0.1:9999/v1"
llm = ChatOpenAI(
model="qwmiic",
openai_api_key="EMPTY",
openai_api_base=inference_server_url,
max_tokens=512,
temperature=1,
)
llm.invoke("What's the paper 1605.08386 about?",functions=functions)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Please see my demo. After verification, the schemas produced by the `functions` variable follow the standard OpenAI format, and calling Arxiv directly as `arxiv = ArxivQueryRun(); arxiv.run("1605.08386")` returns the correct article content. **However, when combined with the LLM or a chain, even with a correct binding such as `llm.bind(functions=functions)` or a direct call like `llm.invoke("What's the paper 1605.08386 about?", functions=functions)` as above, function/tool calls are never triggered.**
### System Info
My Python version is 3.10.
My platform is Linux (CentOS 8).
aiohttp==3.9.3
aioprometheus==23.12.0
aiosignal==1.3.1
annotated-types==0.6.0
anyio==4.2.0
arxiv==2.1.0
async-timeout==4.0.3
attrs==23.2.0
certifi==2024.2.2
charset-normalizer==3.3.2
click==8.1.7
dataclasses-json==0.6.4
distro==1.9.0
exceptiongroup==1.2.0
fastapi==0.109.2
feedparser==6.0.10
filelock==3.13.1
frozenlist==1.4.1
fsspec==2024.2.0
greenlet==3.0.3
h11==0.14.0
httpcore==1.0.4
httptools==0.6.1
httpx==0.27.0
huggingface-hub==0.20.3
idna==3.6
Jinja2==3.1.3
jsonpatch==1.33
jsonpointer==2.4
jsonschema==4.21.1
jsonschema-specifications==2023.12.1
langchain==0.1.5
langchain-community==0.0.24
langchain-core==0.1.28
langchain-openai==0.0.8
langchainhub==0.1.14
langsmith==0.1.10
MarkupSafe==2.1.5
marshmallow==3.20.2
mpmath==1.3.0
msgpack==1.0.7
multidict==6.0.5
mypy-extensions==1.0.0
networkx==3.2.1
ninja==1.11.1.1
numpy==1.26.3
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu12==2.18.1
nvidia-nvjitlink-cu12==12.3.101
nvidia-nvtx-cu12==12.1.105
openai==1.13.3
orjson==3.9.15
packaging==23.2
protobuf==4.25.2
psutil==5.9.8
pydantic==2.6.0
pydantic_core==2.16.1
pynvml==11.5.0
python-dotenv==1.0.1
PyYAML==6.0.1
quantile-python==1.1
ray==2.9.1
referencing==0.33.0
regex==2023.12.25
requests==2.31.0
rpds-py==0.17.1
safetensors==0.4.2
sentencepiece==0.1.99
sgmllib3k==1.0.0
sniffio==1.3.0
SQLAlchemy==2.0.25
starlette==0.36.3
sympy==1.12
tenacity==8.2.3
tiktoken==0.6.0
tokenizer==3.4.3
tokenizers==0.15.1
torch==2.1.2
tqdm==4.66.1
transformers==4.37.2
triton==2.1.0
types-requests==2.31.0.20240218
typing-inspect==0.9.0
typing_extensions==4.9.0
urllib3==2.2.0
uvicorn==0.27.0.post1
uvloop==0.19.0
vllm==0.3.0
watchfiles==0.21.0
websockets==12.0
xformers==0.0.23.post1
yarl==1.9.4
| functions call or tools call can not be trigged by LLM using vLLM Chat(Qwen1.5) ? | https://api.github.com/repos/langchain-ai/langchain/issues/18724/comments | 8 | 2024-03-07T08:19:33Z | 2024-07-23T16:07:56Z | https://github.com/langchain-ai/langchain/issues/18724 | 2,173,259,610 | 18,724 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from __future__ import annotations
from typing import List, cast
from langchain_core.embeddings import Embeddings
import math
import numpy as np
# create a `UserDefinedEmbeddings` class that expects each document's text
# to contain an embedding vector in Python format
class UserDefinedEmbeddings(Embeddings):
def __init__(
self,
normalize: bool = False
) -> None:
"""langchain Embedding class that allows vectors to be manually
specified via each document's text.
Args:
normalize: If True, normalize all vectors to a unit length of 1.0"""
super().__init__()
self.normalize = normalize
def embed_documents(self, texts: List[str]) -> List[List[float]]:
# evaluate texts into Python objects
vectors = [eval(text) for text in texts]
# verify we have vectors of consistent dimension
if not all(isinstance(vector, list) for vector in vectors):
raise ValueError('All vectors must be lists')
if any(len(vector) != len(vectors[0]) for vector in vectors):
raise ValueError('All vectors must have the same dimension')
if not all((-1 <= value <= 1) for vector in vectors for value in vector):
raise ValueError('All vectors must be between -1.0 and +1.0')
def _normalize_vector(vector):
"""normalize the input vector to a unit length of 1.0"""
magnitude = math.sqrt(np.sum(np.square(vector)))
normalized_vector = [element / magnitude for element in vector]
return normalized_vector
if self.normalize:
vectors = [_normalize_vector(vector) for vector in vectors]
# return them
return cast(List[List[float]], vectors)
def embed_query(self, text: str) -> List[float]:
return self.embed_documents([text])[0]
# create a UserDefinedEmbeddings embedder
userdefined_embeddings = UserDefinedEmbeddings(normalize=True)
# create "documents" that will have hardcoded 2-dimension embedding vectors
text_documents = [
'[ 1, 0]',
'[ 1, 1]',
'[ 0, 1]',
'[-1, 1]',
'[-1, 0]',
'[-1,-1]',
'[ 0,-1]',
'[ 1,-1]'
]
# compare those "document" vectors against this 2-dimensional "query" vector
query = '[ 1, 0]'
# print the default relevance scores for FAISS
import langchain_community.vectorstores
faiss_vectorstore = langchain_community.vectorstores.FAISS.from_texts(text_documents, userdefined_embeddings)
results = faiss_vectorstore.similarity_search_with_relevance_scores(query, k=20)
print("FAISS relevance scores:")
for result in results:
print(f" Relevance of {query} to {result[0].page_content} is {result[1]:.4f}")
print("")
# print the default relevance scores for Chroma DB
import langchain_community.vectorstores
chroma_vectorstore = langchain_community.vectorstores.Chroma.from_texts(text_documents, userdefined_embeddings)
results = chroma_vectorstore.similarity_search_with_relevance_scores(query, k=20)
print("Chroma DB relevance scores:")
for result in results:
print(f" Relevance of {query} to {result[0].page_content} is {result[1]:.4f}")
```
### Error Message and Stack Trace (if applicable)
```
FAISS relevance scores:
Relevance of [ 1, 0] to [ 1, 0] is 1.0000
Relevance of [ 1, 0] to [ 1, 1] is 0.5858
Relevance of [ 1, 0] to [ 1,-1] is 0.5858
Relevance of [ 1, 0] to [ 0, 1] is -0.4142
Relevance of [ 1, 0] to [ 0,-1] is -0.4142
Relevance of [ 1, 0] to [-1, 1] is -1.4142
Relevance of [ 1, 0] to [-1,-1] is -1.4142
Relevance of [ 1, 0] to [-1, 0] is -1.8284
Chroma DB relevance scores:
Relevance of [ 1, 0] to [ 1, 0] is 1.0000
Relevance of [ 1, 0] to [ 1, 1] is 0.5858
Relevance of [ 1, 0] to [ 1,-1] is 0.5858
Relevance of [ 1, 0] to [ 0, 1] is -0.4142
Relevance of [ 1, 0] to [ 0,-1] is -0.4142
Relevance of [ 1, 0] to [-1, 1] is -1.4142
Relevance of [ 1, 0] to [-1,-1] is -1.4142
Relevance of [ 1, 0] to [-1, 0] is -1.8284
```
`/.../langchain_core/vectorstores.py:331: UserWarning: Relevance scores must be between 0 and 1, got [(Document(page_content='[ 1, 0]'), 1.0), (Document(page_content='[ 1, 1]'), 0.5857864626594824), (Document(page_content='[ 1,-1]'), 0.5857864626594824), (Document(page_content='[ 0, 1]'), -0.4142135623730949), (Document(page_content='[ 0,-1]'), -0.4142135623730949), (Document(page_content='[-1, 1]'), -1.414213629552521), (Document(page_content='[-1,-1]'), -1.414213629552521), (Document(page_content='[-1, 0]'), -1.8284271247461898)]
warnings.warn(`
### Description
In the Langchain FAISS and Chroma DB classes, the `DistanceStrategy` is `EUCLIDEAN_DISTANCE`. However, they actually return the *square* of the distance, as described here:
https://github.com/facebookresearch/faiss/wiki/MetricType-and-distances
As a result, relevance scores are computed incorrectly and `similarity_search_with_relevance_scores()` can return negative values unless a user-defined `relevance_score_fn` is given. This is not a desirable out-of-the-box experience.
Perhaps a `EUCLIDEAN_DISTANCE_SQUARED` distance strategy is also needed, such as:
```python
def relevance_score_fn_squared_distance(self, distance: float) -> float:
"""
Remap a distance-squared value in the range of [0, 4] to a relevance score
in the range of [1, 0]
"""
return 1.0 - math.sqrt(distance) / 2
```
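As a standalone sanity check, the proposed mapping can be verified against the extreme squared distances for unit vectors (for unit vectors, the squared L2 distance lies in [0, 4]):

```python
import math

def relevance_score_fn_squared_distance(distance: float) -> float:
    """Map a squared L2 distance in [0, 4] (unit vectors) to a relevance score in [1, 0]."""
    return 1.0 - math.sqrt(distance) / 2

# Identical unit vectors: squared distance 0 -> relevance 1.0
print(relevance_score_fn_squared_distance(0.0))
# Opposite unit vectors: squared distance 4 -> relevance 0.0
print(relevance_score_fn_squared_distance(4.0))
# Orthogonal unit vectors: squared distance 2 -> roughly 0.293
print(relevance_score_fn_squared_distance(2.0))
```

Until such a strategy exists, the same function can be supplied via the user-settable `relevance_score_fn` hook when constructing the vector store, which keeps the scores inside [0, 1].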
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Wed Aug 10 16:21:17 UTC 2022
> Python Version: 3.9.2 (default, Apr 30 2021, 04:38:51)
[GCC 8.2.0]
Package Information
-------------------
> langchain_core: 0.1.30
> langchain: 0.1.11
> langchain_community: 0.0.26
> langsmith: 0.1.22
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | FAISS and Chroma DB use a `DistanceStrategy` of "distance" instead of "distance-squared", which results in negative relevance scores | https://api.github.com/repos/langchain-ai/langchain/issues/18709/comments | 1 | 2024-03-07T01:07:52Z | 2024-08-02T16:07:18Z | https://github.com/langchain-ai/langchain/issues/18709 | 2,172,738,864 | 18,709 |
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Pubmed Tool breaks on the documented example.
https://python.langchain.com/docs/integrations/tools/pubmed
py", line 240, in invoke
return self.run(
callback_manager = CallbackManager.configure(
old_debug = langchain.debug
module 'langchain' has no attribute 'debug'
### Idea or request for content:
_No response_ | DOC: PubMed Tool example breaks | https://api.github.com/repos/langchain-ai/langchain/issues/18704/comments | 0 | 2024-03-06T22:28:06Z | 2024-03-07T01:58:29Z | https://github.com/langchain-ai/langchain/issues/18704 | 2,172,553,258 | 18,704 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
;
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I have a schema that contains regular tables plus metadata tables (helper tables connected to the regular ones).
I want to exclude all of the metadata tables so they are not loaded into the SQLAlchemy metadata.
Their names follow a pattern by design, so it would be convenient to exclude them with a regular expression.
**Note:**
1. I think `ignore_tables` should accept an optional list containing both plain strings and compiled regular expressions:
`Optional[List[str | re.Pattern]]`
2. There may be a use case for doing the same with `include_tables`; it is worth considering.
3. Wildcard characters (as in SQL `LIKE`) could also be considered besides regexes.
**Workaround** (obvious but wasteful): load all tables from the schema into `SQLDatabase`, iterate over `db.get_usable_table_names()` and filter out the metadata tables, then create a new `SQLDatabase` with `include_tables` set to the filtered list. This wastes time, memory, and I/O.
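The filtering step of that workaround can be sketched with the standard library alone (the table names and the `_meta` suffix pattern below are invented for illustration):

```python
import re

def filter_tables(table_names, ignore_patterns):
    """Drop any table whose name fully matches one of the compiled regex patterns."""
    return [
        name for name in table_names
        if not any(p.fullmatch(name) for p in ignore_patterns)
    ]

tables = ["orders", "orders_meta", "users", "users_meta", "invoices"]
ignored = [re.compile(r".*_meta")]
print(filter_tables(tables, ignored))  # ['orders', 'users', 'invoices']
```

Accepting such patterns directly in `ignore_tables` would let `SQLDatabase` apply the same check before reflecting the tables, instead of reflecting everything and rebuilding.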
### System Info
bash-4.2# python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
> Python Version: 3.10.13 (main, Dec 4 2023, 13:30:46) [GCC 7.3.1 20180712 (Red Hat 7.3.1-17)]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.10
> langchain_community: 0.0.25
> langsmith: 0.1.10
> langchain_experimental: 0.0.50
> langchain_openai: 0.0.2.post1
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | In SQLDatabase change ignore_tables to recieve regex | https://api.github.com/repos/langchain-ai/langchain/issues/18688/comments | 1 | 2024-03-06T18:27:54Z | 2024-06-14T16:08:47Z | https://github.com/langchain-ai/langchain/issues/18688 | 2,172,162,318 | 18,688 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from __future__ import annotations
import typing as t
import streamlit as st
from langchain.callbacks.base import BaseCallbackHandler
from langchain_community.chat_message_histories import StreamlitChatMessageHistory
from langchain_core.messages import AIMessage
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import AzureChatOpenAI
class StreamHandler(BaseCallbackHandler):
def __init__(
self,
container,
initial_text: str = "",
) -> None:
self.container = container
self.text = initial_text
def on_llm_start(
self,
serialized: t.Dict[str, t.Any],
prompts: t.List[str],
**kwargs: t.Any,
) -> t.Any:
formatted_prompts = "\n".join(prompts)
print(f"Prompt:\n{formatted_prompts}")
def on_llm_new_token(
self,
token: str,
**kwargs,
) -> None:
self.text += token
self.container.markdown(self.text + "▌")
def on_llm_end(
self,
response,
**kwargs,
) -> None:
self.container.markdown(self.text)
st.toast("Ready!", icon="🥞")
memory = StreamlitChatMessageHistory(key="langchain_messages")
if len(memory.messages) == 0:
memory.clear()
memory.add_message(AIMessage("How can I help you?"))
prompt = ChatPromptTemplate.from_messages(
[
("system", ""),
MessagesPlaceholder(variable_name="history"),
("human", "{input}"),
],
)
llm = AzureChatOpenAI(
azure_endpoint="https://foo.openai.azure.com/",
deployment_name="GPT-4-NEW",
openai_api_version="2024-02-15-preview",
openai_api_key="bar",
streaming=True,
)
chain = prompt | llm
chain_with_history = RunnableWithMessageHistory(
chain,
lambda session_id: memory,
input_messages_key="input",
history_messages_key="history",
)
for msg in memory.messages:
st.chat_message(msg.type).write(msg.content)
if user_input := st.chat_input():
st.chat_message("human").write(user_input)
with st.chat_message("ai"):
stream_handler = StreamHandler(st.empty())
chain_with_history.invoke(
{"input": user_input},
config={
"callbacks": [stream_handler],
"configurable": {"session_id": "any"},
},
)
```
### Error Message and Stack Trace (if applicable)
```
2024-03-07 01:23:00.591 Thread 'ThreadPoolExecutor-1_0': missing ScriptRunContext
2024-03-07 01:23:00.592 Uncaught app exception
Traceback (most recent call last):
File "/Users/qux/.venv/lib/python3.10/site-packages/streamlit/runtime/state/session_state.py", line 398, in __getitem__
return self._getitem(widget_id, key)
File "/Users/qux/.venv/lib/python3.10/site-packages/streamlit/runtime/state/session_state.py", line 443, in _getitem
raise KeyError
KeyError
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/qux/.venv/lib/python3.10/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 535, in _run_script
exec(code, module.__dict__)
File "/Users/qux/demo.py", line 88, in <module>
chain_with_history.invoke(
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4069, in invoke
return self.bound.invoke(
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4069, in invoke
return self.bound.invoke(
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2075, in invoke
input = step.invoke(
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4069, in invoke
return self.bound.invoke(
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/passthrough.py", line 419, in invoke
return self._call_with_config(self._invoke, input, config, **kwargs)
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1262, in _call_with_config
context.run(
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/passthrough.py", line 406, in _invoke
**self.mapper.invoke(
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2712, in invoke
output = {key: future.result() for key, future in zip(steps, futures)}
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2712, in <dictcomp>
output = {key: future.result() for key, future in zip(steps, futures)}
File "/Users/jenrey/.pyenv/versions/3.10.12/lib/python3.10/concurrent/futures/_base.py", line 451, in result
return self.__get_result()
File "/Users/jenrey/.pyenv/versions/3.10.12/lib/python3.10/concurrent/futures/_base.py", line 403, in __get_result
raise self._exception
File "/Users/jenrey/.pyenv/versions/3.10.12/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4069, in invoke
return self.bound.invoke(
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3523, in invoke
return self._call_with_config(
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1262, in _call_with_config
context.run(
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3397, in _invoke
output = call_func_with_variable_args(
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 326, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_core/runnables/history.py", line 409, in _enter_history
return hist.messages.copy()
File "/Users/qux/.venv/lib/python3.10/site-packages/langchain_community/chat_message_histories/streamlit.py", line 32, in messages
return st.session_state[self._key]
File "/Users/qux/.venv/lib/python3.10/site-packages/streamlit/runtime/state/session_state_proxy.py", line 90, in __getitem__
return get_session_state()[key]
File "/Users/qux/.venv/lib/python3.10/site-packages/streamlit/runtime/state/safe_session_state.py", line 91, in __getitem__
return self._state[key]
File "/Users/qux/.venv/lib/python3.10/site-packages/streamlit/runtime/state/session_state.py", line 400, in __getitem__
raise KeyError(_missing_key_error_message(key))
KeyError: 'st.session_state has no key "langchain_messages". Did you forget to initialize it? More info: https://docs.streamlit.io/library/advanced-features/session-state#initialization'
```
### Description
* I use `streamlit` for front-end page rendering, and I want to use `StreamlitChatMessageHistory` to store the chat history.
* I installed `langchain==0.1.9` in February and ran the above code without any problems. When I reinstalled `langchain==0.1.9` in March, the error above appeared!
### System Info
* macOS 12.7.3
* Python 3.10.13
* langchain==0.1.11
* langchain-openai==0.0.6
* streamlit==1.31.1
# Supplement the test results (March)
* In order to make the testing more rigorous and reduce the workload of project developers, I have done extra tests.
* I tested with the same code in `langchain ==0.1.0`, `streamlit==1.30.0`, and the error is as follows:
```
2024-03-07 02:05:17.076 Uncaught app exception
Traceback (most recent call last):
File "/Users/bar/.venv/lib/python3.11/site-packages/streamlit/runtime/scriptrunner/script_runner.py", line 535, in _run_script
exec(code, module.__dict__)
File "/Users/bar/demo.py", line 53, in <module>
memory.add_message(AIMessage("How can I help you?"))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Serializable.__init__() takes 1 positional argument but 2 were given
```
To solve this non-critical error, I replaced the code of the line that reported the error, as shown below.
```python
memory.add_ai_message("Please describe your requirements")
```
At this point, everything is fine!
* Supplement, I also tested in `langchain==0.1.1`, and it still runs smoothly.
### Everything is normal.
- `langchain==0.1.7`
- `streamlit==1.31.1`
- `Python 3.10.12`
The code on the first floor runs smoothly and achieves the final expected effect. There have been no problems at all.
### An error occurred
* `langchain==0.1.8` and `langchain==0.1.9` and `langchain==0.1.10`
* `streamlit==1.31.1`
* `Python 3.10.12`
The program has encountered an error, the error message is as previously shown.
## Conjectures
* Based on the multiple tests above, I suspect that starting from `langchain==0.1.8`, the source code of langchain has been updated, leading to the occurrence of this problem.
* Seeing this, one would infer that the problem starts from `langchain==0.1.8`. However, it's not that simple; I give the final conclusion at the end.
# Final conclusion (2024.03.08)
```yaml
name : langchain
version : 0.1.7
dependencies:
- langchain-community >=0.0.20,<0.1
- langchain-core >=0.1.22,<0.2
name : langchain
version : 0.1.9
dependencies:
- langchain-community >=0.0.21,<0.1
- langchain-core >=0.1.26,<0.2
name : langchain
version : 0.1.10
dependencies:
- langchain-community >=0.0.25,<0.1
- langchain-core >=0.1.28,<0.2
name : langchain
version : 0.1.11
dependencies:
- langchain-community >=0.0.25,<0.1
- langchain-core >=0.1.29,<0.2
```
* As shown above, this is about the dependency of `langchain`.
* After repeated testing, I found that the real variable is the resolved dependencies. In February, installing `langchain==0.1.9` pulled in the then-latest `langchain-core==0.1.27`; `langchain-core==0.1.28` was released in March, so reinstalling `langchain==0.1.9` in March resolved to a newer `langchain-core`, and the error appeared.
* I then read through the `langchain-core` changes; they were small, and I found nothing wrong there. At this point I was stumped...
* I began to shift my focus towards `langchain-community`, and in the end, I found:
```toml
langchain-community = "0.0.25"
langchain-openai = "0.0.6"
langchain = "0.1.10"
```
```toml
langchain-community = "0.0.25"
langchain-openai = "0.0.6"
langchain = "0.1.9"
```
In these two environments, the program will report an error.
In the following environment, the program will not report an error:
```toml
langchain-community = "0.0.24"
langchain-openai = "0.0.6"
langchain = "0.1.9"
```
## Conclusion
* Conjecture 1: An error occurred in the test code when `langchain-core==0.1.28`. Please check if `langchain-core==0.1.28` has affected the `StreamlitChatMessageHistory` class.
* Conjecture 2 (Most likely): `langchain-community==0.0.25` was released on Mar 2, 2024, so when I reinstalled `langchain==0.1.9` in March, the latest `langchain-community==0.0.25` at that time would be installed, and some changes in `langchain-community==0.0.25` affected the normal execution of the program.
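If conjecture 2 is right, pinning the transitive dependencies is a stopgap until a fix lands. The versions below are the last known-good combination from my tests; pinning `langchain-core==0.1.27` (the February resolution) is an assumption on my part:

```shell
pip install "langchain==0.1.9" "langchain-community==0.0.24" "langchain-core==0.1.27"
```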
Thank you
| ‼️`langchain-community = "0.0.25"` Milestones: missing ScriptRunContext | https://api.github.com/repos/langchain-ai/langchain/issues/18684/comments | 7 | 2024-03-06T17:47:08Z | 2024-03-10T06:16:24Z | https://github.com/langchain-ai/langchain/issues/18684 | 2,172,093,703 | 18,684 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
!pip -q install langchain openai tiktoken pypdf InstructorEmbedding faiss-cpu
!pip install sentence-transformers==2.2.2

from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.llms import OpenAI
from langchain.chains import RetrievalQA
from langchain.document_loaders import PyPDFLoader
from langchain.document_loaders import DirectoryLoader
from InstructorEmbedding import INSTRUCTOR
from langchain.embeddings import HuggingFaceInstructEmbeddings

from google.colab import drive
drive.mount('/content/gdrive', force_remount=True)
root_dir = "/content/gdrive/MyDrive"
path = "/content/gdrive/MyDrive/Documents/inek"

loader = DirectoryLoader(path, glob="./*.pdf", loader_cls=PyPDFLoader)
documents = loader.load()

text_splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,
    chunk_overlap=200)
texts = text_splitter.split_documents(documents)
len(texts)
texts[3]

import pickle
import faiss
from langchain.vectorstores import FAISS
from langchain.embeddings import HuggingFaceInstructEmbeddings

instructor_embeddings = HuggingFaceInstructEmbeddings(model_name="asafaya/kanarya-750m")

db_instructEmbedd = FAISS.from_documents(texts, instructor_embeddings)
retriever = db_instructEmbedd.as_retriever(search_kwargs={"k": 3})
docs = retriever.get_relevant_documents("Ketosiz hastalığı nedir kimlerde görülür?")

api_key = "sk-ac2d3761ef1e42e2aa18c4fc9cb381ec"
base_url = "https://api.deepseek.com/v1"
client = OpenAI(api_key=api_key, base_url=base_url)

qa_chain_deepseek = RetrievalQA.from_chain_type(
    llm=client,
    chain_type="stuff",
    retriever=retriever,
    return_source_documents=True
)

query = "Ketosiz hastalığı nedir kimlerde görülür?"
llm_response = qa_chain_deepseek(query)
print(llm_response["result"])
```
### Error Message and Stack Trace (if applicable)
NotFoundError Traceback (most recent call last)
[<ipython-input-26-792341640681>](https://localhost:8080/#) in <cell line: 16>()
14
15 query = "Ketosiz hastalığı nedir kimlerde görülür?"
---> 16 llm_response = qa_chain_deepseek(query)
17 print(llm_response["result"])
30 frames
[/usr/local/lib/python3.10/dist-packages/openai/_base_client.py](https://localhost:8080/#) in _request(self, cast_to, options, remaining_retries, stream, stream_cls)
978
979 log.debug("Re-raising status error")
--> 980 raise self._make_status_error_from_response(err.response) from None
981
982 return self._process_response(
NotFoundError: Error code: 404 - {'detail': 'Not Found'}
### Description
* I'm getting `NotFoundError: Error code: 404 - {'detail': 'Not Found'}` when invoking the RetrievalQA chain.
* Please help me understand why I'm getting this error.
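One likely cause, offered as a hypothesis: the snippet builds `client = OpenAI(api_key=..., base_url=...)` and passes it as the `llm`, but DeepSeek's OpenAI-compatible endpoint serves chat completions, so a legacy-completions request (or a raw SDK client where a LangChain model is expected) can come back as 404. A hedged sketch wiring the endpoint through `ChatOpenAI` instead (the `deepseek-chat` model name is an assumption taken from DeepSeek's public docs; verify against your account):

```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="deepseek-chat",  # assumption: served model name per DeepSeek docs
    api_key=api_key,
    base_url="https://api.deepseek.com/v1",
)

qa_chain_deepseek = RetrievalQA.from_chain_type(
    llm=llm,
    chain_type="stuff",
    retriever=retriever,
    return_source_documents=True,
)
```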
### System Info
* | NotFoundError: Error code: 404 - {'detail': 'Not Found'} | https://api.github.com/repos/langchain-ai/langchain/issues/18652/comments | 2 | 2024-03-06T10:48:25Z | 2024-06-20T12:12:06Z | https://github.com/langchain-ai/langchain/issues/18652 | 2,171,198,924 | 18,652 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
When attempting to use the ConversationSummaryBufferMemory.save_context method in the LangChain library, I encountered a NotImplementedError. This error occurs specifically when calling get_num_tokens_from_messages() for the model gpt-35-turbo-16k. I am trying to integrate the AzureChatOpenAI model with a custom memory buffer to manage conversation states but faced this issue during implementation.
To reproduce the error
```
from langchain.chat_models.azure_openai import AzureChatOpenAI
from langchain.memory import ConversationSummaryBufferMemory
llm = AzureChatOpenAI(
openai_api_version=constant.OPENAI_API_VERSION,
azure_endpoint=constant.OPENAI_API_BASE,
api_key=constant.OPENAI_API_KEY,
model=constant.OPENAI_GPT_MODEL_NAME,
azure_deployment=constant.OPENAI_GPT_DEPLOYMENT_NAME,
temperature=0,
)
memory = ConversationSummaryBufferMemory(llm=llm, max_token_limit=10)
memory.save_context({"input": "hi"}, {"output": "whats up"})
```
### Error Message and Stack Trace (if applicable)
NotImplementedError: get_num_tokens_from_messages() is not presently implemented for model gpt-35-turbo-16k.
See https://github.com/openai/openai-python/blob/main/chatml.md for information on how messages are converted to tokens.
### Description
Initialize AzureChatOpenAI with the model gpt-35-turbo-16k.
Create an instance of ConversationSummaryBufferMemory with a small max_token_limit (e.g., 10 tokens).
Attempt to save a context using memory.save_context({"input": "hi"}, {"output": "whats up"}).
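A possible workaround, assuming your LangChain version exposes the `tiktoken_model_name` field on `AzureChatOpenAI` (it is an attribute of `ChatOpenAI` in recent releases): point the token counter at an OpenAI model name that tiktoken recognizes, since the Azure-style name `gpt-35-turbo-16k` is not in its mapping. A sketch reusing the constants from the snippet above:

```python
llm = AzureChatOpenAI(
    openai_api_version=constant.OPENAI_API_VERSION,
    azure_endpoint=constant.OPENAI_API_BASE,
    api_key=constant.OPENAI_API_KEY,
    model=constant.OPENAI_GPT_MODEL_NAME,
    azure_deployment=constant.OPENAI_GPT_DEPLOYMENT_NAME,
    temperature=0,
    # Map the Azure deployment name onto an OpenAI model tiktoken knows about.
    tiktoken_model_name="gpt-3.5-turbo-16k",
)
```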
### System Info
- langchain==0.1.11
- Python version: 3.11
- Operating System: Windows
| NotImplementedError in ConversationSummaryBufferMemory.save_context with get_num_tokens_from_messages | https://api.github.com/repos/langchain-ai/langchain/issues/18650/comments | 2 | 2024-03-06T10:36:38Z | 2024-06-27T16:07:49Z | https://github.com/langchain-ai/langchain/issues/18650 | 2,171,176,379 | 18,650 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.chat_models import ChatAnthropic
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True,output_key="answer")
callback = StreamHandler(st.empty())
llm = ChatAnthropic(temperature=0.5, max_tokens=1000, model_name= "claude-3-opus-20240229",streaming=True,callbacks=[callback])
qa = ConversationalRetrievalChain.from_llm(llm=llm, retriever=vector_store.as_retriever(),memory=memory,verbose=True)
result = qa.invoke({"question": prompt, "chat_history": [(message["role"], message["content"]) for message in st.session_state.messages]})
full_response = result["answer"]
### Error Message and Stack Trace (if applicable)
anthropic error 400 "claude-3-opus-20240229" is not supported on this API. Please use the Messages API instead.
### Description
Once I switch `model_name` from `claude-2` to `claude-3-opus-20240229`, I get the following error:
anthropic error 400 "claude-3-opus-20240229" is not supported on this API. Please use the Messages API instead.
I tried updating langchain to latest but it did not resolve the issue.
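Claude 3 models are served only through Anthropic's Messages API, while the legacy `langchain_community.chat_models.ChatAnthropic` targets the older Text Completions API. The dedicated `langchain-anthropic` package uses the Messages API; a hedged sketch (class and parameter names as documented for that package, verify against your installed version):

```python
# pip install -U langchain-anthropic
from langchain_anthropic import ChatAnthropic

llm = ChatAnthropic(
    model="claude-3-opus-20240229",
    temperature=0.5,
    max_tokens=1000,
)
```

The rest of the `ConversationalRetrievalChain` setup should be able to stay unchanged, since this class implements the same chat-model interface.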
### System Info
langchain==0.1.6
langchain-community==0.0.19
langchain-core==0.1.23
langchain-experimental==0.0.50
langchain-openai==0.0.5
langchainhub==0.1.13
anthropic==0.10.0
Windows11 | anthropic error 400 "claude-3-opus-20240229" is not supported on this API. Please use the Messages API instead. | https://api.github.com/repos/langchain-ai/langchain/issues/18646/comments | 7 | 2024-03-06T10:16:19Z | 2024-03-08T21:16:21Z | https://github.com/langchain-ai/langchain/issues/18646 | 2,171,135,553 | 18,646 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.llms import HuggingFaceEndpoint
llm = HuggingFaceEndpoint(
endpoint_url="http://0.0.0.0:8080/",
max_new_tokens=512,
top_k=10,
top_p=0.95,
typical_p=0.95,
temperature=0.01,
repetition_penalty=1.03,
huggingfacehub_api_token="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"
)
from langchain.schema import (
HumanMessage,
SystemMessage,
)
from langchain_community.chat_models.huggingface import ChatHuggingFace
messages = [
SystemMessage(content="You're a helpful assistant"),
HumanMessage(
content="What happens when an unstoppable force meets an immovable object?"
),
]
chat_model = ChatHuggingFace(llm=llm)
### Error Message and Stack Trace (if applicable)
{
"name": "ValueError",
"message": "Failed to resolve model_id:Could not find model id for inference server: http://0.0.0.0:8080/Make sure that your Hugging Face token has access to the endpoint.",
"stack": "---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[25], line 14
5 from langchain_community.chat_models.huggingface import ChatHuggingFace
7 messages = [
8 SystemMessage(content=\"You're a helpful assistant\"),
9 HumanMessage(
10 content=\"What happens when an unstoppable force meets an immovable object?\"
11 ),
12 ]
---> 14 chat_model = ChatHuggingFace(llm=llm)
File ~/miniconda3/envs/api_mapping/lib/python3.9/site-packages/langchain_community/chat_models/huggingface.py:55, in ChatHuggingFace.__init__(self, **kwargs)
51 super().__init__(**kwargs)
53 from transformers import AutoTokenizer
---> 55 self._resolve_model_id()
57 self.tokenizer = (
58 AutoTokenizer.from_pretrained(self.model_id)
59 if self.tokenizer is None
60 else self.tokenizer
61 )
File ~/miniconda3/envs/api_mapping/lib/python3.9/site-packages/langchain_community/chat_models/huggingface.py:155, in ChatHuggingFace._resolve_model_id(self)
152 self.model_id = endpoint.repository
154 if not self.model_id:
--> 155 raise ValueError(
156 \"Failed to resolve model_id:\"
157 f\"Could not find model id for inference server: {endpoint_url}\"
158 \"Make sure that your Hugging Face token has access to the endpoint.\"
159 )
ValueError: Failed to resolve model_id:Could not find model id for inference server: http://0.0.0.0:8080/Make sure that your Hugging Face token has access to the endpoint."
}
### Description
I tried to create a ChatHuggingFace model backed by a Hugging Face Text Generation Inference server (serving my local model) and got this error:
ValueError: Failed to resolve model_id:Could not find model id for inference server: http://0.0.0.0:8080/Make sure that your Hugging Face token has access to the endpoint.
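One thing worth checking: `_resolve_model_id` only looks through Hugging Face-managed Inference Endpoints, so a locally hosted TGI server at `http://0.0.0.0:8080/` will likely never appear in that list. As a first debugging step you can ask the TGI server directly which model it serves via its `/info` route. The sketch below assumes the TGI container from the report is still running; `info_url` is just a hypothetical helper for building the URL.

```python
import json
import urllib.request


def info_url(base_url: str) -> str:
    """Build the TGI `/info` URL, tolerating a trailing slash on the base."""
    return base_url.rstrip("/") + "/info"


def fetch_model_id(base_url: str) -> str:
    """Ask a running Text Generation Inference server which model it serves."""
    with urllib.request.urlopen(info_url(base_url)) as resp:
        return json.load(resp)["model_id"]


# Example (requires the server from the report to be running):
# print(fetch_model_id("http://0.0.0.0:8080/"))
```

If `/info` answers, the server is fine and the failure is purely in LangChain's model-id resolution rather than in the endpoint itself.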
### System Info
absl-py==1.4.0
accelerate==0.26.1
aiofiles==23.2.1
aiohttp @ file:///home/conda/feedstock_root/build_artifacts/aiohttp_1689804989543/work
aiosignal @ file:///home/conda/feedstock_root/build_artifacts/aiosignal_1667935791922/work
altair==5.0.1
annotated-types==0.5.0
antlr4-python3-runtime==4.9.3
anyio==3.7.1
appdirs==1.4.4
asttokens @ file:///home/conda/feedstock_root/build_artifacts/asttokens_1670263926556/work
async-timeout @ file:///home/conda/feedstock_root/build_artifacts/async-timeout_1691763562544/work
attrs @ file:///home/conda/feedstock_root/build_artifacts/attrs_1683424013410/work
auto-gptq==0.6.0
autoawq==0.2.2
autoawq_kernels==0.0.6
backcall @ file:///home/conda/feedstock_root/build_artifacts/backcall_1592338393461/work
backports.functools-lru-cache @ file:///home/conda/feedstock_root/build_artifacts/backports.functools_lru_cache_1687772187254/work
beautifulsoup4==4.12.2
bigjson==1.0.9
bitsandbytes==0.42.0
black==23.7.0
brotlipy @ file:///home/conda/feedstock_root/build_artifacts/brotlipy_1666764672617/work
cachetools @ file:///home/conda/feedstock_root/build_artifacts/cachetools_1633010882559/work
certifi @ file:///home/conda/feedstock_root/build_artifacts/certifi_1700303426725/work/certifi
cffi @ file:///home/conda/feedstock_root/build_artifacts/cffi_1671179360775/work
charset-normalizer @ file:///home/conda/feedstock_root/build_artifacts/charset-normalizer_1688813409104/work
cleanlab==2.5.0
click==8.1.7
cloudpickle==3.0.0
cmake==3.27.2
colorama @ file:///home/conda/feedstock_root/build_artifacts/colorama_1666700638685/work
coloredlogs==15.0.1
comm @ file:///home/conda/feedstock_root/build_artifacts/comm_1691044910542/work
contourpy==1.1.0
cryptography @ file:///home/conda/feedstock_root/build_artifacts/cryptography-split_1695163786734/work
cycler @ file:///home/conda/feedstock_root/build_artifacts/cycler_1635519461629/work
Cython==0.29.37
dataclasses-json==0.6.3
datasets==2.14.4
DateTime==5.4
debugpy @ file:///home/conda/feedstock_root/build_artifacts/debugpy_1691021247994/work
decorator @ file:///home/conda/feedstock_root/build_artifacts/decorator_1641555617451/work
dict==2020.12.3
dill==0.3.7
distro==1.8.0
docker-pycreds==0.4.0
docstring-parser==0.15
einops==0.7.0
et-xmlfile==1.1.0
evaluate==0.4.0
exceptiongroup==1.1.3
executing @ file:///home/conda/feedstock_root/build_artifacts/executing_1667317341051/work
fastapi==0.101.1
ffmpy==0.3.1
filelock==3.12.2
fire==0.5.0
fonttools @ file:///home/conda/feedstock_root/build_artifacts/fonttools_1692542611950/work
frozenlist @ file:///home/conda/feedstock_root/build_artifacts/frozenlist_1695377824562/work
fsspec==2023.6.0
future==0.18.3
fvcore==0.1.5.post20221221
gdown==4.7.1
gekko==1.0.6
gitdb==4.0.10
GitPython==3.1.32
google-api-core @ file:///home/conda/feedstock_root/build_artifacts/google-api-core-split_1653881570487/work
google-api-python-client @ file:///home/conda/feedstock_root/build_artifacts/google-api-python-client_1695664297279/work
google-auth==2.23.3
google-auth-httplib2 @ file:///home/conda/feedstock_root/build_artifacts/google-auth-httplib2_1694516804909/work
google-auth-oauthlib==1.1.0
googleapis-common-protos @ file:///home/conda/feedstock_root/build_artifacts/googleapis-common-protos-feedstock_1690830130005/work
gradio==3.40.1
gradio_client==0.4.0
greenlet==3.0.1
grpcio==1.59.0
h11==0.14.0
hdbscan==0.8.33
htmlmin==0.1.12
httpcore==0.17.3
httplib2 @ file:///home/conda/feedstock_root/build_artifacts/httplib2_1679483503307/work
httpx==0.24.1
huggingface-hub==0.20.3
humanfriendly==10.0
hydra-core==1.3.2
idna @ file:///home/conda/feedstock_root/build_artifacts/idna_1663625384323/work
ImageHash @ file:///home/conda/feedstock_root/build_artifacts/imagehash_1664371213222/work
importlib-metadata @ file:///home/conda/feedstock_root/build_artifacts/importlib-metadata_1688754491823/work
importlib-resources==6.0.1
iopath==0.1.9
ipykernel @ file:///home/conda/feedstock_root/build_artifacts/ipykernel_1693880262622/work
ipython @ file:///home/conda/feedstock_root/build_artifacts/ipython_1685727741709/work
ipywidgets @ file:///home/conda/feedstock_root/build_artifacts/ipywidgets_1694607144474/work
itables @ file:///home/conda/feedstock_root/build_artifacts/itables_1692399918721/work
jedi @ file:///home/conda/feedstock_root/build_artifacts/jedi_1690896916983/work
Jinja2 @ file:///home/conda/feedstock_root/build_artifacts/jinja2_1654302431367/work
jiwer==3.0.3
joblib @ file:///home/conda/feedstock_root/build_artifacts/joblib_1691577114857/work
json-lines==0.5.0
jsonlines==4.0.0
jsonpatch==1.33
jsonpointer==2.4
jsonschema==4.19.0
jsonschema-specifications==2023.7.1
jupyter_client @ file:///home/conda/feedstock_root/build_artifacts/jupyter_client_1687700988094/work
jupyter_core @ file:///home/conda/feedstock_root/build_artifacts/jupyter_core_1686775603087/work
jupyterlab-widgets @ file:///home/conda/feedstock_root/build_artifacts/jupyterlab_widgets_1694598704522/work
kiwisolver==1.4.4
langchain==0.1.11
langchain-community==0.0.25
langchain-core==0.1.29
langchain-text-splitters==0.0.1
langsmith==0.1.22
linkify-it-py==2.0.2
lit==16.0.6
llvmlite==0.41.1
loralib==0.1.1
Markdown==3.5
markdown-it-py==2.2.0
MarkupSafe @ file:///home/conda/feedstock_root/build_artifacts/markupsafe_1685769048265/work
marshmallow==3.20.1
matplotlib @ file:///home/conda/feedstock_root/build_artifacts/matplotlib-suite_1661440538658/work
matplotlib-inline @ file:///home/conda/feedstock_root/build_artifacts/matplotlib-inline_1660814786464/work
mdit-py-plugins==0.3.3
mdurl==0.1.2
mock==5.1.0
mpmath==1.3.0
msal==1.26.0
multidict @ file:///home/conda/feedstock_root/build_artifacts/multidict_1672339396340/work
multimethod @ file:///home/conda/feedstock_root/build_artifacts/multimethod_1603129052241/work
multiprocess==0.70.15
munkres==1.1.4
mypy-extensions==1.0.0
nb-conda-kernels @ file:///home/conda/feedstock_root/build_artifacts/nb_conda_kernels_1667060622050/work
neo4j==5.16.0
nest-asyncio @ file:///home/conda/feedstock_root/build_artifacts/nest-asyncio_1664684991461/work
networkx @ file:///home/conda/feedstock_root/build_artifacts/networkx_1680692919326/work
nltk==3.8.1
nose==1.3.7
numba==0.58.1
numpy @ file:///home/conda/feedstock_root/build_artifacts/numpy_1668919081525/work
nvidia-cublas-cu11==11.10.3.66
nvidia-cublas-cu12==12.1.3.1
nvidia-cuda-cupti-cu11==11.7.101
nvidia-cuda-cupti-cu12==12.1.105
nvidia-cuda-nvrtc-cu11==11.7.99
nvidia-cuda-nvrtc-cu12==12.1.105
nvidia-cuda-runtime-cu11==11.7.99
nvidia-cuda-runtime-cu12==12.1.105
nvidia-cudnn-cu11==8.5.0.96
nvidia-cudnn-cu12==8.9.2.26
nvidia-cufft-cu11==10.9.0.58
nvidia-cufft-cu12==11.0.2.54
nvidia-curand-cu11==10.2.10.91
nvidia-curand-cu12==10.3.2.106
nvidia-cusolver-cu11==11.4.0.1
nvidia-cusolver-cu12==11.4.5.107
nvidia-cusparse-cu11==11.7.4.91
nvidia-cusparse-cu12==12.1.0.106
nvidia-nccl-cu11==2.14.3
nvidia-nccl-cu12==2.18.1
nvidia-nvjitlink-cu12==12.3.101
nvidia-nvtx-cu11==11.7.91
nvidia-nvtx-cu12==12.1.105
oauth2client==4.1.3
oauthlib==3.2.2
omegaconf==2.3.0
openai==0.28.0
opencv-python==4.8.1.78
openpyxl==3.1.2
optimum==1.17.1
optimum-intel==1.15.2
orjson==3.9.15
packaging==23.2
pandas==1.5.3
pandas-profiling @ file:///home/conda/feedstock_root/build_artifacts/pandas-profiling_1674670576924/work
parso @ file:///home/conda/feedstock_root/build_artifacts/parso_1638334955874/work
pathspec==0.11.2
pathtools==0.1.2
patsy @ file:///home/conda/feedstock_root/build_artifacts/patsy_1665356157073/work
peft==0.8.2
pexpect @ file:///home/conda/feedstock_root/build_artifacts/pexpect_1667297516076/work
phik @ file:///home/conda/feedstock_root/build_artifacts/phik_1670564192669/work
pickleshare @ file:///home/conda/feedstock_root/build_artifacts/pickleshare_1602536217715/work
Pillow==10.0.0
platformdirs @ file:///home/conda/feedstock_root/build_artifacts/platformdirs_1690813113769/work
portalocker==2.8.2
prompt-toolkit @ file:///home/conda/feedstock_root/build_artifacts/prompt-toolkit_1688565951714/work
promptlayer==0.4.0
protobuf==3.20.3
psutil @ file:///home/conda/feedstock_root/build_artifacts/psutil_1681775019467/work
ptyprocess @ file:///home/conda/feedstock_root/build_artifacts/ptyprocess_1609419310487/work/dist/ptyprocess-0.7.0-py2.py3-none-any.whl
pure-eval @ file:///home/conda/feedstock_root/build_artifacts/pure_eval_1642875951954/work
py-vncorenlp==0.1.4
pyArango==2.0.2
pyarrow==12.0.1
pyasn1 @ file:///home/conda/feedstock_root/build_artifacts/pyasn1_1694615621498/work
pyasn1-modules @ file:///home/conda/feedstock_root/build_artifacts/pyasn1-modules_1695107857548/work
pycocotools==2.0.7
pycparser @ file:///home/conda/feedstock_root/build_artifacts/pycparser_1636257122734/work
pydantic @ file:///home/conda/feedstock_root/build_artifacts/pydantic_1690476225427/work
pydantic_core==2.6.1
PyDrive==1.3.1
pydub==0.25.1
Pygments @ file:///home/conda/feedstock_root/build_artifacts/pygments_1691408637400/work
pyjnius==1.6.0
PyJWT==2.8.0
pynndescent==0.5.11
pyOpenSSL @ file:///home/conda/feedstock_root/build_artifacts/pyopenssl_1685514481738/work
pyparsing==3.0.9
PySocks @ file:///home/conda/feedstock_root/build_artifacts/pysocks_1661604839144/work
python-arango==7.9.0
python-crfsuite==0.9.9
python-dateutil @ file:///home/conda/feedstock_root/build_artifacts/python-dateutil_1626286286081/work
python-multipart==0.0.6
pytz==2023.3
pyu2f @ file:///home/conda/feedstock_root/build_artifacts/pyu2f_1604248910016/work
PyWavelets @ file:///home/conda/feedstock_root/build_artifacts/pywavelets_1673082327051/work
PyYAML @ file:///home/conda/feedstock_root/build_artifacts/pyyaml_1692737146376/work
pyzmq @ file:///home/conda/feedstock_root/build_artifacts/pyzmq_1691667452339/work
rapidfuzz==3.5.2
referencing==0.30.2
regex==2023.8.8
requests @ file:///home/conda/feedstock_root/build_artifacts/requests_1680286922386/work
requests-oauthlib==1.3.1
requests-toolbelt==1.0.0
responses==0.18.0
rich==13.7.0
rouge==1.0.1
rouge-score==0.1.2
rpds-py==0.9.2
rsa @ file:///home/conda/feedstock_root/build_artifacts/rsa_1658328885051/work
safetensors==0.4.2
scikit-learn==1.3.0
scipy==1.11.2
seaborn @ file:///home/conda/feedstock_root/build_artifacts/seaborn-split_1672497695270/work
semantic-version==2.10.0
sentence-transformers==2.2.2
sentencepiece==0.1.99
sentry-sdk==1.29.2
seqeval==1.2.2
setproctitle==1.3.2
shtab==1.6.4
simplejson==3.19.2
six @ file:///home/conda/feedstock_root/build_artifacts/six_1620240208055/work
skorch==0.15.0
smmap==5.0.0
sniffio==1.3.0
soupsieve==2.5
SQLAlchemy==2.0.23
stack-data @ file:///home/conda/feedstock_root/build_artifacts/stack_data_1669632077133/work
starlette==0.27.0
statsmodels @ file:///croot/statsmodels_1676643798791/work
sympy==1.12
tabulate==0.9.0
tangled-up-in-unicode @ file:///home/conda/feedstock_root/build_artifacts/tangled-up-in-unicode_1632832610704/work
tenacity==8.2.3
tensorboard==2.15.0
tensorboard-data-server==0.7.2
termcolor==2.3.0
text-generation==0.6.1
threadpoolctl==3.2.0
tiktoken==0.5.2
tokenize-rt==5.2.0
tokenizers==0.15.2
tomli==2.0.1
toolz==0.12.0
torch==2.1.2
torchinfo==1.8.0
torchvision==0.16.2
tornado @ file:///home/conda/feedstock_root/build_artifacts/tornado_1692311754787/work
tqdm @ file:///home/conda/feedstock_root/build_artifacts/tqdm_1662214488106/work
traitlets @ file:///home/conda/feedstock_root/build_artifacts/traitlets_1675110562325/work
transformers==4.37.0
trash-cli==0.23.2.13.2
triton==2.1.0
trl==0.7.4
typeguard @ file:///home/conda/feedstock_root/build_artifacts/typeguard_1658932097418/work
typing==3.7.4.3
typing-inspect==0.9.0
typing_extensions==4.10.0
tyro==0.5.17
tzdata==2023.3
uc-micro-py==1.0.2
umap-learn==0.5.5
underthesea==6.7.0
underthesea_core==1.0.4
unicodedata2 @ file:///home/conda/feedstock_root/build_artifacts/unicodedata2_1667239485250/work
uritemplate @ file:///home/conda/feedstock_root/build_artifacts/uritemplate_1634152692041/work
urllib3 @ file:///home/conda/feedstock_root/build_artifacts/urllib3_1678635778344/work
uvicorn==0.23.2
values==2020.12.3
visions @ file:///home/conda/feedstock_root/build_artifacts/visions_1638743854326/work
wandb==0.15.12
wcwidth @ file:///home/conda/feedstock_root/build_artifacts/wcwidth_1673864653149/work
websockets==11.0.3
Werkzeug==3.0.1
widgetsnbextension @ file:///home/conda/feedstock_root/build_artifacts/widgetsnbextension_1694598693908/work
xxhash==3.3.0
yacs==0.1.8
yarl @ file:///home/conda/feedstock_root/build_artifacts/yarl_1685191803031/work
zipp @ file:///home/conda/feedstock_root/build_artifacts/zipp_1689374466814/work
zope.interface==6.1
zstandard==0.22.0 | Failed to resolve model_id:Could not find model id for inference server | https://api.github.com/repos/langchain-ai/langchain/issues/18639/comments | 2 | 2024-03-06T09:35:43Z | 2024-08-04T16:06:20Z | https://github.com/langchain-ai/langchain/issues/18639 | 2,171,052,417 | 18,639 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
import os
from langchain_openai import AzureChatOpenAI
api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
api_key= os.getenv("AZURE_OPENAI_API_KEY")
api_version=os.getenv("OPENAI_API_VERSION"),
model_name = 'gpt-4' #Replace with model deployment name
llm = AzureChatOpenAI(
api_key=api_key,
azure_endpoint=api_base,
azure_deployment=model_name,
openai_api_version = api_version,
temperature=0
)
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "d:\Develop\CodeProjects\chatgpt4v-demo\langchainApp.py", line 11, in <module>
llm = AzureChatOpenAI(
File "D:\Develop\anaconda3\lib\site-packages\langchain_core\load\serializable.py", line 120, in __init__
super().__init__(**kwargs)
File "pydantic\main.py", line 339, in pydantic.main.BaseModel.__init__
File "pydantic\main.py", line 1100, in pydantic.main.validate_model
File "D:\Develop\anaconda3\lib\site-packages\langchain_openai\chat_models\azure.py", line 125, in validate_environment
values["openai_api_version"] = values["openai_api_version"] or os.getenv(
KeyError: 'openai_api_version'
### Description
I want to instantiate `AzureChatOpenAI` correctly, but the constructor raises `KeyError: 'openai_api_version'`.
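Aside from the `KeyError` raised inside `validate_environment`, the reproduction above contains a separate pitfall that may or may not be related: the trailing comma on `api_version=os.getenv("OPENAI_API_VERSION"),` turns the value into a one-element tuple, not a string. A minimal pure-Python sketch (no Azure credentials needed; the version string is only a placeholder default):

```python
import os

# Placeholder so the sketch runs even without the variable set.
os.environ.setdefault("OPENAI_API_VERSION", "2024-02-15-preview")

# Trailing comma after the call makes this a 1-element tuple, not a string.
api_version = os.getenv("OPENAI_API_VERSION"),
assert isinstance(api_version, tuple)

# Without the comma it is the plain string you want to pass on.
api_version = os.getenv("OPENAI_API_VERSION")
assert isinstance(api_version, str)
```

Dropping that comma is worth doing before digging further into the library-side `KeyError`.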
### System Info
libs:
langchain 0.1.11
langchain-community 0.0.25
langchain-core 0.1.29
langchain-openai 0.0.8
langchain-text-splitters 0.0.1
openai 1.13.3
platform:
Windows x64
python version: 3.10.13 | KeyError: 'openai_api_version' | https://api.github.com/repos/langchain-ai/langchain/issues/18632/comments | 2 | 2024-03-06T07:15:12Z | 2024-03-07T01:47:34Z | https://github.com/langchain-ai/langchain/issues/18632 | 2,170,808,965 | 18,632 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
retriever = elastic_search_db.as_retriever(
search_type="similarity_score_threshold", search_kwargs={"score_threshold": 0.5}
)
docs = retriever.get_relevant_documents(query)
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Why is `similarity_score_threshold` not working? When I change `search_type` to `"similarity"` the retriever works, but the retrieved documents are not filtered by the score threshold.
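As a possible workaround (a sketch, not a fix for the retriever itself): LangChain vector stores also expose `similarity_search_with_relevance_scores`, which returns `(document, score)` pairs, so the threshold can be applied manually in plain Python. The helper name below is made up for illustration.

```python
def filter_by_score(docs_and_scores, threshold):
    """Keep only (doc, score) pairs whose relevance score meets the threshold."""
    return [doc for doc, score in docs_and_scores if score >= threshold]


# With the vector store from the report this would be (requires a live index):
# docs_and_scores = elastic_search_db.similarity_search_with_relevance_scores(query, k=10)
# docs = filter_by_score(docs_and_scores, 0.5)

# Pure-Python demonstration with stand-in results:
sample = [("doc-a", 0.91), ("doc-b", 0.42), ("doc-c", 0.55)]
print(filter_by_score(sample, 0.5))  # ['doc-a', 'doc-c']
```

If the manual filter behaves as expected while `search_type="similarity_score_threshold"` does not, that points at the retriever wiring rather than the scores themselves.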
### System Info
version latest
python 3.10 | ElasticSearch retreiver issue based on score_threshold | https://api.github.com/repos/langchain-ai/langchain/issues/18623/comments | 3 | 2024-03-06T05:33:12Z | 2024-06-19T16:07:58Z | https://github.com/langchain-ai/langchain/issues/18623 | 2,170,673,569 | 18,623 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain import hub
from langchain.agents import (
AgentExecutor,
create_openai_tools_agent,
)
from langchain.tools import tool
from langchain_community.callbacks import get_openai_callback
from langchain_openai import AzureChatOpenAI
llm = AzureChatOpenAI(
api_key="...",
azure_deployment="...",
azure_endpoint="...",
api_version="2024-02-15-preview",
streaming=False,
)
@tool
def multiply(a: int, b: int) -> int:
"""Multiply two numbers."""
return a * b
prompt = hub.pull("hwchase17/openai-tools-agent")
tools = [multiply]
agent = create_openai_tools_agent(llm, tools, prompt)
agent_executor = AgentExecutor(
agent=agent,
tools=tools,
)
with get_openai_callback() as cb:
resp = agent_executor.invoke({"input": "What's 3 multiplied by 4?"})
print(cb)
assert cb.total_tokens > 0
```
### Error Message and Stack Trace (if applicable)
The assertion in the minimal working example fails because the token count is zero.
### Description
I am trying to track token usage with an azure openai tools agent. But the token count always returns zero.
### System Info
```
langchain==0.1.11
langchain-community==0.0.25
langchain-core==0.1.29
langchain-openai==0.0.6
langchain-text-splitters==0.0.1
langchainhub==0.1.15
langchain-openai==0.0.6
openai==1.13.3
``` | Token count is not reported for openai tools agent | https://api.github.com/repos/langchain-ai/langchain/issues/18617/comments | 2 | 2024-03-06T00:53:47Z | 2024-03-07T16:54:05Z | https://github.com/langchain-ai/langchain/issues/18617 | 2,170,419,330 | 18,617 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```py
from langchain_together.llms import Together
from langchain_together.embeddings import TogetherEmbeddings as LangchainTogetherEmbeddings
```
### Error Message and Stack Trace (if applicable)
File "/Users/aabor/projects/rag/llm_select.py", line 7, in <module>
from langchain_together.llms import Together
File "/Users/aabor/projects/rag/venv/lib/python3.11/site-packages/langchain_together/__init__.py", line 1, in <module>
from langchain_together.embeddings import TogetherEmbeddings
File "/Users/aabor/projects/rag/venv/lib/python3.11/site-packages/langchain_together/embeddings.py", line 10, in <module>
class TogetherEmbeddings(BaseModel, Embeddings):
File "/Users/aabor/projects/rag/venv/lib/python3.11/site-packages/langchain_together/embeddings.py", line 23, in TogetherEmbeddings
_client: together.Together
^^^^^^^^^^^^^^^^^
AttributeError: module 'together' has no attribute 'Together'
### Description
The example code from the LangChain documentation raises an error.
I just want to import classes from the `langchain_together` package.
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:31:00 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6020
> Python Version: 3.11.6 (v3.11.6:8b6ee5ba3b, Oct 2 2023, 11:18:21) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.11
> langchain_community: 0.0.24
> langsmith: 0.1.21
> langchain_text_splitters: 0.0.1
> langchain_together: 0.0.2.post1
| Integration with TogetherAI does not work properly, unable to import | https://api.github.com/repos/langchain-ai/langchain/issues/18612/comments | 5 | 2024-03-05T22:36:34Z | 2024-03-06T16:27:53Z | https://github.com/langchain-ai/langchain/issues/18612 | 2,170,257,595 | 18,612 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
_search_kwargs: dict[str, Any] = {"k": k}
_search_kwargs["post_filter_pipeline"] = [{"$project": {"vector": 0}}]
_search_kwargs["pre_filter"] = {'$and': [{'datetime': {'$gte': datetime.datetime(2023, 1, 1, 0, 0)}}]}
vector_collection = VectorMongoCollection()
db: MongoDBAtlasVectorSearch = vector_collection.create_vectorstore()
retriever = db.as_retriever(search_kwargs=_search_kwargs)
```
### Error Message and Stack Trace (if applicable)
pymongo.errors.OperationFailure: Operand type is not supported for $vectorSearch: date, full error: {'ok': 0.0, 'errmsg': 'Operand type is not supported for $vectorSearch: date', 'code': 7828301, 'codeName': 'Location7828301', '$clusterTime': {'clusterTime': Timestamp(1709671160, 1), 'signature': {'hash': b'\x17\x86\xe5\xbe\xd10\x81\xf3\x0e\xe5\xc2\xfc\x9e\xe6\xdf\xe4l\x9c\xb4F', 'keyId': 7314028249355386881}}, 'operationTime': Timestamp(1709671160, 1)}
### Description
I have a MongoDB collection with an Atlas Vector Search index with a `datetime` field both in the collection and in the index so that I can pre filter for specific datetime ranges. Currently, any filter I create with a date fails. This was not the case prior to the switch to `$vectorSearch` within MongoDB before GA when it was necessary to use a Mongo Atlas Search index with `$knnBeta`.
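One common workaround, since `$vectorSearch` rejects BSON date operands here, is to pre-filter on an integer epoch timestamp instead of a date. This assumes you add a numeric companion field (called `datetime_ms` below, a made-up name) next to the `datetime` field and index it for filtering. A minimal sketch:

```python
import datetime


def to_epoch_ms(dt: datetime.datetime) -> int:
    """Convert a timezone-aware datetime to integer milliseconds since the epoch."""
    return int(dt.timestamp() * 1000)


cutoff = datetime.datetime(2023, 1, 1, tzinfo=datetime.timezone.utc)
pre_filter = {"$and": [{"datetime_ms": {"$gte": to_epoch_ms(cutoff)}}]}
print(pre_filter)
```

The resulting `pre_filter` dict can then be passed in `search_kwargs` exactly as in the snippet above, with numbers in place of dates.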
### System Info
```
❯ python -m langchain_core.sys_info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 22.5.0: Thu Jun 8 22:22:20 PDT 2023; root:xnu-8796.121.3~7/RELEASE_ARM64_T6000
> Python Version: 3.12.2 (main, Feb 22 2024, 15:15:24) [Clang 14.0.0 (clang-1400.0.29.202)]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.10
> langchain_community: 0.0.25
> langsmith: 0.1.14
> langchain_mongodb: 0.1.0
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
``` | MongoDB Atlas Vector Search Does Not Support Date Pre Filters | https://api.github.com/repos/langchain-ai/langchain/issues/18604/comments | 1 | 2024-03-05T20:56:30Z | 2024-05-29T13:11:01Z | https://github.com/langchain-ai/langchain/issues/18604 | 2,170,130,825 | 18,604 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from typing import TypedDict, Annotated, List, Union
from langchain_core.agents import AgentAction, AgentFinish
from langchain_core.messages import BaseMessage
import operator
from langchain.tools import BaseTool, StructuredTool, Tool, tool
from langgraph.prebuilt.tool_executor import ToolExecutor
import random
@tool("lower_case", return_direct=True)
def to_lower_case(input:str) -> str:
"""Returns the input as all lower case."""
return input.lower()
@tool("random_number", return_direct=True)
def random_number_maker(input:str) -> str:
"""Returns a random number between 0-100."""
return random.randint(0, 100)
tools = [to_lower_case,random_number_maker]
tool_executor = ToolExecutor(tools)
from langchain import hub
from langchain_community.llms import Bedrock
from langchain.chat_models import BedrockChat
from langchain_experimental.llms.anthropic_functions import AnthropicFunctions
from langchain_core.utils.function_calling import format_tool_to_openai_function
llm = BedrockChat(model_id="anthropic.claude-v2:1", model_kwargs={"temperature": 0.1})
base_model = AnthropicFunctions(llm=llm)
functions = [format_tool_to_openai_function(t) for t in tools]
model = base_model.bind(functions=functions)
from typing import TypedDict, Annotated, Sequence
import operator
from langchain_core.messages import BaseMessage
class AgentState(TypedDict):
messages: Annotated[Sequence[BaseMessage], operator.add]
from langchain_core.agents import AgentFinish
from langgraph.prebuilt import ToolInvocation
import json
from langchain_core.messages import FunctionMessage
# Define the function that determines whether to continue or not
def should_continue(state):
messages = state['messages']
last_message = messages[-1]
# If there is no function call, then we finish
if "function_call" not in last_message.additional_kwargs:
return "end"
# Otherwise if there is, we continue
else:
return "continue"
# Define the function that calls the model
def call_model(state):
messages = state['messages']
response = model.invoke(messages)
# We return a list, because this will get added to the existing list
return {"messages": [response]}
# Define the function to execute tools
def call_tool(state):
messages = state['messages']
# Based on the continue condition
# we know the last message involves a function call
last_message = messages[-1]
# We construct an ToolInvocation from the function_call
action = ToolInvocation(
tool=last_message.additional_kwargs["function_call"]["name"],
tool_input=json.loads(last_message.additional_kwargs["function_call"]["arguments"]),
)
print(f"The agent action is {action}")
# We call the tool_executor and get back a response
response = tool_executor.invoke(action)
print(f"The tool result is: {response}")
# We use the response to create a FunctionMessage
function_message = FunctionMessage(content=str(response), name=action.tool)
# We return a list, because this will get added to the existing list
print("**********")
return {"messages": [function_message]}
from langgraph.graph import StateGraph, END
# Define a new graph
workflow = StateGraph(AgentState)
# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("action", call_tool)
# Set the entrypoint as `agent` where we start
workflow.set_entry_point("agent")
# We now add a conditional edge
workflow.add_conditional_edges(
# First, we define the start node. We use `agent`.
# This means these are the edges taken after the `agent` node is called.
"agent",
# Next, we pass in the function that will determine which node is called next.
should_continue,
# Finally we pass in a mapping.
# The keys are strings, and the values are other nodes.
# END is a special node marking that the graph should finish.
# What will happen is we will call `should_continue`, and then the output of that
# will be matched against the keys in this mapping.
# Based on which one it matches, that node will then be called.
{
# If `tools`, then we call the tool node.
"continue": "action",
# Otherwise we finish.
"end": END
}
)
# We now add a normal edge from `tools` to `agent`.
# This means that after `tools` is called, `agent` node is called next.
workflow.add_edge('action', 'agent')
# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable
app = workflow.compile()
from langchain_core.messages import HumanMessage, SystemMessage
# inputs = {"input": "give me a random number and then write in words and make it lower case", "chat_history": []}
system_message = SystemMessage(content="you are a helpful assistant")
user_01 = HumanMessage(content="give me a random number and then write in words and make it lower case")
# user_01 = HumanMessage(content="plear write 'Merlion' in lower case")
# user_01 = HumanMessage(content="what is a Merlion?")
inputs = {"messages": [system_message,user_01]}
app.invoke(inputs)
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[8], line 11
6 # user_01 = HumanMessage(content="plear write 'Merlion' in lower case")
7 # user_01 = HumanMessage(content="what is a Merlion?")
9 inputs = {"messages": [system_message,user_01]}
---> 11 app.invoke(inputs)
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langgraph/pregel/__init__.py:579](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langgraph/pregel/__init__.py#line=578), in Pregel.invoke(self, input, config, output_keys, input_keys, **kwargs)
569 def invoke(
570 self,
571 input: Union[dict[str, Any], Any],
(...)
576 **kwargs: Any,
577 ) -> Union[dict[str, Any], Any]:
578 latest: Union[dict[str, Any], Any] = None
--> 579 for chunk in self.stream(
580 input,
581 config,
582 output_keys=output_keys if output_keys is not None else self.output,
583 input_keys=input_keys,
584 **kwargs,
585 ):
586 latest = chunk
587 return latest
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langgraph/pregel/__init__.py:615](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langgraph/pregel/__init__.py#line=614), in Pregel.transform(self, input, config, output_keys, input_keys, **kwargs)
606 def transform(
607 self,
608 input: Iterator[Union[dict[str, Any], Any]],
(...)
613 **kwargs: Any,
614 ) -> Iterator[Union[dict[str, Any], Any]]:
--> 615 for chunk in self._transform_stream_with_config(
616 input,
617 self._transform,
618 config,
619 output_keys=output_keys,
620 input_keys=input_keys,
621 **kwargs,
622 ):
623 yield chunk
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py:1513](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py#line=1512), in Runnable._transform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
1511 try:
1512 while True:
-> 1513 chunk: Output = context.run(next, iterator) # type: ignore
1514 yield chunk
1515 if final_output_supported:
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langgraph/pregel/__init__.py:355](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langgraph/pregel/__init__.py#line=354), in Pregel._transform(self, input, run_manager, config, input_keys, output_keys, interrupt)
348 done, inflight = concurrent.futures.wait(
349 futures,
350 return_when=concurrent.futures.FIRST_EXCEPTION,
351 timeout=self.step_timeout,
352 )
354 # interrupt on failure or timeout
--> 355 _interrupt_or_proceed(done, inflight, step)
357 # apply writes to channels
358 _apply_writes(
359 checkpoint, channels, pending_writes, config, step + 1
360 )
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langgraph/pregel/__init__.py:698](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langgraph/pregel/__init__.py#line=697), in _interrupt_or_proceed(done, inflight, step)
696 inflight.pop().cancel()
697 # raise the exception
--> 698 raise exc
699 # TODO this is where retry of an entire step would happen
701 if inflight:
702 # if we got here means we timed out
File [/opt/conda/envs/langchain/lib/python3.10/concurrent/futures/thread.py:58](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/concurrent/futures/thread.py#line=57), in _WorkItem.run(self)
55 return
57 try:
---> 58 result = self.fn(*self.args, **self.kwargs)
59 except BaseException as exc:
60 self.future.set_exception(exc)
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py:4069](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py#line=4068), in RunnableBindingBase.invoke(self, input, config, **kwargs)
4063 def invoke(
4064 self,
4065 input: Input,
4066 config: Optional[RunnableConfig] = None,
4067 **kwargs: Optional[Any],
4068 ) -> Output:
-> 4069 return self.bound.invoke(
4070 input,
4071 self._merge_configs(config),
4072 **{**self.kwargs, **kwargs},
4073 )
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py:2075](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py#line=2074), in RunnableSequence.invoke(self, input, config)
2073 try:
2074 for i, step in enumerate(self.steps):
-> 2075 input = step.invoke(
2076 input,
2077 # mark each step as a child run
2078 patch_config(
2079 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
2080 ),
2081 )
2082 # finish the root run
2083 except BaseException as e:
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py:3523](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py#line=3522), in RunnableLambda.invoke(self, input, config, **kwargs)
3521 """Invoke this runnable synchronously."""
3522 if hasattr(self, "func"):
-> 3523 return self._call_with_config(
3524 self._invoke,
3525 input,
3526 self._config(config, self.func),
3527 **kwargs,
3528 )
3529 else:
3530 raise TypeError(
3531 "Cannot invoke a coroutine function synchronously."
3532 "Use `ainvoke` instead."
3533 )
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py:1262](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py#line=1261), in Runnable._call_with_config(self, func, input, config, run_type, **kwargs)
1258 context = copy_context()
1259 context.run(var_child_runnable_config.set, child_config)
1260 output = cast(
1261 Output,
-> 1262 context.run(
1263 call_func_with_variable_args,
1264 func, # type: ignore[arg-type]
1265 input, # type: ignore[arg-type]
1266 config,
1267 run_manager,
1268 **kwargs,
1269 ),
1270 )
1271 except BaseException as e:
1272 run_manager.on_chain_error(e)
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/config.py:326](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/config.py#line=325), in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
324 if run_manager is not None and accepts_run_manager(func):
325 kwargs["run_manager"] = run_manager
--> 326 return func(input, **kwargs)
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py:3397](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py#line=3396), in RunnableLambda._invoke(self, input, run_manager, config, **kwargs)
3395 output = chunk
3396 else:
-> 3397 output = call_func_with_variable_args(
3398 self.func, input, config, run_manager, **kwargs
3399 )
3400 # If the output is a runnable, invoke it
3401 if isinstance(output, Runnable):
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/config.py:326](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/config.py#line=325), in call_func_with_variable_args(func, input, config, run_manager, **kwargs)
324 if run_manager is not None and accepts_run_manager(func):
325 kwargs["run_manager"] = run_manager
--> 326 return func(input, **kwargs)
Cell In[6], line 28, in call_model(state)
26 def call_model(state):
27 messages = state['messages']
---> 28 response = model.invoke(messages)
29 # We return a list, because this will get added to the existing list
30 return {"messages": [response]}
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py:4069](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/runnables/base.py#line=4068), in RunnableBindingBase.invoke(self, input, config, **kwargs)
4063 def invoke(
4064 self,
4065 input: Input,
4066 config: Optional[RunnableConfig] = None,
4067 **kwargs: Optional[Any],
4068 ) -> Output:
-> 4069 return self.bound.invoke(
4070 input,
4071 self._merge_configs(config),
4072 **{**self.kwargs, **kwargs},
4073 )
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:166](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py#line=165), in BaseChatModel.invoke(self, input, config, stop, **kwargs)
155 def invoke(
156 self,
157 input: LanguageModelInput,
(...)
161 **kwargs: Any,
162 ) -> BaseMessage:
163 config = ensure_config(config)
164 return cast(
165 ChatGeneration,
--> 166 self.generate_prompt(
167 [self._convert_input(input)],
168 stop=stop,
169 callbacks=config.get("callbacks"),
170 tags=config.get("tags"),
171 metadata=config.get("metadata"),
172 run_name=config.get("run_name"),
173 **kwargs,
174 ).generations[0][0],
175 ).message
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:544](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py#line=543), in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
536 def generate_prompt(
537 self,
538 prompts: List[PromptValue],
(...)
541 **kwargs: Any,
542 ) -> LLMResult:
543 prompt_messages = [p.to_messages() for p in prompts]
--> 544 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:408](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py#line=407), in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
406 if run_managers:
407 run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 408 raise e
409 flattened_outputs = [
410 LLMResult(generations=[res.generations], llm_output=res.llm_output)
411 for res in results
412 ]
413 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:398](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py#line=397), in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
395 for i, m in enumerate(messages):
396 try:
397 results.append(
--> 398 self._generate_with_cache(
399 m,
400 stop=stop,
401 run_manager=run_managers[i] if run_managers else None,
402 **kwargs,
403 )
404 )
405 except BaseException as e:
406 if run_managers:
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:577](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py#line=576), in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
573 raise ValueError(
574 "Asked to cache, but no cache found at `langchain.cache`."
575 )
576 if new_arg_supported:
--> 577 return self._generate(
578 messages, stop=stop, run_manager=run_manager, **kwargs
579 )
580 else:
581 return self._generate(messages, stop=stop, **kwargs)
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_experimental/llms/anthropic_functions.py:180](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_experimental/llms/anthropic_functions.py#line=179), in AnthropicFunctions._generate(self, messages, stop, run_manager, **kwargs)
176 if "function_call" in kwargs:
177 raise ValueError(
178 "if `function_call` provided, `functions` must also be"
179 )
--> 180 response = self.model.predict_messages(
181 messages, stop=stop, callbacks=run_manager, **kwargs
182 )
183 completion = cast(str, response.content)
184 if forced:
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:145](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/_api/deprecation.py#line=144), in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
143 warned = True
144 emit_warning()
--> 145 return wrapped(*args, **kwargs)
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:747](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py#line=746), in BaseChatModel.predict_messages(self, messages, stop, **kwargs)
745 else:
746 _stop = list(stop)
--> 747 return self(messages, stop=_stop, **kwargs)
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/_api/deprecation.py:145](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/_api/deprecation.py#line=144), in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
143 warned = True
144 emit_warning()
--> 145 return wrapped(*args, **kwargs)
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:691](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py#line=690), in BaseChatModel.__call__(self, messages, stop, callbacks, **kwargs)
683 @deprecated("0.1.7", alternative="invoke", removal="0.2.0")
684 def __call__(
685 self,
(...)
689 **kwargs: Any,
690 ) -> BaseMessage:
--> 691 generation = self.generate(
692 [messages], stop=stop, callbacks=callbacks, **kwargs
693 ).generations[0][0]
694 if isinstance(generation, ChatGeneration):
695 return generation.message
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:408](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py#line=407), in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
406 if run_managers:
407 run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 408 raise e
409 flattened_outputs = [
410 LLMResult(generations=[res.generations], llm_output=res.llm_output)
411 for res in results
412 ]
413 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:398](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py#line=397), in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
395 for i, m in enumerate(messages):
396 try:
397 results.append(
--> 398 self._generate_with_cache(
399 m,
400 stop=stop,
401 run_manager=run_managers[i] if run_managers else None,
402 **kwargs,
403 )
404 )
405 except BaseException as e:
406 if run_managers:
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py:577](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py#line=576), in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
573 raise ValueError(
574 "Asked to cache, but no cache found at `langchain.cache`."
575 )
576 if new_arg_supported:
--> 577 return self._generate(
578 messages, stop=stop, run_manager=run_manager, **kwargs
579 )
580 else:
581 return self._generate(messages, stop=stop, **kwargs)
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_community/chat_models/bedrock.py:112](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_community/chat_models/bedrock.py#line=111), in BedrockChat._generate(self, messages, stop, run_manager, **kwargs)
110 else:
111 provider = self._get_provider()
--> 112 prompt = ChatPromptAdapter.convert_messages_to_prompt(
113 provider=provider, messages=messages
114 )
116 params: Dict[str, Any] = {**kwargs}
117 if stop:
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_community/chat_models/bedrock.py:32](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_community/chat_models/bedrock.py#line=31), in ChatPromptAdapter.convert_messages_to_prompt(cls, provider, messages)
27 @classmethod
28 def convert_messages_to_prompt(
29 cls, provider: str, messages: List[BaseMessage]
30 ) -> str:
31 if provider == "anthropic":
---> 32 prompt = convert_messages_to_prompt_anthropic(messages=messages)
33 elif provider == "meta":
34 prompt = convert_messages_to_prompt_llama(messages=messages)
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_community/chat_models/anthropic.py:64](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_community/chat_models/anthropic.py#line=63), in convert_messages_to_prompt_anthropic(messages, human_prompt, ai_prompt)
61 if not isinstance(messages[-1], AIMessage):
62 messages.append(AIMessage(content=""))
---> 64 text = "".join(
65 _convert_one_message_to_text(message, human_prompt, ai_prompt)
66 for message in messages
67 )
69 # trim off the trailing ' ' that might come from the "Assistant: "
70 return text.rstrip()
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_community/chat_models/anthropic.py:65](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_community/chat_models/anthropic.py#line=64), in <genexpr>(.0)
61 if not isinstance(messages[-1], AIMessage):
62 messages.append(AIMessage(content=""))
64 text = "".join(
---> 65 _convert_one_message_to_text(message, human_prompt, ai_prompt)
66 for message in messages
67 )
69 # trim off the trailing ' ' that might come from the "Assistant: "
70 return text.rstrip()
File [/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_community/chat_models/anthropic.py:41](http://localhost:8888/opt/conda/envs/langchain/lib/python3.10/site-packages/langchain_community/chat_models/anthropic.py#line=40), in _convert_one_message_to_text(message, human_prompt, ai_prompt)
39 message_text = content
40 else:
---> 41 raise ValueError(f"Got unknown type {message}")
42 return message_text
ValueError: Got unknown type content='67' name='random_number'
```
### Description
I am trying to use AWS Bedrock models (such as `anthropic.claude-v2:1`) with LangGraph but encountered the error above. It seems the `_convert_one_message_to_text` function for Anthropic models cannot consume a `FunctionMessage`.
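One possible workaround, sketched below without importing langchain (the message classes are stand-ins for the `langchain_core.messages` types, defined here so the snippet is self-contained, and `fold_function_messages` is a hypothetical helper, not a library API): fold each `FunctionMessage` result into plain human-style text before the messages reach the Anthropic prompt converter, which only understands Human/AI/System messages.

```python
# Stand-in message classes (assumptions mirroring langchain_core.messages,
# NOT the real imports) to keep the sketch self-contained.
class BaseMessage:
    def __init__(self, content):
        self.content = content

class HumanMessage(BaseMessage):
    pass

class AIMessage(BaseMessage):
    pass

class FunctionMessage(BaseMessage):
    def __init__(self, content, name):
        super().__init__(content)
        self.name = name

def fold_function_messages(messages):
    """Rewrite FunctionMessage entries as HumanMessage text so a
    Human/AI-only prompt converter never sees an unknown type."""
    out = []
    for m in messages:
        if isinstance(m, FunctionMessage):
            out.append(HumanMessage(f"Tool `{m.name}` returned: {m.content}"))
        else:
            out.append(m)
    return out

history = [HumanMessage("Pick a number"), FunctionMessage("67", name="random_number")]
folded = fold_function_messages(history)
print(folded[1].content)  # Tool `random_number` returned: 67
```

With the real library, the same idea would mean pre-processing the LangGraph state's message list before `model.invoke(messages)`.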
### System Info
latest langchain v0.1.11 | ValueError: Got unknown type content when using AWS Bedrock for LangGraph | https://api.github.com/repos/langchain-ai/langchain/issues/18598/comments | 3 | 2024-03-05T19:22:23Z | 2024-06-23T16:09:19Z | https://github.com/langchain-ai/langchain/issues/18598 | 2,169,985,942 | 18,598 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Qdrant
from langchain_community.document_loaders import TextLoader
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.output_parsers import StrOutputParser
from langchain_core.runnables import RunnablePassthrough
from langchain.embeddings import OpenAIEmbeddings
from langchain.retrievers import BM25Retriever, EnsembleRetriever
from langchain.document_loaders import DirectoryLoader
# from langchain.chat_models import ChatOpenAI
from typing import Optional
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI
import gradio as gr
from langchain_community.chat_message_histories import RedisChatMessageHistory
import uuid
import redis

with_message_history = RunnableWithMessageHistory(
    rag_chain,
    get_message_history,
    input_messages_key="question",
    history_messages_key="chat_history"
)

question = "question?"
session_id = "foobar"

res = str(with_message_history.invoke(
    {"question": question},
    config={"configurable": {"session_id": session_id}}
))
print(res)
```
### Error Message and Stack Trace (if applicable)
```
openai.BadRequestError: Error code: 400 - {'error': {'message': "'$.messages[90].content' is invalid.
```
### Description
I am implementing RAG for custom documents to answer questions. Additionally, I have implemented contextual question answering based on the provided history.
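The 400 error suggests that one entry in the replayed chat history has non-string `content`. A hedged mitigation sketch (pure Python, `sanitize_history` is a hypothetical helper and not a langchain/openai API): coerce every history entry's content to a string and drop empties before the history is sent back to the chat endpoint.

```python
def sanitize_history(messages):
    """Return history entries whose 'content' is guaranteed to be a str.

    Entries with missing content are dropped; non-string content (e.g. a
    dict that leaked into the stored history) is serialized defensively.
    """
    cleaned = []
    for msg in messages:
        content = msg.get("content")
        if content is None:
            continue  # a None content would trigger the 400
        if not isinstance(content, str):
            content = str(content)
        cleaned.append({**msg, "content": content})
    return cleaned

raw = [
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": None},          # invalid for the API
    {"role": "user", "content": {"text": "oops"}},   # invalid for the API
]
cleaned = sanitize_history(raw)
print(cleaned)
```

With `RunnableWithMessageHistory`, the equivalent would be cleaning what `get_message_history` returns before it is injected into the prompt.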
### System Info
langchain==0.1.6
langchain-community==0.0.19
langchain-core==0.1.22
langchain-openai==0.0.6 | openai BadRequestError | https://api.github.com/repos/langchain-ai/langchain/issues/18583/comments | 0 | 2024-03-05T16:00:46Z | 2024-03-05T16:19:45Z | https://github.com/langchain-ai/langchain/issues/18583 | 2,169,597,436 | 18,583 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os

from langchain_openai import AzureChatOpenAI

model = AzureChatOpenAI(
    openai_api_version="2023-05-15",
    api_key=os.getenv("AZURE_OPENAI_KEY"),
    azure_deployment="gpt-4",  # gpt-4-turbo
    model_kwargs={"seed": 42, "logprobs": True},
)
model.invoke("What is the meaning of life?")
```
### Error Message and Stack Trace (if applicable)
```python
---------------------------------------------------------------------------
BadRequestError Traceback (most recent call last)
Cell In[90], line 1
----> 1 model.invoke("What is the meaning of life?")
File ~/Desktop/Code/pl-data-science/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:166, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
155 def invoke(
156 self,
157 input: LanguageModelInput,
(...)
161 **kwargs: Any,
162 ) -> BaseMessage:
163 config = ensure_config(config)
164 return cast(
165 ChatGeneration,
--> 166 self.generate_prompt(
167 [self._convert_input(input)],
168 stop=stop,
169 callbacks=config.get("callbacks"),
170 tags=config.get("tags"),
171 metadata=config.get("metadata"),
172 run_name=config.get("run_name"),
173 **kwargs,
174 ).generations[0][0],
175 ).message
File ~/Desktop/Code/pl-data-science/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:544, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
536 def generate_prompt(
537 self,
538 prompts: List[PromptValue],
(...)
541 **kwargs: Any,
542 ) -> LLMResult:
543 prompt_messages = [p.to_messages() for p in prompts]
--> 544 return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
File ~/Desktop/Code/pl-data-science/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:408, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
406 if run_managers:
407 run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 408 raise e
409 flattened_outputs = [
410 LLMResult(generations=[res.generations], llm_output=res.llm_output)
411 for res in results
412 ]
413 llm_output = self._combine_llm_outputs([res.llm_output for res in results])
File ~/Desktop/Code/pl-data-science/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:398, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, **kwargs)
395 for i, m in enumerate(messages):
396 try:
397 results.append(
--> 398 self._generate_with_cache(
399 m,
400 stop=stop,
401 run_manager=run_managers[i] if run_managers else None,
402 **kwargs,
403 )
404 )
405 except BaseException as e:
406 if run_managers:
File ~/Desktop/Code/pl-data-science/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:577, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
573 raise ValueError(
574 "Asked to cache, but no cache found at `langchain.cache`."
575 )
576 if new_arg_supported:
--> 577 return self._generate(
578 messages, stop=stop, run_manager=run_manager, **kwargs
579 )
580 else:
581 return self._generate(messages, stop=stop, **kwargs)
File ~/Desktop/Code/pl-data-science/.venv/lib/python3.11/site-packages/langchain_openai/chat_models/base.py:451, in ChatOpenAI._generate(self, messages, stop, run_manager, stream, **kwargs)
445 message_dicts, params = self._create_message_dicts(messages, stop)
446 params = {
447 **params,
448 **({"stream": stream} if stream is not None else {}),
449 **kwargs,
450 }
--> 451 response = self.client.create(messages=message_dicts, **params)
452 return self._create_chat_result(response)
File ~/Desktop/Code/pl-data-science/.venv/lib/python3.11/site-packages/openai/_utils/_utils.py:271, in required_args.<locals>.inner.<locals>.wrapper(*args, **kwargs)
269 msg = f"Missing required argument: {quote(missing[0])}"
270 raise TypeError(msg)
--> 271 return func(*args, **kwargs)
File ~/Desktop/Code/pl-data-science/.venv/lib/python3.11/site-packages/openai/resources/chat/completions.py:659, in Completions.create(self, messages, model, frequency_penalty, function_call, functions, logit_bias, logprobs, max_tokens, n, presence_penalty, response_format, seed, stop, stream, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
608 @required_args(["messages", "model"], ["messages", "model", "stream"])
609 def create(
610 self,
(...)
657 timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
658 ) -> ChatCompletion | Stream[ChatCompletionChunk]:
--> 659 return self._post(
660 "/chat/completions",
661 body=maybe_transform(
662 {
663 "messages": messages,
664 "model": model,
665 "frequency_penalty": frequency_penalty,
666 "function_call": function_call,
667 "functions": functions,
668 "logit_bias": logit_bias,
669 "logprobs": logprobs,
670 "max_tokens": max_tokens,
671 "n": n,
672 "presence_penalty": presence_penalty,
673 "response_format": response_format,
674 "seed": seed,
675 "stop": stop,
676 "stream": stream,
677 "temperature": temperature,
678 "tool_choice": tool_choice,
679 "tools": tools,
680 "top_logprobs": top_logprobs,
681 "top_p": top_p,
682 "user": user,
683 },
684 completion_create_params.CompletionCreateParams,
685 ),
686 options=make_request_options(
687 extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
688 ),
689 cast_to=ChatCompletion,
690 stream=stream or False,
691 stream_cls=Stream[ChatCompletionChunk],
692 )
File ~/Desktop/Code/pl-data-science/.venv/lib/python3.11/site-packages/openai/_base_client.py:1180, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
1166 def post(
1167 self,
1168 path: str,
(...)
1175 stream_cls: type[_StreamT] | None = None,
1176 ) -> ResponseT | _StreamT:
1177 opts = FinalRequestOptions.construct(
1178 method="post", url=path, json_data=body, files=to_httpx_files(files), **options
1179 )
-> 1180 return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File ~/Desktop/Code/pl-data-science/.venv/lib/python3.11/site-packages/openai/_base_client.py:869, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
860 def request(
861 self,
862 cast_to: Type[ResponseT],
(...)
867 stream_cls: type[_StreamT] | None = None,
868 ) -> ResponseT | _StreamT:
--> 869 return self._request(
870 cast_to=cast_to,
871 options=options,
872 stream=stream,
873 stream_cls=stream_cls,
874 remaining_retries=remaining_retries,
875 )
File ~/Desktop/Code/pl-data-science/.venv/lib/python3.11/site-packages/openai/_base_client.py:960, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
957 err.response.read()
959 log.debug("Re-raising status error")
--> 960 raise self._make_status_error_from_response(err.response) from None
962 return self._process_response(
963 cast_to=cast_to,
964 options=options,
(...)
967 stream_cls=stream_cls,
968 )
BadRequestError: Error code: 400 - {'error': {'message': "This model does not support the 'logprobs' parameter.", 'type': 'invalid_request_error', 'param': 'logprobs', 'code': None}}
```
### Description
I'm trying to use `AzureChatOpenAI` as an alternative to `ChatOpenAI` while setting the `logprobs` parameter. The model used is `gpt-4-turbo`.
This seems to be related to this issue: https://github.com/openai/openai-python/issues/1080. When adding `logprobs` to the `model_kwargs`, I get the following error:
```python
BadRequestError: Error code: 400 - {'error': {'message': "This model does not support the 'logprobs' parameter.", 'type': 'invalid_request_error', 'param': 'logprobs', 'code': None}}
```
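Since the 400 comes back from the Azure endpoint itself, one guess is that the pinned `openai_api_version` (here `2023-05-15`) predates `logprobs` support, so trying a newer API version may be the real fix. As a defensive stopgap, unsupported parameters could be filtered out of `model_kwargs` before the client is built. The helper below is a pure-Python sketch of that idea — the version-to-parameter mapping is an assumption for illustration only, not taken from Azure documentation:

```python
# Assumed mapping for illustration: which params a given Azure OpenAI API
# version rejects. Verify against your deployment before relying on it.
UNSUPPORTED = {
    "2023-05-15": {"logprobs", "top_logprobs"},
}

def filter_model_kwargs(model_kwargs, api_version):
    """Drop kwargs the target API version is believed to reject."""
    blocked = UNSUPPORTED.get(api_version, set())
    return {k: v for k, v in model_kwargs.items() if k not in blocked}

kwargs = filter_model_kwargs({"seed": 42, "logprobs": True}, "2023-05-15")
print(kwargs)  # {'seed': 42}
```

The filtered dict would then be passed as `model_kwargs=kwargs` when constructing `AzureChatOpenAI`.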
### System Info
```python
"pip freeze | grep langchain"
langchain==0.1.9
langchain-community==0.0.21
langchain-core==0.1.26
langchain-openai==0.0.5
```
platform: mac
python version: 3.11 | missing logprobs parameter support for AzureChatOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/18582/comments | 1 | 2024-03-05T15:38:56Z | 2024-06-27T16:07:44Z | https://github.com/langchain-ai/langchain/issues/18582 | 2,169,549,536 | 18,582 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
vectordb = Milvus.from_documents(
documents=texts,
embedding=instructor_embeddings,
collection_name=f"collection_{collection_name.replace('-', '_')}",
connection_args=self.get_milvus_connection_params(),
index_params=self.config.milvus_configuration.get("index_params"),
search_params=self.config.milvus_configuration.get("search_params"),
timeout=300
)
```
### Error Message and Stack Trace (if applicable)
```
vectordb = Milvus.from_documents(
^^^^^^^^^^^^^^^^^^^^^^
File ".../venv/lib/python3.11/site-packages/langchain_core/vectorstores.py", line 528, in from_documents
return cls.from_texts(texts, embedding, metadatas=metadatas, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".../venv/lib/python3.11/site-packages/langchain_community/vectorstores/milvus.py", line 987, in from_texts
vector_db.add_texts(texts=texts, metadatas=metadatas, ids=ids)
File ".../venv/lib/python3.11/site-packages/langchain_community/vectorstores/milvus.py", line 593, in add_texts
res = self.col.insert(insert_list, timeout=timeout, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: pymilvus.orm.collection.Collection.insert() got multiple values for keyword argument 'timeout'
```
### Description
When trying to add a `timeout` to Milvus class when using it `from_documents`, there is an overlap with the insert `timeout` argument here:
https://github.com/langchain-ai/langchain/blob/7248e98b9edba60a34e8b0018e7b5c1ee1bbdc76/libs/community/langchain_community/vectorstores/milvus.py#L593
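The collision can be reproduced without pymilvus at all. In the sketch below, `col_insert` is a stand-in for `Collection.insert`: the wrapper passes `timeout=` explicitly while also blindly forwarding a kwargs dict that still contains `'timeout'`, so Python sees the keyword twice. Popping the key out of kwargs before forwarding is one way the wrapper could avoid the clash (the `add_texts_*` names here are illustrative, not the library's code):

```python
def col_insert(data, timeout=None, **kwargs):
    """Stand-in for pymilvus Collection.insert."""
    return {"rows": len(data), "timeout": timeout}

def add_texts_buggy(texts, **kwargs):
    # kwargs may still carry 'timeout' -> TypeError: multiple values
    return col_insert(texts, timeout=None, **kwargs)

def add_texts_fixed(texts, **kwargs):
    timeout = kwargs.pop("timeout", None)  # dedupe before forwarding
    return col_insert(texts, timeout=timeout, **kwargs)

try:
    add_texts_buggy(["doc"], timeout=300)
except TypeError as e:
    print(e)  # ... got multiple values for keyword argument 'timeout'

print(add_texts_fixed(["doc"], timeout=300))
```

Until the library dedupes the argument itself, a practical workaround may be to omit `timeout` from the `Milvus.from_documents(...)` call.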
### System Info
langchain 0.1.9
langchain-community 0.0.24
langchain-core 0.1.27
langsmith 0.1.9
| Milvus got multiple values for keyword argument 'timeout' | https://api.github.com/repos/langchain-ai/langchain/issues/18580/comments | 0 | 2024-03-05T15:17:09Z | 2024-03-19T03:44:26Z | https://github.com/langchain-ai/langchain/issues/18580 | 2,169,499,293 | 18,580 |
[
"langchain-ai",
"langchain"
] | ### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Page: https://python.langchain.com/docs/integrations/text_embedding/text_embeddings_inference
The first URL (in **Hugging Face Text Embeddings Inference (TEI)**) should point to this page: https://huggingface.co/docs/text-embeddings-inference/index
### Idea or request for content:
_No response_ | DOC: URL pointing to HF TGI instead of TEI | https://api.github.com/repos/langchain-ai/langchain/issues/18576/comments | 0 | 2024-03-05T10:57:14Z | 2024-03-08T03:38:41Z | https://github.com/langchain-ai/langchain/issues/18576 | 2,168,902,097 | 18,576 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_core.tracers import ConsoleCallbackHandler

async for token in retrieval_chain().astream(
    {
        "question": question,
        "history": [],
    },
    config={"callbacks": [ConsoleCallbackHandler()]},
):  # type: ignore
    yield ChatSSEResponse(type="streaming", value=token).model_dump_json()
```
### Error Message and Stack Trace (if applicable)
<img width="1779" alt="image" src="https://github.com/langchain-ai/langchain/assets/79570011/4b231a27-5328-41b1-8c2a-92758eb85cc3">
### Description
With the given code, we use `ConsoleCallbackHandler` from `langchain_core.tracers` to print all the inputs, outputs, and API calls happening behind the scenes.
With this setup, however, the inputs are logged as empty strings. When we invoke the chain with the normal `invoke` method instead of streaming, the inputs are logged correctly.
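One plausible explanation — sketched here without langchain, and stated as an assumption about the mechanism rather than something confirmed from the library source — is that in streaming mode the chain input reaches the tracer as a lazy iterator that has not been consumed yet, so serializing it at chain-start time yields nothing, whereas `invoke` hands the tracer a concrete dict up front:

```python
def serialize_for_trace(value):
    """Naive tracer-style serialization of a chain input (illustrative only)."""
    if isinstance(value, dict):
        return dict(value)  # concrete input: fully loggable
    try:
        iter(value)
        return ""  # lazy stream: nothing to show yet
    except TypeError:
        return str(value)

concrete = {"question": "what is RAG?", "history": []}
lazy = (chunk for chunk in [concrete])  # generator, not yet consumed

print(serialize_for_trace(concrete))  # the full input dict, as invoke() logs
print(serialize_for_trace(lazy))      # empty, as observed with astream()
```

If that is indeed the cause, debugging with `invoke` (or inspecting the run via the streamed events/log APIs) may be the practical route for seeing inputs.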
### System Info
python version - 3.11
langchain_core version - 0.1.23
langchain version - 0.1.3 | Input is empty string in ConsoleCallBackHandler outputs | https://api.github.com/repos/langchain-ai/langchain/issues/18567/comments | 2 | 2024-03-05T09:09:30Z | 2024-07-18T16:23:20Z | https://github.com/langchain-ai/langchain/issues/18567 | 2,168,678,480 | 18,567 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain.agents import AgentExecutor, create_react_agent
from langchain.tools import tool
from langchain.llms.bedrock import Bedrock
import boto3
from langchain_core.prompts import PromptTemplate
from langchain import hub

react_prompt_template = """
Answer the following questions as best you can. You have access to the following tools:

{tools}

Use the following format:

Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question

Begin!

Question: {input}
Thought:{agent_scratchpad}
"""

# prompt = hub.pull("hwchase17/react")
prompt = PromptTemplate(
    input_variables=["input"],
    template=react_prompt_template
)


@tool
def say_hi(name: str) -> str:
    """Say hi to the world"""
    return f"hi {name}"


def specify_bedrock_titan_llm():
    bedrock_client = boto3.client(
        service_name="bedrock-runtime",
        region_name="us-east-1",
    )
    bedrock_llm = Bedrock(
        model_id="amazon.titan-text-express-v1",
        client=bedrock_client,
        model_kwargs={'temperature': 0}
    )
    return bedrock_llm


if __name__ == '__main__':
    llm = specify_bedrock_titan_llm()
    agent = create_react_agent(llm, [say_hi], prompt)
    agent_executor = AgentExecutor(agent=agent, tools=[say_hi], verbose=True, handle_parsing_errors=True)
    result = agent_executor.invoke({"input": "call say_hi function and return the result"})
    print(result)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_community/llms/bedrock.py", line 543, in _prepare_input_and_invoke_stream
response = self.client.invoke_model_with_response_stream(**request_options)
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/botocore/client.py", line 553, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/botocore/client.py", line 1009, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the InvokeModelWithResponseStream operation: Malformed input request: string [
Observation] does not match pattern ^(\|+|User:)$, please reformat your input and try again.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/aqiao/Learning/bedrock/langchain-agent/demo2.py", line 58, in <module>
result = agent_executor.invoke({"input": "call say_hi function and return the result"})
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 163, in invoke
raise e
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1391, in _call
next_step_output = self._take_next_step(
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1097, in _take_next_step
[
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1097, in <listcomp>
[
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 1125, in _iter_next_step
output = self.agent.plan(
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain/agents/agent.py", line 387, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2446, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2433, in transform
yield from self._transform_stream_with_config(
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1513, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2397, in _transform
for output in final_pipeline:
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1051, in transform
for chunk in input:
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4173, in transform
yield from self.bound.transform(
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1061, in transform
yield from self.stream(final, config, **kwargs)
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 452, in stream
raise e
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 436, in stream
for chunk in self._stream(
File "/Users/aqiao/PycharmProjects/pythonProject/venv/lib/python3.10/site-packages/langchain_community/llms/bedrock.py", line 546, in _prepare_input_and_invoke_stream
raise ValueError(f"Error raised by bedrock service: {e}")
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModelWithResponseStream operation: Malformed input request: string [
Observation] does not match pattern ^(\|+|User:)$, please reformat your input and try again.
### Description
I'm using LangChain (0.1.10) to interact with AWS Titan Text G1, following the LangChain official demo.
Here is the `PromptTemplate`:
```
react_prompt_template="""
Answer the following questions as best you can. You have access to the following tools:
{tools}
Use the following format:
Question: the input question you must answer
Thought: you should always think about what to do
Action: the action to take, should be one of [{tool_names}]
Action Input: the input to the action
Observation: the result of the action
... (this Thought/Action/Action Input/Observation can repeat N times)
Thought: I now know the final answer
Final Answer: the final answer to the original input question
Begin!
Question: {input}
Thought:{agent_scratchpad}
"""
```
Here is the definition of the `say_hi` tool:
```
@tool
def say_hi(name: str) -> str:
    """Say hi to the world"""
    return f"hi {name}"
```
When running the code, it raises the exception below:
```
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModelWithResponseStream operation: Malformed input request: string [
Observation] does not match pattern ^(\|+|User:)$, please reformat your input and try again.
```
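For what it's worth, the rejection can be reproduced offline. As far as I can tell, `create_react_agent` binds a `\nObservation` stop sequence to the LLM, and Titan validates stop sequences against the pattern quoted in the error. A quick sketch (the regex below is copied verbatim from the `ValidationException`; the bound stop sequence is my assumption from reading the agent source):

```python
import re

# Pattern taken from the Bedrock ValidationException: Titan only accepts
# stop sequences that are runs of '|' or the literal 'User:'.
TITAN_STOP_PATTERN = re.compile(r"^(\|+|User:)$")

react_stop = "\nObservation"  # what the ReAct agent binds (assumption)

assert TITAN_STOP_PATTERN.match("User:") is not None
assert TITAN_STOP_PATTERN.match("|||") is not None
assert TITAN_STOP_PATTERN.match(react_stop) is None  # hence the 400 error
```

So it appears to be the agent's stop sequence itself that Titan rejects, not the prompt text.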
### System Info
langchain 0.1.10
aws Titan Text G1
langchain agent | Malformed input request: string [ Observation] does not match pattern ^(\|+|User:)$ | https://api.github.com/repos/langchain-ai/langchain/issues/18565/comments | 5 | 2024-03-05T08:23:52Z | 2024-05-16T11:35:32Z | https://github.com/langchain-ai/langchain/issues/18565 | 2,168,596,027 | 18,565 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from langchain_community.document_loaders import TextLoader

loader = TextLoader("state_of_the_union.txt")
loader.load()
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[17], line 2
1 import os
----> 2 from langchain_community.document_loaders import TextLoader
3 loader = TextLoader("state_of_the_union.txt")
4 loader.load()
File ~\AppData\Roaming\Python\Python310\site-packages\langchain_community\document_loaders\__init__.py:190
188 from langchain_community.document_loaders.snowflake_loader import SnowflakeLoader
189 from langchain_community.document_loaders.spreedly import SpreedlyLoader
--> 190 from langchain_community.document_loaders.sql_database import SQLDatabaseLoader
191 from langchain_community.document_loaders.srt import SRTLoader
192 from langchain_community.document_loaders.stripe import StripeLoader
File ~\AppData\Roaming\Python\Python310\site-packages\langchain_community\document_loaders\sql_database.py:10
6 from langchain_community.document_loaders.base import BaseLoader
7 from langchain_community.utilities.sql_database import SQLDatabase
---> 10 class SQLDatabaseLoader(BaseLoader):
11 """
12 Load documents by querying database tables supported by SQLAlchemy.
13
(...)
17 Each document represents one row of the result.
18 """
20 def __init__(
21 self,
22 query: Union[str, sa.Select],
(...)
30 include_query_into_metadata: bool = False,
31 ):
File ~\AppData\Roaming\Python\Python310\site-packages\langchain_community\document_loaders\sql_database.py:22, in SQLDatabaseLoader()
10 class SQLDatabaseLoader(BaseLoader):
11 """
12 Load documents by querying database tables supported by SQLAlchemy.
13
(...)
17 Each document represents one row of the result.
18 """
20 def __init__(
21 self,
---> 22 query: Union[str, sa.Select],
23 db: SQLDatabase,
24 *,
25 parameters: Optional[Dict[str, Any]] = None,
26 page_content_mapper: Optional[Callable[..., str]] = None,
27 metadata_mapper: Optional[Callable[..., Dict[str, Any]]] = None,
28 source_columns: Optional[Sequence[str]] = None,
29 include_rownum_into_metadata: bool = False,
30 include_query_into_metadata: bool = False,
31 ):
32 """
33 Args:
34 query: The query to execute.
(...)
49 expression into the metadata dictionary. Default: False.
50 """
51 self.query = query
---------------------------------------------------------------------------
AttributeError: module 'sqlalchemy' has no attribute 'Select'
### Description
I am new to LangChain and I am stuck on an issue. My end goal is to read the contents of a file and create a vectorstore of my data which I can query later. I'm encountering an error while trying to import `TextLoader` from the `langchain_community` package (the failure originates in `SQLDatabaseLoader`, which the package imports on its own).
It seems to be referencing an attribute `Select` from the `sqlalchemy` module, but it's unable to find it. I'm not sure why this error is occurring, especially since I haven't directly used `sqlalchemy.Select` in my code. I've tried using versions 1.3, 1.4, and 2.0 of SQLAlchemy, but I still encounter the same error.
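My best guess at the cause (an assumption, not verified against the SQLAlchemy changelog): the top-level `sqlalchemy.Select` name is only exported in SQLAlchemy 2.x, so `langchain_community.document_loaders.sql_database` fails at import time on 1.3/1.4 — and my 2.0 attempt may have been hitting a stale install. A tiny sketch of the version gate I'd expect:

```python
def has_toplevel_select(version: str) -> bool:
    # Assumption: sqlalchemy.Select exists at top level from 2.0 onwards.
    major = int(version.split(".")[0])
    return major >= 2


assert has_toplevel_select("2.0.27")
assert not has_toplevel_select("1.4.51")
assert not has_toplevel_select("1.3.24")
```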
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.11.1 (tags/v3.11.1:a7a450f, Dec 6 2022, 19:58:39) [MSC v.1934 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.29
> langchain: 0.1.10
> langchain_community: 0.0.25
> langsmith: 0.1.18
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.15
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| AttributeError: module 'sqlalchemy' has no attribute 'Select' when using SQLDatabaseLoader in langchain_community | https://api.github.com/repos/langchain-ai/langchain/issues/18552/comments | 6 | 2024-03-05T03:07:49Z | 2024-03-15T23:00:20Z | https://github.com/langchain-ai/langchain/issues/18552 | 2,168,197,280 | 18,552 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.chains import OpenAIModerationChain
moderation_chain = OpenAIModerationChain()
await moderation_chain.ainvoke("This is okay")
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
* We can't use the `OpenAIModerationChain` within our async chain.
* This PR also fixes https://github.com/langchain-ai/langchain/issues/13685
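Until a native async implementation lands, one possible interim workaround (sketched with a stand-in function — swap in `moderation_chain.invoke` in real code) is to push the blocking call onto a worker thread:

```python
import asyncio


def moderate_sync(text: str) -> str:
    # Stand-in for the blocking moderation_chain.invoke call (hypothetical).
    return text


async def amoderate(text: str) -> str:
    # Run the sync chain in a thread so it doesn't block the event loop.
    return await asyncio.to_thread(moderate_sync, text)


assert asyncio.run(amoderate("This is okay")) == "This is okay"
```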
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.2.0: Wed Nov 15 21:54:05 PST 2023; root:xnu-10002.61.3~2/RELEASE_ARM64_T6031
> Python Version: 3.11.7 (main, Dec 4 2023, 18:10:11) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.9
> langchain_community: 0.0.24
> langsmith: 0.1.5
> langchain_experimental: 0.0.53
> langchain_openai: 0.0.8
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | `OpenAIModerationChain` doesn't have an asynchronous implementation | https://api.github.com/repos/langchain-ai/langchain/issues/18533/comments | 0 | 2024-03-04T22:16:33Z | 2024-06-10T16:08:13Z | https://github.com/langchain-ai/langchain/issues/18533 | 2,167,884,379 | 18,533 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# Refactored document processing and text extraction
import io
from typing import List

from langchain_community.document_loaders import AmazonTextractPDFLoader


class DocumentProcessor:
    def __init__(self):
        pass

    def convert_to_text(self, s3_url: str) -> List[str]:
        """Downloads the PDF from S3, processes it, and returns the text as a list of strings."""
        print(f"Processing S3 URL: {s3_url}")
        try:
            loader = AmazonTextractPDFLoader(s3_url)
            pages = loader.load_and_split()
            text = [page.page_content for page in pages if page.page_content is not None]
            return text
        except Exception as e:
            print(f"Error processing {s3_url}: {e}")
            return []
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[16], line 10
5 get_ipython().system('pip install SQLAlchemy==1.3')
6 # from sqlalchemy import select
7 # import sqlalchemy as sa
8 # sa.Select
---> 10 from langchain_community.document_loaders import AmazonTextractPDFLoader
12 class DocumentProcessor:
13 def __init__(self):
File /opt/conda/lib/python3.10/site-packages/langchain_community/document_loaders/__init__.py:190
188 from langchain_community.document_loaders.snowflake_loader import SnowflakeLoader
189 from langchain_community.document_loaders.spreedly import SpreedlyLoader
--> 190 from langchain_community.document_loaders.sql_database import SQLDatabaseLoader
191 from langchain_community.document_loaders.srt import SRTLoader
192 from langchain_community.document_loaders.stripe import StripeLoader
File /opt/conda/lib/python3.10/site-packages/langchain_community/document_loaders/sql_database.py:10
6 from langchain_community.document_loaders.base import BaseLoader
7 from langchain_community.utilities.sql_database import SQLDatabase
---> 10 class SQLDatabaseLoader(BaseLoader):
11 """
12 Load documents by querying database tables supported by SQLAlchemy.
13
(...)
17 Each document represents one row of the result.
18 """
20 def __init__(
21 self,
22 query: Union[str, sa.Select],
(...)
30 include_query_into_metadata: bool = False,
31 ):
File /opt/conda/lib/python3.10/site-packages/langchain_community/document_loaders/sql_database.py:22, in SQLDatabaseLoader()
10 class SQLDatabaseLoader(BaseLoader):
11 """
12 Load documents by querying database tables supported by SQLAlchemy.
13
(...)
17 Each document represents one row of the result.
18 """
20 def __init__(
21 self,
---> 22 query: Union[str, sa.Select],
23 db: SQLDatabase,
24 *,
25 parameters: Optional[Dict[str, Any]] = None,
26 page_content_mapper: Optional[Callable[..., str]] = None,
27 metadata_mapper: Optional[Callable[..., Dict[str, Any]]] = None,
28 source_columns: Optional[Sequence[str]] = None,
29 include_rownum_into_metadata: bool = False,
30 include_query_into_metadata: bool = False,
31 ):
32 """
33 Args:
34 query: The query to execute.
(...)
49 expression into the metadata dictionary. Default: False.
50 """
51 self.query = query
AttributeError: module 'sqlalchemy' has no attribute 'Select'
### Description
I'm trying to use the latest version of `langchain_community` so that I can use Amazon Textract to extract text from PDFs. It was working last week, but now I am seeing an error from SQLAlchemy in the `/opt/conda/lib/python3.10/site-packages/langchain_community/document_loaders/sql_database.py` file. I tried downgrading SQLAlchemy to 1.3 and saw the same error.
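To rule out a broken environment, a small guard (my own helper, just a sketch) can confirm whether the installed SQLAlchemy actually exposes the attribute before the loaders package is imported — my assumption being that `sqlalchemy.Select` only exists at top level in 2.x:

```python
import importlib


def sqlalchemy_ok() -> bool:
    """Return True if the installed SQLAlchemy exposes the top-level
    Select name that langchain_community's sql_database module needs."""
    try:
        sa = importlib.import_module("sqlalchemy")
    except ImportError:
        return False
    return hasattr(sa, "Select")


# True on SQLAlchemy 2.x; False on 1.3/1.4 or when it isn't installed.
print(sqlalchemy_ok())
```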
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Fri Jan 12 09:58:17 UTC 2024
> Python Version: 3.10.13 | packaged by conda-forge | (main, Dec 23 2023, 15:36:39) [GCC 12.3.0]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.10
> langchain_community: 0.0.25
> langsmith: 0.1.16
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | AttributeError: module 'sqlalchemy' has no attribute 'Select' when importing AmazonTextractPDFLoader | https://api.github.com/repos/langchain-ai/langchain/issues/18528/comments | 8 | 2024-03-04T21:38:14Z | 2024-05-22T10:57:13Z | https://github.com/langchain-ai/langchain/issues/18528 | 2,167,816,337 | 18,528 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.chat_models import BedrockChat
from langchain_core.messages import HumanMessage, SystemMessage
chat = BedrockChat(model_id="anthropic.claude-3-sonnet-20240229-v1:0", model_kwargs={"temperature": 0.1}, verbose=True)
messages = [
    SystemMessage(content="You are a helpful assistant that translates English to French."),
    HumanMessage(content="I love programming.")
]
chat.invoke(messages)
```
### Error Message and Stack Trace (if applicable)
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: "claude-3-sonnet-20240229" is not supported on this API. Please use the Messages API instead.
### Description
Currently, the body prepared for model invocation uses the Text Completions API instead of the Messages API, even when you create an instance of `BedrockChat`. This can be seen in the source code:
```python
input_body = LLMInputOutputAdapter.prepare_input(provider, prompt, params)
```
```python
def prepare_input(
    cls, provider: str, prompt: str, model_kwargs: Dict[str, Any]
) -> Dict[str, Any]:
    input_body = {**model_kwargs}
    if provider == "anthropic":
        input_body["prompt"] = _human_assistant_format(prompt)  # here the Completions API is used instead of the Messages API
    elif provider in ("ai21", "cohere", "meta"):
        input_body["prompt"] = prompt
    elif provider == "amazon":
        input_body = dict()
        input_body["inputText"] = prompt
        input_body["textGenerationConfig"] = {**model_kwargs}
    else:
        input_body["inputText"] = prompt

    if provider == "anthropic" and "max_tokens_to_sample" not in input_body:
        input_body["max_tokens_to_sample"] = 256

    return input_body
```
Unwinding the call stack, ultimately this function is called, which simply combines all the chat messages into a single string:
```python
def convert_messages_to_prompt_anthropic(
    messages: List[BaseMessage],
    *,
    human_prompt: str = "\n\nHuman:",
    ai_prompt: str = "\n\nAssistant:",
) -> str:
    """Format a list of messages into a full prompt for the Anthropic model

    Args:
        messages (List[BaseMessage]): List of BaseMessage to combine.
        human_prompt (str, optional): Human prompt tag. Defaults to "\n\nHuman:".
        ai_prompt (str, optional): AI prompt tag. Defaults to "\n\nAssistant:".

    Returns:
        str: Combined string with necessary human_prompt and ai_prompt tags.
    """
    messages = messages.copy()  # don't mutate the original list
    if not isinstance(messages[-1], AIMessage):
        messages.append(AIMessage(content=""))
    text = "".join(
        _convert_one_message_to_text(message, human_prompt, ai_prompt)
        for message in messages
    )
    # trim off the trailing ' ' that might come from the "Assistant: "
    return text.rstrip()
```
The new Claude v3 family of models will only support Messages API, therefore none of them will work with the current version of langchain.
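For comparison, here is a sketch of the request body shape the Messages API expects (field names and the `anthropic_version` value are taken from the AWS Bedrock docs as I understand them — treat them as assumptions to verify). Note that system messages move to a top-level `system` field instead of being flattened into the prompt string:

```python
import json


def to_messages_body(chat_messages, max_tokens=1024, **model_kwargs):
    """Convert (role, content) pairs into an Anthropic Messages API body (sketch)."""
    system_parts, messages = [], []
    for role, content in chat_messages:
        if role == "system":
            system_parts.append(content)  # system prompt is a top-level field
        else:
            messages.append({"role": role, "content": content})
    body = {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": messages,
        **model_kwargs,
    }
    if system_parts:
        body["system"] = "\n".join(system_parts)
    return json.dumps(body)


body = json.loads(to_messages_body(
    [("system", "You are a helpful assistant that translates English to French."),
     ("user", "I love programming.")],
    temperature=0.1,
))
assert body["system"] == "You are a helpful assistant that translates English to French."
assert body["messages"] == [{"role": "user", "content": "I love programming."}]
```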
### System Info
langchain==0.1.10
langchain-community==0.0.25
langchain-core==0.1.28
langchain-text-splitters==0.0.1
platform: AL2
python 3.12.0 | BedrockChat is not using Messages API for Anthropic v3 models | https://api.github.com/repos/langchain-ai/langchain/issues/18514/comments | 10 | 2024-03-04T18:38:08Z | 2024-03-15T23:23:51Z | https://github.com/langchain-ai/langchain/issues/18514 | 2,167,488,404 | 18,514 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import boto3
from langchain_community.llms import Bedrock
bedrock = boto3.client('bedrock-runtime' , 'us-east-1')
MODEL_KWARGS = {
    "anthropic.claude-3-sonnet-20240229-v1:0": {
        "temperature": 0,
        "top_k": 250,
        "top_p": 1,
        "max_tokens_to_sample": 2**10,
    }
}
model_id = 'anthropic.claude-3-sonnet-20240229-v1:0'
llm = Bedrock(model_id=model_id, model_kwargs=MODEL_KWARGS[model_id])
llm('tell me a joke')
```
### Error Message and Stack Trace (if applicable)
```
.venv/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The function `__call__` was deprecated in LangChain 0.1.7 and will be removed in 0.2.0. Use invoke instead.
warn_deprecated(
Traceback (most recent call last):
File ".venv/lib/python3.11/site-packages/langchain_community/llms/bedrock.py", line 444, in _prepare_input_and_invoke
response = self.client.invoke_model(**request_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/botocore/client.py", line 553, in _api_call
return self._make_api_call(operation_name, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/botocore/client.py", line 1009, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the InvokeModel operation: "claude-3-sonnet-20240229" is not supported on this API. Please use the Messages API instead.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File ".venv/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 991, in __call__
self.generate(
File ".venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 741, in generate
output = self._generate_helper(
^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 605, in _generate_helper
raise e
File ".venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 592, in _generate_helper
self._generate(
File ".venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 1177, in _generate
self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
File ".venv/lib/python3.11/site-packages/langchain_community/llms/bedrock.py", line 718, in _call
return self._prepare_input_and_invoke(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File ".venv/lib/python3.11/site-packages/langchain_community/llms/bedrock.py", line 451, in _prepare_input_and_invoke
raise ValueError(f"Error raised by bedrock service: {e}")
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: "claude-3-sonnet-20240229" is not supported on this API. Please use the Messages API instead.
```
### Description
Obviously, Claude 3 is brand new, but initial testing with the existing capabilities indicates a change in how these models need to be invoked.
I'd expect these new models to work with existing LangChain capabilities as drop-in improvements.
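Beyond the endpoint change, existing `model_kwargs` will also need renaming if/when Messages API support lands — in particular `max_tokens_to_sample` becomes `max_tokens` in the Messages API. A sketch of that mapping (assumption: the other kwargs above carry over unchanged):

```python
def translate_kwargs(model_kwargs: dict) -> dict:
    """Map legacy Text Completions kwargs to Messages API names (sketch)."""
    out = dict(model_kwargs)
    if "max_tokens_to_sample" in out:
        out["max_tokens"] = out.pop("max_tokens_to_sample")
    return out


legacy = {"temperature": 0, "top_k": 250, "top_p": 1, "max_tokens_to_sample": 2**10}
assert translate_kwargs(legacy) == {
    "temperature": 0, "top_k": 250, "top_p": 1, "max_tokens": 1024}
```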
### System Info
System Information
------------------
> OS: Linux
> OS Version: #50~20.04.1-Ubuntu SMP Wed Sep 6 17:29:11 UTC 2023
> Python Version: 3.11.4 (main, Aug 9 2023, 21:54:01) [GCC 10.2.1 20210110]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.10
> langchain_community: 0.0.25
> langsmith: 0.1.14
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.14
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Bedrock integration with Claude3 fails with ValidationException "claude-3-sonnet-20240229" is not supported on this API. Please use the Messages API instead. | https://api.github.com/repos/langchain-ai/langchain/issues/18513/comments | 8 | 2024-03-04T18:27:57Z | 2024-03-06T23:46:19Z | https://github.com/langchain-ai/langchain/issues/18513 | 2,167,471,081 | 18,513 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from dataclasses import dataclass
from atexit import register as atexit_register

from sqlalchemy.event import listen
from sqlalchemy.engine import Engine
from sqlalchemy import create_engine
from sqlalchemy.sql.elements import quoted_name

from langchain.sql_database import SQLDatabase


@dataclass
class dao:
    engine: Engine
    DB_SCHEMA: str


def _set_search_path(dbapi_connection, connection_record, **kw):
    existing_autocommit = dbapi_connection.autocommit
    dbapi_connection.autocommit = True
    cursor = dbapi_connection.cursor()
    try:
        # https://www.postgresql.org/docs/current/ddl-schemas.html#DDL-SCHEMAS-PATH
        cursor.execute(f"SET search_path TO {dao.DB_SCHEMA},public;")
        # cursor.execute("SET ROLE TO apps_engineer;")
    finally:
        cursor.close()
    dbapi_connection.autocommit = existing_autocommit


def daoInitConfig():
    print("daoInitConfig()")
    DB_USER = ...
    DB_PASSWORD = ...
    DB_HOST = ...
    DB_PORT = ...
    DB_NAME = ...
    db_schema = ...

    url = f'postgresql+pg8000://{DB_USER}:{DB_PASSWORD}@{DB_HOST}:{DB_PORT}/{DB_NAME}'

    import pg8000 as _pg800
    _pg800.paramstyle = 'named'
    import pg8000.legacy as _legacy
    _legacy.paramstyle = 'named'
    import pg8000.dbapi as _dbapi
    _dbapi.paramstyle = 'named'
    from sqlalchemy.dialects.postgresql.pg8000 import PGDialect_pg8000
    PGDialect_pg8000.default_paramstyle = 'named'

    engine = create_engine(url)
    atexit_register(engine.dispose)

    # Use the listen event to call _set_search_path when a connection is created
    # listen(engine, 'connect', _set_search_path)

    dao.engine = engine
    dao.DB_SCHEMA = quoted_name(db_schema, None)


def main():
    daoInitConfig()
    db = SQLDatabase(
        engine=dao.engine,
        schema=dao.DB_SCHEMA,
        sample_rows_in_table_info=0
    )
    result = db.run("select 1")
    assert "1" == result


if __name__ == '__main__':
    main()
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
  File "/var/lang/lib/python3.10/site-packages/pg8000/legacy.py", line 254, in execute
    self._context = self._c.execute_unnamed(
  File "/var/lang/lib/python3.10/site-packages/pg8000/core.py", line 688, in execute_unnamed
    self.handle_messages(context)
  File "/var/lang/lib/python3.10/site-packages/pg8000/core.py", line 827, in handle_messages
    raise context.error
pg8000.exceptions.DatabaseError: {'S': 'ERROR', 'V': 'ERROR', 'C': '42601', 'M': 'syntax error at or near "%"', 'P': '20', 'F': 'scan.l', 'L': '1236', 'R': 'scanner_yyerror'}

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/var/lang/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context
    self.dialect.do_execute(
  File "/var/lang/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute
    cursor.execute(statement, parameters)
  File "/var/lang/lib/python3.10/site-packages/pg8000/legacy.py", line 281, in execute
    raise cls(msg)
pg8000.dbapi.ProgrammingError: {'S': 'ERROR', 'V': 'ERROR', 'C': '42601', 'M': 'syntax error at or near "%"', 'P': '20', 'F': 'scan.l', 'L': '1236', 'R': 'scanner_yyerror'}

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/var/lang/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/var/lang/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 3234, in begin
    yield conn
  File "/var/lang/lib/python3.10/site-packages/langchain_community/utilities/sql_database.py", line 438, in _execute
    connection.exec_driver_sql(
  File "/var/lang/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1778, in exec_driver_sql
    ret = self._execute_context(
  File "/var/lang/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1848, in _execute_context
    return self._exec_single_context(
  File "/var/lang/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1988, in _exec_single_context
    self._handle_dbapi_exception(
  File "/var/lang/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 2343, in _handle_dbapi_exception
    raise sqlalchemy_exception.with_traceback(exc_info[2]) from e
  File "/var/lang/lib/python3.10/site-packages/sqlalchemy/engine/base.py", line 1969, in _exec_single_context
    self.dialect.do_execute(
  File "/var/lang/lib/python3.10/site-packages/sqlalchemy/engine/default.py", line 922, in do_execute
    cursor.execute(statement, parameters)
  File "/var/lang/lib/python3.10/site-packages/pg8000/legacy.py", line 281, in execute
    raise cls(msg)
sqlalchemy.exc.ProgrammingError: (pg8000.dbapi.ProgrammingError) {'S': 'ERROR', 'V': 'ERROR', 'C': '42601', 'M': 'syntax error at or near "%"', 'P': '20', 'F': 'scan.l', 'L': '1236', 'R': 'scanner_yyerror'}
[SQL: SET search_path TO %s]
[parameters: ('rag_ask_george_qa',)]
(Background on this error at: https://sqlalche.me/e/20/f405)

Process finished with exit code 1
```
### Description
I'm using SQLAlchemy's engine as the main facade to connect to my Postgres DB, with the pg8000 driver configured for the **named** paramstyle.
So, when `db.run()` is called, it calls `db._execute()`.
The first thing `_execute()` does is **break** the SQLAlchemy engine facade and work at the driver level, and the code as written assumes the engine uses the **format** paramstyle.
So, **at the very least, the code should be fixed to take different paramstyles into account**. I'm referring to `"SET search_path TO %s"`.
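For background on why the hard-coded `%s` breaks here: DB-API drivers declare their placeholder style via a module-level `paramstyle` attribute, and a statement built for the `format` style is sent verbatim (and is a syntax error) under `named`. A minimal illustration of branching on the paramstyle; the `render_set_search_path` helper is hypothetical, not LangChain code:

```python
def render_set_search_path(paramstyle: str, schema: str):
    """Build (sql, params) for SET search_path under a given DB-API paramstyle."""
    if paramstyle == "format":   # e.g. psycopg2's default: %s placeholders
        return "SET search_path TO %s", (schema,)
    if paramstyle == "named":    # e.g. pg8000 configured for :name placeholders
        return "SET search_path TO :schema", {"schema": schema}
    raise ValueError(f"unsupported paramstyle: {paramstyle}")
```

A driver-agnostic `_execute()` would have to do something like this instead of hard-coding the `format` variant.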
Personally, I think breaking the engine interface is inherently wrong; SQLDatabase shouldn't manipulate connections at such a granular level.
What **I would prefer to see is a flag that disables this driver-level call entirely. Then, in application code, I can install a listener that sets search_path on every "connect" event.**
In the code above, I just need to change
```python
# Use the listen event to call _set_search_path when a connection is created
#listen(engine, 'connect', _set_search_path)
```
to
```python
# Use the listen event to call _set_search_path when a connection is created
listen(engine, 'connect', _set_search_path)
```
### System Info
bash-4.2# python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
> Python Version: 3.10.13 (main, Dec 4 2023, 13:30:46) [GCC 7.3.1 20180712 (Red Hat 7.3.1-17)]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.10
> langchain_community: 0.0.25
> langsmith: 0.1.10
> langchain_experimental: 0.0.50
> langchain_openai: 0.0.2.post1
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | SQLDatabase doesn't work with named paramstyle | https://api.github.com/repos/langchain-ai/langchain/issues/18512/comments | 1 | 2024-03-04T17:24:00Z | 2024-06-12T16:08:36Z | https://github.com/langchain-ai/langchain/issues/18512 | 2,167,359,938 | 18,512 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Hi Team,
I am trying to use AzureChatOpenAI as the LLM inside a ReAct agent. Here is my code:
```python
import os

from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain.tools import StructuredTool
from langchain_openai import AzureChatOpenAI


def search_function(query: str) -> str:
    """Use to search results corresponding to the given query."""
    return "Hello World"


search = StructuredTool.from_function(
    func=search_function,
    name="Search",
    description="useful for when you need to answer questions about current events",
    # coroutine= ... <- you can specify an async method if desired as well
)
tools = [search]

prompt = hub.pull("hwchase17/react")

appkey = "APP_KEY"  # Please assume I am using the correct key here
llm = AzureChatOpenAI(
    openai_api_version=os.getenv('OPENAI_API_VERSION'),
    azure_deployment="gpt-35-turbo",
    model_kwargs=dict(
        user=f'{{"appkey": "{appkey}"}}'
    ),
)

agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
agent_executor.invoke({"input": "hi"})
```
Below is the AssertionError that I am getting.
### Error Message and Stack Trace (if applicable)
```
> Entering new AgentExecutor chain...
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
Cell In[17], line 1
----> 1 agent_executor.invoke({"input": "hi"})

File ~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain/chains/base.py:162, in Chain.invoke(self, input, config, **kwargs)
    160 except BaseException as e:
    161     run_manager.on_chain_error(e)
--> 162     raise e
    163 run_manager.on_chain_end(outputs)
    164 final_outputs: Dict[str, Any] = self.prep_outputs(
    165     inputs, outputs, return_only_outputs
    166 )

File ~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain/chains/base.py:156, in Chain.invoke(self, input, config, **kwargs)
    149 run_manager = callback_manager.on_chain_start(
    150     dumpd(self),
    151     inputs,
    152     name=run_name,
    153 )
    154 try:
    155     outputs = (
--> 156         self._call(inputs, run_manager=run_manager)
    157         if new_arg_supported
    158         else self._call(inputs)
    159     )
    160 except BaseException as e:
    161     run_manager.on_chain_error(e)

File ~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain/agents/agent.py:1391, in AgentExecutor._call(self, inputs, run_manager)
   1389 # We now enter the agent loop (until it returns something).
   1390 while self._should_continue(iterations, time_elapsed):
-> 1391     next_step_output = self._take_next_step(
   1392         name_to_tool_map,
   1393         color_mapping,
   1394         inputs,
   1395         intermediate_steps,
   1396         run_manager=run_manager,
   1397     )
   1398     if isinstance(next_step_output, AgentFinish):
   1399         return self._return(
   1400             next_step_output, intermediate_steps, run_manager=run_manager
   1401         )

File ~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain/agents/agent.py:1097, in AgentExecutor._take_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
   1088 def _take_next_step(
   1089     self,
   1090     name_to_tool_map: Dict[str, BaseTool],
   (...)
   1094     run_manager: Optional[CallbackManagerForChainRun] = None,
   1095 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
   1096     return self._consume_next_step(
-> 1097         [
   1098             a
   1099             for a in self._iter_next_step(
   1100                 name_to_tool_map,
   1101                 color_mapping,
   1102                 inputs,
   1103                 intermediate_steps,
   1104                 run_manager,
   1105             )
   1106         ]
   1107     )

File ~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain/agents/agent.py:1097, in <listcomp>(.0)
   1088 def _take_next_step(
   1089     self,
   1090     name_to_tool_map: Dict[str, BaseTool],
   (...)
   1094     run_manager: Optional[CallbackManagerForChainRun] = None,
   1095 ) -> Union[AgentFinish, List[Tuple[AgentAction, str]]]:
   1096     return self._consume_next_step(
-> 1097         [
   1098             a
   1099             for a in self._iter_next_step(
   1100                 name_to_tool_map,
   1101                 color_mapping,
   1102                 inputs,
   1103                 intermediate_steps,
   1104                 run_manager,
   1105             )
   1106         ]
   1107     )

File ~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain/agents/agent.py:1125, in AgentExecutor._iter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
   1122 intermediate_steps = self._prepare_intermediate_steps(intermediate_steps)
   1124 # Call the LLM to see what to do.
-> 1125 output = self.agent.plan(
   1126     intermediate_steps,
   1127     callbacks=run_manager.get_child() if run_manager else None,
   1128     **inputs,
   1129 )
   1130 except OutputParserException as e:
   1131     if isinstance(self.handle_parsing_errors, bool):

File ~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain/agents/agent.py:387, in RunnableAgent.plan(self, intermediate_steps, callbacks, **kwargs)
    381 # Use streaming to make sure that the underlying LLM is invoked in a streaming
    382 # fashion to make it possible to get access to the individual LLM tokens
    383 # when using stream_log with the Agent Executor.
    384 # Because the response from the plan is not a generator, we need to
    385 # accumulate the output into final output and return that.
    386 final_output: Any = None
--> 387 for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
    388     if final_output is None:
    389         final_output = chunk

File ~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2424, in RunnableSequence.stream(self, input, config, **kwargs)
   2418 def stream(
   2419     self,
   2420     input: Input,
   2421     config: Optional[RunnableConfig] = None,
   2422     **kwargs: Optional[Any],
   2423 ) -> Iterator[Output]:
-> 2424     yield from self.transform(iter([input]), config, **kwargs)

File ~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2411, in RunnableSequence.transform(self, input, config, **kwargs)
   2405 def transform(
   2406     self,
   2407     input: Iterator[Input],
   2408     config: Optional[RunnableConfig] = None,
   2409     **kwargs: Optional[Any],
   2410 ) -> Iterator[Output]:
-> 2411     yield from self._transform_stream_with_config(
   2412         input,
   2413         self._transform,
   2414         patch_config(config, run_name=(config or {}).get("run_name") or self.name),
   2415         **kwargs,
   2416     )

File ~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1497, in Runnable._transform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
   1495 try:
   1496     while True:
-> 1497         chunk: Output = context.run(next, iterator)  # type: ignore
   1498         yield chunk
   1499         if final_output_supported:

File ~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2375, in RunnableSequence._transform(self, input, run_manager, config)
   2366 for step in steps:
   2367     final_pipeline = step.transform(
   2368         final_pipeline,
   2369         patch_config(
   (...)
   2372     ),
   2373 )
-> [2375](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2375) for output in final_pipeline:
[2376](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:2376) yield output
File [~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1035](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1035), in Runnable.transform(self, input, config, **kwargs)
[1032](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1032) final: Input
[1033](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1033) got_first_val = False
-> [1035](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1035) for chunk in input:
[1036](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1036) if not got_first_val:
[1037](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1037) final = chunk
File [~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:4168](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:4168), in RunnableBindingBase.transform(self, input, config, **kwargs)
[4162](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:4162) def transform(
[4163](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:4163) self,
[4164](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:4164) input: Iterator[Input],
[4165](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:4165) config: Optional[RunnableConfig] = None,
[4166](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:4166) **kwargs: Any,
[4167](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:4167) ) -> Iterator[Output]:
-> [4168](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:4168) yield from self.bound.transform(
[4169](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:4169) input,
[4170](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:4170) self._merge_configs(config),
[4171](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:4171) **{**self.kwargs, **kwargs},
[4172](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:4172) )
File [~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1045](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1045), in Runnable.transform(self, input, config, **kwargs)
[1042](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1042) final = final + chunk # type: ignore[operator]
[1044](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1044) if got_first_val:
-> [1045](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/runnables/base.py:1045) yield from self.stream(final, config, **kwargs)
File [~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:250](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:250), in BaseChatModel.stream(self, input, config, stop, **kwargs)
[243](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:243) except BaseException as e:
[244](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:244) run_manager.on_llm_error(
[245](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:245) e,
[246](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:246) response=LLMResult(
[247](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:247) generations=[[generation]] if generation else []
[248](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:248) ),
[249](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:249) )
--> [250](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:250) raise e
[251](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:251) else:
[252](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:252) run_manager.on_llm_end(LLMResult(generations=[[generation]]))
File [~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:242](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:242), in BaseChatModel.stream(self, input, config, stop, **kwargs)
[240](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:240) else:
[241](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:241) generation += chunk
--> [242](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:242) assert generation is not None
[243](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:243) except BaseException as e:
[244](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:244) run_manager.on_llm_error(
[245](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:245) e,
[246](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:246) response=LLMResult(
[247](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:247) generations=[[generation]] if generation else []
[248](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:248) ),
[249](https://file+.vscode-resource.vscode-cdn.net/Users/sagarsingh/Desktop/Work/RAG_Fusion_poc/~/Desktop/Work/RAG_Fusion_poc/.venv/lib/python3.9/site-packages/langchain_core/language_models/chat_models.py:249) )
AssertionError:
```
### Description
Hi Team,
I am trying to use AzureChatOpenAI as the LLM inside a ReAct agent.
The AzureChatOpenAI model works when I run it independently, but when I use it as the LLM in the agent, it throws an AssertionError (and the AssertionError is not descriptive).
Please help me understand what I am missing and how to run this code.
### System Info
langchain==0.1.5
langchain-community==0.0.18
langchain-core==0.1.19
langchain-openai==0.0.6
langchainhub==0.1.14 | Non Descriptive Assertion Error while using AzureChatOpenAI model with Agents | https://api.github.com/repos/langchain-ai/langchain/issues/18500/comments | 5 | 2024-03-04T11:39:52Z | 2024-08-09T16:41:40Z | https://github.com/langchain-ai/langchain/issues/18500 | 2,166,636,716 | 18,500 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
db = SQLDatabase.from_databricks(catalog=catalog, schema=database, include_tables=table_names)
llm = ChatDatabricks(
    endpoint="databricks-mixtral-8x7b-instruct",
    model_kwargs={
        "temperature": 0.5,
        "max_tokens": 1000,
        "top_p": 0.7,
        "error_behaviour": "truncate_input_end",
        "num_return_sequences": 1,
    },
)
agent = create_sql_query_chain(llm=llm, prompt=prompt, db=db)
response = agent.invoke(
    {
        "question": q,
        "top_k": 3,
        "table_info": table_info,
        "catalog": catalog,
        "database": database,
        "dialect": db.dialect,
    }
)
print(response)
```
### Error Message and Stack Trace (if applicable)
It does not throw any error, but the generated SQL uses column and table names that do not even exist.
### Description
I am using the code above for a natural-language-to-SQL project, with the Databricks-hosted Mixtral 8x7B Instruct model. I pass the catalog, the database, and a selected list of table names, then pass the natural-language question to the model, but the model comes up with column/table names that are not part of my table list. On top of that, it appends special characters to column names.
Note:
1. I only want the Databricks SQL as the output response, not the query results, which is why I did not go with the SQL agent approach.
2. Also, since this is Databricks, the tables do not have primary keys or foreign keys, so how does the LangChain DB session infer relationships?
3. I am using a few-shot template and even passing the table names along with each column name and data type for the corresponding tables, yet the generated SQL still shows incorrect tables/columns in the response.
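Since the model can only respect constraints that actually appear in the prompt, one common mitigation is to inline the exact schema text together with an explicit "use only these tables and columns" instruction. A minimal, framework-free sketch (all table and column names below are made up for illustration):

```python
# Illustrative sketch: ground the prompt in an explicit schema so the model
# has no room to invent tables or columns absent from the input.
table_info = (
    "Table my_catalog.my_db.sales(order_id STRING, amount DOUBLE, order_date DATE)"
)

prompt = (
    "You are a Databricks SQL generator.\n"
    f"Use ONLY these tables and columns:\n{table_info}\n"
    "If the question cannot be answered with these columns, say so instead of guessing.\n"
    "Question: total sales per day\n"
    "SQL:"
)

print(prompt)
```

The same string can then be used as the template passed to `create_sql_query_chain`; the key point is that every allowed identifier is spelled out verbatim in the prompt.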
### System Info
langchain, databricks-sql-connector, and mlflow==2.9.0 are the libraries I am using in my code.
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_openai import ChatOpenAI
from langchain_experimental.agents import create_csv_agent

_model_name = "Qwen-1_5-72B-Chat-Int4"
_openai_api_key = "EMPTY"
_openai_api_base = "http://localhost:2783/v1"

llm = ChatOpenAI(model_name=_model_name, openai_api_key=_openai_api_key, openai_api_base=_openai_api_base)

df = './24022710201.csv'
agent = create_csv_agent(llm, df, verbose=True)
agent.invoke(
    {
        "input": "What is the median value of the date"
    }
)
```
### Error Message and Stack Trace (if applicable)
Entering new AgentExecutor chain...
Thought: We can use the .shape attribute of the dataframe to get the number of rows. The .shape attribute returns a tuple where the first element is the number of rows.
Action: python_repl_ast
Action Input: df.shape[0]
ObservNameError: name 'Observ' is not definedThere seems to be an error in the output, but the command is correct. Let's try again.
Action: python_repl_ast
Action Input: df.shape[0]
ObservNameError: name 'Observ' is not definedIt appears the output is still not correct. I'll try a different approach by counting the rows directly.
Action: python_repl_ast
Action Input: len(df)
ObservNameError: name 'Observ' is not definedThe output is still not showing correctly. It seems there's an issue with the response system. However, the commands to retrieve the number of rows (df.shape[0] or len(df)) are correct. I'll have to assume that if these commands were executed in a normal Python environment, they would return the correct result.
Final Answer: The correct command to find the number of rows in the dataframe is `df.shape[0]` or `len(df)`. If executed, these commands would return the count of rows.
### Description
A stray "Observ" inexplicably appears in the output, and it should not be an "Observ" produced by the large model itself. My suspicion is that this is a string-concatenation problem in the prompt handling of create_react_agent.
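That concatenation hypothesis can be illustrated without any model at all: if the framework truncates the LLM text at a stop sequence that starts mid-word, the surviving prefix ends in `Observ` and gets glued onto the next error message. A purely hypothetical string-level sketch (the stop value here is an assumption for illustration, not the real `create_react_agent` configuration):

```python
# Hypothetical reproduction of the suspected string-handling bug:
# a stop sequence that begins mid-word leaves a dangling "Observ".
llm_output = "df.shape[0]\nObservation: the result is 1000"
stop = "ation:"  # deliberately mis-aligned stop sequence (assumption)

kept = llm_output.split(stop, 1)[0]
tool_error = "NameError: name 'Observ' is not defined"

print(kept)  # -> "df.shape[0]\nObserv"
# Gluing the leftover fragment onto the tool error reproduces the log line:
print(kept.splitlines()[-1] + tool_error)  # -> "ObservNameError: name 'Observ' is not defined"
```

This only demonstrates how such output *could* arise from truncation plus concatenation; confirming it would require inspecting the actual stop sequences used by the agent.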
### System Info
langchain 0.1.10
langchain-community 0.0.25
langchain-core 0.1.28
langchain-experimental 0.0.51
langchain-openai 0.0.8
langchain-text-splitters 0.0.1 | NameError: name 'Observ' is not defined in create_csv_agent or create_pandas_dataframe_agent | https://api.github.com/repos/langchain-ai/langchain/issues/18484/comments | 6 | 2024-03-04T07:12:29Z | 2024-07-28T00:11:57Z | https://github.com/langchain-ai/langchain/issues/18484 | 2,166,101,137 | 18,484 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
# Similarity search with filter not working
redis.add_texts(texts, metadatas, embeddings)
redis.add_documents(documents=docs)

# Similarity search with filter working
Redis.from_texts_return_keys(
    [doc.page_content.lower() for doc in chunks],
    embedding,
    metadatas=[doc.metadata for doc in chunks],
    index_name=index_name,
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Hello,
I'm working on implementing similarity search using Redis VectorDB. I've noticed an issue where, upon embedding my data using add_texts or add_documents, the filters don't seem to function properly, despite verifying that all metadata has been correctly added to Redis. However, when I switch to using from_texts_return_keys, everything works as expected. It appears that there needs to be a standardization across methods that are supposed to produce the same output.
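To rule out differences in the inputs themselves, both ingestion paths can be fed identically normalized payloads first; any remaining filter discrepancy must then come from the vector-store methods. A small stand-alone sketch using a stand-in `Doc` type (illustrative only, not the actual LangChain `Document` class):

```python
from dataclasses import dataclass, field

@dataclass
class Doc:  # stand-in for langchain's Document
    page_content: str
    metadata: dict = field(default_factory=dict)

chunks = [Doc("Hello World", {"source": "a.pdf"}), Doc("Bye", {"source": "b.pdf"})]

# Normalize once, then hand the same texts/metadatas to both add_texts
# and from_texts_return_keys so the comparison is apples to apples.
texts = [d.page_content.lower() for d in chunks]
metadatas = [d.metadata for d in chunks]

print(texts)      # -> ['hello world', 'bye']
print(metadatas)  # -> [{'source': 'a.pdf'}, {'source': 'b.pdf'}]
```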
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:28:58 PST 2023; root:xnu-10002.81.5~7/RELEASE_X86_64
> Python Version: 3.11.6 (main, Nov 2 2023, 04:51:19) [Clang 14.0.0 (clang-1400.0.29.202)]
Package Information
-------------------
> langchain_core: 0.1.22
> langchain: 0.1.1
> langchain_community: 0.0.13
> langsmith: 0.0.87
> langchain_google_genai: 0.0.5
> langchain_google_vertexai: 0.0.1.post1
> langchain_openai: 0.0.5
> langchainhub: 0.1.13 | Redis add_texts or add_documents doesn't work with filters | https://api.github.com/repos/langchain-ai/langchain/issues/18482/comments | 0 | 2024-03-04T06:43:33Z | 2024-06-10T16:07:58Z | https://github.com/langchain-ai/langchain/issues/18482 | 2,166,061,511 | 18,482 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
redis = Redis.from_existing_index(
    embedding=embedding,
    redis_url=os.environ["REDIS_URL"],
    index_name=index_name,
    schema="redis_schema.yaml",
    startup_nodes=[ClusterNode(host="localhost", port=6379)],
)
```
### Error Message and Stack Trace (if applicable)
(error message attached as a screenshot, not transcribed)
### Description
I'm trying to use a Redis Cluster as the vector DB. The Redis Cluster connection via redis_url works fine, but when I add startup_nodes, it breaks.
I checked the source code, and passing **kwargs should work, because the kwargs are forwarded to the correct client.
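The expectation described above, that extra keyword arguments flow through `**kwargs` to the underlying client, can be illustrated with a tiny stand-alone sketch (the function names are illustrative stand-ins, not the actual LangChain internals):

```python
# Minimal sketch of kwargs pass-through: extra keyword arguments given to a
# factory method should reach the underlying client untouched.
received = {}

def make_client(url, **kwargs):
    # stand-in for the Redis client constructor
    received.update(kwargs)

def from_existing_index(redis_url, index_name, **kwargs):
    # stand-in for the vector-store factory: forwards kwargs verbatim
    make_client(redis_url, **kwargs)

from_existing_index("redis://localhost:6379", "idx",
                    startup_nodes=[("localhost", 6379)])
print(received)  # -> {'startup_nodes': [('localhost', 6379)]}
```

If the forwarding in the real code path matches this shape, `startup_nodes` should arrive at the client; the reported breakage suggests something along that path alters or rejects it.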
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:28:58 PST 2023; root:xnu-10002.81.5~7/RELEASE_X86_64
> Python Version: 3.11.6 (main, Nov 2 2023, 04:51:19) [Clang 14.0.0 (clang-1400.0.29.202)]
Package Information
-------------------
> langchain_core: 0.1.22
> langchain: 0.1.1
> langchain_community: 0.0.13
> langsmith: 0.0.87
> langchain_google_genai: 0.0.5
> langchain_google_vertexai: 0.0.1.post1
> langchain_openai: 0.0.5
> langchainhub: 0.1.13
| Redis Client with startup_nodes doesn't work | https://api.github.com/repos/langchain-ai/langchain/issues/18481/comments | 0 | 2024-03-04T06:32:47Z | 2024-06-10T16:07:53Z | https://github.com/langchain-ai/langchain/issues/18481 | 2,166,047,341 | 18,481 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
(code shown as a screenshot, not transcribed)
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Where is the "evaluation" package?
### System Info
langchain 0.1.10
python 3.8 | where is "evaluation" package | https://api.github.com/repos/langchain-ai/langchain/issues/18480/comments | 1 | 2024-03-04T06:30:07Z | 2024-03-05T00:24:05Z | https://github.com/langchain-ai/langchain/issues/18480 | 2,166,043,988 | 18,480 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
gpu = rh.cluster(
    name="gpu-cluster",
    ips=cluster_ip,
    server_connection='paramiko',
    ssh_creds={'ssh_user': ssh_username, 'ssh_private_key': ssh_key},
)
llm = SelfHostedHuggingFaceLLM(
    model_id="gpt2", hardware=gpu, model_reqs=["pip:./", "transformers", "torch"])
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "rhtest.py", line 44, in <module>
llm = SelfHostedHuggingFaceLLM(
File "/opt/miniconda/envs/lctest1/lib/python3.8/site-packages/langchain_community/llms/self_hosted_hugging_face.py", line 190, in __init__
super().__init__(load_fn_kwargs=load_fn_kwargs, **kwargs)
File "/opt/miniconda/envs/lctest1/lib/python3.8/site-packages/langchain_community/llms/self_hosted.py", line 162, in __init__
remote_load_fn = rh.function(fn=self.model_load_fn).to(
TypeError: to() got an unexpected keyword argument 'reqs'
### Description
I'm trying to use SelfHostedHuggingFaceLLM to host an LLM on a remote cluster. It seems that the .to() method of rh.function() no longer accepts the keyword argument 'reqs' (https://www.run.house/docs/_modules/runhouse/resources/functions/function#Function.to), but self_hosted.py still passes it.
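The mismatch can be checked programmatically with `inspect`. The sketch below uses a stand-in function whose signature is an assumption modeled on the linked Runhouse docs, not copied from the real library:

```python
import inspect

def to(self, system, env=None):
    """Stand-in for the current Runhouse Function.to signature (assumed)."""
    pass

params = inspect.signature(to).parameters
accepts_reqs = "reqs" in params

print(accepts_reqs)  # -> False
# Calling to(..., reqs=[...]) against such a signature raises exactly the
# reported error: TypeError: to() got an unexpected keyword argument 'reqs'
```

A guard like this in self_hosted.py could let LangChain support both old and new Runhouse versions by only passing `reqs` when the installed signature accepts it.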
### System Info
langchain==0.1.9
langchain-community==0.0.24
langchain-core==0.1.27
langchain-openai==0.0.8
running on a remote cluster in a container with Rocky Linux 8
python version 3.8.18
| Langchain community self_hosted.py outdated Runhouse integration | https://api.github.com/repos/langchain-ai/langchain/issues/18479/comments | 0 | 2024-03-04T06:16:22Z | 2024-03-05T00:01:57Z | https://github.com/langchain-ai/langchain/issues/18479 | 2,166,022,860 | 18,479 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
_NO_CODE_EXAMPLE_
### Error Message and Stack Trace (if applicable)
```
Directory ../text-splitters does not exist
```
### Description
After forking and cloning the repo on my machine, I tried to open it using docker and specifically in VS Code with the option to "Reopen in Container". While building, the final command of [dev.Dockerfile](https://github.com/langchain-ai/langchain/blob/b051bba1a9f3f2c6020d7c8dbcc792d14b3cbe17/libs/langchain/dev.Dockerfile#L50) resulted in the following error:
```
Directory ../text-splitters does not exist
```
After investigating, I found that this [commit](https://github.com/langchain-ai/langchain/commit/5efb5c099f6ced0b752306c4cb1c45370c2a6920) created a new package called `text-splitters`, but it was never added to the [dev.Dockerfile](https://github.com/langchain-ai/langchain/blob/b051bba1a9f3f2c6020d7c8dbcc792d14b3cbe17/libs/langchain/dev.Dockerfile).
### System Info
_NO_SYSTEM_INFO_ | Dockerfile issues when trying to build the repo using .devcontainer | https://api.github.com/repos/langchain-ai/langchain/issues/18465/comments | 0 | 2024-03-03T23:20:14Z | 2024-06-09T16:07:47Z | https://github.com/langchain-ai/langchain/issues/18465 | 2,165,618,028 | 18,465 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.llms import Bedrock
from langchain.prompts import PromptTemplate
from langchain.chains.combine_documents.stuff import StuffDocumentsChain
from langchain.chains.llm import LLMChain
from langchain.schema import Document
import langchain
langchain.verbose = True
llm = Bedrock(
model_id="mistral.mixtral-8x7b-instruct-v0:1",
model_kwargs={"max_tokens": 250},
)
prompt_template = """Summarize the following text in no more than 3 to 4 sentences:
{text}
CONCISE SUMMARY:"""
prompt = PromptTemplate.from_template(prompt_template)
llm_chain = LLMChain(llm=llm, prompt=prompt)
stuff_chain = StuffDocumentsChain(llm_chain=llm_chain, document_variable_name="text")
doc = Document(
page_content="""Today, we’re excited to announce the availability of two
high-performing Mistral AI models, Mistral 7B and Mixtral 8x7B,
on Amazon Bedrock."""
)
results = stuff_chain.run([doc])
print(results)
```
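For reference, the ValidationException reported below complains that the request body lacks a `prompt` key and carries an extraneous `inputText` key. That matches a Titan-style body being sent where a Mistral-style body is expected; a sketch of the two shapes (both bodies are illustrative, inferred from the error text rather than exact API payloads):

```python
import json

# Titan-style body (what the error suggests was sent):
titan_style = {"inputText": "Summarize ...", "textGenerationConfig": {}}

# Mistral-style body (what Bedrock's Mistral models appear to expect):
mistral_style = {"prompt": "<s>[INST] Summarize ... [/INST]", "max_tokens": 250}

print("prompt" in titan_style)      # -> False: "required key [prompt] not found"
print("inputText" in titan_style)   # -> True:  "extraneous key [inputText]"
print(json.dumps(mistral_style))
```

This points at the Bedrock provider-dispatch logic not recognizing the `mistral.` model-id prefix and falling back to the default body format.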
### Error Message and Stack Trace (if applicable)
/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain_core/_api/deprecation.py:117: LangChainDeprecationWarning: The function `run` was deprecated in LangChain 0.1.0 and will be removed in 0.2.0. Use invoke instead.
warn_deprecated(
Traceback (most recent call last):
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain_community/llms/bedrock.py", line 444, in _prepare_input_and_invoke
response = self.client.invoke_model(**request_options)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/botocore/client.py", line 553, in _api_call
return self._make_api_call(operation_name, kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/botocore/client.py", line 1009, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: #: required key [prompt] not found#: extraneous key [inputText] is not permitted, please reformat your input and try again.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/test2.py", line 20, in <module>
results = stuff_chain.run([doc])
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 545, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 378, in __call__
return self.invoke(
^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 163, in invoke
raise e
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain/chains/combine_documents/base.py", line 137, in _call
output, extra_return_dict = self.combine_docs(
^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain/chains/combine_documents/stuff.py", line 244, in combine_docs
return self.llm_chain.predict(callbacks=callbacks, **inputs), {}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain/chains/llm.py", line 293, in predict
return self(kwargs, callbacks=callbacks)[self.output_key]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain_core/_api/deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 378, in __call__
return self.invoke(
^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 163, in invoke
raise e
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain/chains/base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain/chains/llm.py", line 103, in _call
response = self.generate([inputs], run_manager=run_manager)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain/chains/llm.py", line 115, in generate
return self.llm.generate_prompt(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 568, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 741, in generate
output = self._generate_helper(
^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 605, in _generate_helper
raise e
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 592, in _generate_helper
self._generate(
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain_core/language_models/llms.py", line 1177, in _generate
self._call(prompt, stop=stop, run_manager=run_manager, **kwargs)
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain_community/llms/bedrock.py", line 718, in _call
return self._prepare_input_and_invoke(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/skehlet/Documents/workspace/email-genai-summarizer/venv/lib/python3.11/site-packages/langchain_community/llms/bedrock.py", line 451, in _prepare_input_and_invoke
raise ValueError(f"Error raised by bedrock service: {e}")
ValueError: Error raised by bedrock service: An error occurred (ValidationException) when calling the InvokeModel operation: Malformed input request: #: required key [prompt] not found#: extraneous key [inputText] is not permitted, please reformat your input and try again.
### Description
* I'm trying to use Mistral through AWS Bedrock with LangChain. Mistral on Bedrock just came out, so understandably it's not yet supported.
I found that the following quick patch to bedrock.py worked for me; hopefully it's helpful:
```diff
--- venv/lib/python3.11/site-packages/langchain_community/llms/bedrock.py.orig	2024-03-03 12:44:35
+++ venv/lib/python3.11/site-packages/langchain_community/llms/bedrock.py	2024-03-03 12:44:58
@@ -104,6 +104,9 @@
             input_body = dict()
             input_body["inputText"] = prompt
             input_body["textGenerationConfig"] = {**model_kwargs}
+        elif provider == "mistral":
+            input_body = dict()
+            input_body["prompt"] = prompt
         else:
             input_body["inputText"] = prompt
@@ -126,6 +129,8 @@
             text = response_body.get("generations")[0].get("text")
         elif provider == "meta":
             text = response_body.get("generation")
+        elif provider == "mistral":
+            text = response_body.get("outputs")[0].get("text")
         else:
             text = response_body.get("results")[0].get("outputText")
```
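For context, the request-body shape this patch produces for Mistral can be sketched in plain Python. This is a simplified, hypothetical helper (the function names are mine, and the key names are inferred from the ValidationException above: `prompt` is required, `inputText` is rejected):

```python
import json


def prepare_mistral_body(prompt: str, model_kwargs: dict) -> str:
    """Build the JSON body Mistral models on Bedrock appear to expect.

    Simplified sketch, not the actual langchain_community implementation.
    """
    input_body = {**model_kwargs}
    input_body["prompt"] = prompt  # required key; "inputText" is rejected
    return json.dumps(input_body)


def parse_mistral_response(response_body: dict) -> str:
    """Mirror the second hunk of the patch: Mistral returns
    {"outputs": [{"text": ...}]}."""
    return response_body["outputs"][0]["text"]


body = prepare_mistral_body("<s>[INST] Hello [/INST]", {"max_tokens": 256})
print(body)
```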
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.3.0: Wed Dec 20 21:30:44 PST 2023; root:xnu-10002.81.5~7/RELEASE_ARM64_T6000
> Python Version: 3.11.7 (main, Dec 4 2023, 18:10:11) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.10
> langchain_community: 0.0.25
> langsmith: 0.1.13
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Support Mistral through Bedrock | https://api.github.com/repos/langchain-ai/langchain/issues/18461/comments | 3 | 2024-03-03T21:22:37Z | 2024-04-02T17:44:27Z | https://github.com/langchain-ai/langchain/issues/18461 | 2,165,564,935 | 18,461 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
;
### Error Message and Stack Trace (if applicable)
Root cause:
AttributeError: 'CallbackManager' object has no attribute 'ignore_chain'
```
Traceback (most recent call last):
  File "/var/lang/lib/python3.10/site-packages/langchain_core/callbacks/manager.py", line 255, in handle_event
    if ignore_condition_name is None or not getattr(
AttributeError: 'CallbackManager' object has no attribute 'ignore_chain'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/var/lang/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2075, in invoke
    input = step.invoke(
  File "/var/lang/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 4069, in invoke
    return self.bound.invoke(
  File "/var/lang/lib/python3.10/site-packages/langchain/chains/base.py", line 163, in invoke
    raise e
  File "/var/lang/lib/python3.10/site-packages/langchain/chains/base.py", line 153, in invoke
    self._call(inputs, run_manager=run_manager)
  File "/var/lang/lib/python3.10/site-packages/langchain/agents/agent.py", line 1391, in _call
    next_step_output = self._take_next_step(
  File "/var/lang/lib/python3.10/site-packages/langchain/agents/agent.py", line 1097, in _take_next_step
    [
  File "/var/lang/lib/python3.10/site-packages/langchain/agents/agent.py", line 1097, in <listcomp>
    [
  File "/var/lang/lib/python3.10/site-packages/langchain/agents/agent.py", line 1182, in _iter_next_step
    yield self._perform_agent_action(
  File "/var/lang/lib/python3.10/site-packages/langchain/agents/agent.py", line 1204, in _perform_agent_action
    observation = tool.run(
  File "/var/lang/lib/python3.10/site-packages/langchain_core/tools.py", line 419, in run
    raise e
  File "/var/lang/lib/python3.10/site-packages/langchain_core/tools.py", line 376, in run
    self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
  File "/var/lang/lib/python3.10/site-packages/langchain_community/tools/vectorstore/tool.py", line 57, in _run
    return chain.invoke(
  File "/var/lang/lib/python3.10/site-packages/langchain/chains/base.py", line 145, in invoke
    run_manager = callback_manager.on_chain_start(
  File "/var/lang/lib/python3.10/site-packages/langchain_core/callbacks/manager.py", line 1296, in on_chain_start
    handle_event(
  File "/var/lang/lib/python3.10/site-packages/langchain_core/callbacks/manager.py", line 285, in handle_event
    if handler.raise_error:
AttributeError: 'CallbackManager' object has no attribute 'raise_error'
```
### Description
Patch https://github.com/langchain-ai/langchain/pull/16949 introduced a breaking change.
See my comment there: https://github.com/langchain-ai/langchain/pull/16949#discussion_r1510374350
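The mechanics of the regression can be illustrated with hypothetical stand-in classes (not the real `langchain_core` ones): `handle_event` probes handler-only attributes, so receiving a `CallbackManager` where a handler is expected raises `AttributeError` immediately:

```python
class StdOutHandler:
    """Stand-in for a real callback handler."""
    raise_error = False
    ignore_chain = False


class CallbackManager:
    """Stand-in for langchain_core's CallbackManager; note it has no
    ignore_chain / raise_error attributes of its own."""
    def __init__(self, handlers):
        self.handlers = handlers


def handle_event(handlers, ignore_condition_name):
    for handler in handlers:
        # Raises AttributeError if `handler` is actually a CallbackManager
        if ignore_condition_name is None or not getattr(handler, ignore_condition_name):
            pass  # dispatch the event here


manager = CallbackManager([StdOutHandler()])
try:
    # Passing the manager where a handler list is expected reproduces the crash:
    handle_event([manager], "ignore_chain")
except AttributeError as exc:
    print(exc)  # 'CallbackManager' object has no attribute 'ignore_chain'
```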
### System Info
bash-4.2# python -m langchain_core.sys_info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Thu Oct 5 21:02:42 UTC 2023
> Python Version: 3.10.13 (main, Dec 4 2023, 13:30:46) [GCC 7.3.1 20180712 (Red Hat 7.3.1-17)]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.10
> langchain_community: 0.0.25
> langsmith: 0.1.10
> langchain_experimental: 0.0.50
> langchain_openai: 0.0.2.post1
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Regression in VectorStoreQATool and VectorStoreQAWithSourcesTool | https://api.github.com/repos/langchain-ai/langchain/issues/18460/comments | 0 | 2024-03-03T20:01:59Z | 2024-03-05T23:57:00Z | https://github.com/langchain-ai/langchain/issues/18460 | 2,165,532,798 | 18,460 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
https://github.com/stewones/langchain-bun-binary
### Error Message and Stack Trace (if applicable)
```sh
134254 | var init_base5 = __esm(() => {
134255 | init_base4();
134256 | });
134257 |
134258 | // node_modules/@langchain/core/dist/prompts/string.js
134259 | class BaseStringPromptTemplate extends BasePromptTemplate {
^
ReferenceError: Cannot access uninitialized variable.
at /$bunfs/root/chatness:134259:40
```
### Description
I'm trying to generate a binary from an app that relies on langchain.
here's the minimal reproduction
https://github.com/stewones/langchain-bun-binary
steps to recreate:
1. `bun install`
2. `bun build --compile --sourcemap ./index.ts --outfile app`
3. `./app`
### System Info
platform: mac
Python: 3.9.6
langchain: 0.1.25 | Possible circular dependencies breaking from building binaries with Bun | https://api.github.com/repos/langchain-ai/langchain/issues/18458/comments | 1 | 2024-03-03T16:13:39Z | 2024-03-03T16:17:38Z | https://github.com/langchain-ai/langchain/issues/18458 | 2,165,434,009 | 18,458 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from typing import List
from langchain.output_parsers.openai_tools import JsonOutputKeyToolsParser

parser = JsonOutputKeyToolsParser(key_name="Joke")
chain = prompt | model | parser
chain.invoke({"input": "tell me a joke"})
```

```
[{'setup': "Why couldn't the bicycle stand up by itself?",
  'punchline': 'Because it was two tired!'}]
```

We get a list, which is correct. But when we add `return_single=True`, we still get a list, which is unexpected. The same is true when we call a tool in the chain (code from the documentation):

```python
from operator import itemgetter

# Note: the `.map()` at the end of `multiply` allows us to pass in a list of `multiply` arguments instead of a single one.
chain = (
    model_with_tools
    | JsonOutputKeyToolsParser(key_name="multiply", return_single=True)
    | multiply
)
chain.invoke("What's four times 23")
```

This fails at pydantic validation, because it pipes a list into `multiply`. We have to extract the first item from the list ourselves, like below:

```python
# You would need to add a step in the chain to convert the list to a dictionary
def extract_arguments_from_list(input_list):
    if isinstance(input_list, list) and len(input_list) == 1:
        return input_list[0]
    else:
        raise ValueError("Expected a list with a single dictionary")


from operator import itemgetter

# Note: the `.map()` at the end of `multiply` allows us to pass in a list of `multiply` arguments instead of a single one.
chain = (
    model_with_tools
    | JsonOutputKeyToolsParser(key_name="multiply", return_single=True)
    | extract_arguments_from_list  # This is the new step in the chain
    | multiply
)
chain.invoke("What's four times 23")
```

Then it works.
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
ValidationError                           Traceback (most recent call last)
Cell In[71], line 10
      3 # Note: the `.map()` at the end of `multiply` allows us to pass in a list of `multiply` arguments instead of a single one.
      4 chain = (
      5     model_with_tools
      6     | JsonOutputKeyToolsParser(key_name="multiply", return_single=True)
      7     # | extract_arguments_from_list # This is the new step in the chain
      8     | multiply
      9 )
---> 10 chain.invoke("What's four times 23")

File ~/miniconda3/envs/LLMV2/lib/python3.10/site-packages/langchain_core/runnables/base.py:2075, in RunnableSequence.invoke(self, input, config)
   2073 try:
   2074     for i, step in enumerate(self.steps):
-> 2075         input = step.invoke(
   2076             input,
   2077             # mark each step as a child run
   2078             patch_config(
   2079                 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
   2080             ),
   2081         )
   2082     # finish the root run
   2083 except BaseException as e:

File ~/miniconda3/envs/LLMV2/lib/python3.10/site-packages/langchain_core/tools.py:240, in BaseTool.invoke(self, input, config, **kwargs)
    233 def invoke(
    234     self,
    235     input: Union[str, Dict],
    236     config: Optional[RunnableConfig] = None,
    237     **kwargs: Any,
    238 ) -> Any:
    239     config = ensure_config(config)
--> 240     return self.run(
    241         input,
    242         callbacks=config.get("callbacks"),
    243         tags=config.get("tags"),
    244         metadata=config.get("metadata"),
    245         run_name=config.get("run_name"),
    246         **kwargs,
    247     )

File ~/miniconda3/envs/LLMV2/lib/python3.10/site-packages/langchain_core/tools.py:382, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs)
    380 except ValidationError as e:
    381     if not self.handle_validation_error:
--> 382         raise e
    383     elif isinstance(self.handle_validation_error, bool):
    384         observation = "Tool input validation error"

File ~/miniconda3/envs/LLMV2/lib/python3.10/site-packages/langchain_core/tools.py:373, in BaseTool.run(self, tool_input, verbose, start_color, color, callbacks, tags, metadata, run_name, **kwargs)
    360 run_manager = callback_manager.on_tool_start(
    361     {"name": self.name, "description": self.description},
    362     tool_input if isinstance(tool_input, str) else str(tool_input),
   (...)
    370     **kwargs,
    371 )
    372 try:
--> 373     parsed_input = self._parse_input(tool_input)
    374     tool_args, tool_kwargs = self._to_args_and_kwargs(parsed_input)
    375     observation = (
    376         self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
    377         if new_arg_supported
    378         else self._run(*tool_args, **tool_kwargs)
    379     )

File ~/miniconda3/envs/LLMV2/lib/python3.10/site-packages/langchain_core/tools.py:280, in BaseTool._parse_input(self, tool_input)
    278 else:
    279     if input_args is not None:
--> 280         result = input_args.parse_obj(tool_input)
    281         return {
    282             k: getattr(result, k)
    283             for k, v in result.dict().items()
    284             if k in tool_input
    285         }
    286 return tool_input

File ~/miniconda3/envs/LLMV2/lib/python3.10/site-packages/pydantic/main.py:526, in pydantic.main.BaseModel.parse_obj()

File ~/miniconda3/envs/LLMV2/lib/python3.10/site-packages/pydantic/main.py:341, in pydantic.main.BaseModel.__init__()

ValidationError: 2 validation errors for multiplySchema
first_int
  value is not a valid integer (type=type_error.integer)
second_int
  field required (type=value_error.missing)
```
### Description
`JsonOutputKeyToolsParser(key_name="multiply", return_single=True)` still returns a list rather than the single dictionary, so piping its output straight into the `multiply` tool fails pydantic validation (see the Example Code above). Inserting a step that extracts the single dictionary from the list, such as the `extract_arguments_from_list` helper shown above, makes the chain work; in other words, `return_single=True` appears to have no effect.
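For reference, the semantics that `return_single=True` is expected to have can be sketched in plain Python (a simplified, hypothetical stand-in, not the actual `JsonOutputKeyToolsParser` code):

```python
def parse_key_tool_calls(tool_calls, key_name, return_single=False):
    """Pick out the args of every tool call matching `key_name`.

    With return_single=True, the caller should get back the single dict
    itself, not a one-element list.
    """
    args = [call["args"] for call in tool_calls if call["name"] == key_name]
    if return_single:
        return args[0] if args else None
    return args


calls = [{"name": "multiply", "args": {"first_int": 4, "second_int": 23}}]
print(parse_key_tool_calls(calls, "multiply"))
# [{'first_int': 4, 'second_int': 23}]
print(parse_key_tool_calls(calls, "multiply", return_single=True))
# {'first_int': 4, 'second_int': 23}
```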
### System Info
Name: langchain
Version: 0.1.10
Name: langchain-core
Version: 0.1.28
Name: langgraph
Version: 0.0.26
Name: langchain-community
Version: 0.0.25
Name: langchain-experimental
Version: 0.0.53
Name: langchain-openai
Version: 0.0.8
Name: langserve
Version: 0.0.46
Name: langchain-cli
Version: 0.0.21
Name: langsmith
Version: 0.1.13
Name: langgraph
Version: 0.0.26
Name: openai
Version: 1.13.3
Name: httpx
Version: 0.25.2
Name: pydantic
Version: 1.10.14
python=3.10.13
| JsonOutputKeyToolsParser return_single=True doesn't work | https://api.github.com/repos/langchain-ai/langchain/issues/18455/comments | 3 | 2024-03-03T14:42:07Z | 2024-07-11T16:06:51Z | https://github.com/langchain-ai/langchain/issues/18455 | 2,165,390,145 | 18,455 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
def get_langchain_llm_model(llm_model_id, params, region):
    '''
    params keys should be in [temperature, max_tokens, top_p, top_k, stop]
    '''
    llm = None
    parameters = {k: v for k, v in params.items()
                  if k in ['temperature', 'max_tokens', 'top_p', 'top_k', 'stop']}
    if llm_model_id in bedrock_llms:
        boto3_bedrock = boto3.client(
            service_name="bedrock-runtime",
            region_name=region,
        )
        llm = Bedrock(model_id=llm_model_id, client=boto3_bedrock, streaming=False, model_kwargs=parameters)
    return llm


INVOKE_MODEL_ID = 'mistral.mistral-7b-instruct-v0:2'
llm4 = get_langchain_llm_model(INVOKE_MODEL_ID, params, REGION)
llmchain = LLMChain(llm=llm4, verbose=False, prompt=prompt_templ)
answer = llmchain.run({})
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Mistral became available on Bedrock on Mar. 1, and I guess LangChain has not yet adapted to this update. Looking forward to an upgrade for this.
### System Info
Linux, python3.10 | mistrial is supported by bedrock, by its langchain wrapper will throw exception | https://api.github.com/repos/langchain-ai/langchain/issues/18451/comments | 1 | 2024-03-03T13:25:07Z | 2024-06-14T16:08:39Z | https://github.com/langchain-ai/langchain/issues/18451 | 2,165,358,057 | 18,451 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
qianfan_model = QianfanChatEndpoint(model='ERNIE-Bot', qianfan_ak=......,
                                    qianfan_sk=......)
all_output = ""
for chunk in qianfan_model.stream(input_prompt):
    all_output += chunk.content
    print(chunk.content, end="", flush=True)
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
  File "E:\JMIT\JMIT3\code\ans.py", line 247, in <module>
    print(retrieve_ans_chat_stream('我想離婚,孩子該怎麼判?', history=[('你好', '你好呀')]))
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\JMIT\JMIT3\code\ans.py", line 236, in retrieve_ans_chat_stream
    for chunk in qianfan_model.stream(input_prompt):
  File "E:\envs\JMIT\Lib\site-packages\langchain_core\language_models\chat_models.py", line 250, in stream
    raise e
  File "E:\envs\JMIT\Lib\site-packages\langchain_core\language_models\chat_models.py", line 241, in stream
    generation += chunk
  File "E:\envs\JMIT\Lib\site-packages\langchain_core\outputs\chat_generation.py", line 57, in __add__
    generation_info = merge_dicts(
    ^^^^^^^^^^^^
  File "E:\envs\JMIT\Lib\site-packages\langchain_core\utils\_merge.py", line 38, in merge_dicts
    raise TypeError(
TypeError: Additional kwargs key created already exists in left dict and value has unsupported type <class 'int'>.
```
### Description
This bug occurs with langchain_core > 0.1.12; downgrading to langchain_core == 0.1.12 makes it go away.
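To illustrate why the merge fails, here is an approximate, simplified sketch of `merge_dicts` (not the exact `langchain_core.utils._merge` code): Qianfan's streamed chunks carry int fields such as `sentence_id` in `generation_info`, and once two chunks disagree on an int value there is no merge rule, so it raises:

```python
def merge_dicts(left: dict, right: dict) -> dict:
    """Approximate sketch of langchain_core's merge_dicts (not the exact code)."""
    merged = dict(left)
    for k, v in right.items():
        if k not in merged or merged[k] is None:
            merged[k] = v
        elif merged[k] == v:
            continue
        elif isinstance(merged[k], str):
            merged[k] += v  # strings are concatenated across chunks
        elif isinstance(merged[k], dict):
            merged[k] = merge_dicts(merged[k], v)
        else:
            # ints (and other types) have no merge rule: this is the crash
            raise TypeError(
                f"Additional kwargs key {k} already exists in left dict and "
                f"value has unsupported type {type(merged[k])}."
            )
    return merged


# Merging chunk 0 with chunk 1 of a Qianfan stream blows up:
try:
    merge_dicts({"sentence_id": 0}, {"sentence_id": 1})
except TypeError as exc:
    print(exc)
```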
### System Info
langchain==0.1.10
langchain-community==0.0.25
langchain-core==0.1.28
langchain-text-splitters==0.0.1
windows
Python 3.11.7 | when using qianfan chat streaming occurs TypeError: Additional kwargs key sentence_id already exists in left dict and value has unsupported type <class 'int'>. | https://api.github.com/repos/langchain-ai/langchain/issues/18441/comments | 4 | 2024-03-03T08:34:01Z | 2024-07-25T16:08:13Z | https://github.com/langchain-ai/langchain/issues/18441 | 2,165,252,328 | 18,441 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:
```python
from langchain_community.utilities import SQLDatabase
from langchain.agents import create_sql_agent
from langchain_community.agent_toolkits import SQLDatabaseToolkit
from langchain.agents.agent_types import AgentType
from langchain_groq import ChatGroq
llm = ChatGroq(temperature=0.5, groq_api_key="<api-key>", model_name="mixtral-8x7b-32768")
db = SQLDatabase.from_uri("sqlite:///Chinook.db")
agent_executor_kwargs = {
"handle_parsing_errors": True,
}
agent_executor = create_sql_agent(
llm=llm,
toolkit=SQLDatabaseToolkit(db=db, llm=llm),
# db=db,
verbose=True,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION,
agent_executor_kwargs=agent_executor_kwargs,
)
agent_executor.invoke(
{"input": "List the total sales per country. Which country's customers spent the most?"}
)
```
### Error Message and Stack Trace (if applicable)
```
> Entering new SQL Agent Executor chain...
I need to query the database to get the total sales per country. After that, I can identify the country whose customers spent the most.
Action: sql\_db\_query
Action Input: SELECT country, SUM(sales) FROM orders GROUP BY country ORDER BY SUM(sales) DESC;
sql\_db\_query is not a valid tool, try one of [sql_db_query, sql_db_schema, sql_db_list_tables, sql_db_query_checker].I apologize for the confusion. You are correct that the action should be "sql\_db\_query". However, it seems that I made a mistake in the input for the action. I will try again.
Action: sql\_db\_query
Action Input: SELECT country, SUM(sales) FROM orders GROUP BY country ORDER BY SUM(sales) DESC;
sql\_db\_query is not a valid tool, try one of [sql_db_query, sql_db_schema, sql_db_list_tables, sql_db_query_checker].I apologize for the confusion. It seems I need to use the `sql_db_query_checker` tool first to ensure that my query is correct before executing it with `sql_db_query`.
Action: sql\_db\_query\_checker
Action Input: SELECT country, SUM(sales) FROM orders GROUP BY country ORDER BY SUM(sales) DESC;
sql\_db\_query\_checker is not a valid tool, try one of [sql_db_query, sql_db_schema, sql_db_list_tables, sql_db_query_checker].I apologize for the confusion. It seems I need to check the table schema first to ensure that my query is correct. I will use `sql_db_schema` tool to get the schema of the `orders` table.
Action: sql\_db\_schema
Action Input: orders
sql\_db\_schema is not a valid tool, try one of [sql_db_query, sql_db_schema, sql_db_list_tables, sql_db_query_checker].I apologize for the confusion. It seems I made a mistake in the input for the action. The correct tool to use is `sql_db_list_tables` to get the list of tables in the database.
Action: sql\_db\_list\_tables
Action Input:
sql\_db\_list\_tables is not a valid tool, try one of [sql_db_query, sql_db_schema, sql_db_list_tables, sql_db_query_checker].I apologize for the confusion. I realize now that I made a mistake in my previous inputs. The correct tool to use is `sql_db_list_tables` to get the list of tables in the database. I will try again.
...
Action Input:
sql\_db\_list\_tables is not a valid tool, try one of [sql_db_query, sql_db_schema, sql_db_list_tables, sql_db_query_checker].Invalid Format: Missing 'Action:' after 'Thought:Invalid Format: Missing 'Action:' after 'Thought:Invalid Format: Missing 'Action:' after 'Thought:Invalid Format: Missing 'Action:' after 'Thought:Invalid Format: Missing 'Action:' after 'Thought:Invalid Format: Missing 'Action:' after 'Thought:Invalid Format: Missing 'Action:' after 'Thought:Invalid Format: Missing 'Action:' after 'Thought:Invalid Format: Missing 'Action:' after 'Thought:
> Finished chain.
```
### Description
When attempting to use the SQL Database agent with LLMs that aren't from OpenAI, it produces this loop, caused by backslash escape sequences in the tool names (e.g. `sql\_db\_query` instead of `sql_db_query`).
I have included Groq using the Mixtral model, but I have also experienced this with Ollama and llama.cpp, testing a number of different models including but not limited to Mixtral, Mistral, Llama, Phi, and CodeLlama.
I do not experience this bug when using gpt-3.5-turbo or gpt-4.
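As a stopgap on the caller side, one could sketch a hypothetical normalization step that strips the backslash escapes before the tool lookup (illustrative only; this is not an existing LangChain hook):

```python
def normalize_tool_name(name: str) -> str:
    """Strip the markdown-style backslash escapes some models emit,
    e.g. 'sql\\_db\\_query' -> 'sql_db_query'. Hypothetical workaround sketch."""
    return name.replace("\\_", "_")


print(normalize_tool_name(r"sql\_db\_query"))  # sql_db_query
```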
### System Info
langchain: 0.1.9
python: 3.11.3
OS: Windows | SQL Database Agent - 'Is not a valid tool' error | https://api.github.com/repos/langchain-ai/langchain/issues/18439/comments | 10 | 2024-03-03T06:49:47Z | 2024-07-28T15:56:48Z | https://github.com/langchain-ai/langchain/issues/18439 | 2,165,207,712 | 18,439 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
ChatPromptTemplate(
input_variables=["agent_scratchpad", "input", "tools_string"],
partial_variables={"chat_history": ""},
messages=[
SystemMessagePromptTemplate(
prompt=PromptTemplate(
input_variables=[
"chat_history",
"tools_string",
],
template="""You are a research assistant AI that has been equipped with the following function(s) to help you answer a the question by the human. Your goal is to answer the user's question to the best of your ability, using the function(s) to gather more information if necessary to better answer the question. The result of a function call will be added to the conversation history as an observation.
In this environment you have access to a set of tools you can use to answer the user's question. In order to use a tool, you can use <tool></tool> and <tool_input></tool_input> tags.
You will then get back a response in the form <observation></observation>
For example, if you have a tool called 'search' that could run a google search, in order to search for the weather in SF you would respond:
<tool>search</tool><tool_input>weather in SF</tool_input>
<observation>64 degrees</observation>
Here are the only tool(s) available:
<tools>
{tools_string}
</tools>
Note that the function parameters have been listed in the order that they should be passed into the function.
Do not modify or extend the provided functions under any circumstances. For example, calling get_table_schema() with additional parameters would be considered modifying the function which is not allowed. Please use the functions only as defined.
DO NOT use any functions that I have not equipped you with.
Remember, your goal is to answer the user's question to the best of your ability, using only the function(s) provided to gather more information if necessary to better answer the question. Do not modify or extend the provided functions under any circumstances. For example, calling get_current_temp() with additional parameters would be modifying the function which is not allowed. Please use the functions only as defined. Be careful to only use the <tool> tag when calling a tool. You should use <tool_used> when describing a tool after its been called. The result of a function call will be added to the conversation history as an observation. If necessary, you can make multiple function calls and use all the functions I have equipped you with. Always return your final answer within <final_answer></final_answer> tags.
This is the history of your conversation so far:
{chat_history}
""".replace(
" ", ""
),
)
),
HumanMessagePromptTemplate(
prompt=PromptTemplate(input_variables=["input"], template="Human: {input} ")
),
AIMessagePromptTemplate(
prompt=PromptTemplate(
input_variables=["agent_scratchpad"],
template="Assistant: <scratchpad> I understand I cannot use functions that have not been provided to me to answer this question. {agent_scratchpad}",
)
),
],
)
```
```
def convert_tools_anthropic(tool: BaseTool) -> str:
    """Format tool into the Anthropic function API."""
    if len(tool.args) > 0:
        arg_names = [arg for arg in tool.args]
        parameters: str = "".join(
            [
                f"""\n<parameter>\n<name>\n{name}\n</name>\n<type>\n{arg["type"]}\n</type>\n<description>\n{arg["description"]}\n</description>\n</parameter>\n"""
                for name, arg in tool.args.items()
            ]
        )
    else:
        parameters = "\n"
    new_tool = (
        f"""<tool_description>\n<tool_name>\n{tool.name}\n</tool_name>\n<description>\n{tool.description}\n</description>\n<parameters>{parameters}</parameters>"""
    ).strip(" ")
    return new_tool


def convert_steps_anthropic(intermediate_steps):
    print(type(intermediate_steps))
    log = ""
    for action, observation in intermediate_steps:
        log += (
            f"<tool>{action.tool}</tool><tool_input>{action.tool_input}"
            f"</tool_input><observation>{observation}</observation>"
        )
    return log
```
```
agent: Runnable = (
    {
        "input": lambda x: x["input"],
        "agent_scratchpad": lambda x: convert_steps_anthropic(
            intermediate_steps=x["intermediate_steps"]
        ),
    }
    | prompt.partial(tools_string=tools_string)
    | model.bind(stop=["</tool_input>", "</final_answer>"])
    | XMLAgentOutputParser()
)
```
### Error Message and Stack Trace (if applicable)
```
[31](https://file+.vscode-resource.vscode-cdn.net/home/zdrake/Repos/mynuspire-datascience/~/Repos/mynuspire-datascience/.venv/lib/python3.11/site-packages/langchain/agents/output_parsers/xml.py:31) def parse(self, text: str) -> Union[AgentAction, AgentFinish]:
[32](https://file+.vscode-resource.vscode-cdn.net/home/zdrake/Repos/mynuspire-datascience/~/Repos/mynuspire-datascience/.venv/lib/python3.11/site-packages/langchain/agents/output_parsers/xml.py:32) if "</tool>" in text:
---> [33](https://file+.vscode-resource.vscode-cdn.net/home/zdrake/Repos/mynuspire-datascience/~/Repos/mynuspire-datascience/.venv/lib/python3.11/site-packages/langchain/agents/output_parsers/xml.py:33) tool, tool_input = text.split("</tool>")
[34](https://file+.vscode-resource.vscode-cdn.net/home/zdrake/Repos/mynuspire-datascience/~/Repos/mynuspire-datascience/.venv/lib/python3.11/site-packages/langchain/agents/output_parsers/xml.py:34) _tool = tool.split("<tool>")[1]
[35](https://file+.vscode-resource.vscode-cdn.net/home/zdrake/Repos/mynuspire-datascience/~/Repos/mynuspire-datascience/.venv/lib/python3.11/site-packages/langchain/agents/output_parsers/xml.py:35) _tool_input = tool_input.split("<tool_input>")[1]
ValueError: too many values to unpack (expected 2)
```
### Description
I am using Anthropic Claude 2.1 on AWS Bedrock. I am using the XML Agent and XMLOutput parser the same way they are shown in the [cookbook](https://python.langchain.com/docs/expression_language/cookbook/agent). It seems that the built-in tools expect tool calls to be written as:
```
<tool>search</tool><tool_input>weather in SF</tool_input>
```
However, Anthropic recently put out a [detailed guide](https://docs.google.com/spreadsheets/d/1sUrBWO0u1-ZuQ8m5gt3-1N5PLR6r__UsRsB7WeySDQA/) about function calling on Claude 2.1. It seems Claude is trained to call functions as:
```
<function_calls>
<invoke>
<tool_name>$TOOL_NAME</tool_name>
<parameters>
<$PARAMETER_NAME>$PARAMETER_VALUE</$PARAMETER_NAME>
...
</parameters>
</invoke>
</function_calls>
```
As you can see, I used the prompt to tell the model to output in the LangChain way, and it will in fact return the instructions and the agent will run the code. However, Claude seems to insist on using `<tool>` to describe the tools it uses in its scratchpad. When this happens, the parser tries to execute descriptions of previous steps as if they were function calls. Is there a simpler way to override the tags used in the XMLOutputParser? It likely should be updated to call tools the way Anthropic now details them.
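One possible workaround (a sketch, assuming a custom output parser can be substituted for `XMLAgentOutputParser`) is to parse Anthropic's documented `<function_calls>` format instead of `<tool>`/`<tool_input>`, so stray `<tool>` mentions in the scratchpad are no longer mistaken for new calls. All names here are illustrative:

```python
import re

def parse_function_call(text):
    """Extract the tool name and parameters from an Anthropic-style
    <function_calls><invoke>...</invoke></function_calls> block."""
    invoke = re.search(r"<invoke>(.*?)</invoke>", text, re.DOTALL)
    if invoke is None:
        return None  # no call present; treat text as a final answer upstream
    body = invoke.group(1)
    name = re.search(r"<tool_name>(.*?)</tool_name>", body, re.DOTALL)
    if name is None:
        return None
    params_block = re.search(r"<parameters>(.*?)</parameters>", body, re.DOTALL)
    params = {}
    if params_block:
        # each <param_name>value</param_name> pair inside <parameters>
        params = dict(re.findall(r"<(\w+)>(.*?)</\1>", params_block.group(1), re.DOTALL))
    return {"tool": name.group(1).strip(), "tool_input": params}

call = parse_function_call(
    "<function_calls><invoke><tool_name>search</tool_name>"
    "<parameters><query>weather in SF</query></parameters></invoke></function_calls>"
)
assert call == {"tool": "search", "tool_input": {"query": "weather in SF"}}
```

A subclass of `AgentOutputParser` could wrap this and return `AgentAction`/`AgentFinish` accordingly.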
### System Info
AWS Bedrock
Anthropic Claude 2.1
Langchain 0.1.9 | XMLParser looks for different tags than what Claude 2.1 prefers. | https://api.github.com/repos/langchain-ai/langchain/issues/18434/comments | 0 | 2024-03-02T22:39:48Z | 2024-06-08T16:14:35Z | https://github.com/langchain-ai/langchain/issues/18434 | 2,165,008,206 | 18,434 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
agent_df_base_company = create_pandas_dataframe_agent(
ChatOpenAI(temperature=0, model="gpt-3.5-turbo-0125"),
df,
verbose=True,
#agent_type=AgentType.OPENAI_FUNCTIONS,
agent_type=AgentType.ZERO_SHOT_REACT_DESCRIPTION
)
agent_df_base_company.run('which company started production in 2023 ?')
### Error Message and Stack Trace (if applicable)
> Entering new AgentExecutor chain...
Thought: We need to extract the year of production from the unstructured text in the dataframe.
Action: Use regular expressions to extract the year of production from the unstructured text.
Action Input:
```python
import re
# Sample unstructured text
text = "year of production was 2023"
# Regular expression to extract the year
year_pattern = r'\b\d{4}\b'
# Extracting the year
years = re.findall(year_pattern, text)
years
```Use regular expressions to extract the year of production from the unstructured text. is not a valid tool, try one of [python_repl_ast].I need to use the Python shell tool to extract the year of production from the unstructured text in the dataframe.
Action: Use the Python shell tool to extract the year of production from the unstructured text.
Action Input:
```python
import re
# Sample unstructured text
text = "year of production was 2023"
# Regular expression to extract the year
year_pattern = r'\b\d{4}\b'
# Extracting the year
years = re.findall(year_pattern, text)
years
```Use the Python shell tool to extract the year of production from the unstructured text. is not a valid tool, try one of [python_repl_ast].I need to extract the year of production from the unstructured text in the dataframe using the Python shell tool.
Action: Use the Python shell tool to extract the year of production from the unstructured text.
Action Input:
```python
import re
# Sample unstructured text
text = "year of production was 2023"
# Regular expression to extract the year
year_pattern = r'\b\d{4}\b'
# Extracting the year
years = re.findall(year_pattern, text)
years
```Use the Python shell tool to extract the year of production from the unstructured text. is not a valid tool, try one of [python_repl_ast].I need to extract the year of production from the unstructured text in the dataframe using the Python shell tool.
Action: Use the Python shell tool to extract the year of production from the unstructured text.
Action Input:
```python
import re
# Sample unstructured text
text = "year of production was 2023"
# Regular expression to extract the year
year_pattern = r'\b\d{4}\b'
# Extracting the year
years = re.findall(year_pattern, text)
years
```Use the Python shell tool to extract the year of production from the unstructured text. is not a valid tool, try one of [python_repl_ast].I need to extract the year of production from the unstructured text in the dataframe using the Python shell tool.
Final Answer: Use the Python shell tool to extract the year of production from the unstructured text.
> Finished chain.
Use the Python shell tool to extract the year of production from the unstructured text.
### Description
There are two errors here:

1. This error is specific to `create_pandas_dataframe_agent`. I had a dataframe and followed the instructions to perform some simple queries. I tried all three agent types:
   - `AgentType.ZERO_SHOT_REACT_DESCRIPTION`
   - `AgentType.OPENAI_FUNCTIONS`
   - `"openai-tools"`

   While it sometimes returns the results, it sometimes stops at the code level and outputs:
   `Use the Python shell tool to extract the year of production from the unstructured text.`
   `PythonREPL` was one of the tools, as I saw:
   `tools = [PythonAstREPLTool(locals=df_locals)] + list(extra_tools)`
2. On line 246 in `pandas/base.py`, `include_df_in_prompt is not None` should simply be `include_df_in_prompt`, since it is specified as `True` or `False`.
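A minimal sketch (hypothetical function names, not the actual LangChain source) of why a check like `include_df_in_prompt is not None` misbehaves when the flag is `False`:

```python
def build_suffix(include_df_in_prompt, df_head="| col |"):
    # current-style check: any non-None value -- including False -- includes the df
    if include_df_in_prompt is not None:
        return f"This is the result of print(df.head()): {df_head}"
    return ""

assert build_suffix(False) != ""   # False still includes the dataframe!

def build_suffix_fixed(include_df_in_prompt, df_head="| col |"):
    if include_df_in_prompt:       # proposed: test truthiness instead
        return f"This is the result of print(df.head()): {df_head}"
    return ""

assert build_suffix_fixed(False) == ""
assert build_suffix_fixed(True) != ""
```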
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Sat Nov 18 15:31:17 UTC 2023
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.10
> langchain_community: 0.0.25
> langsmith: 0.1.13
> langchain_experimental: 0.0.53
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
> langchainhub: 0.1.14
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | pandas agent didn't invoke tool but returned code instead, and line 246 include_df_in_prompt | https://api.github.com/repos/langchain-ai/langchain/issues/18432/comments | 2 | 2024-03-02T21:51:21Z | 2024-06-06T14:33:17Z | https://github.com/langchain-ai/langchain/issues/18432 | 2,164,991,186 | 18,432 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
When attempting to connect the Google AI model with LangChain using a provided API key, the following error is encountered:
```python
ValidationError Traceback (most recent call last)
Cell In[52], line 2
1 from langchain_google_genai import GoogleGenerativeAI
----> 2 llm = GoogleGenerativeAI(model="gemini-pro", google_api_key=api_key)
3 print(
4 llm.invoke(
5 "What are some of the pros and cons of Python as a programming language?"
6 )
7 )
File /opt/conda/lib/python3.10/site-packages/langchain_core/load/serializable.py:120, in Serializable.__init__(self, **kwargs)
119 def __init__(self, **kwargs: Any) -> None:
--> 120 super().__init__(**kwargs)
121 self._lc_kwargs = kwargs
File /opt/conda/lib/python3.10/site-packages/pydantic/v1/main.py:341, in BaseModel.__init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for GoogleGenerativeAI
__root__
'NoneType' object does not support item assignment (type=type_error)
```
This is my code:
```python
from getpass import getpass
api_key = getpass()
from langchain_google_genai import GoogleGenerativeAI
llm = GoogleGenerativeAI(model="gemini-pro", google_api_key=api_key)
print(
llm.invoke(
"What are some of the pros and cons of Python as a programming language?"
)
)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to use a Google LLM model through LangChain, but it raises a validation error. If I use the same key with the [default LLM model methods](https://ai.google.dev/tutorials/python_quickstart), the model generates content without issue.
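As a first diagnostic step (an assumption, not a confirmed fix: opaque pydantic root-validator errors like this one are often caused by a stale or mismatched SDK version), it may help to check which versions of the relevant packages are installed:

```python
import importlib.metadata

def installed_versions(pkgs):
    """Return {package: version, or None if not installed}."""
    versions = {}
    for pkg in pkgs:
        try:
            versions[pkg] = importlib.metadata.version(pkg)
        except importlib.metadata.PackageNotFoundError:
            versions[pkg] = None
    return versions

print(installed_versions(["google-generativeai", "langchain-google-genai"]))
```

Comparing these against the versions pinned in the `langchain-google-genai` release notes would show whether an upgrade is needed.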
### System Info
I am using Kaggle with a T-100 GPU.
| Validation Error Encountered When Connecting Google AI Model with LangChain | https://api.github.com/repos/langchain-ai/langchain/issues/18425/comments | 1 | 2024-03-02T15:47:41Z | 2024-08-05T16:08:16Z | https://github.com/langchain-ai/langchain/issues/18425 | 2,164,863,841 | 18,425 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain.chains import RetrievalQA
from langchain.chains.loading import load_chain
from langchain_community.vectorstores import FAISS
from langchain_openai import ChatOpenAI, OpenAIEmbeddings

# Create a chat LLM model
llm = ChatOpenAI()

# Construct vector DB in any way
chunks = ...
faiss_database = FAISS.from_documents(chunks, OpenAIEmbeddings())
# Create QA chain
qa = RetrievalQA.from_llm(llm=llm, retriever=faiss_database.as_retriever())
# Save QA chain
qa.save("./langchain_qa.json")
# Load QA chain
chain = load_chain("./langchain_qa.json", retriever=faiss_database.as_retriever())  # raises the error below
```
### Error Message and Stack Trace (if applicable)
Exception:
```
ValueError: Loading openai-chat LLM not supported
```
Full stack trace:
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
File <command-1363443796728892>, line 2
1 from langchain.chains.loading import load_chain
----> 2 chain = load_chain("./langchain_qa.json", retriever=faiss_database.as_retriever())
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-bcadb797-0667-46ee-b625-18b963e92eb7/lib/python3.10/site-packages/langchain/chains/loading.py:596, in load_chain(path, **kwargs)
594 return hub_result
595 else:
--> 596 return _load_chain_from_file(path, **kwargs)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-bcadb797-0667-46ee-b625-18b963e92eb7/lib/python3.10/site-packages/langchain/chains/loading.py:623, in _load_chain_from_file(file, **kwargs)
620 config["memory"] = kwargs.pop("memory")
622 # Load the chain from the config now.
--> 623 return load_chain_from_config(config, **kwargs)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-bcadb797-0667-46ee-b625-18b963e92eb7/lib/python3.10/site-packages/langchain/chains/loading.py:586, in load_chain_from_config(config, **kwargs)
583 raise ValueError(f"Loading {config_type} chain not supported")
585 chain_loader = type_to_loader_dict[config_type]
--> 586 return chain_loader(config, **kwargs)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-bcadb797-0667-46ee-b625-18b963e92eb7/lib/python3.10/site-packages/langchain/chains/loading.py:419, in _load_retrieval_qa(config, **kwargs)
417 if "combine_documents_chain" in config:
418 combine_documents_chain_config = config.pop("combine_documents_chain")
--> 419 combine_documents_chain = load_chain_from_config(combine_documents_chain_config)
420 elif "combine_documents_chain_path" in config:
421 combine_documents_chain = load_chain(config.pop("combine_documents_chain_path"))
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-bcadb797-0667-46ee-b625-18b963e92eb7/lib/python3.10/site-packages/langchain/chains/loading.py:586, in load_chain_from_config(config, **kwargs)
583 raise ValueError(f"Loading {config_type} chain not supported")
585 chain_loader = type_to_loader_dict[config_type]
--> 586 return chain_loader(config, **kwargs)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-bcadb797-0667-46ee-b625-18b963e92eb7/lib/python3.10/site-packages/langchain/chains/loading.py:79, in _load_stuff_documents_chain(config, **kwargs)
77 if "llm_chain" in config:
78 llm_chain_config = config.pop("llm_chain")
---> 79 llm_chain = load_chain_from_config(llm_chain_config)
80 elif "llm_chain_path" in config:
81 llm_chain = load_chain(config.pop("llm_chain_path"))
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-bcadb797-0667-46ee-b625-18b963e92eb7/lib/python3.10/site-packages/langchain/chains/loading.py:586, in load_chain_from_config(config, **kwargs)
583 raise ValueError(f"Loading {config_type} chain not supported")
585 chain_loader = type_to_loader_dict[config_type]
--> 586 return chain_loader(config, **kwargs)
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-bcadb797-0667-46ee-b625-18b963e92eb7/lib/python3.10/site-packages/langchain/chains/loading.py:40, in _load_llm_chain(config, **kwargs)
38 if "llm" in config:
39 llm_config = config.pop("llm")
---> 40 llm = load_llm_from_config(llm_config)
41 elif "llm_path" in config:
42 llm = load_llm(config.pop("llm_path"))
File /local_disk0/.ephemeral_nfs/envs/pythonEnv-bcadb797-0667-46ee-b625-18b963e92eb7/lib/python3.10/site-packages/langchain_community/llms/loading.py:21, in load_llm_from_config(config)
18 type_to_cls_dict = get_type_to_cls_dict()
20 if config_type not in type_to_cls_dict:
---> 21 raise ValueError(f"Loading {config_type} LLM not supported")
23 llm_cls = type_to_cls_dict[config_type]()
24 return llm_cls(**config)
ValueError: Loading openai-chat LLM not supported
```
### Description
We have logic that deserializes a chain (RetrievalQA) saved as JSON using [load_chain()](https://github.com/langchain-ai/langchain/blob/f96dd57501131840b713ed7c2e86cbf1ddc2761f/libs/langchain/langchain/chains/loading.py#L589). This method has been working correctly for chains that include legacy LLM models like [OpenAI](https://python.langchain.com/docs/integrations/llms/openai).
However, this method doesn't work for the new Chat LLM models like [ChatOpenAI](https://python.langchain.com/docs/integrations/chat/openai), [AzureChatOpenAI](https://python.langchain.com/docs/integrations/chat/azure_chat_openai), raising an error like `ValueError: Loading openai-chat LLM not supported`. This is because those models are not defined in the class mapping [here](https://github.com/ibiscp/langchain/blob/master/langchain/llms/__init__.py#L54-L76).
Alternatively, we've found that a new set of serde methods has been added ([ref](https://github.com/langchain-ai/langchain/pull/8164)) under `langchain.load.dumps` and `langchain.load.loads`. This works for loading a single model, e.g. `loads(dumps(ChatOpenAI()))`. However, [load_chain()](https://github.com/langchain-ai/langchain/blob/f96dd57501131840b713ed7c2e86cbf1ddc2761f/libs/langchain/langchain/chains/loading.py#L589) doesn't use these new serde methods, so it still doesn't handle chains that contain the new chat models.
It seems there are a few possible paths forward to solve this issue:
* Can we add `openai-chat` and other chat models to [the class mapping](https://github.com/ibiscp/langchain/blob/master/langchain/llms/__init__.py#L54-L76)? I saw [a PR that proposed this](https://github.com/langchain-ai/langchain/pull/1715) was closed for deprecation reasons, but the new `ChatOpenAI` class actually has the same model type, `openai-chat`:
```
"llm": {
"model_name": "gpt-3.5-turbo",
"model": "gpt-3.5-turbo",
"stream": false,
"n": 1,
"temperature": 0.1,
"max_tokens": 200,
"_type": "openai-chat"
},
```
* Alternatively, is there any plan to migrate [load_chain()](https://github.com/langchain-ai/langchain/blob/f96dd57501131840b713ed7c2e86cbf1ddc2761f/libs/langchain/langchain/chains/loading.py#L589) to use the new serde methods, to support new Chat LLM models?
* If neither of these is planned in the near future, could you provide any workaround we can save/load chains that include new Chat LLM models like ChatOpenAI?
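To illustrate the first option, here is a stdlib-only sketch (hypothetical names, not LangChain's actual classes) of how a type-to-class registry like the one in `load_llm_from_config` resolves a config, and how registering an `"openai-chat"` entry would avoid the `ValueError`:

```python
# Hypothetical, stdlib-only illustration -- not LangChain's actual loader.
def load_llm_from_config(config, registry):
    config = dict(config)
    config_type = config.pop("_type")
    if config_type not in registry:
        raise ValueError(f"Loading {config_type} LLM not supported")
    return registry[config_type](**config)

class FakeChatOpenAI:
    """Stand-in for ChatOpenAI; just stores its config."""
    def __init__(self, **kwargs):
        self.kwargs = kwargs

# With no "openai-chat" entry, the loader fails exactly as in the traceback;
# registering the class makes the same serialized config load cleanly.
registry = {"openai-chat": FakeChatOpenAI}
llm = load_llm_from_config({"_type": "openai-chat", "model": "gpt-3.5-turbo"}, registry)
print(llm.kwargs["model"])  # gpt-3.5-turbo
```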
This has been a blocker for us in migrating from the old LLM models to the new Chat models. Since the old `OpenAI` class doesn't work with `openai >= 1.0`, this effectively blocks us from upgrading the OpenAI SDK as well.
Thank you in advance for your support!
### System Info
$ python -m langchain_core.sys_info
```
System Information
------------------
> OS: Linux
> OS Version: #58~20.04.1-Ubuntu SMP Mon Jan 22 17:15:01 UTC 2024
> Python Version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.10
> langchain_community: 0.0.25
> langsmith: 0.1.13
> langchain_openai: 0.0.8
> langchain_text_splitters: 0.0.1
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | load_chain() method doesn't work with new chat models (ChatOpenAI, AzureChatOpenAI) | https://api.github.com/repos/langchain-ai/langchain/issues/18420/comments | 0 | 2024-03-02T11:31:03Z | 2024-06-08T16:14:30Z | https://github.com/langchain-ai/langchain/issues/18420 | 2,164,769,403 | 18,420 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
See the Colab link: https://colab.research.google.com/drive/1vNVBmE15pAPJjOukfIB4Gs7ilygbLb5k?usp=sharing
```python
!pip install -q langchain
import langchain
print(f'Langchain Version: {langchain.__version__}')
import sys
print(f'Python Version: {sys.version}')
from langchain_text_splitters import RecursiveCharacterTextSplitter
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
[<ipython-input-2-9070a936598a>](https://localhost:8080/#) in <cell line: 7>()
5 import sys
6 print(f'Python Version: {sys.version}')
----> 7 from langchain_text_splitters import RecursiveCharacterTextSplitter
ModuleNotFoundError: No module named 'langchain_text_splitters'
---------------------------------------------------------------------------
NOTE: If your import is failing due to a missing package, you can
manually install dependencies using either !pip or !apt.
To view examples of installing some common dependencies, click the
"Open Examples" button below.
---------------------------------------------------------------------------
### Description
I was following the Quickstart Guide, and noticed that when trying to import from langchain_text_splitters import RecursiveCharacterTextSplitter, the error above is received. I made sure all dependencies are included, tried uninstalling and reinstalling LangChain.
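A likely cause, under the assumption that the text splitters were split out into their own distribution in recent releases, is that the `langchain_text_splitters` module is provided by a separate pip package (`langchain-text-splitters`) rather than by `langchain` itself. A small stdlib-only check (illustrative names) can confirm whether the module is importable:

```python
import importlib.util

def missing_distribution(module_name, pip_name):
    """Return the pip package to install if `module_name` is absent, else None."""
    if importlib.util.find_spec(module_name) is None:
        return pip_name
    return None

need = missing_distribution("langchain_text_splitters", "langchain-text-splitters")
print(need or "already installed")
```

If the check reports the package as missing, `pip install langchain-text-splitters` would be the next step to try.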
### System Info
langchain==0.1.9
langchain-community==0.0.25
langchain-core==0.1.28
Langchain Version: 0.1.9
Python Version: 3.10.12 | ModuleNotFoundError: No module named 'langchain_text_splitters' | https://api.github.com/repos/langchain-ai/langchain/issues/18409/comments | 4 | 2024-03-02T01:34:02Z | 2024-03-05T17:34:44Z | https://github.com/langchain-ai/langchain/issues/18409 | 2,164,473,587 | 18,409 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:
``` python
import os
from langchain.agents import AgentType, initialize_agent
from langchain_community.agent_toolkits.github.toolkit import GitHubToolkit
from langchain_community.utilities.github import GitHubAPIWrapper
from langchain_openai import ChatOpenAI
# Set your environment variables using os.environ
os.environ["GITHUB_APP_ID"] = ""
os.environ["GITHUB_APP_PRIVATE_KEY"] = "pemfile"
os.environ["GITHUB_REPOSITORY"] = ""
os.environ["GITHUB_BRANCH"] = ""
os.environ["GITHUB_BASE_BRANCH"] = "main"
# This example also requires an OpenAI API key
os.environ["OPENAI_API_KEY"] = "####"
llm = ChatOpenAI(temperature=0, model="gpt-4-1106-preview")
github = GitHubAPIWrapper()
toolkit = GitHubToolkit.from_github_api_wrapper(github)
tools = toolkit.get_tools()
# STRUCTURED_CHAT includes args_schema for each tool, helps tool args parsing errors.
agent = initialize_agent(
tools,
llm,
agent=AgentType.STRUCTURED_CHAT_ZERO_SHOT_REACT_DESCRIPTION,
verbose=True,
)
print("Available tools:")
for tool in tools:
print("\t" + tool.name)
agent.run(
"You have the software engineering capabilities of a Google Principle engineer. You are tasked with completing issues on a github repository. Please look at the existing issues and complete them."
)
```
### Error Message and Stack Trace (if applicable)
```
> Entering new AgentExecutor chain...
Action:
```json
{
"action": "Get Issues",
"action_input": {}
}
```
Observation: Found 1 issues:
[{'title': 'Create a new html file', 'number': 1, 'opened_by': 'ajhous44'}]
Thought:To proceed with the task of completing the issue found in the repository, I need to first understand the details of the issue. I will fetch the title, body, and comment thread of the specific issue with number 1.
Action:
```json
{
"action": "Get Issue",
"action_input": {
"issue_number": 1
}
}
```Traceback (most recent call last):
Traceback (most recent call last):
File "c:\Users\[username]\Documents\Git\[project_name]\[script_name].py", line 34, in <module>
agent.run(
File "C:\Users\[username]\Documents\Git\[project_name]\.venv\Lib\site-packages\langchain_core\_api\deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\[username]\Documents\Git\[project_name]\.venv\Lib\site-packages\langchain\chains\base.py", line 545, in run
return self(args[0], callbacks=callbacks, tags=tags, metadata=metadata)[
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\[username]\Documents\Git\[project_name]\.venv\Lib\site-packages\langchain_core\_api\deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\[username]\Documents\Git\[project_name]\.venv\Lib\site-packages\langchain\chains\base.py", line 378, in __call__
return self.invoke(
^^^^^^^^^^^^
File "C:\Users\[username]\Documents\Git\[project_name]\.venv\Lib\site-packages\langchain\chains\base.py", line 163, in invoke
raise e
File "C:\Users\[username]\Documents\Git\[project_name]\.venv\Lib\site-packages\langchain\chains\base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "C:\Users\[username]\Documents\Git\[project_name]\.venv\Lib\site-packages\langchain\agents\agent.py", line 1391, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\[username]\Documents\Git\[project_name]\.venv\Lib\site-packages\langchain_core\tools.py", line 376, in run
self._run(*tool_args, run_manager=run_manager, **tool_kwargs)
TypeError: GitHubAction._run() got an unexpected keyword argument 'issue_number'
```
### Description
When attempting to run a simple agent example integrating LangChain with GitHub, following the documentation provided at https://python.langchain.com/docs/integrations/toolkits/github, I encountered a TypeError during the execution of the agent.run() method. The error message indicates that the GitHubAction._run() method received an unexpected keyword argument issue_number. This issue arises despite closely following the example provided, which suggests there might be a discrepancy between the documented example and the current implementation or expectations of the GitHubAction._run() method. The environment is set up as instructed, with environment variables for GitHub integration and the OpenAI API key configured. The error occurs in the final step, where the agent is expected to process tasks related to GitHub issue management through natural language commands. This suggests a possible mismatch in the expected arguments or an oversight in the documentation.
### System Info
langchain==0.1.9
langchain-community==0.0.24
langchain-core==0.1.28
langchain-openai==0.0.8 | GitHub agent throwing TypeError: GitHubAction._run() Unexpected Keyword Argument 'issue_number' | https://api.github.com/repos/langchain-ai/langchain/issues/18406/comments | 9 | 2024-03-01T23:08:14Z | 2024-07-24T16:07:42Z | https://github.com/langchain-ai/langchain/issues/18406 | 2,164,329,350 | 18,406 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_text_splitters import RecursiveJsonSplitter

for section, config in config_sections.items():
    chunk_size = 500
    splitter = RecursiveJsonSplitter(min_chunk_size=chunk_size, max_chunk_size=chunk_size)
    my_chunks = splitter.split_json(json_data=config)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
The RecursiveJsonSplitter class's _json_split method uses a mutable default argument (chunks = [{}]) which leads to unintended behavior. Specifically, chunks from previous invocations of the method are being included in the results of subsequent invocations. This is because mutable default arguments in Python are initialized once at function definition time, not each time the function is called, causing the default value to be shared across all invocations.
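The pitfall can be reproduced with a stdlib-only sketch (a hypothetical function, not the actual splitter):

```python
def split_json(data, chunks=[{}]):  # BUG: the default list is created once, at definition time
    chunks.append({"data": data})
    return chunks

first = split_json("a")
second = split_json("b")
assert second is first            # both calls share the same list object
assert len(second) == 3           # [{}, {'data': 'a'}, {'data': 'b'}]

def split_json_fixed(data, chunks=None):  # fix: use None as the sentinel
    if chunks is None:
        chunks = [{}]
    chunks.append({"data": data})
    return chunks

assert split_json_fixed("a") is not split_json_fixed("b")
assert len(split_json_fixed("c")) == 2
```

Replacing the mutable default in `_json_split` with a `None` sentinel in the same way would stop chunks leaking between invocations.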
### System Info
System Information
------------------
> Python Version: 3.11.6 | packaged by conda-forge | (main, Oct 3 2023, 10:37:07) [Clang 15.0.7 ]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.9
> langchain_community: 0.0.24
> langsmith: 0.1.10
> langchain_openai: 0.0.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | RecursiveJsonSplitter Retains State Across Invocations Due to Mutable Default Argument | https://api.github.com/repos/langchain-ai/langchain/issues/18398/comments | 2 | 2024-03-01T18:21:10Z | 2024-07-21T16:06:00Z | https://github.com/langchain-ai/langchain/issues/18398 | 2,163,952,205 | 18,398 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import traceback
from typing import cast

from langchain.agents import AgentExecutor, BaseMultiActionAgent, create_openai_tools_agent
from langchain.memory import ConversationSummaryBufferMemory
from langchain_community.chat_message_histories import DynamoDBChatMessageHistory
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_openai import AzureChatOpenAI

# Config constants, CodeInterpreter, AgentCallbackHandler, and logger are defined elsewhere.

class Agent:
def __init__(self, connection_id: str, chat_id: str, user_id: str):
self.chat_id = chat_id
self.user_id = user_id
self.connection_id = connection_id
self._llm = AzureChatOpenAI(
azure_endpoint=OPENAI_AZURE_ENDPOINT,
openai_api_version=OPENAI_API_VERSION,
azure_deployment=OPENAI_AZURE_DEPLOYMENT,
openai_api_key=OPENAI_AZURE_API_KEY,
openai_api_type=OPENAI_API_TYPE,
temperature=0.4,
streaming=True,
)
self._tools = [
CodeInterpreter(),
]
message_history = DynamoDBChatMessageHistory(
table_name=DYNAMODB_HISTORY_TABLE_NAME,
session_id=self.chat_id,
key={
"PK": self.chat_id,
"SK": self.user_id,
},
)
self.memory = ConversationSummaryBufferMemory(
memory_key="chat_memory",
llm=self._llm,
max_token_limit=2000,
return_messages=True,
chat_memory=message_history
)
prompt = ChatPromptTemplate.from_messages(
[
("system", AGENT_SYSTEM_MESSAGE),
MessagesPlaceholder("chat_memory", optional=True),
("human", "{input}"),
MessagesPlaceholder("agent_scratchpad"),
]
)
agent = cast(
BaseMultiActionAgent,
create_openai_tools_agent(self._llm, self._tools, prompt),
)
self.agent_executor = AgentExecutor(
agent=agent, tools=self._tools, verbose=True, memory=self.memory,
).with_config({"run_name": "Agent"})
async def arun(self, query: str, callback: AgentCallbackHandler):
output = None
try:
async for event in self.agent_executor.astream_events(
{"input": query}, version="v1"
):
await callback.handle_event(event)
except Exception as e:
logger.error(traceback.format_exc())
logger.error("astream_events error %s", e)
```
### Error Message and Stack Trace (if applicable)
```
[ERROR] 2024-02-29T14:51:07.594Z functions::handle_message:118 Traceback (most recent call last):
  File "<placeholder>\logic\functions.py", line 112, in handle_message
    await agent.arun(message, callback)
  File "<placeholder>\logic\agent.py", line 107, in arun
    async for event in self.agent_executor.astream_events(
  File "<placeholder>\langchain_core\runnables\base.py", line 4157, in astream_events
    async for item in self.bound.astream_events(
  File "<placeholder>\langchain_core\runnables\base.py", line 889, in astream_events
    async for log in _astream_log_implementation(  # type: ignore[misc]
  File "<placeholder>\langchain_core\tracers\log_stream.py", line 612, in _astream_log_implementation
    await task
  File "<placeholder>\langchain_core\tracers\log_stream.py", line 566, in consume_astream
    async for chunk in runnable.astream(input, config, **kwargs):
  File "<placeholder>\langchain\agents\agent.py", line 1551, in astream
    async for step in iterator:
  File "<placeholder>\langchain\agents\agent_iterator.py", line 265, in __aiter__
    output = await self._aprocess_next_step_output(
  File "<placeholder>\langchain\agents\agent_iterator.py", line 328, in _aprocess_next_step_output
    return await self._areturn(next_step_output, run_manager=run_manager)
  File "<placeholder>\langchain\agents\agent_iterator.py", line 392, in _areturn
    return self.make_final_outputs(returned_output, run_manager)
  File "<placeholder>\langchain\agents\agent_iterator.py", line 142, in make_final_outputs
    self.agent_executor.prep_outputs(
  File "<placeholder>\langchain\chains\base.py", line 440, in prep_outputs
    self.memory.save_context(inputs, outputs)
  File "<placeholder>\langchain\memory\summary_buffer.py", line 59, in save_context
    super().save_context(inputs, outputs)
  File "<placeholder>\langchain\memory\chat_memory.py", line 38, in save_context
    input_str, output_str = self._get_input_output(inputs, outputs)
  File "<placeholder>\langchain\memory\chat_memory.py", line 30, in _get_input_output
    raise ValueError(f"One output key expected, got {outputs.keys()}")
ValueError: One output key expected, got dict_keys(['output', 'messages'])
```
### Description
When using agentexecutor with memory, an error occurs in the save_context method of chat_memory.py. The error is raised because the expected number of output keys is one, but the actual number of output keys is more than one.
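The failing check can be sketched in plain Python (illustrative names below, not the actual langchain internals): the agent returns two output keys, and a memory without an explicit output key cannot decide which one to store. A commonly suggested workaround — unverified here — is to construct the memory with `output_key="output"`.

```python
# Sketch of the ambiguous-output check behind the traceback above.
# `get_output_str` and its signature are illustrative, not langchain APIs.
def get_output_str(outputs: dict, output_key=None) -> str:
    if output_key is not None:
        return outputs[output_key]               # explicit key: unambiguous
    if len(outputs) != 1:
        raise ValueError(f"One output key expected, got {outputs.keys()}")
    return next(iter(outputs.values()))

outputs = {"output": "final answer", "messages": ["..."]}

try:
    get_output_str(outputs)                      # two keys, no hint -> raises
except ValueError as err:
    print(err)

print(get_output_str(outputs, output_key="output"))  # prints: final answer
```

With an explicit key the ambiguity disappears, which is why pinning `output_key` on the memory object is the usual suggestion for this error.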
### System Info
Python version: 3.11
Libraries used:
openai==1.10.0
langchain==0.1.6
langchain-community==0.0.19
langchain-openai==0.0.5
langchainhub==0.1.14
pyshorteners==1.0.1
tiktoken==0.5.2
python-multipart==0.0.6
httpx==0.25.0
azure-core==1.29.4
azure-identity==1.14.1
azure-search-documents==11.4.0b8
requests==2.31.0
requests-aws4auth==1.2.3
| Error in agentexecutor with memory: One output key expected, got dict_keys(['output', 'messages']) | https://api.github.com/repos/langchain-ai/langchain/issues/18388/comments | 8 | 2024-03-01T14:52:04Z | 2024-06-21T16:37:53Z | https://github.com/langchain-ai/langchain/issues/18388 | 2,163,574,770 | 18,388 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [ ] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from dotenv import load_dotenv
import streamlit as st
from PyPDF2 import PdfReader
from langchain.text_splitter import CharacterTextSplitter
# from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import FAISS
from langchain.chains.question_answering import load_qa_chain
from langchain.llms.huggingface_pipeline import HuggingFacePipeline
from langchain.embeddings.huggingface import HuggingFaceEmbeddings
# from langchain.llms import huggingface_hub


def main():
    load_dotenv()
    st.set_page_config(page_title="Ask your PDF")
    st.header("Ask your PDF 💬")

    # upload file
    pdf = st.file_uploader("Upload your PDF", type="pdf")

    # extract the text
    if pdf is not None:
        pdf_reader = PdfReader(pdf)
        text = ""
        for page in pdf_reader.pages:
            text += page.extract_text()

        # split into chunks
        text_splitter = CharacterTextSplitter(
            separator="\n",
            chunk_size=1000,
            chunk_overlap=200,
            length_function=len
        )
        chunks = text_splitter.split_text(text)

        # create embeddings
        # embeddings = OpenAIEmbeddings()
        embeddings = HuggingFaceEmbeddings(model_name="bert-base-uncased")
        knowledge_base = FAISS.from_texts(chunks, embeddings)

        # show user input
        user_question = st.text_input("Ask a question about your PDF:")
        if user_question:
            docs = knowledge_base.similarity_search(user_question)
            # st.write(docs)
            pipeline = HuggingFacePipeline()
            chain = load_qa_chain(pipeline, chain_type="stuff")
            response = chain.run(input_documents=docs, question=user_question)
            st.write(response)


if __name__ == '__main__':
    main()
```
### Error Message and Stack Trace (if applicable)
<img width="765" alt="image" src="https://github.com/langchain-ai/langchain/assets/31651898/3cd49b60-ed55-4e21-ac51-a5cf136111ba">
### Description
The error mainly comes from these lines:

```python
pipeline = HuggingFacePipeline()
chain = load_qa_chain(pipeline, chain_type="stuff")
response = chain.run(input_documents=docs, question=user_question)
```
### System Info
langchain== latest version
PyPDF2==3.0.1
python-dotenv==1.0.0
streamlit==1.18.1
faiss-cpu==1.7.4
altair<5
| TypeError: 'NoneType' object is not callable | https://api.github.com/repos/langchain-ai/langchain/issues/18384/comments | 0 | 2024-03-01T12:50:01Z | 2024-06-08T16:14:26Z | https://github.com/langchain-ai/langchain/issues/18384 | 2,163,340,434 | 18,384 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI()
```
### Error Message and Stack Trace (if applicable)
python openai.py
Traceback (most recent call last):
File "/home/fyx/Codes/langchain/openai.py", line 1, in <module>
from langchain_openai import ChatOpenAI
File "/home/fyx/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain_openai/__init__.py", line 1, in <module>
from langchain_openai.chat_models import (
File "/home/fyx/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain_openai/chat_models/__init__.py", line 1, in <module>
from langchain_openai.chat_models.azure import AzureChatOpenAI
File "/home/fyx/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain_openai/chat_models/azure.py", line 8, in <module>
import openai
File "/home/fyx/Codes/langchain/openai.py", line 1, in <module>
from langchain_openai import ChatOpenAI
ImportError: cannot import name 'ChatOpenAI' from partially initialized module 'langchain_openai' (most likely due to a circular import) (/home/fyx/anaconda3/envs/langchain/lib/python3.9/site-packages/langchain_openai/__init__.py)
### Description
I am trying to learn LangChain, but this very first example already fails with the circular-import error above.
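The "partially initialized module ... most likely due to a circular import" pattern typically appears when the script is named after the package it imports — here the file is `openai.py`, so `import openai` inside `langchain_openai` resolves back to the script itself. A self-contained demonstration using a deliberately shadowed stdlib module (the `json.py` name is purely illustrative):

```python
import pathlib
import subprocess
import sys
import tempfile

def demo_shadowing() -> bool:
    """Run a script named json.py that imports json; return True if it fails."""
    with tempfile.TemporaryDirectory() as d:
        script = pathlib.Path(d) / "json.py"
        script.write_text("import json\nprint(json.dumps({'a': 1}))\n")
        # The script's own directory sits first on sys.path, so `import json`
        # finds the script again -> partially initialized module, crash.
        proc = subprocess.run([sys.executable, "json.py"],
                              capture_output=True, text=True, cwd=d)
        return proc.returncode != 0

print(demo_shadowing())  # True -- the shadowed import crashes the script
```

Renaming the user script (e.g. to something other than `openai.py`) and deleting any stale `__pycache__` entries is the usual fix.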
### System Info
System Information
------------------
> OS: Linux
> OS Version: #107~20.04.1-Ubuntu SMP Fri Feb 9 14:20:11 UTC 2024
> Python Version: 3.9.18 (main, Sep 11 2023, 13:41:44)
[GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.1.28
> langchain: 0.1.9
> langchain_community: 0.0.24
> langsmith: 0.1.10
> langchain_cli: 0.0.21
> langchain_experimental: 0.0.53
> langchain_openai: 0.0.8
> langserve: 0.0.45
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph | ImportError: cannot import name 'ChatOpenAI' from partially initialized module 'langchain_openai' (most likely due to a circular import) | https://api.github.com/repos/langchain-ai/langchain/issues/18380/comments | 1 | 2024-03-01T11:00:32Z | 2024-06-08T16:14:20Z | https://github.com/langchain-ai/langchain/issues/18380 | 2,163,155,280 | 18,380
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```Python
from langchain_community.document_loaders import PlaywrightURLLoader
from langchain.vectorstores.utils import filter_complex_metadata
from langchain_community.vectorstores import Chroma
from langchain.text_splitter import CharacterTextSplitter, RecursiveCharacterTextSplitter, TokenTextSplitter
openai_api_key = "{OpenAI API Key}"
embeddings = OpenAIEmbeddings(api_key=openai_api_key)
llm = ChatOpenAI(model_name= openai_model, temperature=0, api_key=openai_api_key)
def connect_chromadb(collection_name: str = "langchain"):
try:
chroma_settings = chromadb.PersistentClient(path=db_directory, settings=Settings(anonymized_telemetry=False))
logging.info("Connecting vectordb...")
#vectordb = Chroma(collection_name = collection_name, embedding_function=embeddings, persist_directory=db_directory, client_settings=chroma_settings)
vectordb = Chroma(collection_name = collection_name, embedding_function=embeddings, client=chroma_settings)
logging.info("Vectordb connected successfully.")
return vectordb
except Exception as e:
logging.error("An error occured connecting to db: " + str(e))
vectordb = connect_chromadb()
urls = ["https://example.com"]
loader = PlaywrightURLLoader(urls=urls, remove_selectors=["header", "footer"], continue_on_failure=True, headless=True)
documents = loader.load()
documents = filter_complex_metadata(documents)
text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(separators=["\n\n", "\n", "\t"],chunk_size=CHUNK_SIZE_TOKENS,chunk_overlap=200)
texts = text_splitter.split_documents(documents)
vectordb.add_documents(documents=texts, embedding=embeddings, persist_directory = db_directory)
```
### Error Message and Stack Trace (if applicable)
### Description
I am using the following Dockerfile, and it installs Playwright successfully, but when I call `PlaywrightURLLoader` it gets stuck loading: no error is raised, it simply stops working. When I use `PlaywrightURLLoader` locally, it works fine.
```DockerFile
# syntax=docker/dockerfile:1
FROM tiangolo/uvicorn-gunicorn:python3.11
ENV shm_size 2gb
WORKDIR /code
COPY requirements.txt .
COPY my_api.py .
COPY my_lib.py .
COPY my_templates.py .
COPY config.cfg .
COPY version.txt .
RUN pip install --no-cache-dir --upgrade -r requirements.txt
#COPY . .
RUN python -m nltk.downloader punkt
RUN apt-get update && apt-get install -y \
fonts-liberation \
libasound2 \
libatk-bridge2.0-0 \
libatk1.0-0 \
libatspi2.0-0 \
libcups2 \
libdbus-1-3 \
libdrm2 \
libgbm1 \
libgtk-3-0 \
# libgtk-4-1 \
libnspr4 \
libnss3 \
libwayland-client0 \
libxcomposite1 \
libxdamage1 \
libxfixes3 \
libxkbcommon0 \
libxrandr2 \
xdg-utils \
libu2f-udev \
libvulkan1
# Chrome installation
RUN curl -LO https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN apt-get install -y ./google-chrome-stable_current_amd64.deb
RUN rm google-chrome-stable_current_amd64.deb
# Check chrome version
RUN echo "Chrome: " && google-chrome --version
RUN apt-get install -y tesseract-ocr
RUN apt-get install poppler-utils -y
RUN playwright install
RUN mkdir -p /home/DE_ChromaDB
RUN mkdir -p /home/uploaded
RUN mkdir -p /home/chathistory
EXPOSE 3100
CMD ["gunicorn", "my_app:app", "--timeout", "600", "--bind", "0.0.0.0:3100", "--workers","4", "--worker-class", "uvicorn.workers.UvicornWorker"]
```
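Two things worth trying (hedged suggestions, not verified against this exact image): let Playwright install its own bundled browser together with its system dependencies instead of pairing it with a separately installed Chrome, and, since the app runs under async `UvicornWorker`s, prefer the loader's async path (`await loader.aload()`) over the sync `load()`, which can block inside a running event loop. The Dockerfile change would look roughly like:

```DockerFile
# Hypothetical replacement for the manual Chrome install + bare `playwright install`:
# --with-deps pulls in the system libraries the bundled Chromium needs.
RUN playwright install --with-deps chromium
```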
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22621
> Python Version: 3.11.7 | packaged by conda-forge | (main, Dec 23 2023, 14:27:59) [MSC v.1937 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.26
> langchain: 0.1.9
> langchain_community: 0.0.24
> langsmith: 0.1.8
> langchain_experimental: 0.0.50
> langchain_openai: 0.0.7
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | PlaywrightUrlLoader Works Fine in Local But Not With Docker | https://api.github.com/repos/langchain-ai/langchain/issues/18379/comments | 0 | 2024-03-01T10:40:19Z | 2024-06-08T16:14:16Z | https://github.com/langchain-ai/langchain/issues/18379 | 2,163,119,164 | 18,379 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.chains import ConversationalRetrievalChain

llm = GoogleGenerativeAI(model="models/text-bison-001", google_api_key=api_key1,
                         temperature=0, verbose=True)
retriever = vectordb2.as_retriever()
# print(retriever)
qa = ConversationalRetrievalChain.from_llm(
    llm,
    retriever=retriever,
    memory=memory,
    chain_type="stuff",
    response_if_no_docs_found="None")

question = "give me the number of projects did siva worked?"
context = {"a": "give bye at last"}
result = qa({"question": question})
print(result['answer'])
print("\n")
result['chat_history']
```
### Error Message and Stack Trace (if applicable)
ValueError                                Traceback (most recent call last)
Cell In[61], line 13
     11 question = "give me the number of projects did siva worked?"
     12 context={"a":"give bye at last"}
---> 13 result = qa.invoke({"question": question, "chat_history": []})
     14 print(result['answer'])
     15 print("\n")

File c:\Users\siva.kotagiri\OneDrive - Nimble Accounting\Desktop\Kore.ai\search-assist\for images\env\lib\site-packages\langchain\chains\base.py:163, in Chain.invoke(self, input, config, **kwargs)
    161 except BaseException as e:
    162     run_manager.on_chain_error(e)
--> 163     raise e
    164 run_manager.on_chain_end(outputs)
    166 if include_run_info:

File c:\Users\siva.kotagiri\OneDrive - Nimble Accounting\Desktop\Kore.ai\search-assist\for images\env\lib\site-packages\langchain\chains\base.py:158, in Chain.invoke(self, input, config, **kwargs)
    151 self._validate_inputs(inputs)
    152 outputs = (
    153     self._call(inputs, run_manager=run_manager)
    154     if new_arg_supported
    155     else self._call(inputs)
    156 )
--> 158 final_outputs: Dict[str, Any] = self.prep_outputs(
    159     inputs, outputs, return_only_outputs
...

File c:\Users\siva.kotagiri\OneDrive - Nimble Accounting\Desktop\Kore.ai\search-assist\for images\env\lib\site-packages\langchain\memory\utils.py:19
     18 if len(prompt_input_keys) != 1:
---> 19     raise ValueError(f"One input key expected got {prompt_input_keys}")
     20 return prompt_input_keys[0]

ValueError: One input key expected got ['chat_history', 'question']
### Description
The above code works fine in Colab but fails in VS Code.
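For context, the failing check filters the memory's own variables out of the chain inputs and requires exactly one key to remain (simplified sketch below, not the real `langchain/memory/utils.py` code). If the memory does not claim the `chat_history` key in one environment, two keys survive and the error is raised — which may explain the Colab/VS Code difference.

```python
def get_prompt_input_key(inputs: dict, memory_variables: list) -> str:
    # simplified sketch of the input-key selection that raises above
    prompt_input_keys = [k for k in inputs if k not in memory_variables + ["stop"]]
    if len(prompt_input_keys) != 1:
        raise ValueError(f"One input key expected got {prompt_input_keys}")
    return prompt_input_keys[0]

# When the memory claims the chat_history key, exactly one key survives:
print(get_prompt_input_key({"question": "hi", "chat_history": []},
                           memory_variables=["chat_history"]))

# When it does not, both keys survive and the error above is raised:
try:
    get_prompt_input_key({"question": "hi", "chat_history": []},
                         memory_variables=["history"])
except ValueError as err:
    print(err)
```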
### System Info
windows | Not executing properly | https://api.github.com/repos/langchain-ai/langchain/issues/18376/comments | 0 | 2024-03-01T10:26:35Z | 2024-06-08T16:14:10Z | https://github.com/langchain-ai/langchain/issues/18376 | 2,163,094,790 | 18,376 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
**Error**
```python
values["spark_app_url"] = get_from_dict_or_env(
values,
"spark_app_url",
"IFLYTEK_SPARK_APP_URL",
"wss://spark-api.xf-yun.com/v3.1/chat",
)
values["spark_llm_domain"] = get_from_dict_or_env(
values,
"spark_llm_domain",
"IFLYTEK_SPARK_LLM_DOMAIN",
"generalv3",
)
# put extra params into model_kwargs
values["model_kwargs"]["temperature"] = values["temperature"] or cls.temperature
values["model_kwargs"]["top_k"] = values["top_k"] or cls.top_k
values["client"] = _SparkLLMClient(
app_id=values["spark_app_id"],
api_key=values["spark_api_key"],
api_secret=values["spark_api_secret"],
api_url=values["spark_api_url"],
spark_domain=values["spark_llm_domain"],
model_kwargs=values["model_kwargs"],
)
return values
```
**Suggested Code**
```python
values["spark_api_url"] = get_from_dict_or_env(
values,
"spark_api_url",
"IFLYTEK_SPARK_API_URL",
"wss://spark-api.xf-yun.com/v3.1/chat",
)
```
File Path: https://github.com/langchain-ai/langchain/blob/v0.1.9/libs/community/langchain_community/chat_models/sparkllm.py
**Lines 193 to 198**
### Error Message and Stack Trace (if applicable)
Exception Error
### Description
Currently, when using the ChatModel provided by `sparkllm`, the default version is V3.1. However, the official version V3.5 is already available. When trying to use the V3.5 version and specifying the new version information through environment variables, it was found that the `IFLYTEK_SPARK_APP_URL` setting was not working. After tracing the code, it was discovered that the error was due to a misspelled key for parameter retrieval in the code. Errors occur in all Release versions. I hope this can be fixed as soon as possible. The specific error code is as follows:
``` python
values["spark_app_url"] = get_from_dict_or_env(
values,
"spark_app_url",
"IFLYTEK_SPARK_APP_URL",
"wss://spark-api.xf-yun.com/v3.1/chat",
)
values["spark_llm_domain"] = get_from_dict_or_env(
values,
"spark_llm_domain",
"IFLYTEK_SPARK_LLM_DOMAIN",
"generalv3",
)
# put extra params into model_kwargs
values["model_kwargs"]["temperature"] = values["temperature"] or cls.temperature
values["model_kwargs"]["top_k"] = values["top_k"] or cls.top_k
values["client"] = _SparkLLMClient(
app_id=values["spark_app_id"],
api_key=values["spark_api_key"],
api_secret=values["spark_api_secret"],
api_url=values["spark_api_url"],
spark_domain=values["spark_llm_domain"],
model_kwargs=values["model_kwargs"],
)
return values
```
File Path: https://github.com/langchain-ai/langchain/blob/v0.1.9/libs/community/langchain_community/chat_models/sparkllm.py
`spark_api_key` key does not exist
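The effect of the typo can be shown with a small stand-in for `get_from_dict_or_env` (simplified sketch, not the real implementation): the URL is stored under `spark_app_url`, but the client is later built from `values["spark_api_url"]`, which was never set.

```python
import os

def get_from_dict_or_env(values, key, env_key, default=None):
    # simplified stand-in for langchain_core.utils.get_from_dict_or_env
    return values.get(key) or os.environ.get(env_key) or default

values: dict = {}
values["spark_app_url"] = get_from_dict_or_env(
    values, "spark_app_url", "IFLYTEK_SPARK_APP_URL",
    "wss://spark-api.xf-yun.com/v3.1/chat",
)

# The value landed under the wrong key, so the later lookup fails:
print("spark_api_url" in values)   # False -> KeyError when building the client
```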
### System Info
langchain: v0.1.8
Python: 3.12
OS:Windows11
langchain_core: 0.1.24
langchain: 0.1.8
langchain_community: 0.0.21
langsmith: 0.1.3 | LangChain Community ChatModel for sparkllm Bug | https://api.github.com/repos/langchain-ai/langchain/issues/18370/comments | 0 | 2024-03-01T07:13:07Z | 2024-03-01T18:49:31Z | https://github.com/langchain-ai/langchain/issues/18370 | 2,162,760,580 | 18,370 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
vector_search = MongoDBAtlasVectorSearch.from_connection_string(
    uri,
    DB_NAME + "." + COLLECTION_NAME,
    embeddings,
    index_name=ATLAS_VECTOR_SEARCH_INDEX_NAME,
    relevance_score_fn='cosine',
)

qa_retriever = vector_search.as_retriever(
    search_type="similarity_score_threshold",
    search_kwargs={'score_threshold': 0.5},
)
```
### Error Message and Stack Trace (if applicable)
UserWarning: No relevant docs were retrieved using the relevance score threshold 0.5
warnings.warn()
### Description
I'm trying to use `MongoDBAtlasVectorSearch` with `search_type="similarity_score_threshold"`, but the retriever always returns an empty list. Documents are only returned when the score threshold is set to 0.0.
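The symptom is consistent with a scale mismatch between the scores the store returns and the 0–1 relevance scale the threshold filter expects; a minimal sketch of the filtering step (illustrative, not the actual implementation):

```python
def filter_by_threshold(docs_and_scores, score_threshold):
    # keep only (doc, relevance) pairs at or above the threshold
    return [(doc, s) for doc, s in docs_and_scores if s >= score_threshold]

print(filter_by_threshold([("doc1", 0.92), ("doc2", 0.41)], 0.5))
print(filter_by_threshold([("doc1", 0.0)], 0.5))  # [] -> "No relevant docs" warning
```

Inspecting the raw pairs with `vector_search.similarity_search_with_score(query)` can confirm which scale the backend is actually returning.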
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.10.13 | packaged by Anaconda, Inc. | (main, Sep 11 2023, 13:24:38) [MSC v.1916 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.1.22
> langchain: 0.1.4
> langchain_community: 0.0.19
> langsmith: 0.0.87
> langchain_cli: 0.0.21
> langchain_openai: 0.0.5
> langserve: 0.0.41 | similarity_score_threshold isn't working for MongoDB Atlas Vector Search | https://api.github.com/repos/langchain-ai/langchain/issues/18365/comments | 3 | 2024-03-01T04:37:23Z | 2024-08-05T16:08:11Z | https://github.com/langchain-ai/langchain/issues/18365 | 2,162,589,243 | 18,365 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
import os
import math
import faiss
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.chat_models import ChatOpenAI
from langchain.vectorstores import FAISS
from langchain.memory import VectorStoreRetrieverMemory
from langchain.docstore import InMemoryDocstore
from langchain.chains import LLMChain
from langchain.agents import AgentExecutor, Tool, ZeroShotAgent, initialize_agent, agent_types
from langchain.callbacks import get_openai_callback
from langchain.retrievers import TimeWeightedVectorStoreRetriever
from langchain.prompts import ChatPromptTemplate, SystemMessagePromptTemplate, MessagesPlaceholder, HumanMessagePromptTemplate
# use OpenAI embeddings
embedding_size = 1536  # dimensionality of OpenAIEmbeddings
index = faiss.IndexFlatL2(embedding_size)
embedding_fn = OpenAIEmbeddings()
# create the LLM
llm = ChatOpenAI(model="gpt-4-0125-preview")
from langchain.memory import ConversationBufferMemory, CombinedMemory, ConversationSummaryMemory
conv_memory = ConversationBufferMemory(
memory_key="chat_history",
input_key="input"
)
# create the agent
from tools.MyTimer import MyTimer
from tools.QueryTime import QueryTime
from tools.Weather import Weather
from tools.Calculator import Calculator
from tools.CheckSensor import CheckSensor
from tools.Switch import Switch
from tools.Knowledge import Knowledge
from tools.Say import Say
from tools.QueryTimerDB import QueryTimerDB
from tools.DeleteTimer import DeleteTimer
from tools.GetSwitchLog import GetSwitchLog
from tools.getOnRunLinkage import getOnRunLinkage
from tools.TimeCalc import TimeCalc
from tools.SetChatStatus import SetChatStatus
my_timer = MyTimer()
query_time_tool = QueryTime()
weather_tool = Weather()
calculator_tool = Calculator()
check_sensor_tool = CheckSensor()
switch_tool = Switch()
knowledge_tool = Knowledge()
say_tool = Say()
query_timer_db_tool = QueryTimerDB()
delete_timer_tool = DeleteTimer()
get_switch_log = GetSwitchLog()
get_on_run_linkage = getOnRunLinkage()
time_calc_tool = TimeCalc()
set_chat_status_tool = SetChatStatus()
tools = [
Tool(
name=my_timer.name,
func=my_timer.run,
description=my_timer.description
),
    # Tool(  # time-query tool removed
# name=query_time_tool.name,
# func=query_time_tool.run,
# description=query_time_tool.description
# ),
Tool(
name=weather_tool.name,
func=weather_tool.run,
description=weather_tool.description
),
Tool(
name=calculator_tool.name,
func=calculator_tool.run,
description=calculator_tool.description
),
Tool(
name=check_sensor_tool.name,
func=check_sensor_tool.run,
description=check_sensor_tool.description
),
Tool(
name=switch_tool.name,
func=switch_tool.run,
description=switch_tool.description
),
Tool(
name=knowledge_tool.name,
func=knowledge_tool.run,
description=knowledge_tool.description
),
Tool(
name=say_tool.name,
func=say_tool.run,
description=say_tool.description
),
Tool(
name=query_timer_db_tool.name,
func=query_timer_db_tool.run,
description=query_timer_db_tool.description
),
Tool(
name=delete_timer_tool.name,
func=delete_timer_tool.run,
description=delete_timer_tool.description
),
Tool(
name=get_switch_log.name,
func=get_switch_log.run,
description=get_switch_log.description
),
Tool(
name=get_on_run_linkage.name,
func=get_on_run_linkage.run,
description=get_on_run_linkage.description
),
Tool(
name=time_calc_tool.name,
func=time_calc_tool.run,
description=time_calc_tool.description
),
Tool(
name=set_chat_status_tool.name,
func=set_chat_status_tool.run,
description=set_chat_status_tool.description
),
]
agent = initialize_agent(agent_types=agent_types.AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION,
tools=tools, llm=llm, verbose=True, handle_parsing_errors=True, max_history=5, memory=conv_memory)
agent.run("my name is guozebin.")
agent.run("your?")
```
### Error Message and Stack Trace (if applicable)
agent.run("my name is guozebin.")
'Hello, Guozebin! How can I assist you today?'
agent.run("your?")
"Your question seems to be incomplete. Could you please provide more details or clarify what you're asking?"
### Description
I tested the Chat ReAct agent with langchain v0.0.336 and v0.0.339rc, but I couldn't get a continuous conversation: the agent forgets my name between `run` calls. Here is my sample code; please advise.
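One thing worth checking (a hedged guess, not verified against these exact versions): `initialize_agent` selects the agent type through its `agent=` parameter, and an unrecognized keyword such as `agent_types=` can be absorbed by `**kwargs`, leaving the default zero-shot agent — which has no chat-history slot in its prompt — in place. A stand-in illustrates the failure mode:

```python
def initialize_agent_sketch(tools, llm, agent=None, **kwargs):
    # stand-in for initialize_agent: misspelled keywords vanish into **kwargs
    return agent or "zero-shot-react-description"

chosen = initialize_agent_sketch(
    [], None, agent_types="chat-conversational-react-description"  # typo
)
print(chosen)  # zero-shot-react-description -- the intended agent was ignored
```

If that is the cause, passing `agent=agent_types.AgentType.CHAT_CONVERSATIONAL_REACT_DESCRIPTION` instead of `agent_types=...` would be the spelling to try.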
### System Info
langchain 0.0.339rc
windows 10
python 3.10 | Troubleshooting Continuous Conversation Issues in Langchain Chat ReAct Agent Versions v0.0.336 and v0.0.339rc: A Request for Guidance | https://api.github.com/repos/langchain-ai/langchain/issues/18364/comments | 0 | 2024-03-01T03:31:40Z | 2024-06-08T16:14:05Z | https://github.com/langchain-ai/langchain/issues/18364 | 2,162,527,514 | 18,364 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I would expect to be able to pass a template to QuerySQLCheckerTool when constructing it for my own custom toolkit, as such:
```python
query_sql_checker_tool = QuerySQLCheckerTool(
db=sqldb,
llm=model,
description=new_description,
template=new_prompt_template
)
```
however, this is not the case.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I would like to rewrite the template this tool uses for its query checker for my particular use case, by composing it into the existing tool from the library rather than creating my own custom tool based on it.
It appears `template` can already be passed as an argument, but it is never used. This is especially confusing because the argument has no other purpose, yet the prompt is hard-coded to the same value no matter what we pass when constructing this tool from the library for our own custom toolkits.
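The report can be reproduced in miniature (illustrative class, not the real tool): the `template` field is stored on the instance, but the prompt is built from a hard-coded constant, so the field never takes effect.

```python
QUERY_CHECKER = "hard-coded checker prompt: {query}"  # stand-in constant

class QuerySQLCheckerToolSketch:
    def __init__(self, template: str = QUERY_CHECKER):
        self.template = template
        # Bug being reported: the prompt ignores self.template entirely.
        self.prompt_template = QUERY_CHECKER

tool = QuerySQLCheckerToolSketch(template="my custom prompt: {query}")
print(tool.prompt_template == tool.template)  # False -- custom template dropped
```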
### System Info
This is a code issue as you can see:
https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/tools/sql_database/tool.py#L99
https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/tools/sql_database/tool.py#L116
| SQL query checker tool doesn't respect prompt template passed when creating default LLMChain | https://api.github.com/repos/langchain-ai/langchain/issues/18351/comments | 1 | 2024-03-01T00:59:30Z | 2024-06-08T16:14:00Z | https://github.com/langchain-ai/langchain/issues/18351 | 2,162,360,131 | 18,351 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
@classmethod
async def run_message(cls, text, thread_id):
"""
Put a message in our conversation thread.
:param thread_id: an int containing our thread identifier
:param text: the message text
"""
# Error in 'await'
msg = await OpenAIAssistantRunnable(assistant_id=settings.ASSISTANT_ID, as_agent=True).ainvoke({
"content": text,
"thread_id": thread_id
})
return msg.return_values["output"]
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "C:\Users\Kyo\AppData\Local\Programs\Python\Python311\Lib\site-packages\django\core\handlers\exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kyo\AppData\Local\Programs\Python\Python311\Lib\site-packages\django\core\handlers\base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kyo\AppData\Local\Programs\Python\Python311\Lib\site-packages\asgiref\sync.py", line 277, in __call__
return call_result.result()
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kyo\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kyo\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 401, in __get_result
raise self._exception
File "C:\Users\RSSpe\AppData\Local\Programs\Python\Python311\Lib\site-packages\asgiref\sync.py", line 353, in main_wrap
result = await self.awaitable(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kyo\AppData\Local\Programs\Python\Python311\Lib\site-packages\django\views\decorators\csrf.py", line 60, in _view_wrapper
return await view_func(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kyo\AppData\Local\Programs\Python\Python311\Lib\site-packages\adrf\views.py", line 77, in async_dispatch
response = self.handle_exception(exc)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kyo\AppData\Local\Programs\Python\Python311\Lib\site-packages\rest_framework\views.py", line 469, in handle_exception
self.raise_uncaught_exception(exc)
File "C:\Users\Kyo\AppData\Local\Programs\Python\Python311\Lib\site-packages\rest_framework\views.py", line 480, in raise_uncaught_exception
raise exc
File "C:\Users\Kyo\AppData\Local\Programs\Python\Python311\Lib\site-packages\adrf\views.py", line 70, in async_dispatch
response = await handler(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kyo\AppData\Local\Programs\Python\Python311\Lib\site-packages\adrf\decorators.py", line 50, in handler
return await func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kyo\Desktop\chatbot-django-capacitacion-ventas\apps\chatbot\views\openai_views.py", line 86, in post_message
text = await openairep.post_user_message(text, thread_id)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kyo\Desktop\chatbot-django-capacitacion-ventas\apps\chatbot\repositories\openai_repository.py", line 18, in post_user_message
response = await OpenAISingleton.add_message(text, thread)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kyo\Desktop\chatbot-django-capacitacion-ventas\apps\chatbot\desing_patterns\creational_patterns\singleton\openai_singleton.py", line 134, in add_message
msg = await OpenAIAssistantRunnable(assistant_id=settings.ASSISTANT_ID, as_agent=True).ainvoke({
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\Kyo\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\agents\openai_assistant\base.py", line 418, in ainvoke
raise e
File "C:\Users\Kyo\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\agents\openai_assistant\base.py", line 408, in ainvoke
run = await self._create_run(input)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: object Run can't be used in 'await' expression
```
### Description
A few days ago I was using OpenAIAssistantRunnable with `await` for async calls, but now I can't. If I call it without an `await` expression, I get a coroutine back.
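For what it's worth, the error itself just means `_create_run` returned a plain object instead of a coroutine — i.e. the runnable now seems to call the synchronous client internally. A minimal stand-alone sketch of the failure mode (the `Run` class is a stand-in for the OpenAI SDK object):

```python
import asyncio

class Run:  # stand-in for the OpenAI SDK's Run object
    pass

def create_run():
    # Synchronous call: returns a Run instance, not a coroutine.
    return Run()

async def main() -> str:
    try:
        await create_run()  # awaiting a plain object fails
    except TypeError as exc:
        return str(exc)
    return ""

message = asyncio.run(main())
print(message)  # -> object Run can't be used in 'await' expression
```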
### System Info
```
langchain==0.1.3
langchain-community==0.0.15
langchain-core==0.1.15
langchain-openai==0.0.3
langcodes==3.3.0
langsmith==0.0.83
openai==1.9.0
Django==5.0.2
djangorestframework==3.14.0
```
Python version: 3.11.6 | TypeError: object Run can't be used in 'await' expression | https://api.github.com/repos/langchain-ai/langchain/issues/18337/comments | 6 | 2024-02-29T18:44:53Z | 2024-08-09T16:07:12Z | https://github.com/langchain-ai/langchain/issues/18337 | 2,161,882,320 | 18,337 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain.document_loaders import (
UnstructuredExcelLoader,
PyPDFLoader,
UnstructuredWordDocumentLoader,
UnstructuredPowerPointLoader,
UnstructuredFileLoader,
AmazonTextractPDFLoader
)
file="General Drug Interactions (EU,NO) 1.docx"
loader = UnstructuredWordDocumentLoader(file)
pages = loader.load()
### Error Message and Stack Trace (if applicable)
```
KeyError                                  Traceback (most recent call last)
~\AppData\Local\Temp\ipykernel_13540\3705926774.py in <module>
      9 )
     10 loader = UnstructuredWordDocumentLoader(file)
---> 11 pages = loader.load()

~\Anaconda3\lib\site-packages\langchain_community\document_loaders\unstructured.py in load(self)
     85     def load(self) -> List[Document]:
     86         """Load file."""
---> 87         elements = self._get_elements()
     88         self._post_process_elements(elements)
     89         if self.mode == "elements":

~\Anaconda3\lib\site-packages\langchain_community\document_loaders\word_document.py in _get_elements(self)
    122         from unstructured.partition.docx import partition_docx
    123
--> 124         return partition_docx(filename=self.file_path, **self.unstructured_kwargs)

~\Anaconda3\lib\site-packages\unstructured\documents\elements.py in wrapper(*args, **kwargs)
    300     @functools.wraps(func)
    301     def wrapper(*args: _P.args, **kwargs: _P.kwargs) -> List[Element]:
--> 302         elements = func(*args, **kwargs)
    303         sig = inspect.signature(func)
    304         params: Dict[str, Any] = dict(**dict(zip(sig.parameters, args)), **kwargs)

~\Anaconda3\lib\site-packages\unstructured\file_utils\filetype.py in wrapper(*args, **kwargs)
    589     @functools.wraps(func)
    590     def wrapper(*args: _P.args, **kwargs: _P.kwargs) -> List[Element]:
--> 591         elements = func(*args, **kwargs)
    592         sig = inspect.signature(func)
    593         params: Dict[str, Any] = dict(**dict(zip(sig.parameters, args)), **kwargs)

~\Anaconda3\lib\site-packages\unstructured\file_utils\filetype.py in wrapper(*args, **kwargs)
    544     @functools.wraps(func)
    545     def wrapper(*args: _P.args, **kwargs: _P.kwargs) -> List[Element]:
--> 546         elements = func(*args, **kwargs)
    547         sig = inspect.signature(func)
    548         params: Dict[str, Any] = dict(**dict(zip(sig.parameters, args)), **kwargs)

~\Anaconda3\lib\site-packages\unstructured\chunking\title.py in wrapper(*args, **kwargs)
    322     @functools.wraps(func)
    323     def wrapper(*args: _P.args, **kwargs: _P.kwargs) -> List[Element]:
--> 324         elements = func(*args, **kwargs)
    325         sig = inspect.signature(func)
    326         params: Dict[str, Any] = dict(**dict(zip(sig.parameters, args)), **kwargs)

~\Anaconda3\lib\site-packages\unstructured\partition\docx.py in partition_docx(filename, file, metadata_filename, include_page_breaks, include_metadata, infer_table_structure, metadata_last_modified, chunking_strategy, languages, detect_language_per_element, **kwargs)
    229         detect_language_per_element=detect_language_per_element,
    230     )
--> 231     return list(elements)
    232
    233

~\Anaconda3\lib\site-packages\unstructured\partition\lang.py in apply_lang_metadata(elements, languages, detect_language_per_element)
    312
    313     if not isinstance(elements, List):
--> 314         elements = list(elements)
    315
    316     full_text = " ".join(e.text for e in elements if hasattr(e, "text"))

~\Anaconda3\lib\site-packages\unstructured\partition\docx.py in _iter_document_elements(self)
    308     # -- characteristic of a generator avoids repeated code to form interim results into lists.
    309
--> 310     if not self._document.sections:
    311         for paragraph in self._document.paragraphs:
    312             yield from self._iter_paragraph_elements(paragraph)

~\Anaconda3\lib\site-packages\unstructured\utils.py in __get__(self, obj, type)
    118         # --- and store that value in the (otherwise unused) host-object
    119         # --- __dict__ value of same name ('fget' nominally)
--> 120         value = self._fget(obj)
    121         obj.__dict__[self._name] = value
    122         return cast(_T, value)

~\Anaconda3\lib\site-packages\unstructured\partition\docx.py in _document(self)
    334
    335     if filename is not None:
--> 336         return docx.Document(filename)
    337
    338     assert file is not None

~\Anaconda3\lib\site-packages\docx\api.py in Document(docx)
     21     """
     22     docx = _default_docx_path() if docx is None else docx
---> 23     document_part = Package.open(docx).main_document_part
     24     if document_part.content_type != CT.WML_DOCUMENT_MAIN:
     25         tmpl = "file '%s' is not a Word file, content type is '%s'"

~\Anaconda3\lib\site-packages\docx\opc\package.py in open(cls, pkg_file)
    114     def open(cls, pkg_file):
    115         """Return an |OpcPackage| instance loaded with the contents of `pkg_file`."""
--> 116         pkg_reader = PackageReader.from_file(pkg_file)
    117         package = cls()
    118         Unmarshaller.unmarshal(pkg_reader, package, PartFactory)

~\Anaconda3\lib\site-packages\docx\opc\pkgreader.py in from_file(pkg_file)
     23         content_types = _ContentTypeMap.from_xml(phys_reader.content_types_xml)
     24         pkg_srels = PackageReader._srels_for(phys_reader, PACKAGE_URI)
---> 25         sparts = PackageReader._load_serialized_parts(
     26             phys_reader, pkg_srels, content_types
     27         )

~\Anaconda3\lib\site-packages\docx\opc\pkgreader.py in _load_serialized_parts(phys_reader, pkg_srels, content_types)
     51         sparts = []
     52         part_walker = PackageReader._walk_phys_parts(phys_reader, pkg_srels)
---> 53         for partname, blob, reltype, srels in part_walker:
     54             content_type = content_types[partname]
     55             spart = _SerializedPart(partname, content_type, reltype, blob, srels)

~\Anaconda3\lib\site-packages\docx\opc\pkgreader.py in _walk_phys_parts(phys_reader, srels, visited_partnames)
     84                 phys_reader, part_srels, visited_partnames
     85             )
---> 86             for partname, blob, reltype, srels in next_walker:
     87                 yield (partname, blob, reltype, srels)
     88

~\Anaconda3\lib\site-packages\docx\opc\pkgreader.py in _walk_phys_parts(phys_reader, srels, visited_partnames)
     84                 phys_reader, part_srels, visited_partnames
     85             )
---> 86             for partname, blob, reltype, srels in next_walker:
     87                 yield (partname, blob, reltype, srels)
     88

~\Anaconda3\lib\site-packages\docx\opc\pkgreader.py in _walk_phys_parts(phys_reader, srels, visited_partnames)
     79             reltype = srel.reltype
     80             part_srels = PackageReader._srels_for(phys_reader, partname)
---> 81             blob = phys_reader.blob_for(partname)
     82             yield (partname, blob, reltype, part_srels)
     83             next_walker = PackageReader._walk_phys_parts(

~\Anaconda3\lib\site-packages\docx\opc\phys_pkg.py in blob_for(self, pack_uri)
     81         Raises |ValueError| if no matching member is present in zip archive.
     82         """
---> 83         return self._zipf.read(pack_uri.membername)
     84
     85     def close(self):

~\Anaconda3\lib\zipfile.py in read(self, name, pwd)
   1470     def read(self, name, pwd=None):
   1471         """Return file bytes for name."""
-> 1472         with self.open(name, "r", pwd) as fp:
   1473             return fp.read()
   1474

~\Anaconda3\lib\zipfile.py in open(self, name, mode, pwd, force_zip64)
   1509         else:
   1510             # Get info object for name
-> 1511             zinfo = self.getinfo(name)
   1512
   1513         if mode == 'w':

~\Anaconda3\lib\zipfile.py in getinfo(self, name)
   1436         info = self.NameToInfo.get(name)
   1437         if info is None:
-> 1438             raise KeyError(
   1439                 'There is no item named %r in the archive' % name)
   1440

KeyError: "There is no item named 'word/#_top' in the archive"
```
### Description
Hello Team,
I am trying to use the latest LangChain version to load a .docx document; I have attached the error I am getting. Note that the file I am using is perfectly fine and is not corrupted.
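The final KeyError is raised by the standard-library `zipfile` module: python-docx walks the document's relationships and asks the .docx archive for a member named `word/#_top`, which suggests an internal bookmark hyperlink target (`#_top`) is being treated as a package part. A minimal sketch of just that failing call (the archive contents here are made up):

```python
import io
import zipfile

# Build a tiny in-memory archive that, like the real .docx, has no
# member named "word/#_top".
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("word/document.xml", "<w:document/>")

message = ""
with zipfile.ZipFile(buf) as zf:
    try:
        zf.read("word/#_top")  # the member python-docx asks for
    except KeyError as exc:
        message = str(exc)

print(message)
```

So the file itself may well be fine; the failure seems to sit in how python-docx (which unstructured uses under the hood) resolves that hyperlink target.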
### System Info
Langchain latest=0.1.8

| Error while loading docx file using UnstructuredWordDocumentLoader | https://api.github.com/repos/langchain-ai/langchain/issues/18329/comments | 0 | 2024-02-29T16:34:14Z | 2024-06-08T16:13:55Z | https://github.com/langchain-ai/langchain/issues/18329 | 2,161,656,887 | 18,329 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
import torch
from langchain.sql_database import SQLDatabase
from langchain.prompts import PromptTemplate # ChatPromptTemplate, SystemMessagePromptTemplate, HumanMessagePromptTemplate,
from langchain.chains import LLMChain, create_sql_query_chain # SequentialChain,
from langchain import HuggingFacePipeline
from langchain_experimental.sql import SQLDatabaseChain
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig, pipeline
from urllib.parse import quote_plus
db = SQLDatabase.from_uri(new_con, include_tables=[...])
model = AutoModelForCausalLM.from_pretrained(
r'model_path',
torch_dtype=torch.float32,
trust_remote_code=True,
device_map="auto",
low_cpu_mem_usage=True
)
tokenizer = AutoTokenizer.from_pretrained(r'tokenizer_path')
generation_config = GenerationConfig.from_pretrained(r'generationconfig_path')
pipeline = pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
return_full_text=True,
generation_config=generation_config
)
llm = HuggingFacePipeline(pipeline=pipeline)
system_message = """
prompt instructions
question: {query}
"""
prompt_template = PromptTemplate(
template=system_message, input_variables=["query"]
)
llm_chain = LLMChain(
llm = llm,
prompt = prompt_template
)
# chain using prompt and llm_chain
db_chain_1 = SQLDatabaseChain.from_llm(llm_chain, db, verbose=True, prompt = prompt_template, use_query_checker=False, input_key = 'query')
# using only the llm and no prompt
db_chain_2 = SQLDatabaseChain.from_llm(llm, db, verbose=True)
# another chain test
db_chain_3 = create_sql_query_chain(llm, db)
question = "give me the top ...."
```
Now, I've tried these chains and got different errors:
```
db_chain_1.invoke(question) # ValueError: Missing some input keys: {'query'}
db_chain_2.invoke(question) # TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]
db_chain_3.invoke(question) # AssertionError: The input to RunnablePassthrough.assign() must be a dict.
```
Note that with ```langchain==0.0.350``` I was able to run ```db_chain_2.run(question)``` (no prompt) and, using the prompt, ```db_chain_2.run(system_message.format(question=question))```.
Package versions:
```
langchain==0.1.9
langchain-community==0.0.24
langchain-core==0.1.27
langchain-experimental==0.0.52
langserve==0.0.43
langsmith==0.1.10
```
The full error message below refers to the following chain invocation: ```db_chain_2.invoke(question) # TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]```
Nonetheless, I would like to run it using the prompt (the case where I get the ```ValueError: Missing some input keys: {'query'}```).
### Error Message and Stack Trace (if applicable)
> Entering new SQLDatabaseChain chain...
give me the top....
SQLQuery:Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "mypath\.venv\lib\site-packages\langchain\chains\base.py", line 163, in invoke
raise e
File "mypath\.venv\lib\site-packages\langchain\chains\base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "mypath\.venv\lib\site-packages\langchain_experimental\sql\base.py", line 201, in _call
raise exc
File "mypath\.venv\lib\site-packages\langchain_experimental\sql\base.py", line 132, in _call
sql_cmd = self.llm_chain.predict(
File "mypath\.venv\lib\site-packages\langchain\chains\llm.py", line 293, in predict
return self(kwargs, callbacks=callbacks)[self.output_key]
File "mypath\.venv\lib\site-packages\langchain_core\_api\deprecation.py", line 145, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "mypath\.venv\lib\site-packages\langchain\chains\base.py", line 378, in __call__
return self.invoke(
File "mypath\.venv\lib\site-packages\langchain\chains\base.py", line 163, in invoke
raise e
File "mypath\.venv\lib\site-packages\langchain\chains\base.py", line 153, in invoke
self._call(inputs, run_manager=run_manager)
File "mypath\.venv\lib\site-packages\langchain\chains\llm.py", line 103, in _call
response = self.generate([inputs], run_manager=run_manager)
File "mypath\.venv\lib\site-packages\langchain\chains\llm.py", line 115, in generate
return self.llm.generate_prompt(
File "mypath\.venv\lib\site-packages\langchain_core\language_models\llms.py", line 568, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "mypath\.venv\lib\site-packages\langchain_core\language_models\llms.py", line 741, in generate
output = self._generate_helper(
File "mypath\.venv\lib\site-packages\langchain_core\language_models\llms.py", line 605, in _generate_helper
raise e
File "mypath\.venv\lib\site-packages\langchain_core\language_models\llms.py", line 592, in _generate_helper
self._generate(
File "mypath\.venv\lib\site-packages\langchain_community\llms\huggingface_pipeline.py", line 202, in _generate
responses = self.pipeline(
File "mypath\.venv\lib\site-packages\transformers\pipelines\text_generation.py", line 241, in __call__
return super().__call__(text_inputs, **kwargs)
File "mypath\.venv\lib\site-packages\transformers\pipelines\base.py", line 1148, in __call__
preprocess_params, forward_params, postprocess_params = self._sanitize_parameters(**kwargs)
File "mypath\.venv\lib\site-packages\transformers\pipelines\text_generation.py", line 171, in _sanitize_parameters
stop_sequence_ids = self.tokenizer.encode(stop_sequence, add_special_tokens=False)
File "mypath\.venv\lib\site-packages\transformers\tokenization_utils_base.py", line 2600, in encode
File "mypath\.venv\lib\site-packages\transformers\tokenization_utils_base.py", line 3008, in encode_plus
return self._encode_plus(
File "mypath\.venv\lib\site-packages\transformers\tokenization_utils_fast.py", line 576, in _encode_plus
batched_output = self._batch_encode_plus(
File "mypath\.venv\lib\site-packages\transformers\tokenization_utils_fast.py", line 504, in _batch_encode_plus
encodings = self._tokenizer.encode_batch(
TypeError: TextEncodeInput must be Union[TextInputSequence, Tuple[InputSequence, InputSequence]]
### Description
I was using older versions and the code worked with the `run` method.
```
langchain==0.0.350
langchain-community==0.0.3
langchain-core==0.1.0
langchain-experimental==0.0.47
langcodes==3.3.0
langserve==0.0.43
langsmith==0.0.70
```
Upgrading the package versions to
```
langchain==0.1.9
langchain-community==0.0.24
langchain-core==0.1.27
langchain-experimental==0.0.52
langserve==0.0.43
langsmith==0.1.10
```
Changing `run` to `invoke` then raises the error.
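The `Missing some input keys` message comes from chain input validation: `invoke` expects a dict containing every input key the chain declares. A simplified stand-in for that check (not the real LangChain implementation), showing the shape of input that passes:

```python
def validate_inputs(inputs: dict, input_keys: list) -> None:
    # Simplified stand-in for LangChain's chain input validation.
    missing = set(input_keys) - set(inputs)
    if missing:
        raise ValueError(f"Missing some input keys: {missing}")

question = "give me the top ..."

error_message = ""
try:
    validate_inputs({"question": question}, ["query"])  # wrong key
except ValueError as exc:
    error_message = str(exc)

print(error_message)  # -> Missing some input keys: {'query'}

validate_inputs({"query": question}, ["query"])  # correct key: no error
```

So invoking with a dict keyed the way the chain expects, e.g. `db_chain_1.invoke({'query': question})`, and making sure the custom prompt only uses variables the chain actually supplies, may avoid this particular error — though I haven't verified that against the other two tracebacks.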
### System Info
OS: Windows
Python Version: 3.9.7
### Pip freeze
accelerate==0.27.2
aiohttp==3.9.3
aiosignal==1.3.1
anyio==4.3.0
argon2-cffi==23.1.0
argon2-cffi-bindings==21.2.0
asgiref==3.7.2
async-timeout==4.0.3
attrs==23.2.0
backoff==2.2.1
bcrypt==4.1.2
bitsandbytes==0.42.0
build==1.0.3
cachetools==5.3.3
certifi==2024.2.2
cffi==1.16.0
charset-normalizer==3.3.2
chroma-hnswlib==0.7.3
chromadb==0.4.24
click==8.1.7
colorama==0.4.6
coloredlogs==15.0.1
dataclasses-json==0.6.4
Deprecated==1.2.14
exceptiongroup==1.2.0
fastapi==0.110.0
filelock==3.13.1
flatbuffers==23.5.26
frozenlist==1.4.1
fsspec==2024.2.0
google-auth==2.28.1
googleapis-common-protos==1.62.0
gpt4all==2.2.1.post1
greenlet==3.0.3
grpcio==1.62.0
h11==0.14.0
httpcore==1.0.4
httptools==0.6.1
httpx==0.27.0
httpx-sse==0.4.0
huggingface-hub==0.21.1
humanfriendly==10.0
idna==3.6
importlib-metadata==6.11.0
importlib_resources==6.1.2
Jinja2==3.1.3
joblib==1.3.2
jsonpatch==1.33
jsonpointer==2.4
kubernetes==29.0.0
langchain==0.1.9
langchain-community==0.0.24
langchain-core==0.1.27
langchain-experimental==0.0.52
langserve==0.0.43
langsmith==0.1.10
MarkupSafe==2.1.5
marshmallow==3.21.0
minio==7.2.4
mmh3==4.1.0
monotonic==1.6
mpmath==1.3.0
multidict==6.0.5
mypy-extensions==1.0.0
networkx==3.2.1
numpy==1.26.4
oauthlib==3.2.2
onnxruntime==1.17.1
opentelemetry-api==1.23.0
opentelemetry-exporter-otlp-proto-common==1.23.0
opentelemetry-exporter-otlp-proto-grpc==1.23.0
opentelemetry-instrumentation==0.44b0
opentelemetry-instrumentation-asgi==0.44b0
opentelemetry-instrumentation-fastapi==0.44b0
opentelemetry-proto==1.23.0
opentelemetry-sdk==1.23.0
opentelemetry-semantic-conventions==0.44b0
opentelemetry-util-http==0.44b0
orjson==3.9.15
overrides==7.7.0
packaging==23.2
pillow==10.2.0
posthog==3.4.2
protobuf==4.25.3
psutil==5.9.8
pulsar-client==3.4.0
pyasn1==0.5.1
pyasn1-modules==0.3.0
pycparser==2.21
pycryptodome==3.20.0
pydantic==1.10.13
pyodbc==5.1.0
pypdf==4.0.2
PyPika==0.48.9
pyproject_hooks==1.0.0
pyreadline3==3.4.1
python-dateutil==2.8.2
python-dotenv==1.0.1
PyYAML==6.0.1
regex==2023.12.25
requests==2.31.0
requests-oauthlib==1.3.1
rsa==4.9
safetensors==0.4.2
scikit-learn==1.4.1.post1
scipy==1.12.0
sentence-transformers==2.4.0
six==1.16.0
sniffio==1.3.1
SQLAlchemy==2.0.27
sse-starlette==1.8.2
starlette==0.36.3
sympy==1.12
tenacity==8.2.3
threadpoolctl==3.3.0
tokenizers==0.15.2
tomli==2.0.1
torch==2.2.1
tqdm==4.66.2
transformers==4.38.1
typer==0.9.0
typing-inspect==0.9.0
typing_extensions==4.10.0
urllib3==2.2.1
uvicorn==0.27.1
watchfiles==0.21.0
websocket-client==1.7.0
websockets==12.0
wrapt==1.16.0
yarl==1.9.4
zipp==3.17.0
| SQLDatabaseChain invoke ValueError: Missing some input keys: {'query'} | https://api.github.com/repos/langchain-ai/langchain/issues/18328/comments | 2 | 2024-02-29T14:58:44Z | 2024-06-08T16:13:50Z | https://github.com/langchain-ai/langchain/issues/18328 | 2,161,460,487 | 18,328 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.vectorstores import Neo4jVector
from langchain.graphs import Neo4jGraph
from langchain.chat_models import ChatOllama
from langchain_community.embeddings import OllamaEmbeddings
from langchain.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser
llm=ChatOllama(temperature=0, base_url=ollama_base_url, model="llama2:latest", streaming=True, top_k=10, top_p=0.3, num_ctx=3072)
neo4j_graph = Neo4jGraph(url=NEO4J_URI, username=NEO4J_USERNAME, password=NEO4J_PASSWORD)
embeddings = OllamaEmbeddings(base_url=ollama_base_url, model="llama2:latest")
template = """Answer the question based only on the following context:
Always do a case-insensitive and fuzzy search for any search.
Do not include any explanations or apologies in your responses.
If you don't know the answer, just say that you don't know, don't try to make up an answer.
{context}
Question: {question}
"""
prompt = ChatPromptTemplate.from_template(template)
kg = Neo4jVector.from_existing_index(
embedding=embeddings,
url=NEO4J_URI,
username=NEO4J_USERNAME,
password=NEO4J_PASSWORD,
database="neo4j",
index_name="person_eg",
search_type='vector'
)
retriever = kg.as_retriever()
question= "In which department is Erik Valle working?"
rag_chain = (
{"context": retriever, "question": RunnablePassthrough()}
| prompt
| llm
| StrOutputParser()
)
rag_chain.invoke(question)
```
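The traceback below is cut off, but it fails while building `Document(page_content=result["text"], ...)` inside pydantic validation, so one assumption worth checking is whether some rows returned by the index have no `text` property (e.g. the index was created with a different text node property). A stand-alone sketch of a defensive version of that list comprehension, with made-up data and stand-in names:

```python
# Hypothetical rows as the vector index might return them; the second row
# simulates a node that is missing its text property.
results = [
    {"text": "Erik Valle works in Engineering", "metadata": {"name": "Erik"}, "score": 0.91},
    {"text": None, "metadata": {"name": None}, "score": 0.42},
]

# Guard against None text before constructing document tuples, mirroring the
# shape of the library's comprehension without depending on it.
docs = [
    (row["text"], {k: v for k, v in row["metadata"].items() if v is not None}, row["score"])
    for row in results
    if row["text"] is not None
]

print(len(docs))  # -> 1
```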
### Error Message and Stack Trace (if applicable)
```
Cell In[2], line 50
41 question= "In which department is Erik Valle working?"
43 rag_chain = (
44 {"context": retriever, "question": RunnablePassthrough()}
45 | prompt
46 | llm
47 | StrOutputParser()
48 )
---> 50 rag_chain.invoke(question)
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/runnables/base.py:2053](http://localhost:8888/home/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/runnables/base.py#line=2052), in RunnableSequence.invoke(self, input, config)
2051 try:
2052 for i, step in enumerate(self.steps):
-> 2053 input = step.invoke(
2054 input,
2055 # mark each step as a child run
2056 patch_config(
2057 config, callbacks=run_manager.get_child(f"seq:step:{i+1}")
2058 ),
2059 )
2060 # finish the root run
2061 except BaseException as e:
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/runnables/base.py:2692](http://localhost:8888/home/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/runnables/base.py#line=2691), in RunnableParallel.invoke(self, input, config)
2679 with get_executor_for_config(config) as executor:
2680 futures = [
2681 executor.submit(
2682 step.invoke,
(...)
2690 for key, step in steps.items()
2691 ]
-> 2692 output = {key: future.result() for key, future in zip(steps, futures)}
2693 # finish the root run
2694 except BaseException as e:
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/runnables/base.py:2692](http://localhost:8888/home/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/runnables/base.py#line=2691), in <dictcomp>(.0)
2679 with get_executor_for_config(config) as executor:
2680 futures = [
2681 executor.submit(
2682 step.invoke,
(...)
2690 for key, step in steps.items()
2691 ]
-> 2692 output = {key: future.result() for key, future in zip(steps, futures)}
2693 # finish the root run
2694 except BaseException as e:
File [~/.pyenv/versions/3.11.7/lib/python3.11/concurrent/futures/_base.py:456](http://localhost:8888/home/.pyenv/versions/3.11.7/lib/python3.11/concurrent/futures/_base.py#line=455), in Future.result(self, timeout)
454 raise CancelledError()
455 elif self._state == FINISHED:
--> 456 return self.__get_result()
457 else:
458 raise TimeoutError()
File [~/.pyenv/versions/3.11.7/lib/python3.11/concurrent/futures/_base.py:401](http://localhost:8888/home/.pyenv/versions/3.11.7/lib/python3.11/concurrent/futures/_base.py#line=400), in Future.__get_result(self)
399 if self._exception:
400 try:
--> 401 raise self._exception
402 finally:
403 # Break a reference cycle with the exception in self._exception
404 self = None
File [~/.pyenv/versions/3.11.7/lib/python3.11/concurrent/futures/thread.py:58](http://localhost:8888/home/.pyenv/versions/3.11.7/lib/python3.11/concurrent/futures/thread.py#line=57), in _WorkItem.run(self)
55 return
57 try:
---> 58 result = self.fn(*self.args, **self.kwargs)
59 except BaseException as exc:
60 self.future.set_exception(exc)
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/retrievers.py:121](http://localhost:8888/home/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/retrievers.py#line=120), in BaseRetriever.invoke(self, input, config, **kwargs)
117 def invoke(
118 self, input: str, config: Optional[RunnableConfig] = None, **kwargs: Any
119 ) -> List[Document]:
120 config = ensure_config(config)
--> 121 return self.get_relevant_documents(
122 input,
123 callbacks=config.get("callbacks"),
124 tags=config.get("tags"),
125 metadata=config.get("metadata"),
126 run_name=config.get("run_name"),
127 **kwargs,
128 )
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/retrievers.py:224](http://localhost:8888/home/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/retrievers.py#line=223), in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
222 except Exception as e:
223 run_manager.on_retriever_error(e)
--> 224 raise e
225 else:
226 run_manager.on_retriever_end(
227 result,
228 )
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/retrievers.py:217](http://localhost:8888/home/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/retrievers.py#line=216), in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
215 _kwargs = kwargs if self._expects_other_args else {}
216 if self._new_arg_supported:
--> 217 result = self._get_relevant_documents(
218 query, run_manager=run_manager, **_kwargs
219 )
220 else:
221 result = self._get_relevant_documents(query, **_kwargs)
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/vectorstores.py:654](http://localhost:8888/home/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/vectorstores.py#line=653), in VectorStoreRetriever._get_relevant_documents(self, query, run_manager)
650 def _get_relevant_documents(
651 self, query: str, *, run_manager: CallbackManagerForRetrieverRun
652 ) -> List[Document]:
653 if self.search_type == "similarity":
--> 654 docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
655 elif self.search_type == "similarity_score_threshold":
656 docs_and_similarities = (
657 self.vectorstore.similarity_search_with_relevance_scores(
658 query, **self.search_kwargs
659 )
660 )
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/vectorstores/neo4j_vector.py:564](http://localhost:8888/home/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/vectorstores/neo4j_vector.py#line=563), in Neo4jVector.similarity_search(self, query, k, **kwargs)
554 """Run similarity search with Neo4jVector.
555
556 Args:
(...)
561 List of Documents most similar to the query.
562 """
563 embedding = self.embedding.embed_query(text=query)
--> 564 return self.similarity_search_by_vector(
565 embedding=embedding,
566 k=k,
567 query=query,
568 )
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/vectorstores/neo4j_vector.py:659](http://localhost:8888/home/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/vectorstores/neo4j_vector.py#line=658), in Neo4jVector.similarity_search_by_vector(self, embedding, k, **kwargs)
644 def similarity_search_by_vector(
645 self,
646 embedding: List[float],
647 k: int = 4,
648 **kwargs: Any,
649 ) -> List[Document]:
650 """Return docs most similar to embedding vector.
651
652 Args:
(...)
657 List of Documents most similar to the query vector.
658 """
--> 659 docs_and_scores = self.similarity_search_with_score_by_vector(
660 embedding=embedding, k=k, **kwargs
661 )
662 return [doc for doc, _ in docs_and_scores]
File [~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/vectorstores/neo4j_vector.py:630](http://localhost:8888/home/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/vectorstores/neo4j_vector.py#line=629), in Neo4jVector.similarity_search_with_score_by_vector(self, embedding, k, **kwargs)
620 parameters = {
621 "index": self.index_name,
622 "k": k,
(...)
625 "query": remove_lucene_chars(kwargs["query"]),
626 }
628 results = self.query(read_query, params=parameters)
--> 630 docs = [
631 (
632 Document(
633 page_content=result["text"],
634 metadata={
635 k: v for k, v in result["metadata"].items() if v is not None
636 },
637 ),
638 result["score"],
639 )
640 for result in results
641 ]
642 return docs
File ~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_community/vectorstores/neo4j_vector.py:632, in <listcomp>(.0)
620 parameters = {
621 "index": self.index_name,
622 "k": k,
(...)
625 "query": remove_lucene_chars(kwargs["query"]),
626 }
628 results = self.query(read_query, params=parameters)
630 docs = [
631 (
--> 632 Document(
633 page_content=result["text"],
634 metadata={
635 k: v for k, v in result["metadata"].items() if v is not None
636 },
637 ),
638 result["score"],
639 )
640 for result in results
641 ]
642 return docs
File ~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/documents/base.py:22, in Document.__init__(self, page_content, **kwargs)
20 def __init__(self, page_content: str, **kwargs: Any) -> None:
21 """Pass page_content in as positional or named arg."""
---> 22 super().__init__(page_content=page_content, **kwargs)
File ~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/langchain_core/load/serializable.py:107, in Serializable.__init__(self, **kwargs)
106 def __init__(self, **kwargs: Any) -> None:
--> 107 super().__init__(**kwargs)
108 self._lc_kwargs = kwargs
File ~/.pyenv/versions/3.11.7/lib/python3.11/site-packages/pydantic/v1/main.py:341, in BaseModel.__init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for Document
page_content
none is not an allowed value (type=type_error.none.not_allowed)
```
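The final frames show the failure mode: `similarity_search_with_score_by_vector` builds `Document(page_content=result["text"], ...)` for each row, and pydantic rejects `page_content=None`. Note that the `None`-filter in the list comprehension applies only to `metadata` values, not to the text itself, so a single node with a null text property fails the whole query. A minimal, self-contained illustration with simulated rows (the dict shape mirrors the `results` variable in the traceback; the values are invented):

```python
# Simulated rows as returned by Neo4jVector.query(); the second row mimics a
# node whose text property is missing/null in the database.
results = [
    {"text": "some chunk", "metadata": {"source": "n1", "page": None}, "score": 0.91},
    {"text": None, "metadata": {"source": "n2"}, "score": 0.87},
]

def build_docs(rows):
    # Mirrors the listcomp in neo4j_vector.py: None values are filtered out of
    # metadata, but the text is passed through untouched.
    docs = []
    for row in rows:
        if row["text"] is None:
            # This is the condition the library does not guard against; pydantic
            # raises the equivalent ValidationError when it receives None.
            raise ValueError("page_content is None -> pydantic ValidationError")
        docs.append(
            (
                row["text"],
                {key: val for key, val in row["metadata"].items() if val is not None},
                row["score"],
            )
        )
    return docs

try:
    build_docs(results)
except ValueError as exc:
    print(exc)  # the second row trips the same condition the traceback reports
```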
### Description
We use embeddings stored in a Neo4j database to answer a question via `Neo4jVector.from_existing_index`. The question is a plain string handled by a retriever built from `Neo4jVector.from_existing_index` (called `k` below). There we specify parameters such as the `langchain_community.embeddings` model (e.g., `OllamaEmbeddings`, `AzureOpenAIEmbeddings`, `SentenceTransformerEmbeddings`), the Neo4j server, and an index we created beforehand. We can successfully read the node label and the embedding node property via `k.node_label` and `k.embedding_node_property`, respectively.

The error arises whenever we query through the retriever, no matter which embedding model (e.g., LLaMA 2, text-embedding-ada-002, or SentenceTransformer) or LLM we use. Conversely, if we use an index pointing to a set of empty embeddings, the LLM replies with something false: the answer has nothing to do with the database, since the embeddings are null. We have already tried using `itemgetter` as shorthand to extract data from the map when combining it with `RunnableParallel`, and it raises the same error.
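Since `page_content` comes straight from each node's text property, the error implies that at least one node returned by the index has a null (or missing) text property. A hypothetical diagnostic Cypher query (label and property names below are placeholders; in practice they can be read from `k.node_label` and, if available on your version, `k.text_node_property`):

```python
node_label = "Chunk"          # placeholder: read from k.node_label in practice
text_node_property = "text"   # placeholder: read from k.text_node_property

cypher = (
    f"MATCH (n:`{node_label}`) "
    f"WHERE n.`{text_node_property}` IS NULL "
    "RETURN count(n) AS missing"
)
# Against a live database this would be run as k.query(cypher); a nonzero
# `missing` count would explain the ValidationError above.
print(cypher)
```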
```python
from operator import itemgetter

from langchain_core.output_parsers import StrOutputParser

# `retriever`, `prompt`, and `llm` are defined earlier in our notebook.
rag_chain = (
    {
        "context": itemgetter("question") | retriever,
        "question": itemgetter("question"),
    }
    | prompt
    | llm
    | StrOutputParser()
)
```
Do you have any idea how to solve this issue?
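In the meantime, a workaround we are experimenting with (a sketch, not verified against our setup) is to pass a custom `retrieval_query` to `Neo4jVector.from_existing_index` that coalesces a null text property to an empty string; the query is appended after the index lookup, receives `node` and `score`, and must return `text`, `score`, and `metadata`:

```python
# Defensive retrieval_query sketch: coalesce() substitutes "" for a null text
# property, so the library's Document(page_content=...) call no longer
# receives None. Property names here assume the default "text"/"embedding".
retrieval_query = """
RETURN coalesce(node.text, "") AS text,
       score,
       node {.*, text: Null, embedding: Null} AS metadata
"""

# Wiring (needs a live Neo4j instance, so left as comments; connection
# details and index name are placeholders):
# store = Neo4jVector.from_existing_index(
#     embedding=embeddings,
#     url=NEO4J_URL, username=NEO4J_USER, password=NEO4J_PASSWORD,
#     index_name="your_index",
#     retrieval_query=retrieval_query,
# )
# retriever = store.as_retriever()
```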
### System Info
**`pip freeze | grep langchain`**
langchain==0.1.7
langchain-cli==0.0.21
langchain-community==0.0.20
langchain-core==0.1.23
langchain-openai==0.0.6
-e /home/insightlab/langchain-samples/my-app/packages/neo4j-advanced-rag
**platform**: linux (Ubuntu 22.04)
**python version**: Python 3.11.7

---
*Issue: "Cannot retrieve input from a vector store using an existing index." — https://github.com/langchain-ai/langchain/issues/18327 (opened 2024-02-29, last updated 2024-03-08)*