issue_owner_repo listlengths 2 2 | issue_body stringlengths 0 261k ⌀ | issue_title stringlengths 1 925 | issue_comments_url stringlengths 56 81 | issue_comments_count int64 0 2.5k | issue_created_at stringlengths 20 20 | issue_updated_at stringlengths 20 20 | issue_html_url stringlengths 37 62 | issue_github_id int64 387k 2.46B | issue_number int64 1 127k |
|---|---|---|---|---|---|---|---|---|---|
[
"weidai11",
"cryptopp"
] | misc.h:177:26: runtime error: null pointer passed as argument 1, which is declared to never be null
misc.h:177:26: runtime error: null pointer passed as argument 2, which is declared to never be null
I suggest adding a check for `count > 0` to `memcpy_s` and `memmove_s` if they are supposed to be "secure".
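A minimal sketch of the suggested guard, with an assumed signature (the real `memcpy_s` in Crypto++'s misc.h differs): returning early when `count == 0` means a null pointer is never forwarded to `memcpy`, which UBSan flags because passing null to `memcpy` is undefined behavior even for zero-length copies.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Sketch only: signature and semantics are assumed, not Crypto++'s. */
static void *memcpy_s_sketch(void *dest, size_t sizeInBytes,
                             const void *src, size_t count)
{
    if (count == 0)
        return dest;  /* nothing to copy; tolerate null src/dest */
    assert(dest != NULL && src != NULL);
    assert(count <= sizeInBytes);
    return memcpy(dest, src, count);
}
```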
| Null pointers passed to memcpy | https://api.github.com/repos/weidai11/cryptopp/issues/4/comments | 13 | 2015-07-13T10:25:15Z | 2015-10-31T13:46:19Z | https://github.com/weidai11/cryptopp/issues/4 | 94,690,242 | 4 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I am using the exact example provided [here](https://ai.google.dev/gemma/docs/integrations/langchain):
```python
from langchain_google_vertexai import GemmaVertexAIModelGarden
llm = GemmaVertexAIModelGarden(
endpoint_id=VERTEX_AI_ENDPOINT_ID,
project=VERTEX_AI_PROJECT,
location=VERTEX_AI_LOCATION,
)
output = llm.invoke("What is the meaning of life?")
print(output)
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "C:\Users\18goldr\my_project\venv\Lib\site-packages\google\api_core\grpc_helpers.py", line 76, in error_remapped_callable
return callable_(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\18goldr\my_project\venv\Lib\site-packages\grpc\_channel.py", line 1181, in __call__
return _end_unary_response_blocking(state, call, False, None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\18goldr\my_project\venv\Lib\site-packages\grpc\_channel.py", line 1006, in _end_unary_response_blocking
raise _InactiveRpcError(state) # pytype: disable=not-instantiable
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.FAILED_PRECONDITION
details = "Failed to deserialize the JSON body into the target type: instances[0]: missing field `inputs` at line 1 column 55"
debug_error_string = "UNKNOWN:Error received from peer ipv4:142.251.40.106:443 {created_time:"2024-08-10T20:51:37.8271213+00:00", grpc_status:9, grpc_message:"Failed to deserialize the JSON body into the target type: instances[0]: missing field `inputs` at line 1 column 55"}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "C:\Users\18goldr\my_project\tests\gemma.py", line 14, in <module>
output = llm.invoke("What is the meaning of life?")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\18goldr\my_project\venv\Lib\site-packages\langchain_core\language_models\llms.py", line 344, in invoke
self.generate_prompt(
File "C:\Users\18goldr\my_project\venv\Lib\site-packages\langchain_core\language_models\llms.py", line 701, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\18goldr\my_project\venv\Lib\site-packages\langchain_core\language_models\llms.py", line 880, in generate
output = self._generate_helper(
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\18goldr\my_project\venv\Lib\site-packages\langchain_core\language_models\llms.py", line 738, in _generate_helper
raise e
File "C:\Users\18goldr\my_project\venv\Lib\site-packages\langchain_core\language_models\llms.py", line 725, in _generate_helper
self._generate(
File "C:\Users\18goldr\my_project\venv\Lib\site-packages\langchain_google_vertexai\model_garden.py", line 96, in _generate
response = self.client.predict(endpoint=self.endpoint_path, instances=instances)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\18goldr\my_project\venv\Lib\site-packages\google\cloud\aiplatform_v1\services\prediction_service\client.py", line 848, in predict
response = rpc(
^^^^
File "C:\Users\18goldr\my_project\venv\Lib\site-packages\google\api_core\gapic_v1\method.py", line 131, in __call__
return wrapped_func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\18goldr\my_project\venv\Lib\site-packages\google\api_core\grpc_helpers.py", line 78, in error_remapped_callable
raise exceptions.from_grpc_error(exc) from exc
google.api_core.exceptions.FailedPrecondition: 400 Failed to deserialize the JSON body into the target type: instances[0]: missing field `inputs` at line 1 column 55
```
### Description
I'm trying to use the `langchain` library to run `Gemma2-9b-it` on `Google Vertex AI`.
I expect it to do what the tutorial shows (i.e., print the model's output).
Instead, it raises a JSON deserialization error.
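The stack trace points at a payload-shape mismatch rather than an auth or quota problem. The sketch below only reproduces the symptom; the exact field name the deployed Gemma container expects, and the key the client actually sends, are assumptions here.

```python
# Sketch of the symptom only: the deployed endpoint appears to require an
# "inputs" field in each instance, while the client serializes the prompt
# under a different key. Both key names here are assumptions.
def validate_instance(instance: dict) -> None:
    if "inputs" not in instance:
        raise ValueError(
            "Failed to deserialize the JSON body: missing field `inputs`"
        )

sent = {"prompt": "What is the meaning of life?"}       # shape that fails
expected = {"inputs": "What is the meaning of life?"}   # shape that passes
validate_instance(expected)
```

If this is the cause, the fix belongs in `langchain_google_vertexai`'s request construction rather than in user code.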
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.11.9 (tags/v3.11.9:de54cf5, Apr 2 2024, 10:12:12) [MSC v.1938 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.29
> langchain: 0.2.12
> langsmith: 0.1.98
> langchain_google_vertexai: 1.0.8
> langchain_text_splitters: 0.2.2
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.3
> anthropic[vertexai]: Installed. No version info available.
> async-timeout: Installed. No version info available.
> google-cloud-aiplatform: 1.61.0
> google-cloud-storage: 2.18.2
> jsonpatch: 1.33
> numpy: 1.26.4
> orjson: 3.10.7
> packaging: 24.1
> pydantic: 2.8.2
> PyYAML: 6.0.2
> requests: 2.32.3
> SQLAlchemy: 2.0.32
> tenacity: 8.5.0
> typing-extensions: 4.12.2
| JSON Deserialization Error When Trying To Run Gemma Through Vertex AI | https://api.github.com/repos/langchain-ai/langchain/issues/25268/comments | 0 | 2024-08-10T21:08:48Z | 2024-08-10T21:11:30Z | https://github.com/langchain-ai/langchain/issues/25268 | 2,459,290,965 | 25,268 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```shell
cd libs/core
poetry install --with test
make test
```
### Error Message and Stack Trace (if applicable)
Output:
```shell
poetry run pytest tests/unit_tests/
========================================================================= test session starts ==========================================================================
platform linux -- Python 3.11.9, pytest-7.4.4, pluggy-1.5.0
rootdir: /home/cheney/expr/langchain/libs/core
configfile: pyproject.toml
plugins: asyncio-0.21.2, mock-3.14.0, vcr-1.0.2, cov-4.1.0, anyio-4.4.0, dotenv-0.5.2, profiling-1.7.0, requests-mock-1.12.1, syrupy-4.6.1, socket-0.6.0
asyncio: mode=Mode.AUTO
collected 1040 items
tests/unit_tests/test_globals.py . [ 0%]
tests/unit_tests/test_graph_vectorstores.py .. [ 0%]
tests/unit_tests/test_imports.py .. [ 0%]
tests/unit_tests/test_messages.py .......................................... [ 4%]
tests/unit_tests/test_outputs.py .. [ 4%]
tests/unit_tests/test_sys_info.py . [ 4%]
tests/unit_tests/test_tools.py ......................................................................................................s. [ 14%]
tests/unit_tests/_api/test_beta_decorator.py ............. [ 16%]
tests/unit_tests/_api/test_deprecation.py ............... [ 17%]
tests/unit_tests/_api/test_imports.py . [ 17%]
tests/unit_tests/_api/test_path.py . [ 17%]
tests/unit_tests/caches/test_in_memory_cache.py ......... [ 18%]
tests/unit_tests/callbacks/test_dispatch_custom_event.py ..... [ 19%]
tests/unit_tests/callbacks/test_imports.py . [ 19%]
tests/unit_tests/chat_history/test_chat_history.py ... [ 19%]
tests/unit_tests/document_loaders/test_base.py .... [ 19%]
tests/unit_tests/documents/test_imports.py . [ 19%]
tests/unit_tests/documents/test_str.py .. [ 20%]
tests/unit_tests/embeddings/test_deterministic_embedding.py . [ 20%]
tests/unit_tests/example_selectors/test_base.py .. [ 20%]
tests/unit_tests/example_selectors/test_imports.py . [ 20%]
tests/unit_tests/example_selectors/test_length_based_example_selector.py .... [ 20%]
tests/unit_tests/example_selectors/test_similarity.py .......... [ 21%]
tests/unit_tests/fake/test_fake_chat_model.py ...... [ 22%]
tests/unit_tests/indexing/test_hashed_document.py ..... [ 22%]
tests/unit_tests/indexing/test_in_memory_indexer.py ........................ [ 25%]
tests/unit_tests/indexing/test_in_memory_record_manager.py ......... [ 26%]
tests/unit_tests/indexing/test_indexing.py .......................... [ 28%]
tests/unit_tests/indexing/test_public_api.py . [ 28%]
tests/unit_tests/language_models/test_imports.py . [ 28%]
tests/unit_tests/language_models/chat_models/test_base.py .............. [ 30%]
tests/unit_tests/language_models/chat_models/test_cache.py ........x... [ 31%]
tests/unit_tests/language_models/chat_models/test_rate_limiting.py ......... [ 32%]
tests/unit_tests/language_models/llms/test_base.py ........ [ 32%]
tests/unit_tests/language_models/llms/test_cache.py .... [ 33%]
tests/unit_tests/load/test_imports.py . [ 33%]
tests/unit_tests/load/test_serializable.py ... [ 33%]
tests/unit_tests/messages/test_ai.py .. [ 33%]
tests/unit_tests/messages/test_imports.py . [ 33%]
tests/unit_tests/messages/test_utils.py ........................... [ 36%]
tests/unit_tests/output_parsers/test_base_parsers.py .. [ 36%]
tests/unit_tests/output_parsers/test_imports.py . [ 36%]
tests/unit_tests/output_parsers/test_json.py .................................. [ 40%]
tests/unit_tests/output_parsers/test_list_parser.py ......... [ 40%]
tests/unit_tests/output_parsers/test_openai_functions.py ............ [ 42%]
tests/unit_tests/output_parsers/test_openai_tools.py ...........s [ 43%]
tests/unit_tests/output_parsers/test_pydantic_parser.py ...... [ 43%]
tests/unit_tests/output_parsers/test_xml_parser.py .............. [ 45%]
tests/unit_tests/outputs/test_chat_generation.py ...... [ 45%]
tests/unit_tests/outputs/test_imports.py . [ 45%]
tests/unit_tests/prompts/test_chat.py .................................... [ 49%]
tests/unit_tests/prompts/test_few_shot.py ................ [ 50%]
tests/unit_tests/prompts/test_few_shot_with_templates.py .. [ 51%]
tests/unit_tests/prompts/test_image.py .. [ 51%]
tests/unit_tests/prompts/test_imports.py . [ 51%]
tests/unit_tests/prompts/test_loading.py .......... [ 52%]
tests/unit_tests/prompts/test_pipeline_prompt.py .... [ 52%]
tests/unit_tests/prompts/test_prompt.py ........................................ [ 56%]
tests/unit_tests/prompts/test_structured.py .. [ 56%]
tests/unit_tests/prompts/test_utils.py . [ 56%]
tests/unit_tests/rate_limiters/test_in_memory_rate_limiter.py ..... [ 57%]
tests/unit_tests/runnables/test_config.py .... [ 57%]
tests/unit_tests/runnables/test_configurable.py ..... [ 58%]
tests/unit_tests/runnables/test_context.py ................ [ 59%]
tests/unit_tests/runnables/test_fallbacks.py ............ [ 60%]
tests/unit_tests/runnables/test_graph.py ....... [ 61%]
tests/unit_tests/runnables/test_history.py .................... [ 63%]
tests/unit_tests/runnables/test_imports.py . [ 63%]
tests/unit_tests/runnables/test_runnable.py .............................................................................................. [ 72%]
tests/unit_tests/runnables/test_runnable_events_v1.py ...........x.....x.X [ 74%]
tests/unit_tests/runnables/test_runnable_events_v2.py .............x................... [ 77%]
tests/unit_tests/runnables/test_tracing_interops.py .............. [ 79%]
tests/unit_tests/runnables/test_utils.py ...... [ 79%]
tests/unit_tests/stores/test_in_memory.py ................................ [ 82%]
tests/unit_tests/tracers/test_async_base_tracer.py ............ [ 83%]
tests/unit_tests/tracers/test_base_tracer.py ............. [ 85%]
tests/unit_tests/tracers/test_imports.py . [ 85%]
tests/unit_tests/tracers/test_langchain.py .... [ 85%]
tests/unit_tests/tracers/test_memory_stream.py .... [ 85%]
tests/unit_tests/tracers/test_run_collector.py . [ 86%]
tests/unit_tests/tracers/test_schemas.py . [ 86%]
tests/unit_tests/utils/test_aiter.py .... [ 86%]
tests/unit_tests/utils/test_env.py . [ 86%]
tests/unit_tests/utils/test_function_calling.py ..x.......... [ 87%]
tests/unit_tests/utils/test_html.py ........... [ 88%]
tests/unit_tests/utils/test_imports.py . [ 89%]
tests/unit_tests/utils/test_iter.py .... [ 89%]
tests/unit_tests/utils/test_json_schema.py .......... [ 90%]
tests/unit_tests/utils/test_pydantic.py ......s.. [ 91%]
tests/unit_tests/utils/test_rm_titles.py .... [ 91%]
tests/unit_tests/utils/test_utils.py .....................................s.......... [ 96%]
tests/unit_tests/vectorstores/test_in_memory.py ................................... [ 99%]
tests/unit_tests/vectorstores/test_vectorstore.py .... [100%]
=========================================================================== warnings summary ===========================================================================
tests/unit_tests/test_messages.py::test_convert_to_messages
/home/cheney/expr/langchain/libs/core/langchain_core/_api/beta_decorator.py:87: LangChainBetaWarning: The class `RemoveMessage` is in beta. It is actively being worked on, so the API may change.
warn_beta(
tests/unit_tests/test_messages.py::test_tool_message_serdes
/home/cheney/expr/langchain/libs/core/langchain_core/_api/beta_decorator.py:87: LangChainBetaWarning: The function `load` is in beta. It is actively being worked on, so the API may change.
warn_beta(
tests/unit_tests/test_tools.py::test_convert_from_runnable_dict
tests/unit_tests/runnables/test_runnable_events_v1.py::test_event_stream_with_simple_function_tool
/home/cheney/expr/langchain/libs/core/langchain_core/_api/beta_decorator.py:87: LangChainBetaWarning: This API is in beta and may change in the future.
warn_beta(
tests/unit_tests/test_tools.py::test_args_schema_as_pydantic[FooProper]
tests/unit_tests/test_tools.py::test_args_schema_as_pydantic[FooProper]
tests/unit_tests/test_tools.py::test_args_schema_explicitly_typed
tests/unit_tests/test_tools.py::test_args_schema_explicitly_typed
tests/unit_tests/test_tools.py::test_structured_tool_with_different_pydantic_versions[FooProper]
tests/unit_tests/test_tools.py::test_structured_tool_with_different_pydantic_versions[FooProper]
tests/unit_tests/test_tools.py::test_tool_args_schema_pydantic_v2_with_metadata
/home/cheney/expr/langchain/.venv/lib/python3.11/site-packages/pydantic/main.py:1328: PydanticDeprecatedSince20: The `schema` method is deprecated; use `model_json_schema` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.8/migration/
warnings.warn(
tests/unit_tests/test_tools.py::test_structured_tool_with_different_pydantic_versions[FooProper]
tests/unit_tests/test_tools.py::test_tool_args_schema_pydantic_v2_with_metadata
tests/unit_tests/test_tools.py::test_tool_args_schema_pydantic_v2_with_metadata
/home/cheney/expr/langchain/.venv/lib/python3.11/site-packages/pydantic/main.py:1132: PydanticDeprecatedSince20: The `parse_obj` method is deprecated; use `model_validate` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.8/migration/
warnings.warn(
tests/unit_tests/test_tools.py::test_structured_tool_with_different_pydantic_versions[FooProper]
tests/unit_tests/test_tools.py::test_tool_args_schema_pydantic_v2_with_metadata
/home/cheney/expr/langchain/.venv/lib/python3.11/site-packages/pydantic/main.py:1087: PydanticDeprecatedSince20: The `dict` method is deprecated; use `model_dump` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.8/migration/
warnings.warn('The `dict` method is deprecated; use `model_dump` instead.', category=PydanticDeprecatedSince20)
tests/unit_tests/indexing/test_in_memory_indexer.py::TestDocumentIndexerTestSuite::test_upsert_documents_has_no_ids
/home/cheney/expr/langchain/libs/core/langchain_core/_api/beta_decorator.py:87: LangChainBetaWarning: Introduced in version 0.2.29. Underlying abstraction subject to change.
warn_beta(
tests/unit_tests/language_models/chat_models/test_rate_limiting.py::test_rate_limit_invoke
/home/cheney/expr/langchain/libs/core/langchain_core/_api/beta_decorator.py:87: LangChainBetaWarning: Introduced in 0.2.24. API subject to change.
warn_beta(
tests/unit_tests/output_parsers/test_pydantic_parser.py::test_pydantic_parser_chaining[ForecastV2]
tests/unit_tests/output_parsers/test_pydantic_parser.py::test_pydantic_parser_chaining[ForecastV2]
tests/unit_tests/output_parsers/test_pydantic_parser.py::test_pydantic_parser_chaining[ForecastV2]
tests/unit_tests/output_parsers/test_pydantic_parser.py::test_pydantic_parser_validation[ForecastV2]
tests/unit_tests/output_parsers/test_pydantic_parser.py::test_pydantic_parser_validation[ForecastV2]
tests/unit_tests/output_parsers/test_pydantic_parser.py::test_pydantic_parser_validation[ForecastV2]
/home/cheney/expr/langchain/.venv/lib/python3.11/site-packages/pydantic/_internal/_model_construction.py:268: PydanticDeprecatedSince20: The `__fields__` attribute is deprecated, use `model_fields` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.8/migration/
warnings.warn(
tests/unit_tests/prompts/test_chat.py::test_message_prompt_template_from_template_file
tests/unit_tests/prompts/test_prompt.py::test_prompt_from_file
/home/cheney/expr/langchain/libs/core/langchain_core/prompts/prompt.py:237: DeprecationWarning: `input_variables' is deprecated and ignored.
warnings.warn(
tests/unit_tests/prompts/test_image.py::test_image_prompt_template_deserializable
/home/cheney/expr/langchain/libs/core/langchain_core/_api/beta_decorator.py:87: LangChainBetaWarning: The function `loads` is in beta. It is actively being worked on, so the API may change.
warn_beta(
tests/unit_tests/vectorstores/test_in_memory.py::test_inmemory_upsert
tests/unit_tests/vectorstores/test_in_memory.py::test_inmemory_upsert
/home/cheney/expr/langchain/libs/core/langchain_core/_api/deprecation.py:141: LangChainDeprecationWarning: This was a beta API that was added in 0.2.11. It'll be removed in 0.3.0.
warn_deprecated(
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
----------------------------------------------------------------------- snapshot report summary ------------------------------------------------------------------------
54 snapshots passed.
========================================================================= slowest 5 durations ==========================================================================
5.11s call tests/unit_tests/test_imports.py::test_importable_all_via_subprocess
1.50s call tests/unit_tests/runnables/test_runnable.py::test_retrying
0.77s call tests/unit_tests/runnables/test_runnable.py::test_map_astream
0.61s call tests/unit_tests/runnables/test_runnable.py::test_map_stream
0.50s call tests/unit_tests/runnables/test_runnable_events_v2.py::test_cancel_astream_events
================================================= 1030 passed, 4 skipped, 5 xfailed, 1 xpassed, 29 warnings in 21.37s ==================================================
```
### Description
I haven't changed any code; it is straight from the master branch.
In `libs/core`, the test suite does not fully pass: some test cases fail.
### System Info
python: 3.11.9 | (core)(test): core tests cannot pass all tests, and some test cases fail | https://api.github.com/repos/langchain-ai/langchain/issues/25261/comments | 0 | 2024-08-10T09:13:45Z | 2024-08-10T11:01:46Z | https://github.com/langchain-ai/langchain/issues/25261 | 2,459,001,188 | 25,261 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
Currently, the Firecrawl loader only supports usage of the API and does not support self-hosted instances. I will create a pull request to fix this. | Enhancement: Adding Firecrawl API URL option for document loader for self-hosted Firecrawl instance | https://api.github.com/repos/langchain-ai/langchain/issues/25259/comments | 0 | 2024-08-10T06:12:14Z | 2024-08-10T06:13:50Z | https://github.com/langchain-ai/langchain/issues/25259 | 2,458,929,193 | 25,259 |
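A hedged sketch of what the enhancement might look like — the class and parameter names below are illustrative only and do not reflect the actual `langchain_community` loader interface:

```python
# Illustrative sketch only: names do not reflect the actual
# langchain_community Firecrawl loader interface.
class FirecrawlConfig:
    def __init__(self, api_key: str, api_url: str = "https://api.firecrawl.dev"):
        self.api_key = api_key
        self.api_url = api_url.rstrip("/")  # tolerate a trailing slash

    def scrape_url(self) -> str:
        return f"{self.api_url}/v0/scrape"
```

Defaulting `api_url` to the hosted API keeps existing callers working while letting self-hosted users point at their own instance.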
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Description
Issue
When using the ChatHuggingFace model from the langchain_huggingface library with a locally downloaded model, the system prompts for a Hugging Face API key. This behavior persists even though the model is stored locally and accessed without an internet connection. The issue does not occur when using the llm.invoke() method directly with the HuggingFacePipeline.
Description
I have downloaded an LLM model locally and want to access it without an internet connection. The model works correctly when invoked directly using llm.invoke(), but when attempting to use the ChatHuggingFace model, it requires a Hugging Face API key.
### Example Code
```python
from transformers import BitsAndBytesConfig
from langchain_huggingface import HuggingFacePipeline
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
```
#### Define Quantization
```python
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype="bfloat16",
    bnb_4bit_use_double_quant=True,
)
```
#### Initialize LLM and Parameters
```python
model_id = "microsoft/Phi-3-mini-4k-instruct"
tokenizer = AutoTokenizer.from_pretrained("../Phi-3-mini-4k-instruct/")
model = AutoModelForCausalLM.from_pretrained(
    "../Phi-3-mini-4k-instruct/", quantization_config=quantization_config
)
```
#### Initialize pipeline
```python
pipe = pipeline(
    task="text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=1024,
    do_sample=False,
    repetition_penalty=1.03,
)
```
#### Initialize HuggingFace Pipeline
```python
llm = HuggingFacePipeline(
    pipeline=pipe,
)
```
#### Successful Invocation
```python
llm.invoke("What is HuggingFace?")
# Output: "What is HuggingFace?\n\nHuggingFace is a company and an open-source community..."
```
#### Using Chat Model
```python
from langchain_huggingface import ChatHuggingFace

llm_engine_hf = ChatHuggingFace(llm=llm)
llm_engine_hf.invoke("Hugging Face is")
```
#### Error encountered
```
LocalTokenNotFoundError: Token is required (`token=True`), but no token found.
You need to provide a token or be logged in to Hugging Face with `huggingface-cli login`
or `huggingface_hub.login`. See https://huggingface.co/settings/tokens
```
### Error Message and Stack Trace (if applicable)
Error Message
LocalTokenNotFoundError: Token is required (`token=True`), but no token found.
You need to provide a token or be logged in to Hugging Face with `huggingface-cli login`
or `huggingface_hub.login`. See https://huggingface.co/settings/tokens
### System Info
linux - ubuntu 20.04.06 LTS
Python - 3.8
langchain-huggingface==0.0.3 | `ChatHuggingFace` Asking for API Token Even for Locally Downloaded Model | https://api.github.com/repos/langchain-ai/langchain/issues/25258/comments | 0 | 2024-08-10T02:32:41Z | 2024-08-10T02:37:26Z | https://github.com/langchain-ai/langchain/issues/25258 | 2,458,859,649 | 25,258 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/agents/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Referencing this tutorial: https://python.langchain.com/v0.2/docs/tutorials/agents/
This does not work unless you import `SqliteSaver` from the `langgraph-checkpoint-sqlite` package:
```python
from langgraph.checkpoint.sqlite import SqliteSaver
```
This line generates an error:
```python
agent_executor = create_react_agent(model, tools, checkpointer=memory)
```
The error is:
```
Traceback (most recent call last):
  File "c:\Users\steph\source\repos\git-repos\LangChain-101-For-Beginners-Python\my-lesson-04a-build-an-agent.py", line 23, in <module>
    agent_executor = create_react_agent(model, tools, checkpointer=memory)
                     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\steph\.virtualenvs\LangChain-101-For-Beginners-Python-fkBkJkaL\Lib\site-packages\langgraph\_api\deprecation.py", line 80, in wrapper
    return func(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\steph\.virtualenvs\LangChain-101-For-Beginners-Python-fkBkJkaL\Lib\site-packages\langgraph\prebuilt\chat_agent_executor.py", line 511, in create_react_agent
    return workflow.compile(
           ^^^^^^^^^^^^^^^^^
  File "C:\Users\steph\.virtualenvs\LangChain-101-For-Beginners-Python-fkBkJkaL\Lib\site-packages\langgraph\graph\state.py", line 431, in compile
    compiled = CompiledStateGraph(
               ^^^^^^^^^^^^^^^^^^^
  File "C:\Users\steph\.virtualenvs\LangChain-101-For-Beginners-Python-fkBkJkaL\Lib\site-packages\langchain_core\load\serializable.py", line 113, in __init__
    super().__init__(*args, **kwargs)
  File "C:\Users\steph\.virtualenvs\LangChain-101-For-Beginners-Python-fkBkJkaL\Lib\site-packages\pydantic\v1\main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for CompiledStateGraph
checkpointer
  instance of BaseCheckpointSaver expected (type=type_error.arbitrary_type; expected_arbitrary_type=BaseCheckpointSaver)
```
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/tutorials/agents/> | https://api.github.com/repos/langchain-ai/langchain/issues/25257/comments | 0 | 2024-08-09T23:43:41Z | 2024-08-09T23:46:09Z | https://github.com/langchain-ai/langchain/issues/25257 | 2,458,798,344 | 25,257 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/tools/tavily_search/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
In the [instantiation](https://python.langchain.com/v0.2/docs/integrations/tools/tavily_search/#instantiation) part of the TavilySearch docs, the import of `TavilySearchResults` is wrong.
### Idea or request for content:
The import should be `from langchain_community.tools.tavily_search import TavilySearchResults` instead of `from langchain_community.tools import TavilySearchResults` (missing `.tavily_search` after **tools**).
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
try:
    VLLM._generate = generate
    self.model = VLLM(
        model=model_name,
        quantization="awq",
        trust_remote_code=config.trust_remote_code,
        max_new_tokens=config.max_new_tokens,
        batch_size=config.batch_size,
        top_k=config.top_k,
        top_p=config.top_p,
        temperature=config.temperature,
        repetition_penalty=1.1,
        stream=True,
        vllm_kwargs={"max_model_len": 4096, "gpu_memory_utilization": 0.5},
    )
    # self.cache = OrderedDict()
    # self.capacity = 500
    self.logger.info("model loaded successfully")
except TypeError as e:
    self.logger.error(f"TypeError initializing model: {e}")
except ValueError as e:
    self.logger.error(f"ValueError initializing model: {e}")
except Exception as e:
    self.logger.error(f"Error initializing model: {e}")
```
### Error Message and Stack Trace (if applicable)
```
2024-08-09 20:01:33,991 - INFO - Model Name: /home/llama3.1-instruct-8b-sft-lora
2024-08-09 20:01:35,845 - ERROR - Error initializing model: 'type'
```
### Description
I am trying to load the Llama model with the LangChain VLLM wrapper, but I am unable to do so.
### System Info
langchain==0.1.20
langchain-community==0.0.38
langchain-core==0.1.52
langchain-experimental==0.0.47
langchain-text-splitters==0.0.1 | not able to load llama 3.1 quantised model with langchain vllm | https://api.github.com/repos/langchain-ai/langchain/issues/25251/comments | 0 | 2024-08-09T20:10:10Z | 2024-08-09T20:12:40Z | https://github.com/langchain-ai/langchain/issues/25251 | 2,458,599,755 | 25,251 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/text_embedding/cohere/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
```
ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input-5-3c65dae4e654> in <cell line: 3>()
      1 from langchain_community.document_loaders import WebBaseLoader
      2 from langchain_community.vectorstores import FAISS
----> 3 from langchain_cohere import CohereEmbeddings
      4 from langchain_text_splitters import RecursiveCharacterTextSplitter

ModuleNotFoundError: No module named 'langchain_cohere'
```
The documentation page I am referring to is: https://python.langchain.com/v0.2/docs/integrations/text_embedding/cohere/
### Idea or request for content:
This is the modification I made:
```python
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import FAISS
from langchain.embeddings import CohereEmbeddings
from langchain_text_splitters import RecursiveCharacterTextSplitter
```
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code causes an issue where the AI is unable to invoke the tool properly.
This happens because the argument is named "config".
The error thrown is:
`TypeError: get_save_create_table_config.<locals>.save_create_table_config() missing 1 required positional argument: 'config'`
```python
@tool
def save_create_table_config(config: CreateTableConfig) -> str:
    '''Calls the endpoint to save the CreateTableConfig'''
    ....
    return ...
```
By simply renaming the argument to something else, the tool invocation works perfectly
```python
@tool
def save_create_table_config(create_table_config: CreateTableConfig) -> str:
    '''Calls the endpoint to save the CreateTableConfig'''
    ....
    return ...
```
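My understanding of why renaming helps (a hedged sketch; the reserved-name set below is an assumption on my part, not taken from the LangChain source): parameter names the framework reserves for injection appear to be stripped from the schema shown to the model, so the model never supplies a value for them and the call arrives missing that argument.

```python
import inspect

# Hypothetical set of names treated as framework-injected rather than model-supplied.
RESERVED_PARAM_NAMES = {"config", "run_manager", "callbacks"}

def model_visible_params(func):
    """Parameters the model is asked to fill; reserved names are filtered out."""
    return [name for name in inspect.signature(func).parameters
            if name not in RESERVED_PARAM_NAMES]

def broken(config): ...
def fixed(create_table_config): ...

print(model_visible_params(broken))  # → []
print(model_visible_params(fixed))   # → ['create_table_config']
```

With an empty visible-parameter list, the model generates no arguments at all, which matches the "missing 1 required positional argument" error above.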
### Error Message and Stack Trace (if applicable)
```
TypeError("get_save_create_table_config.<locals>.save_create_table_config() missing 1 required positional argument: 'config'")
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent_iterator.py", line 266, in __aiter__
async for chunk in self.agent_executor._aiter_next_step(
File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 1539, in _aiter_next_step
result = await asyncio.gather(
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain/agents/agent.py", line 1572, in _aperform_agent_action
observation = await tool.arun(
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/tools.py", line 729, in arun
raise error_to_raise
File "/usr/local/lib/python3.11/site-packages/langchain_core/tools.py", line 696, in arun
response = await asyncio.create_task(coro, context=context) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/tools.py", line 947, in _arun
return await super()._arun(
^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/tools.py", line 496, in _arun
return await run_in_executor(None, self._run, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 619, in run_in_executor
return await asyncio.get_running_loop().run_in_executor(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/runnables/config.py", line 610, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/langchain_core/tools.py", line 927, in _run
return self.func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: get_save_create_table_config.<locals>.save_create_table_config() missing 1 required positional argument: 'config'
```
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:30 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T6000
> Python Version: 3.11.1 (v3.11.1:a7a450f84a, Dec 6 2022, 15:24:06) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.2.29
> langchain: 0.2.12
> langchain_community: 0.2.11
> langsmith: 0.1.81
> langchain_anthropic: 0.1.22
> langchain_experimental: 0.0.64
> langchain_google_community: 1.0.7
> langchain_google_genai: 1.0.4
> langchain_google_vertexai: 1.0.8
> langchain_postgres: 0.0.9
> langchain_text_splitters: 0.2.0
> langgraph: 0.0.51
Optional packages not installed
-------------------------------
> langserve
Other Dependencies
------------------
> aiohttp: 3.9.5
> anthropic: 0.33.0
> anthropic[vertexai]: Installed. No version info available.
> async-timeout: Installed. No version info available.
> beautifulsoup4: 4.12.3
> dataclasses-json: 0.6.6
> db-dtypes: Installed. No version info available.
> defusedxml: 0.7.1
> gapic-google-longrunning: Installed. No version info available.
> google-api-core: 2.19.1
> google-api-python-client: 2.133.0
> google-auth-httplib2: 0.2.0
> google-auth-oauthlib: 1.2.0
> google-cloud-aiplatform: 1.61.0
> google-cloud-bigquery: 3.25.0
> google-cloud-bigquery-storage: Installed. No version info available.
> google-cloud-contentwarehouse: Installed. No version info available.
> google-cloud-discoveryengine: Installed. No version info available.
> google-cloud-documentai: Installed. No version info available.
> google-cloud-documentai-toolbox: Installed. No version info available.
> google-cloud-speech: Installed. No version info available.
> google-cloud-storage: 2.18.2
> google-cloud-texttospeech: Installed. No version info available.
> google-cloud-translate: Installed. No version info available.
> google-cloud-vision: 3.7.2
> google-generativeai: 0.5.4
> googlemaps: Installed. No version info available.
> grpcio: 1.65.4
> jsonpatch: 1.33
> lxml: 5.2.2
> numpy: 1.26.4
> orjson: 3.10.5
> packaging: 24.1
> pandas: 2.2.2
> pgvector: 0.2.5
> pillow: 10.3.0
> psycopg: 3.2.1
> psycopg-pool: 3.2.2
> pyarrow: 16.1.0
> pydantic: 2.8.2
> PyYAML: 6.0.1
> requests: 2.32.3
> SQLAlchemy: 2.0.30
> sqlalchemy: 2.0.30
> tenacity: 8.3.0
> typing-extensions: 4.12.2
> uuid6: 2024.1.12 | Show proper error when tool function argument is named "config" | https://api.github.com/repos/langchain-ai/langchain/issues/25228/comments | 0 | 2024-08-09T11:11:18Z | 2024-08-09T11:15:24Z | https://github.com/langchain-ai/langchain/issues/25228 | 2,457,686,325 | 25,228 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/chat/zhipuai/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
```python
import os
import platform
from typing import Optional, Type
from langchain.pydantic_v1 import BaseModel, Field
from langchain.tools import BaseTool, StructuredTool, tool
from langchain.callbacks.manager import (
    AsyncCallbackManagerForToolRun,
    CallbackManagerForToolRun,
)

# Create a folder on the desktop
def create_folder_on_desktop(folder_name: Optional[str] = None) -> str:
    if not folder_name:
        folder_name = 'workspace'
    # Get the current operating system
    os_type = platform.system()
    # print(os_type)
    # Get the desktop path based on the operating system
    if os_type == "Windows":
        desktop_path = os.path.join(os.path.expanduser("~"), "Desktop")
    elif os_type == "Darwin":  # the system name for macOS is "Darwin"
        desktop_path = os.path.join(os.path.expanduser("~"), "Desktop")
    else:
        raise OSError("Unsupported operating system")
    # Build the folder path
    folder_path = os.path.join(desktop_path, folder_name)
    # Create the folder if it does not exist
    if not os.path.exists(folder_path):
        os.makedirs(folder_path)
    return folder_path

# langchain raises an error when a tool takes 2 or more parameters; no solution for now
class WriteDocument(BaseModel):
    filename: str = Field(description="The name of the file")
    file_content: str = Field(description="Contents of the file")

class WriteFilesTool(BaseTool):
    name = "write_document"
    description = "Write the file contents to the file."
    args_schema: Type[BaseModel] = WriteDocument
    return_direct: bool = False

    def _run(
        self, filename: str, file_content: str, run_manager: Optional[CallbackManagerForToolRun] = None
    ) -> str:
        """Write the file contents to the file."""
        # Name of the folder
        folder_name = 'workspace'
        # Get the desktop folder path
        current_path = create_folder_on_desktop(folder_name)
        # Create the .md file and write the content
        file_path = os.path.join(current_path, filename)
        with open(file_path, 'w', encoding='utf-8') as f:
            f.write(file_content)
        return "The file has been written successfully."

    async def _arun(
        self,
        a: int,
        b: int,
        run_manager: Optional[AsyncCallbackManagerForToolRun] = None,
    ) -> str:
        """Use the tool asynchronously."""
        raise NotImplementedError("Calculator does not support async")
```

```python
# executor
from langchain_community.chat_models import ChatZhipuAI
from langchain import hub
from langchain.agents import AgentExecutor, create_json_chat_agent, create_tool_calling_agent, create_react_agent
from langchain_community.tools.tavily_search import TavilySearchResults
from langchain_experimental.tools import PythonREPLTool
from write_file_tool import WriteFilesTool
from dotenv import load_dotenv, find_dotenv
import os

load_dotenv(find_dotenv())
key = os.environ['ZHIPUAI_API_KEY']

tools = [TavilySearchResults(max_results=3), WriteFilesTool()]
# prompt = hub.pull("hwchase17/react-chat-json")
prompt = hub.pull("hwchase17/react")
llm = ChatZhipuAI(temperature=0.6, model="glm-4")
# agent = create_json_chat_agent(llm, tools, prompt)
agent = create_react_agent(llm, tools, prompt)
# Agent executor
agent_executor = AgentExecutor(
    agent=agent, tools=tools, verbose=True
)
# Ask for a LangChain tutorial and write the content to a markdown file
result = agent_executor.invoke({'input': 'langchain的使用教程,并把内容写入到markdown文件中'})
print(result['output'])
```
### Idea or request for content:
langchain 0.2.12
langchain-anthropic 0.1.22
langchain-community 0.2.11
langchain-core 0.2.29
langchain-experimental 0.0.64
langchain-openai 0.1.20
langchain-text-splitters 0.2.2
langchainhub 0.1.20
langgraph 0.2.2
langgraph-checkpoint 1.0.2
langsmith 0.1.98
pydantic 2.8.2
pydantic_core 2.20.1
| DOC: <Issue related to /v0.2/docs/integrations/chat/zhipuai/> | https://api.github.com/repos/langchain-ai/langchain/issues/25224/comments | 0 | 2024-08-09T09:18:38Z | 2024-08-09T09:21:19Z | https://github.com/langchain-ai/langchain/issues/25224 | 2,457,490,436 | 25,224 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/functions/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
At the end of the second paragraph:
Note that all inputs to these functions need to be a SINGLE argument. If you have a function that accepts multiple arguments, you should write a wrapper that accepts a single dict input and unpacks it into multiple **argument**.
Maybe "arguments"?
### Idea or request for content:
_No response_ | Grammar Error? | https://api.github.com/repos/langchain-ai/langchain/issues/25222/comments | 0 | 2024-08-09T08:07:21Z | 2024-08-09T08:09:53Z | https://github.com/langchain-ai/langchain/issues/25222 | 2,457,362,330 | 25,222 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_huggingface import HuggingFaceEndpoint, ChatHuggingFace
from langchain_community.callbacks import get_openai_callback

llm = HuggingFaceEndpoint(
    repo_id=repo_id, temperature=0.01, max_new_tokens=2048, huggingfacehub_api_token=HUGGINGFACE_API_KEY)
llm = ChatHuggingFace(llm=llm)

messages = [
    (
        "system",
        "You are a smart AI that understand the tabular data structure.",
    ),
    ("user", f"{prompt}"),
]

with get_openai_callback() as cb:
    response = llm.invoke(messages)
    print(cb)

if not isinstance(response, str):
    response = response.content
print(response)
```
### Error Message and Stack Trace (if applicable)
Tokens Used: 1668
Prompt Tokens: 1568
Completion Tokens: 100
Successful Requests: 1
### Description
I am trying to use the `Mistral-7B` model from Hugging Face. While using `HuggingFaceEndpoint` directly I get the expected answer, but while using `ChatHuggingFace` the response is always capped at 100 tokens. I have gone through the existing issues but couldn't find a solution yet.
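A sketch of the behaviour I suspect (the names below are hypothetical, not the actual `ChatHuggingFace` internals): if the chat wrapper does not forward the inner client's `max_new_tokens`, the endpoint falls back to its own default of 100 tokens, which matches the callback output above.

```python
# Server-side default used when no token limit is forwarded (assumed value).
ENDPOINT_DEFAULT_MAX_NEW_TOKENS = 100

def effective_max_new_tokens(forwarded: dict) -> int:
    # If `max_new_tokens` is dropped instead of forwarded,
    # the endpoint's own default wins.
    return forwarded.get("max_new_tokens", ENDPOINT_DEFAULT_MAX_NEW_TOKENS)

print(effective_max_new_tokens({}))                        # → 100
print(effective_max_new_tokens({"max_new_tokens": 2048}))  # → 2048
```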
### System Info
System Information
------------------
> OS: Linux
> OS Version: #44~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Tue Jun 18 14:36:16 UTC 2
> Python Version: 3.10.0 (default, Mar 3 2022, 09:58:08) [GCC 7.5.0]
Package Information
-------------------
> langchain_core: 0.2.29
> langchain: 0.2.12
> langchain_community: 0.2.11
> langsmith: 0.1.98
> langchain_huggingface: 0.0.3
> langchain_text_splitters: 0.2.2
| ChatHuggingFace always returns only 100 tokens as response without considering the `max_new_tokens` parameter | https://api.github.com/repos/langchain-ai/langchain/issues/25219/comments | 4 | 2024-08-09T05:31:22Z | 2024-08-10T13:20:44Z | https://github.com/langchain-ai/langchain/issues/25219 | 2,457,136,389 | 25,219 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
``` python
import logging
import os
import random
from concurrent.futures import ThreadPoolExecutor, as_completed
from enum import Enum
from typing import BinaryIO
from typing import cast, Literal, Union
from langchain_community.document_loaders.generic import GenericLoader
from langchain_community.document_loaders.parsers.audio import OpenAIWhisperParser
from pydub import AudioSegment
from cache import conditional_lru_cache
from youtube.loader import YoutubeAudioLoader
....
loader = GenericLoader(YoutubeAudioLoader([url], save_dir, proxy_servers),
OpenAIWhisperParser(api_key=get_settings().openai_api_key,
language=lang.value,
response_format="srt",
temperature=0
))
```
`YoutubeAudioLoader` is my customization of LangChain's `YoutubeAudioLoader` that allows the use of a proxy for YouTube.
```python
import random
from typing import Iterable, List

from langchain_community.document_loaders.blob_loaders import FileSystemBlobLoader
from langchain_community.document_loaders.blob_loaders.schema import Blob, BlobLoader


class YoutubeAudioLoader(BlobLoader):
    """Load YouTube urls as audio file(s)."""

    def __init__(self, urls: List[str], save_dir: str, proxy_servers: List[str] = None):
        if not isinstance(urls, list):
            raise TypeError("urls must be a list")
        self.urls = urls
        self.save_dir = save_dir
        self.proxy_servers = proxy_servers

    def yield_blobs(self) -> Iterable[Blob]:
        """Yield audio blobs for each url."""
        try:
            import yt_dlp
        except ImportError:
            raise ImportError(
                "yt_dlp package not found, please install it with "
                "`pip install yt_dlp`"
            )
        # Use yt_dlp to download audio given a YouTube url
        ydl_opts = {
            "format": "m4a/bestaudio/best",
            "noplaylist": True,
            "outtmpl": self.save_dir + "/%(title)s.%(ext)s",
            "postprocessors": [
                {
                    "key": "FFmpegExtractAudio",
                    "preferredcodec": "m4a",
                }
            ],
            'netrc': True,
            'verbose': True,
            "extractor_args": {"youtube": "youtube:player_skip=webpage"}
        }
        if self.proxy_servers:
            ydl_opts["proxy"] = random.choice(self.proxy_servers)
        for url in self.urls:
            # Download file
            with yt_dlp.YoutubeDL(ydl_opts) as ydl:
                ydl.download(url)
        # Yield the written blobs
        loader = FileSystemBlobLoader(self.save_dir, glob="*.m4a")
        for blob in loader.yield_blobs():
            yield blob
```
## My workaround in the `OpenAIWhisperParser` class
```python
if hasattr(transcript, 'text'):
    yield Document(
        page_content=transcript.text,
        metadata={"source": blob.source, "chunk": split_number},
    )
else:
    yield Document(
        page_content=transcript,
        metadata={"source": blob.source, "chunk": split_number},
    )
```
### Error Message and Stack Trace (if applicable)
```bash
2024-08-09 04:54:51,225 [DEBUG] [AnyIO worker thread] HTTP Response: POST https://api.openai.com/v1/audio/transcriptions "200 OK" Headers([('date', 'Fri, 09 Aug 2024 04:54:51 GMT'), ('content-type', 'text/plain; charset=utf-8'), ('transfer-encoding', 'chunked'), ('connection', 'keep-alive'), ('openai-organization', 'user-imywxd1x3dz2koid5nl3pykg'), ('openai-processing-ms', '65120'), ('openai-version', '2020-10-01'), ('strict-transport-security', 'max-age=15552000; includeSubDomains; preload'), ('x-ratelimit-limit-requests', '50'), ('x-ratelimit-remaining-requests', '49'), ('x-ratelimit-reset-requests', '1.2s'), ('x-request-id', 'req_ee43e9b5d13b87213865e038c5cb2b27'), ('cf-cache-status', 'DYNAMIC'), ('set-cookie', '__cf_bm=fIoOXAGFHjq12ZFNqV2aJW9VpSZ7F.EEwLZCLjQE7xE-1723179291-1.0.1.1-QwPirU_LuFjrc4wkDkk9Trr5C9.th_1ZY3_DpiXDelVA7LMsWOyKBwyQ18l.4.H42VyroK.spHCXh.pW.1LZVA; path=/; expires=Fri, 09-Aug-24 05:24:51 GMT; domain=.api.openai.com; HttpOnly; Secure; SameSite=None'), ('x-content-type-options', 'nosniff'), ('set-cookie', '_cfuvid=LI.AshH8TiEGFHWzAy95eYdOziTNvrGLH9.bRjsl_d8-1723179291106-0.0.1.1-604800000; path=/; domain=.api.openai.com; HttpOnly; Secure; SameSite=None'), ('server', 'cloudflare'), ('cf-ray', '8b0524df2babc7a7-DUS'), ('content-encoding', 'br'), ('alt-svc', 'h3=":443"; ma=86400')])
2024-08-09 04:54:51,226 [DEBUG] [AnyIO worker thread] request_id: req_ee43e9b5d13b87213865e038c5cb2b27
2024-08-09 04:54:51,227 [DEBUG] [AnyIO worker thread] Could not read JSON from response data due to <class 'json.decoder.JSONDecodeError'> - Extra data: line 2 column 1 (char 2)
Transcribing part 1!
INFO: 172.18.0.1:59254 - "POST /youtube/summarize HTTP/1.0" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "/usr/local/lib/python3.12/site-packages/uvicorn/protocols/http/httptools_impl.py", line 399, in run_asgi
result = await app( # type: ignore[func-returns-value]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/uvicorn/middleware/proxy_headers.py", line 70, in __call__
return await self.app(scope, receive, send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/fastapi/applications.py", line 1054, in __call__
await super().__call__(scope, receive, send)
File "/usr/local/lib/python3.12/site-packages/starlette/applications.py", line 123, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.12/site-packages/starlette/middleware/errors.py", line 186, in __call__
raise exc
File "/usr/local/lib/python3.12/site-packages/starlette/middleware/errors.py", line 164, in __call__
await self.app(scope, receive, _send)
File "/usr/local/lib/python3.12/site-packages/starlette/middleware/exceptions.py", line 65, in __call__
await wrap_app_handling_exceptions(self.app, conn)(scope, receive, send)
File "/usr/local/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/usr/local/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 756, in __call__
await self.middleware_stack(scope, receive, send)
File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 776, in app
await route.handle(scope, receive, send)
File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 297, in handle
await self.app(scope, receive, send)
File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 77, in app
await wrap_app_handling_exceptions(app, request)(scope, receive, send)
File "/usr/local/lib/python3.12/site-packages/starlette/_exception_handler.py", line 64, in wrapped_app
raise exc
File "/usr/local/lib/python3.12/site-packages/starlette/_exception_handler.py", line 53, in wrapped_app
await app(scope, receive, sender)
File "/usr/local/lib/python3.12/site-packages/starlette/routing.py", line 72, in app
response = await func(request)
^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/fastapi/routing.py", line 278, in app
raw_response = await run_endpoint_function(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/fastapi/routing.py", line 193, in run_endpoint_function
return await run_in_threadpool(dependant.call, **values)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/starlette/concurrency.py", line 42, in run_in_threadpool
return await anyio.to_thread.run_sync(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 2144, in run_sync_in_worker_thread
return await future
^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/anyio/_backends/_asyncio.py", line 851, in run
result = context.run(func, *args)
^^^^^^^^^^^^^^^^^^^^^^^^
File "/code/app/routers/youtube.py", line 30, in yt_summarize
transcription = yt_transcribe(request.url,
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/code/app/transcribe/utils.py", line 69, in yt_transcribe
docs = loader.load()
^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/document_loaders/base.py", line 30, in load
return list(self.lazy_load())
^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_community/document_loaders/generic.py", line 116, in lazy_load
yield from self.blob_parser.lazy_parse(blob)
File "/usr/local/lib/python3.12/site-packages/langchain_community/document_loaders/parsers/audio.py", line 132, in lazy_parse
page_content=transcript.text,
^^^^^^^^^^^^^^^
AttributeError: 'str' object has no attribute 'text'
```
### Description
* I'm expecting to get the YouTube video transcription in any of the supported formats (srt, txt, vtt) using the OpenAI Whisper integration.
### System Info
langchain_openai>=0.1.8
langchain_community>=0.2.5 | AttributeError: 'str' object has no attribute 'text' in OpenAI document_loaders/audio | https://api.github.com/repos/langchain-ai/langchain/issues/25218/comments | 0 | 2024-08-09T05:10:36Z | 2024-08-09T12:35:12Z | https://github.com/langchain-ai/langchain/issues/25218 | 2,457,114,837 | 25,218 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_chroma import Chroma
from langchain_ollama import OllamaEmbeddings

local_embeddings = OllamaEmbeddings(model="nomic-embed-text")

vectorstore = Chroma.from_documents(documents=all_splits, embedding=local_embeddings)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I'm stuck at the [code](https://python.langchain.com/v0.2/docs/tutorials/local_rag/#:~:text=RecursiveCharacterTextSplitter%20%7C%20WebBaseLoader-,Next,-%2C%20the%20below%20steps) for vector store creation in the official documentation.
The error output is:
```shell
{
"name": "ResponseError",
"message": "404 page not found",
"stack": "---------------------------------------------------------------------------
ResponseError Traceback (most recent call last)
Cell In[2], line 6
2 from langchain_ollama import OllamaEmbeddings
4 local_embeddings = OllamaEmbeddings(model=\"nomic-embed-text\")
----> 6 vectorstore = Chroma.from_documents(documents=all_splits, embedding=local_embeddings)
File ~/miniconda3/lib/python3.10/site-packages/langchain_chroma/vectorstores.py:921, in Chroma.from_documents(cls, documents, embedding, ids, collection_name, persist_directory, client_settings, client, collection_metadata, **kwargs)
919 texts = [doc.page_content for doc in documents]
920 metadatas = [doc.metadata for doc in documents]
--> 921 return cls.from_texts(
922 texts=texts,
923 embedding=embedding,
924 metadatas=metadatas,
925 ids=ids,
926 collection_name=collection_name,
927 persist_directory=persist_directory,
928 client_settings=client_settings,
929 client=client,
930 collection_metadata=collection_metadata,
931 **kwargs,
932 )
File ~/miniconda3/lib/python3.10/site-packages/langchain_chroma/vectorstores.py:882, in Chroma.from_texts(cls, texts, embedding, metadatas, ids, collection_name, persist_directory, client_settings, client, collection_metadata, **kwargs)
876 chroma_collection.add_texts(
877 texts=batch[3] if batch[3] else [],
878 metadatas=batch[2] if batch[2] else None, # type: ignore
879 ids=batch[0],
880 )
881 else:
--> 882 chroma_collection.add_texts(texts=texts, metadatas=metadatas, ids=ids)
883 return chroma_collection
File ~/miniconda3/lib/python3.10/site-packages/langchain_chroma/vectorstores.py:389, in Chroma.add_texts(self, texts, metadatas, ids, **kwargs)
387 texts = list(texts)
388 if self._embedding_function is not None:
--> 389 embeddings = self._embedding_function.embed_documents(texts)
390 if metadatas:
391 # fill metadatas with empty dicts if somebody
392 # did not specify metadata for all texts
393 length_diff = len(texts) - len(metadatas)
File ~/miniconda3/lib/python3.10/site-packages/langchain_ollama/embeddings.py:31, in OllamaEmbeddings.embed_documents(self, texts)
29 def embed_documents(self, texts: List[str]) -> List[List[float]]:
30 \"\"\"Embed search docs.\"\"\"
---> 31 embedded_docs = ollama.embed(self.model, texts)[\"embedding\"]
32 return embedded_docs
File ~/miniconda3/lib/python3.10/site-packages/ollama/_client.py:261, in Client.embed(self, model, input, truncate, options, keep_alive)
258 if not model:
259 raise RequestError('must provide a model')
--> 261 return self._request(
262 'POST',
263 '/api/embed',
264 json={
265 'model': model,
266 'input': input,
267 'truncate': truncate,
268 'options': options or {},
269 'keep_alive': keep_alive,
270 },
271 ).json()
File ~/miniconda3/lib/python3.10/site-packages/ollama/_client.py:74, in Client._request(self, method, url, **kwargs)
72 response.raise_for_status()
73 except httpx.HTTPStatusError as e:
---> 74 raise ResponseError(e.response.text, e.response.status_code) from None
76 return response
ResponseError: 404 page not found"
}
```
It seems that ollama at version 0.3.1 can't work with `ollama.embed` here:
Code for reproducing:
```python
import ollama
ollama.embed(
    model="nomic-embed-text",
    input=["hello"]
)
```
and the output is:
```shell
{
"name": "ResponseError",
"message": "404 page not found",
"stack": "---------------------------------------------------------------------------
ResponseError Traceback (most recent call last)
Cell In[3], line 2
1 import ollama
----> 2 ollama.embed(
3 model=\"nomic-embed-text\",
4 input=[\"hello\"]
5 )
File ~/miniconda3/lib/python3.10/site-packages/ollama/_client.py:261, in Client.embed(self, model, input, truncate, options, keep_alive)
258 if not model:
259 raise RequestError('must provide a model')
--> 261 return self._request(
262 'POST',
263 '/api/embed',
264 json={
265 'model': model,
266 'input': input,
267 'truncate': truncate,
268 'options': options or {},
269 'keep_alive': keep_alive,
270 },
271 ).json()
File ~/miniconda3/lib/python3.10/site-packages/ollama/_client.py:74, in Client._request(self, method, url, **kwargs)
72 response.raise_for_status()
73 except httpx.HTTPStatusError as e:
---> 74 raise ResponseError(e.response.text, e.response.status_code) from None
76 return response
ResponseError: 404 page not found"
}
```
It seems similar to the error before.
We can also reproduce it directly with:
```shell
curl http://localhost:11434/api/embed -d '{
  "model": "nomic-embed-text:latest",
  "prompt": "Here is an article about llamas..."
}'
```
and then the output is `404 page not found`.
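My working theory (an assumption based on the version mismatch below, not confirmed from the Ollama changelog): the 0.3.1 Python client posts to the newer `/api/embed` route, while the 0.1.45 server only exposes the older `/api/embeddings` route, hence the 404. A sketch of that version check:

```python
def embed_route(server_version: str) -> str:
    # Assumption: the batch `/api/embed` route only exists on newer Ollama
    # servers; older ones only expose `/api/embeddings`.
    major, minor, *_ = (int(part) for part in server_version.split("."))
    return "/api/embed" if (major, minor) >= (0, 3) else "/api/embeddings"

print(embed_route("0.1.45"))  # → /api/embeddings
print(embed_route("0.3.1"))   # → /api/embed
```

If this theory is right, upgrading the Ollama server binary should resolve the 404.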
### System Info
```shell
langchain-ollama 0.1.1
ollama 0.3.1
```
Extra info: The version of ollama on the Linux system is 0.1.45. | Error with Chroma.from_documents and OllamaEmbeddings: ResponseError: 404 page not found | https://api.github.com/repos/langchain-ai/langchain/issues/25216/comments | 1 | 2024-08-09T03:14:44Z | 2024-08-09T07:42:15Z | https://github.com/langchain-ai/langchain/issues/25216 | 2,457,007,824 | 25,216
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Hi - I am trying to pip install langchain but am getting the below error. This error started appearing this morning. Please let me know how I can get this resolved.

Cargo, the Rust package manager, is not installed or is not on PATH.
This package requires Rust and Cargo to compile extensions. Install it through
the system's package manager or via https://rustup.rs/
Checking for Rust toolchain....
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
full command: /usr/bin/python3 /usr/local/lib/python3.10/dist-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py prepare_metadata_for_build_wheel /tmp/tmpcq29s8av
cwd: /tmp/pip-install-2ldceiwf/orjson_4e150d9758f64d94abda37bb761d1916
Preparing metadata (pyproject.toml) ... error
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
### Error Message and Stack Trace (if applicable)
Cargo, the Rust package manager, is not installed or is not on PATH.
This package requires Rust and Cargo to compile extensions. Install it through
the system's package manager or via https://rustup.rs/
Checking for Rust toolchain....
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
full command: /usr/bin/python3 /usr/local/lib/python3.10/dist-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py prepare_metadata_for_build_wheel /tmp/tmpcq29s8av
cwd: /tmp/pip-install-2ldceiwf/orjson_4e150d9758f64d94abda37bb761d1916
Preparing metadata (pyproject.toml) ... error
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
### Description
I am getting errors when trying to pip install langchain modules. The error seems to be related to Rust being required, but even after trying to install the Rust-related packages, the same error persists.
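For reference, a quick stdlib check of what pip is complaining about (illustrative helper of my own): pip only needs the Rust toolchain when it has to build a dependency (here, `orjson`) from source because no prebuilt wheel matches the platform. Installing Rust via https://rustup.rs/ and re-opening the shell so `~/.cargo/bin` is on PATH is the usual fix.

```python
import shutil

def rust_toolchain_available() -> bool:
    # pip's build backend looks for `cargo` and `rustc` on PATH
    # when compiling a Rust-based extension from source.
    return shutil.which("cargo") is not None and shutil.which("rustc") is not None

print(rust_toolchain_available())
```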
### System Info
Windows, Python 3.10 | Getting issues when pip installing langchain modules | https://api.github.com/repos/langchain-ai/langchain/issues/25215/comments | 4 | 2024-08-09T02:59:54Z | 2024-08-09T03:09:36Z | https://github.com/langchain-ai/langchain/issues/25215 | 2,456,995,772 | 25,215 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool, StructuredTool
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
load_dotenv()
def add_numbers(a: int, b: int) -> int:
"""Adds a and b.
Args:
a: first int
b: second int
"""
return a + b
def multiply_numbers(a: int, b: int) -> int:
"""Multiplies a and b.
Args:
a: first int
b: second int
"""
return a * b
add_numbers_tool = StructuredTool.from_function(
func=add_numbers, name="Add numbers", description="Adds a and b."
)
multiply_numbers_tool = StructuredTool.from_function(
func=multiply_numbers, name="Multiply numbers", description="Multiplies a and b."
)
llm = ChatOpenAI(model="gpt-4o", temperature=0)
tools = [add_numbers_tool, multiply_numbers_tool]
prompt = ChatPromptTemplate.from_messages(
[
(
"system",
            "You are a helpful assistant. Make sure to respond to the user with the most accurate results and information.",
),
("placeholder", "{chat_history}"),
("human", "{input}"),
("placeholder", "{agent_scratchpad}"),
]
)
# Construct the Tools agent
agent = create_tool_calling_agent(llm, tools, prompt)
query = "What is 3 * 12? Also, what is 11 + 49?"
agent_executor = AgentExecutor(
agent=agent, tools=tools, verbose=True, return_intermediate_steps=True
)
result = agent_executor.invoke({"input": query})
print(result)
```
### Error Message and Stack Trace (if applicable)
(.venv) Cipher@Crippd assistcx-agent % python task.py
Entering new AgentExecutor chain...
Traceback (most recent call last):
File "/Users/Cipher/AssistCX/assistcx-agent/task.py", line 63, in <module>
result = agent_executor.invoke({"input": query})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 166, in invoke
raise e
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 1612, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 1318, in _take_next_step
[
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 1318, in <listcomp>
[
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 1346, in _iter_next_step
output = self.agent.plan(
^^^^^^^^^^^^^^^^
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain/agents/agent.py", line 580, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3253, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3240, in transform
yield from self._transform_stream_with_config(
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2053, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3202, in _transform
for output in final_pipeline:
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1271, in transform
for ichunk in input:
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 5267, in transform
yield from self.bound.transform(
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1289, in transform
yield from self.stream(final, config, **kwargs)
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 373, in stream
raise e
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 353, in stream
for chunk in self._stream(messages, stop=stop, **kwargs):
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/langchain_openai/chat_models/base.py", line 521, in _stream
response = self.client.create(**payload)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/openai/_utils/_utils.py", line 277, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/openai/resources/chat/completions.py", line 646, in create
return self._post(
^^^^^^^^^^^
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 1266, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 942, in request
return self._request(
^^^^^^^^^^^^^^
File "/Users/Cipher/AssistCX/assistcx-agent/.venv/lib/python3.11/site-packages/openai/_base_client.py", line 1046, in _request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "Invalid 'tools[0].function.name': string does not match pattern. Expected a string that matches the pattern '^[a-zA-Z0-9_-]+$'.", 'type': 'invalid_request_error', 'param': 'tools[0].function.name', 'code': 'invalid_value'}}
(.venv) Cipher@Crippd assistcx-agent %
### Description
I have set up a simple tool calling agent following the guide here: https://python.langchain.com/v0.1/docs/modules/agents/agent_types/tool_calling/
I have defined tools using the `StructuredTool` class as shown in my code. When I try to run this code I get the error shared above. If I replace the space in the tool names with a dash (-) or underscore (_), the code works fine and agent execution completes successfully.
I haven't seen it documented anywhere that tool names shouldn't include spaces, and several LangChain documentation examples use spaces in tool names. Am I missing something or making a silly error?
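For reference, the constraint can be checked or enforced up front. A minimal sketch in pure Python; the regex is copied from the error message above, and the helper name is my own:

```python
import re

# Pattern taken from the OpenAI error message: tool/function names must
# consist only of letters, digits, underscores and dashes.
VALID_TOOL_NAME = re.compile(r"^[a-zA-Z0-9_-]+$")

def sanitize_tool_name(name: str) -> str:
    """Replace characters OpenAI rejects (e.g. spaces) with underscores."""
    return re.sub(r"[^a-zA-Z0-9_-]", "_", name)

print(VALID_TOOL_NAME.match("Add numbers"))    # None -> rejected as-is
print(sanitize_tool_name("Add numbers"))       # Add_numbers
print(bool(VALID_TOOL_NAME.match(sanitize_tool_name("Add numbers"))))  # True
```

A tool could then be built with `name=sanitize_tool_name("Add numbers")` so the agent never sends a name the API rejects.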
### System Info
langchain==0.2.11
langchain-community==0.2.10
langchain-core==0.2.25
langchain-groq==0.1.8
langchain-ollama==0.1.0
langchain-openai==0.1.19
langchain-text-splitters==0.2.2
langchain-together==0.1.4
platform: Mac
python version: 3.11.7 | Tool calling agent: Function name error in calling tool | https://api.github.com/repos/langchain-ai/langchain/issues/25211/comments | 1 | 2024-08-09T01:17:32Z | 2024-08-10T13:26:12Z | https://github.com/langchain-ai/langchain/issues/25211 | 2,456,917,100 | 25,211 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_elasticsearch import ElasticsearchStore, DenseVectorStrategy, BM25Strategy
elastic_vector_search = ElasticsearchStore(
es_url=YOUR_URL,
index_name=YOUR_INDEX,
es_user=YOUR_LOGIN,
es_params = {'verify_certs':False,'request_timeout':1000},
es_password=YOUR_PASSWORD,
embedding=embeddings,
strategy=DenseVectorStrategy()
)
retriever = elastic_vector_search.as_retriever(search_type="similarity_score_threshold",search_kwargs={'score_threshold': 0.85, 'k':150,'fetch_k' : 10000}, include_original=True)
retriever.get_relevant_documents('query')
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
BadRequestError Traceback (most recent call last)
Cell In[6], line 1
----> 1 retriever.get_relevant_documents('réseau')
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain_core\_api\deprecation.py:168, in deprecated.<locals>.deprecate.<locals>.warning_emitting_wrapper(*args, **kwargs)
166 warned = True
167 emit_warning()
--> 168 return wrapped(*args, **kwargs)
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain_core\retrievers.py:358, in BaseRetriever.get_relevant_documents(self, query, callbacks, tags, metadata, run_name, **kwargs)
356 if run_name:
357 config["run_name"] = run_name
--> 358 return self.invoke(query, config, **kwargs)
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain_core\retrievers.py:219, in BaseRetriever.invoke(self, input, config, **kwargs)
217 except Exception as e:
218 run_manager.on_retriever_error(e)
--> 219 raise e
220 else:
221 run_manager.on_retriever_end(
222 result,
223 )
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain_core\retrievers.py:212, in BaseRetriever.invoke(self, input, config, **kwargs)
210 _kwargs = kwargs if self._expects_other_args else {}
211 if self._new_arg_supported:
--> 212 result = self._get_relevant_documents(
213 input, run_manager=run_manager, **_kwargs
214 )
215 else:
216 result = self._get_relevant_documents(input, **_kwargs)
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain_core\vectorstores\base.py:1249, in VectorStoreRetriever._get_relevant_documents(self, query, run_manager)
1246 docs = self.vectorstore.similarity_search(query, **self.search_kwargs)
1247 elif self.search_type == "similarity_score_threshold":
1248 docs_and_similarities = (
-> 1249 self.vectorstore.similarity_search_with_relevance_scores(
1250 query, **self.search_kwargs
1251 )
1252 )
1253 docs = [doc for doc, _ in docs_and_similarities]
1254 elif self.search_type == "mmr":
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain_core\vectorstores\base.py:777, in VectorStore.similarity_search_with_relevance_scores(self, query, k, **kwargs)
761 """Return docs and relevance scores in the range [0, 1].
762
763 0 is dissimilar, 1 is most similar.
(...)
773 List of Tuples of (doc, similarity_score).
774 """
775 score_threshold = kwargs.pop("score_threshold", None)
--> 777 docs_and_similarities = self._similarity_search_with_relevance_scores(
778 query, k=k, **kwargs
779 )
780 if any(
781 similarity < 0.0 or similarity > 1.0
782 for _, similarity in docs_and_similarities
783 ):
784 warnings.warn(
785 "Relevance scores must be between"
786 f" 0 and 1, got {docs_and_similarities}"
787 )
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain_core\vectorstores\base.py:725, in VectorStore._similarity_search_with_relevance_scores(self, query, k, **kwargs)
707 """
708 Default similarity search with relevance scores. Modify if necessary
709 in subclass.
(...)
722 List of Tuples of (doc, similarity_score)
723 """
724 relevance_score_fn = self._select_relevance_score_fn()
--> 725 docs_and_scores = self.similarity_search_with_score(query, k, **kwargs)
726 return [(doc, relevance_score_fn(score)) for doc, score in docs_and_scores]
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\langchain_elasticsearch\vectorstores.py:883, in ElasticsearchStore.similarity_search_with_score(self, query, k, filter, custom_query, doc_builder, **kwargs)
877 if (
878 isinstance(self._store.retrieval_strategy, DenseVectorStrategy)
879 and self._store.retrieval_strategy.hybrid
880 ):
881 raise ValueError("scores are currently not supported in hybrid mode")
--> 883 hits = self._store.search(
884 query=query, k=k, filter=filter, custom_query=custom_query
885 )
886 return _hits_to_docs_scores(
887 hits=hits,
888 content_field=self.query_field,
889 doc_builder=doc_builder,
890 )
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\elasticsearch\helpers\vectorstore\_sync\vectorstore.py:274, in VectorStore.search(self, query, query_vector, k, num_candidates, fields, filter, custom_query)
271 query_body = custom_query(query_body, query)
272 logger.debug(f"Calling custom_query, Query body now: {query_body}")
--> 274 response = self.client.search(
275 index=self.index,
276 **query_body,
277 size=k,
278 source=True,
279 source_includes=fields,
280 )
281 hits: List[Dict[str, Any]] = response["hits"]["hits"]
283 return hits
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\elasticsearch\_sync\client\utils.py:446, in _rewrite_parameters.<locals>.wrapper.<locals>.wrapped(*args, **kwargs)
443 except KeyError:
444 pass
--> 446 return api(*args, **kwargs)
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\elasticsearch\_sync\client\__init__.py:4119, in Elasticsearch.search(self, index, aggregations, aggs, allow_no_indices, allow_partial_search_results, analyze_wildcard, analyzer, batched_reduce_size, ccs_minimize_roundtrips, collapse, default_operator, df, docvalue_fields, error_trace, expand_wildcards, explain, ext, fields, filter_path, force_synthetic_source, from_, highlight, human, ignore_throttled, ignore_unavailable, indices_boost, knn, lenient, max_concurrent_shard_requests, min_compatible_shard_node, min_score, pit, post_filter, pre_filter_shard_size, preference, pretty, profile, q, query, rank, request_cache, rescore, rest_total_hits_as_int, retriever, routing, runtime_mappings, script_fields, scroll, search_after, search_type, seq_no_primary_term, size, slice, sort, source, source_excludes, source_includes, stats, stored_fields, suggest, suggest_field, suggest_mode, suggest_size, suggest_text, terminate_after, timeout, track_scores, track_total_hits, typed_keys, version, body)
4117 if __body is not None:
4118 __headers["content-type"] = "application/json"
-> 4119 return self.perform_request( # type: ignore[return-value]
4120 "POST",
4121 __path,
4122 params=__query,
4123 headers=__headers,
4124 body=__body,
4125 endpoint_id="search",
4126 path_parts=__path_parts,
4127 )
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\elasticsearch\_sync\client\_base.py:271, in BaseClient.perform_request(self, method, path, params, headers, body, endpoint_id, path_parts)
255 def perform_request(
256 self,
257 method: str,
(...)
264 path_parts: Optional[Mapping[str, Any]] = None,
265 ) -> ApiResponse[Any]:
266 with self._otel.span(
267 method,
268 endpoint_id=endpoint_id,
269 path_parts=path_parts or {},
270 ) as otel_span:
--> 271 response = self._perform_request(
272 method,
273 path,
274 params=params,
275 headers=headers,
276 body=body,
277 otel_span=otel_span,
278 )
279 otel_span.set_elastic_cloud_metadata(response.meta.headers)
280 return response
File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\elasticsearch\_sync\client\_base.py:352, in BaseClient._perform_request(self, method, path, params, headers, body, otel_span)
349 except (ValueError, KeyError, TypeError):
350 pass
--> 352 raise HTTP_EXCEPTIONS.get(meta.status, ApiError)(
353 message=message, meta=meta, body=resp_body
354 )
356 # 'X-Elastic-Product: Elasticsearch' should be on every 2XX response.
357 if not self._verified_elasticsearch:
358 # If the header is set we mark the server as verified.
BadRequestError: BadRequestError(400, 'illegal_argument_exception', '[num_candidates] cannot be less than [k]')
### Description
I am trying to fetch 150 documents (k) and I specify fetch_k (which is supposed to map to num_candidates), but it does not work:
'[num_candidates] cannot be less than [k]'
Two options:
- either it is a bug and fetch_k is not being passed through
- or num_candidates has to be specified in a different way, and I would like to know how, because I have tried everything
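A possible workaround is the `custom_query` hook that appears in the traceback above (`query_body = custom_query(query_body, query)`). The sketch below assumes the dense-vector strategy puts the kNN clause under a top-level `"knn"` key with a `"num_candidates"` field; verify against your actual query body before relying on it:

```python
# Hypothetical custom_query hook that forces num_candidates up so it is
# never smaller than k. The "knn" body shape is an assumption.
def raise_num_candidates(query_body: dict, query: str) -> dict:
    knn = query_body.get("knn")
    if isinstance(knn, dict):
        k = knn.get("k", 0)
        # num_candidates must be >= k, so never set it below k.
        knn["num_candidates"] = max(10000, k)
    return query_body

# Example of the transformation on a minimal query body:
body = {"knn": {"field": "vector", "k": 150, "num_candidates": 10}}
print(raise_num_candidates(body, "réseau"))
# {'knn': {'field': 'vector', 'k': 150, 'num_candidates': 10000}}
```

Per the traceback, `similarity_search_with_score` accepts a `custom_query` argument, and since `search_kwargs` flow into it as `**kwargs`, it may also be passable as `search_kwargs={'k': 150, 'custom_query': raise_num_candidates}`; treat that as a hypothesis to test.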
### System Info
langchain-0.2.12
python 3.9 | num_candidates as fetch_k won't work on langchain | https://api.github.com/repos/langchain-ai/langchain/issues/25180/comments | 0 | 2024-08-08T14:03:31Z | 2024-08-08T14:06:09Z | https://github.com/langchain-ai/langchain/issues/25180 | 2,455,900,231 | 25,180 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, AwqConfig
model_id = "hugging-quants/Meta-Llama-3.1-70B-Instruct-AWQ-INT4"
llm = HuggingFaceLLM(
context_window=8192, #4096
max_new_tokens=512,
generate_kwargs={"temperature": 0, "do_sample": False},
system_prompt=system_prompt,
query_wrapper_prompt=query_wrapper_prompt,
tokenizer_name=model_id,
model_name=model_id,
device_map="auto",
tokenizer_kwargs={"max_length": 8192} # 4096
)
```
```
from pandasai import PandasAI
import pandas as pd
langchain_llm = LangchainLLM(langchain_llm=llm)
pandas_ai = PandasAI(llm=langchain_llm)
df = pd.read_csv("data/deneme.csv")
result = pandas_ai.run(df, "question??")
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[17], line 10
6 pandas_ai = PandasAI(llm=langchain_llm)
8 df = pd.read_csv("data/deneme.csv")
---> 10 result = pandas_ai.run(df, "question??")
12 result
File /usr/local/lib/python3.10/dist-packages/pandasai/__init__.py:298, in PandasAI.run(self, data_frame, prompt, is_conversational_answer, show_code, anonymize_df, use_error_correction_framework)
278 """
279 Run the PandasAI to make Dataframes Conversational.
280
(...)
293
294 """
296 self._start_time = time.time()
--> 298 self.log(f"Running PandasAI with {self._llm.type} LLM...")
300 self._prompt_id = str(uuid.uuid4())
301 self.log(f"Prompt ID: {self._prompt_id}")
Cell In[16], line 60, in LangchainLLM.type(self)
58 @property
59 def type(self) -> str:
---> 60 return f"langchain_{self.langchain_llm._llm_type}"
AttributeError: 'HuggingFaceLLM' object has no attribute '_llm_type'
```
### Description
I noticed that `PandasAI` is generally used with OpenAI's LLMs. Am I getting this error because I am using it with a HuggingFace model? How can I resolve this issue?
### System Info
. | AttributeError: 'HuggingFaceLLM' object has no attribute '_llm_type' | https://api.github.com/repos/langchain-ai/langchain/issues/25178/comments | 0 | 2024-08-08T13:43:57Z | 2024-08-08T13:46:41Z | https://github.com/langchain-ai/langchain/issues/25178 | 2,455,853,854 | 25,178 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.chains import OpenAIModerationChain
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import OpenAI
api_key = "..."
model = OpenAI(openai_api_key=api_key)
prompt = ChatPromptTemplate.from_messages([("human", "the sky is")])
moderate = OpenAIModerationChain(openai_api_key=api_key)
chain = prompt | model | moderate
print(chain.invoke({}))
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/app/demo.py", line 8, in <module>
moderate = OpenAIModerationChain(openai_api_key=api_key)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain_core/load/serializable.py", line 113, in __init__
super().__init__(*args, **kwargs)
File "/usr/local/lib/python3.12/site-packages/pydantic/v1/main.py", line 339, in __init__
values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/pydantic/v1/main.py", line 1048, in validate_model
input_data = validator(cls_, input_data)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/langchain/chains/moderation.py", line 69, in validate_environment
values["client"] = openai.OpenAI()
^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.12/site-packages/openai/_client.py", line 105, in __init__
raise OpenAIError(
openai.OpenAIError: The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable
```
### Description
It appears that the fix for #13685 may be incomplete. openai >= v1.0.0 moved to prefer explicit instantiation of clients, but `OpenAIModerationChain.validate_environment` still assigns the API key to `openai.api_key` regardless of the version of openai that's installed.
https://github.com/langchain-ai/langchain/blob/d77c7c4236df8e56fbe3acc8e0a71b57b48f1678/libs/langchain/langchain/chains/moderation.py#L58
Using `OpenAIModerationChain(openai_api_key="...")` instead of using the `OPENAI_API_KEY` env var results in `openai.OpenAI` throwing an exception because it expects the key to be passed to the `api_key` parameter if the env var isn't present.
https://github.com/langchain-ai/langchain/blob/d77c7c4236df8e56fbe3acc8e0a71b57b48f1678/libs/langchain/langchain/chains/moderation.py#L69-L70
Looking at [where the key is checked](https://github.com/openai/openai-python/blob/90dd21531efe351b72ce0a72150048b6c7f640e0/src/openai/_client.py#L102-L112) in `openai.OpenAI`, it looks like the same is also true of the organization.
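Until this is fixed, one workaround consistent with the error message above is to export the key as an environment variable before constructing the chain, since `openai.OpenAI()` falls back to `OPENAI_API_KEY`. A sketch; the key value is a placeholder:

```python
import os

# openai.OpenAI() (as called inside OpenAIModerationChain.validate_environment)
# reads OPENAI_API_KEY when no api_key argument is passed, so setting the
# environment variable before constructing the chain sidesteps the bug.
api_key = "sk-..."  # placeholder
os.environ["OPENAI_API_KEY"] = api_key

# moderate = OpenAIModerationChain()  # now finds the key via the env var
print(os.environ["OPENAI_API_KEY"])
```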
### System Info
```
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024
> Python Version: 3.12.5 (main, Aug 7 2024, 19:13:36) [GCC 12.2.0]
Package Information
-------------------
> langchain_core: 0.2.29
> langchain: 0.2.12
> langsmith: 0.1.98
> langchain_openai: 0.1.20
> langchain_text_splitters: 0.2.2
Optional packages not installed
-------------------------------
> langgraph
> langserve
Other Dependencies
------------------
> aiohttp: 3.10.1
> async-timeout: Installed. No version info available.
> jsonpatch: 1.33
> numpy: 1.26.4
> openai: 1.40.1
> orjson: 3.10.6
> packaging: 24.1
> pydantic: 2.8.2
> PyYAML: 6.0.2
> requests: 2.32.3
> SQLAlchemy: 2.0.32
> tenacity: 8.5.0
> tiktoken: 0.7.0
> typing-extensions: 4.12.2
``` | OpenAIModerationChain doesn't handle API key correctly for openai>=v1.0.0 | https://api.github.com/repos/langchain-ai/langchain/issues/25176/comments | 0 | 2024-08-08T12:33:39Z | 2024-08-08T13:11:30Z | https://github.com/langchain-ai/langchain/issues/25176 | 2,455,684,951 | 25,176 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/document_loaders/aws_s3_directory/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
can't able to read folders inside bucket
e.g. I have bucket name "demo" inside demo i have two folders folder1 and folder2 i want read load document from folder1 only then how can i do that currently it is directly throwing error
Process finished with exit code -1073741819 (0xC0000005)
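For scoping the loader to a single folder, `S3DirectoryLoader` takes a `prefix` argument that restricts listing to keys under that folder. This is a sketch: verify that your installed langchain_community version exposes `prefix`, and note the access-violation exit code above may be a separate native-library problem.

```python
def folder_prefix(folder: str) -> str:
    """Normalize a folder name into the trailing-slash S3 key prefix."""
    return folder.strip("/") + "/"

print(folder_prefix("folder1"))  # folder1/

# With the prefix in hand (requires boto3 and AWS credentials):
# from langchain_community.document_loaders import S3DirectoryLoader
# loader = S3DirectoryLoader("demo", prefix=folder_prefix("folder1"))
# docs = loader.load()
```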
| DOC: <Issue related to /v0.2/docs/integrations/document_loaders/aws_s3_directory/> | https://api.github.com/repos/langchain-ai/langchain/issues/25170/comments | 0 | 2024-08-08T11:08:43Z | 2024-08-08T13:29:17Z | https://github.com/langchain-ai/langchain/issues/25170 | 2,455,520,007 | 25,170 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I have attached two notebooks one for redis cache semantic search and another for Azure Cosmos db for Mongo db v Core for caching.
[semantic_caching.zip](https://github.com/user-attachments/files/16537170/semantic_caching.zip)
### Error Message and Stack Trace (if applicable)
_No response_
### Description
I am trying to use the semantic caching feature of LangChain with Azure Cosmos DB for MongoDB vCore for quicker responses. I tried the same example that is given in the LangChain [documentation](https://python.langchain.com/v0.2/docs/integrations/llm_caching/#azure-cosmos-db-semantic-cache). In my code, if I ask "Tell me a joke", the response is returned from the cache very quickly. But when the question is changed to "What to do when bored?", I expect LangChain to hit the LLM instead of returning the response from the cache. Instead, it returns the same cached response as for "Tell me a joke". I have attached the code and its output.
I have tried the same with Redis semantic caching and I see the same behavior.
System Information
------------------
> OS: Linux
> OS Version: #1 SMP Fri Mar 29 23:14:13 UTC 2024
> Python Version: 3.10.14 (main, May 6 2024, 19:42:50) [GCC 11.2.0]
Package Information
-------------------
> langchain_core: 0.2.28
> langchain: 0.2.12
> langchain_community: 0.2.11
> langsmith: 0.1.98
> langchain_openai: 0.1.20
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Azure Cosmos DB Semantic Cache and Redis Semantic Cache not working as expected when the prompt is different | https://api.github.com/repos/langchain-ai/langchain/issues/25161/comments | 0 | 2024-08-08T03:27:41Z | 2024-08-08T03:30:08Z | https://github.com/langchain-ai/langchain/issues/25161 | 2,454,793,900 | 25,161 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
llm = ChatGroq(base_url="http://10.xx.xx.127:8080",temperature=0, groq_api_key="gsk_...", model_name="llama3-8b-8192")
input = "how to restart linux server"
agent_executor.invoke({"input": input})
### Error Message and Stack Trace (if applicable)
errInfo:: how to restart linux server
> Entering new AgentExecutor chain...
Traceback (most recent call last):
File "/Usr-xxx/pyProj/testLangchainCompileAssist.py", line 101, in <module>
agent_executor.invoke({"input": input})
File "/Usr-xxx/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py", line 166, in invoke
raise e
File "/Usr-xxx/anaconda3/lib/python3.11/site-packages/langchain/chains/base.py", line 156, in invoke
self._call(inputs, run_manager=run_manager)
File "/Usr-xxx/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py", line 1636, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "/Usr-xxx/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py", line 1342, in _take_next_step
[
File "/Usr-xxx/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py", line 1342, in <listcomp>
[
File "/Usr-xxx/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py", line 1370, in _iter_next_step
output = self.agent.plan(
^^^^^^^^^^^^^^^^
File "/Usr-xxx/anaconda3/lib/python3.11/site-packages/langchain/agents/agent.py", line 580, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
File "/Usr-xxx/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3253, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/Usr-xxx/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3240, in transform
yield from self._transform_stream_with_config(
File "/Usr-xxx/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 2053, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Usr-xxx/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 3202, in _transform
for output in final_pipeline:
File "/Usr-xxx/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1271, in transform
for ichunk in input:
File "/Usr-xxx/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 5267, in transform
yield from self.bound.transform(
File "/Usr-xxx/anaconda3/lib/python3.11/site-packages/langchain_core/runnables/base.py", line 1289, in transform
yield from self.stream(final, config, **kwargs)
File "/Usr-xxx/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 373, in stream
raise e
File "/Usr-xxx/anaconda3/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py", line 353, in stream
for chunk in self._stream(messages, stop=stop, **kwargs):
File "/Usr-xxx/anaconda3/lib/python3.11/site-packages/langchain_groq/chat_models.py", line 507, in _stream
response = self.client.create(
^^^^^^^^^^^^^^^^^^^
File "/Usr-xxx/anaconda3/lib/python3.11/site-packages/groq/resources/chat/completions.py", line 289, in create
return self._post(
^^^^^^^^^^^
File "/Usr-xxx/anaconda3/lib/python3.11/site-packages/groq/_base_client.py", line 1225, in post
return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Usr-xxx/anaconda3/lib/python3.11/site-packages/groq/_base_client.py", line 920, in request
return self._request(
^^^^^^^^^^^^^^
File "/Usr-xxx/anaconda3/lib/python3.11/site-packages/groq/_base_client.py", line 1018, in _request
raise self._make_status_error_from_response(err.response) from None
groq.NotFoundError: Error code: 404 - {'message': 'Route POST:/openai/v1/chat/completions not found', 'error': 'Not Found', 'statusCode': 404}
### Description
I run Jan and its API server on 10.xxxx.127:8080, and I can reach the APIs from my Mac:
GET http://10.xxxx.127:8080/v1/models
Response: {"object":"list","data":[{"sources":[{"filename":"aya-23-35B-Q4_K_M.gguf",.....
GET http://localhost:8080/v1/models/llama3-8b-8192
Response: {"sources":[{"url":"https://groq.com"}],"id":"llama3-8b-8192","object":"model","name":"Groq Llama 3 8b","version":"1.1","description":"Groq Llama 3 8b with supercharged speed!","format":"api","settings":{},"parameters":{"max_tokens":8192,"temperature":0.7,"top_p":0.95,"stream":true,"stop":[],"frequency_penalty":0,"presence_penalty":0},"metadata":{"author":"Meta","tags":["General","Big Context Length"]},"engine":"groq"}
GET http://10.xxx.127:8080/openai/v1/chat/completions
Response: {"message":"Route GET:/openai/v1/chat/completions not found","error":"Not Found","statusCode":404}
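The 404 can be reconstructed from the routes above: the Groq SDK appends its own Groq-specific path to the base URL, while the local Jan server only serves the plain OpenAI-style route. A small sketch of the mismatch (the exact path the SDK appends is an assumption inferred from the error body, and the IP is a placeholder):

```python
# Routes taken from the messages above; the base URL is a placeholder.
base_url = "http://10.xx.xx.127:8080"
sdk_route = "/openai/v1/chat/completions"   # path in the 404 error body
server_route = "/v1/chat/completions"       # path the local server serves

print(base_url + sdk_route)       # what the ChatGroq client appears to POST to
print(base_url + server_route)    # what actually exists on the Jan server
print(sdk_route == server_route)  # False, hence the 404
```

If that reading is right, a client that targets plain OpenAI-style routes (pointed at the local server's /v1) may be a better fit than the Groq SDK for a local OpenAI-compatible server; treat this as a hypothesis to verify, not a confirmed fix.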
### System Info
pip freeze | grep langchain
langchain==0.2.7
langchain-cli==0.0.25
langchain-community==0.2.7
langchain-core==0.2.25
langchain-experimental==0.0.62
langchain-google-community==1.0.6
langchain-groq==0.1.6
langchain-ollama==0.1.0
langchain-openai==0.1.15
langchain-text-splitters==0.2.2
langchainhub==0.1.20
Python 3.11.3 (main, Apr 19 2023, 18:49:55) [Clang 14.0.6 ] on darwin
| groq.NotFoundError: Error code: 404 , Route POST:/openai/v1/chat/completions not found | https://api.github.com/repos/langchain-ai/langchain/issues/25160/comments | 0 | 2024-08-08T01:45:25Z | 2024-08-08T01:48:02Z | https://github.com/langchain-ai/langchain/issues/25160 | 2,454,689,480 | 25,160 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
We want to start enforcing that all public interfaces have proper Google-style docstrings. This is both to improve our code readability and the quality of our API reference.
We can do this by turning on the lint rules shown here: https://github.com/langchain-ai/langchain/pull/23187
For each package in the repo we should turn this on and fix all the resulting lint errors. | Add docstring linting | https://api.github.com/repos/langchain-ai/langchain/issues/25154/comments | 0 | 2024-08-07T21:14:54Z | 2024-08-07T21:17:22Z | https://github.com/langchain-ai/langchain/issues/25154 | 2,454,362,912 | 25,154 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The `update()` function in the Redis cache implementation does not overwrite an existing key with the same prompt:
https://github.com/langchain-ai/langchain/blob/a4086119f8e97adaeae337ceaaffbd413dd1795e/libs/community/langchain_community/cache.py#L727
It invokes [add_texts()](https://github.com/langchain-ai/langchain/blob/master/libs/community/langchain_community/vectorstores/redis/base.py#L733-L739) which creates a new key when `keys` or `ids` are not passed as kwargs to `add_texts()`:
https://github.com/langchain-ai/langchain/blob/a4086119f8e97adaeae337ceaaffbd413dd1795e/libs/community/langchain_community/vectorstores/redis/base.py#L717
As a result, for the same prompt being cached, there are multiple hset keys being added, each with a different UUID:
https://github.com/langchain-ai/langchain/blob/a4086119f8e97adaeae337ceaaffbd413dd1795e/libs/community/langchain_community/vectorstores/redis/base.py#L733
This is problematic because when the cache is updated for the same prompt with a different value, it can use a stale copy of the text to serve as a cached response.
Ideally, I would expect the same prompt to overwrite the key in the hset versus creating a new key.
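A sketch of the expected behavior: if `update()` derived a deterministic key from the prompt (e.g. a hash of the prompt and llm string) and passed it via `keys`/`ids` to `add_texts()`, a second write for the same prompt would overwrite rather than accumulate. The helper below is hypothetical, not the actual LangChain API:

```python
import hashlib

def cache_key(prompt: str, llm_string: str) -> str:
    """Hypothetical deterministic key: same prompt -> same hash key."""
    return hashlib.sha256(f"{prompt}:{llm_string}".encode()).hexdigest()

# Passing this as keys=[cache_key(...)] to add_texts() would make a
# second update() overwrite the entry instead of adding a new UUID key.
k1 = cache_key("What is 2+2?", "openai/gpt-4o")
k2 = cache_key("What is 2+2?", "openai/gpt-4o")
print(k1 == k2)  # True
```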
### Error Message and Stack Trace (if applicable)
_No response_
### Description
See above.
### System Info
```
langchain==0.2.6
langchain-community==0.2.6
langchain-core==0.2.11
langchain-openai==0.1.14
langchain-postgres==0.0.9
langchain-text-splitters==0.2.2
langchainhub==0.1.20
``` | Redis cache update() does not overwrite existing key/prompt | https://api.github.com/repos/langchain-ai/langchain/issues/25147/comments | 0 | 2024-08-07T17:41:57Z | 2024-08-07T18:39:14Z | https://github.com/langchain-ai/langchain/issues/25147 | 2,454,004,324 | 25,147 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langgraph.prebuilt import create_react_agent
agent_executor = create_react_agent(model, tools)
```
### Error Message and Stack Trace (if applicable)
```python
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[10], line 1
----> 1 from langgraph.prebuilt import create_react_agent
3 agent_executor = create_react_agent(model, tools)
File ~/Documents/courses/Langchain0_2/.venv/lib/python3.12/site-packages/langgraph/prebuilt/__init__.py:2
1 """langgraph.prebuilt exposes a higher-level API for creating and executing agents and tools."""
----> 2 from langgraph.prebuilt.chat_agent_executor import create_react_agent
3 from langgraph.prebuilt.tool_executor import ToolExecutor, ToolInvocation
4 from langgraph.prebuilt.tool_node import InjectedState, ToolNode, tools_condition
File ~/Documents/courses/Langchain0_2/.venv/lib/python3.12/site-packages/langgraph/prebuilt/chat_agent_executor.py:28
26 from langgraph.managed import IsLastStep
27 from langgraph.prebuilt.tool_executor import ToolExecutor
---> 28 from langgraph.prebuilt.tool_node import ToolNode
31 # We create the AgentState that we will pass around
32 # This simply involves a list of messages
33 # We want steps to return messages to append to the list
34 # So we annotate the messages attribute with operator.add
35 class AgentState(TypedDict):
File ~/Documents/courses/Langchain0_2/.venv/lib/python3.12/site-packages/langgraph/prebuilt/tool_node.py:20
18 from langchain_core.runnables import RunnableConfig
19 from langchain_core.runnables.config import get_config_list, get_executor_for_config
---> 20 from langchain_core.tools import BaseTool, InjectedToolArg
21 from langchain_core.tools import tool as create_tool
22 from typing_extensions import get_args
ImportError: cannot import name 'InjectedToolArg' from 'langchain_core.tools'
```
### Description
Executing the following import from langgraph fails because it cannot find `InjectedToolArg` in `langchain_core.tools`.
Based on searches in the docs, I have tried to reinstall
* langchain,
* langchain_core, and
* langgraph,
but the same error keeps appearing.
Note: when installing langgraph, langchain_core is upgraded to 0.2.18; however, that already gives an error during installation. To resolve that, I downgrade to 0.2.13.
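A quick way to sanity-check the pin: compare the downgraded version against the release assumed to have introduced the symbol. The `0.2.21` floor below is an assumption — the langchain-core changelog has the actual release that added `InjectedToolArg`:

```python
def as_tuple(v: str) -> tuple:
    """Parse a simple 'x.y.z' version string into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

pinned = "0.2.13"         # the version downgraded to above
assumed_floor = "0.2.21"  # hypothetical first release with InjectedToolArg
print(as_tuple(pinned) < as_tuple(assumed_floor))  # True -> pin predates the symbol
```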
### System Info
langchain_core.sys_info:
> langsmith: 0.1.85
> langchain_chroma: 0.1.2
> langchain_groq: 0.1.6
> langchain_objectbox: 0.1.0
> langchain_openai: 0.1.15
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
> langgraph: 0.2.1
> langserve: 0.2.2
pip freeze | grep langchain:
langchain==0.2.7
langchain-chroma==0.1.2
langchain-community==0.2.7
langchain-core==0.2.13
langchain-groq==0.1.6
langchain-objectbox==0.1.0
langchain-openai==0.1.15
langchain-text-splitters==0.2.2
langchainhub==0.1.20
System:
macOS Sonoma 14.5
Python:
3.12.4
| Not possible to import 'InjectedToolArg' from langchain_core.tools | https://api.github.com/repos/langchain-ai/langchain/issues/25144/comments | 4 | 2024-08-07T16:45:43Z | 2024-08-07T20:40:17Z | https://github.com/langchain-ai/langchain/issues/25144 | 2,453,910,363 | 25,144 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The goal of my code is to create a RAG with Elasticsearch and make requests to it. Here is how to reproduce the error.
Here is the creation of my RAG:
```python
embeddings = OllamaEmbeddings(
    model="mxbai-embed-large",
)
vectorstore = ElasticsearchStore(
embedding=embeddings,
index_name=collection, # Replace with your actual index name
es_url=ES_ENDPOINT,
vector_query_field ="vector",
query_field="content",
strategy=DenseVectorStrategy(hybrid=False)
#es_user="your_user", # Replace with your actual user
#es_password="your_password" # Replace with your actual password
)
llm = getLLM()
metadata_field_info, document_content_description, examples = METADATAS[collection]
# Create SelfQueryRetriever
self_query_retriever = SelfQueryRetriever.from_llm(
llm=llm,#_MultiQueryRetriever,
vectorstore=vectorstore,
document_contents=document_content_description,
metadata_field_info=metadata_field_info,
enable_limit=True,
chain_kwargs={
"examples": examples
},
search_kwargs={
        'k': 5, # Limit the number of retrieved documents
'num_candidates': 5,
},
verbose=True
)
return self_query_retriever
```
With `METADATAS`:
```python
DOCUMENTS_METADATA = (
[
AttributeInfo(
name="name",
description="The name of the candidate, resource, or the enterprise related to an opportunity.",
type="string",
),
AttributeInfo(
name="document_source",
description="The origin of the document, such as a resume, skill set, criteria, job description, or any specific role-related document for opportunities or candidates.",
type="string",
),
],
"""
    This dataset comprises a diverse array of documents indexed in Elasticsearch. Resource and candidate mean the same thing; opportunity does not.
The documents can vary widely, encompassing resumes, skill sets, text files, and detailed descriptions related to both candidates and opportunities.
    You will need to search in each document.
For instance, documents may include resumes of candidates, skill requirements, criteria for job roles, or detailed job descriptions for various opportunities.
""",
[
(
"Can you provide detailed information about Henry James, including his skills and experience?",
{
"query": "information, skills, experience, Henry James",
"filter": {
"bool": {
"should": [
{"match": {"name": "Henry James"}}
],
"minimum_should_match": 1
}
}
}
),
]
)
```
Here is the Elasticsearch index I use:

I am using an Ollama LLM :
```python
def getLLM(system = ""):
model_name = "llama3.1"#"qwen2"
llm = Ollama(
model=model_name,
temperature=0.01,
#keep_alive=-1,
num_ctx=50000,
system=system,
)
return llm
```
Now you can ask whatever you want to reproduce the error. If the question is too complex, running the code below produces the error message shown in the next section:
```python
print("Creating the LLM chain...")
llm = getLLM()
print("Setting up the chat history system")
history_aware_retriever = create_history_aware_retriever(
llm,
retriever,
contextualize_q_prompt,
)
qa_prompt = ChatPromptTemplate.from_messages(
[
("system", QA_SYTEM_PROMPT),
MessagesPlaceholder("chat_history"),
("user", context),
("human", "{input}"),
]
)
print("Creating the RAG chain")
question_answer_chain = create_stuff_documents_chain(llm, qa_prompt)
rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain)
conversational_rag_chain = RunnableWithMessageHistory(
rag_chain,
get_session_history,
input_messages_key="input",
history_messages_key="chat_history",
output_messages_key="answer",
)
print("LLM chain created.")
# RunnableWithMessageHistory expects a dict input and a session_id in the config
conversational_rag_chain.invoke(
    {"input": "What can you tell me about Yeo?"},
    config={"configurable": {"session_id": "1"}},
)
```
### Error Message and Stack Trace (if applicable)
WARNING:langchain_core.callbacks.manager:Error in RootListenersTracer.on_chain_end callback: KeyError('answer')
WARNING:langchain_core.callbacks.manager:Error in callback coroutine: KeyError('answer')
{'input': 'Que peux-tu me dire sur Yeo ?', 'chat_history': []}
{'query': 'Yeo', 'filter': {'bool': {'should': [{'match': {'matadata.name': 'Yeo'}}], 'minimum_should_match': 1}}, 'limit': None}
Traceback (most recent call last):
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain/chains/query_constructor/base.py", line 54, in parse
if parsed["query"] is None or len(parsed["query"]) == 0:
~~~~~~^^^^^^^^^
TypeError: string indices must be integers, not 'str'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/skikk/maia/api/LLM/Chat.py", line 23, in answer_stream
for chunk in llm.stream(
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 5231, in stream
yield from self.bound.stream(
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 5231, in stream
yield from self.bound.stream(
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3253, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3240, in transform
yield from self._transform_stream_with_config(
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2053, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3202, in _transform
for output in final_pipeline:
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1289, in transform
yield from self.stream(final, config, **kwargs)
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/branch.py", line 364, in stream
for chunk in self.default.stream(
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 5231, in stream
yield from self.bound.stream(
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3253, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3240, in transform
yield from self._transform_stream_with_config(
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2053, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3202, in _transform
for output in final_pipeline:
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/passthrough.py", line 577, in transform
yield from self._transform_stream_with_config(
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2053, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/passthrough.py", line 556, in _transform
for chunk in for_passthrough:
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/utils/iter.py", line 66, in tee_peer
item = next(iterator)
^^^^^^^^^^^^^^
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/passthrough.py", line 577, in transform
yield from self._transform_stream_with_config(
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2053, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/passthrough.py", line 567, in _transform
yield cast(Dict[str, Any], first_map_chunk_future.result())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/concurrent/futures/_base.py", line 456, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3666, in transform
yield from self._transform_stream_with_config(
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2053, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3651, in _transform
chunk = AddableDict({step_name: future.result()})
^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/usr/lib/python3.12/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 5267, in transform
yield from self.bound.transform(
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1289, in transform
yield from self.stream(final, config, **kwargs)
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/branch.py", line 344, in stream
for chunk in runnable.stream(
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3253, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3240, in transform
yield from self._transform_stream_with_config(
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2053, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 3202, in _transform
for output in final_pipeline:
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1289, in transform
yield from self.stream(final, config, **kwargs)
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 854, in stream
yield self.invoke(input, config, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/retrievers.py", line 221, in invoke
raise e
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/retrievers.py", line 214, in invoke
result = self._get_relevant_documents(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain/retrievers/self_query/base.py", line 263, in _get_relevant_documents
structured_query = self.query_constructor.invoke(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 5060, in invoke
return self.bound.invoke(
^^^^^^^^^^^^^^^^^^
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 2875, in invoke
input = step.invoke(input, config)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/output_parsers/base.py", line 192, in invoke
return self._call_with_config(
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/base.py", line 1784, in _call_with_config
context.run(
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/runnables/config.py", line 404, in call_func_with_variable_args
return func(input, **kwargs) # type: ignore[call-arg]
^^^^^^^^^^^^^^^^^^^^^
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/output_parsers/base.py", line 193, in <lambda>
lambda inner_input: self.parse_result([Generation(text=inner_input)]),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain_core/output_parsers/base.py", line 237, in parse_result
return self.parse(result[0].text)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/skikk/maia/.venv/lib/python3.12/site-packages/langchain/chains/query_constructor/base.py", line 66, in parse
raise OutputParserException(
langchain_core.exceptions.OutputParserException: Parsing text
```json
{
"query": "Yeo",
"filter": {
"bool": {
"should": [
{
"match": {
"matadata.name": "Yeo"
}
}
],
"minimum_should_match": 1
}
},
"limit": null
}
```
raised following error:
string indices must be integers, not 'str'
### Description
It is impossible to make complex requests without modifying the code.
To create a RAG that filters on your metadata with Elasticsearch in LangChain, you need to use the `SelfQueryRetriever` class.
When you use it the default way with a simple request, it works:
```log
INFO: Generated text to structured : Here is the structured request for the user query "What about skills of Yeo ?":
json
{
"query": "skills, Yeo",
"filter": "and(eq(\"name\", \"Yeo\"))"
}
This matches the schema provided earlier. The `query` field contains the text string "skills, Yeo", and the `filter` field contains a logical condition statement using the `eq` comparator to match the attribute `name` with the value `"Yeo"`.
INFO: Query intern RAG to Elastic: skills, Yeo
INFO: Filter intern RAG to Elastic: and(eq("name", "Yeo"))
INFO:langchain.retrievers.self_query.base:Generated Query: query='skills, Yeo' filter=Comparison(comparator=<Comparator.EQ: 'eq'>, attribute='name', value='Yeo') limit=None
```
Here you can see that the `filter=Comparison(comparator=<Comparator.EQ: 'eq'>, ...)` part is not empty, because it uses the `ast_parse` function in the `langchain/chains/query_constructor/base.py` [file](https://github.com/langchain-ai/langchain/tree/master/libs/langchain/langchain/chains/query_constructor):
```python
parsed["filter"] = self.ast_parse(parsed["filter"])
```
In the `parse` function:
```python
def parse(self, text: str) -> StructuredQuery:
try:
expected_keys = ["query", "filter"]
allowed_keys = ["query", "filter", "limit"]
parsed = parse_and_check_json_markdown(text, expected_keys)
if parsed["query"] is None or len(parsed["query"]) == 0:
parsed["query"] = " "
if parsed["filter"] == "NO_FILTER" or not parsed["filter"]:
parsed["filter"] = None
else:
parsed["filter"] = self.ast_parse(parsed["filter"])
if not parsed.get("limit"):
parsed.pop("limit", None)
return StructuredQuery(
**{k: v for k, v in parsed.items() if k in allowed_keys}
)
except Exception as e:
raise OutputParserException(
f"Parsing text\n{text}\n raised following error:\n{e}"
)
```
This raises the error, because my `parsed["filter"]` is no longer a `str`.
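A stand-alone illustration of the mismatch: `ast_parse` expects the filter as a DSL string, but the model returned a raw Elasticsearch bool query as a dict (both values taken from the logs above):

```python
# Filter shape the parser's ast_parse step can handle (a DSL string):
dsl_filter = 'and(eq("name", "Yeo"))'
# Filter shape the model actually produced (an Elasticsearch dict):
es_filter = {"bool": {"should": [{"match": {"name": "Yeo"}}],
                      "minimum_should_match": 1}}
print(isinstance(dsl_filter, str), isinstance(es_filter, str))  # True False
```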
So, to resolve the problem, I created another parse function for the dict:
```python
def parse_dict_to_filter_directive(self, filter_dict) -> Optional[FilterDirective]:
if "bool" in filter_dict:
bool_filter = filter_dict["bool"]
if "should" in bool_filter:
return Operation(
operator=Operator.OR,
arguments=[self.parse_dict_to_filter_directive(cond) for cond in bool_filter["should"]]
)
# Add handling for other bool conditions like 'must' (AND), 'must_not' (NOT) here
elif "match" in filter_dict:
return Comparison(
comparator=Comparator.EQ, # Simplification, actual implementation might need mapping
attribute=list(filter_dict["match"].keys())[0],
value=list(filter_dict["match"].values())[0]
)
elif "terms" in filter_dict:
# Handling 'terms' as multiple EQ conditions wrapped in an OR operation
attribute, values = next(iter(filter_dict["terms"].items()))
return Operation(
operator=Operator.OR,
arguments=[
Comparison(comparator=Comparator.EQ, attribute=attribute, value=value)
for value in values
]
)
# Extend this function to handle other cases as needed
return None
```
Then I modify the parse function :
```python
def parse(self, text: str) -> StructuredQuery:
try:
print("INFO: Generated text to structured :", text)
expected_keys = ["query", "filter"]
allowed_keys = ["query", "filter", "limit"]
parsed = parse_and_check_json_markdown(text, expected_keys)
print("INFO: Query intern RAG to Elastic: ",parsed["query"])
print("INFO: Filter intern RAG to Elastic: ", parsed["filter"])
if parsed["query"] is None or len(parsed["query"]) == 0:
parsed["query"] = " "
if parsed["filter"] == "NO_FILTER" or not parsed["filter"]:
parsed["filter"] = None
elif isinstance(parsed["filter"], dict):
parsed["filter"] = self.parse_dict_to_filter_directive(parsed["filter"])
else:
parsed["filter"] = self.ast_parse(parsed["filter"])
if not parsed.get("limit"):
parsed.pop("limit", None)
return StructuredQuery(
**{k: v for k, v in parsed.items() if k in allowed_keys}
)
except Exception as e:
raise OutputParserException(
f"Parsing text\n{text}\n raised following error:\n{e}"
)
```
I left the print statements in to keep a log of the requests.
I would like to know if there is another way than changing the LangChain code. Would it be possible to supply such a parse function through a parent `SelfQueryRetriever` class?
### System Info
System Information
------------------
> OS: Linux
> OS Version: #39-Ubuntu SMP PREEMPT_DYNAMIC Fri Jul 5 21:49:14 UTC 2024
> Python Version: 3.12.3 (main, Jul 31 2024, 17:43:48) [GCC 13.2.0]
Package Information
-------------------
> langchain_core: 0.2.24
> langchain: 0.2.11
> langchain_community: 0.2.6
> langsmith: 0.1.83
> langchain_elasticsearch: 0.2.2
> langchain_openai: 0.1.19
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Complex request to Elasticsearch provide "langchain_core.exceptions.OutputParserException: Parsing text" when using "from langchain.retrievers.self_query.base import SelfQueryRetriever" | https://api.github.com/repos/langchain-ai/langchain/issues/25141/comments | 0 | 2024-08-07T14:27:48Z | 2024-08-07T14:30:33Z | https://github.com/langchain-ai/langchain/issues/25141 | 2,453,637,838 | 25,141 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os
from langchain_google_vertexai import ChatVertexAI
from langchain_community.tools.tavily_search import TavilySearchResults
from langgraph.prebuilt import create_react_agent
from langgraph.checkpoint.sqlite import SqliteSaver
# The following environment variables have to be set
# os.environ["GOOGLE_API_KEY"] = loaded from .zshrc
# os.environ["LANGCHAIN_API_KEY"] = loaded from .zshrc
# os.environ["TAVILY_API_KEY"] = loaded from .zshrc
model = ChatVertexAI(model="gemini-1.5-flash")
search = TavilySearchResults(max_results=2)
tools = [search]
memory = SqliteSaver.from_conn_string(":memory:")
agent_executor = create_react_agent(model, tools, checkpointer=memory)
```
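One plausible cause (an assumption, based on similar reports against langgraph 0.2): `SqliteSaver.from_conn_string` became a context manager in recent releases, so `memory` above would be a `_GeneratorContextManager` rather than a checkpointer, which would then fail validation in `compile()`. A stdlib stand-in showing the shape of that mistake:

```python
from contextlib import contextmanager

@contextmanager
def from_conn_string(conn: str):
    """Stand-in for an API that *yields* its resource instead of returning it."""
    yield f"saver({conn})"

memory = from_conn_string(":memory:")  # not a saver: a context manager object
print(type(memory).__name__)           # _GeneratorContextManager

with from_conn_string(":memory:") as saver:
    print(saver)                       # saver(:memory:)
```

If that is the cause, entering the context manager (or using whichever non-context-manager constructor the installed langgraph version provides) would be the fix.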
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[3], line 18
     14 tools = [search]
     16 memory = SqliteSaver.from_conn_string(":memory:")
---> 18 agent_executor = create_react_agent(model, tools, checkpointer=memory)

File ~/Development/.venv/lib/python3.12/site-packages/langgraph/_api/deprecation.py:80, in deprecated_parameter.<locals>.decorator.<locals>.wrapper(*args, **kwargs)
     72 if arg_name in kwargs:
     73     warnings.warn(
     74         f"Parameter '{arg_name}' in function '{func.__name__}' is "
     75         f"deprecated as of version {since} and will be removed in version {removal}. "
(...)
     78         stacklevel=2,
     79     )
---> 80 return func(*args, **kwargs)

File ~/Development/.venv/lib/python3.12/site-packages/langgraph/prebuilt/chat_agent_executor.py:511, in create_react_agent(model, tools, state_schema, messages_modifier, state_modifier, checkpointer, interrupt_before, interrupt_after, debug)
    506 workflow.add_edge("tools", "agent")
    508 # Finally, we compile it!
    509 # This compiles it into a LangChain Runnable,
    510 # meaning you can use it as you would any other runnable
--> 511 return workflow.compile(
    512     checkpointer=checkpointer,
    513     interrupt_before=interrupt_before,
    514     interrupt_after=interrupt_after,
    515     debug=debug,
    516 )

File ~/Development/.venv/lib/python3.12/site-packages/langgraph/graph/state.py:431, in StateGraph.compile(self, checkpointer, interrupt_before, interrupt_after, debug)
    411 output_channels = (
    412     "__root__"
    413     if len(self.schemas[self.output]) == 1
(...)
    419 ]
    420 )
    421 stream_channels = (
    422     "__root__"
    423     if len(self.channels) == 1 and "__root__" in self.channels
(...)
    428 ]
    429 )
--> 431 compiled = CompiledStateGraph(
    432     builder=self,
    433     config_type=self.config_schema,
    434     nodes={},
    435     channels={**self.channels, START: EphemeralValue(self.input)},
    436     input_channels=START,
    437     stream_mode="updates",
    438     output_channels=output_channels,
[439](https://file+.vscode-resource.vscode-cdn.net/Users/florentremis/Development/~/Development/.venv/lib/python3.12/site-packages/langgraph/graph/state.py:439) stream_channels=stream_channels,
[440](https://file+.vscode-resource.vscode-cdn.net/Users/florentremis/Development/~/Development/.venv/lib/python3.12/site-packages/langgraph/graph/state.py:440) checkpointer=checkpointer,
[441](https://file+.vscode-resource.vscode-cdn.net/Users/florentremis/Development/~/Development/.venv/lib/python3.12/site-packages/langgraph/graph/state.py:441) interrupt_before_nodes=interrupt_before,
[442](https://file+.vscode-resource.vscode-cdn.net/Users/florentremis/Development/~/Development/.venv/lib/python3.12/site-packages/langgraph/graph/state.py:442) interrupt_after_nodes=interrupt_after,
[443](https://file+.vscode-resource.vscode-cdn.net/Users/florentremis/Development/~/Development/.venv/lib/python3.12/site-packages/langgraph/graph/state.py:443) auto_validate=False,
[444](https://file+.vscode-resource.vscode-cdn.net/Users/florentremis/Development/~/Development/.venv/lib/python3.12/site-packages/langgraph/graph/state.py:444) debug=debug,
[445](https://file+.vscode-resource.vscode-cdn.net/Users/florentremis/Development/~/Development/.venv/lib/python3.12/site-packages/langgraph/graph/state.py:445) )
[447](https://file+.vscode-resource.vscode-cdn.net/Users/florentremis/Development/~/Development/.venv/lib/python3.12/site-packages/langgraph/graph/state.py:447) compiled.attach_node(START, None)
[448](https://file+.vscode-resource.vscode-cdn.net/Users/florentremis/Development/~/Development/.venv/lib/python3.12/site-packages/langgraph/graph/state.py:448) for key, node in self.nodes.items():
File ~/Development/.venv/lib/python3.12/site-packages/pydantic/v1/main.py:341, in BaseModel.__init__(__pydantic_self__, **data)
[339](https://file+.vscode-resource.vscode-cdn.net/Users/florentremis/Development/~/Development/.venv/lib/python3.12/site-packages/pydantic/v1/main.py:339) values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
[340](https://file+.vscode-resource.vscode-cdn.net/Users/florentremis/Development/~/Development/.venv/lib/python3.12/site-packages/pydantic/v1/main.py:340) if validation_error:
--> [341](https://file+.vscode-resource.vscode-cdn.net/Users/florentremis/Development/~/Development/.venv/lib/python3.12/site-packages/pydantic/v1/main.py:341) raise validation_error
[342](https://file+.vscode-resource.vscode-cdn.net/Users/florentremis/Development/~/Development/.venv/lib/python3.12/site-packages/pydantic/v1/main.py:342) try:
[343](https://file+.vscode-resource.vscode-cdn.net/Users/florentremis/Development/~/Development/.venv/lib/python3.12/site-packages/pydantic/v1/main.py:343) object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for CompiledStateGraph
checkpointer
instance of BaseCheckpointSaver expected (type=type_error.arbitrary_type; expected_arbitrary_type=BaseCheckpointSaver)
### Description
I'm trying to follow the Agent tutorial [here](https://python.langchain.com/v0.2/docs/tutorials/agents/).
When I reach the "Adding in Memory" section, I get an error message when creating the agent.
It's a validation error complaining that an instance of BaseCheckpointSaver was expected. The surprising thing is that SqliteSaver does inherit from BaseCheckpointSaver, so I don't understand what the problem is.
I've tried replacing SqliteSaver with the async version but I get the same error.
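One plausible cause (an assumption, not confirmed from the report) is that two *different* `BaseCheckpointSaver` class objects end up loaded, e.g. from mismatched `langgraph` / checkpoint package versions: pydantic v1's arbitrary-type check does `isinstance` against the class object the model was defined with, so a subclass of a second copy of the class fails validation even though the names match. A pure-Python sketch of that failure mode (all names below are illustrative stubs, not the real packages):

```python
import types

def make_module(name: str) -> types.ModuleType:
    # Build a module containing its own BaseCheckpointSaver class object;
    # two calls yield two distinct classes with the same name.
    mod = types.ModuleType(name)

    class BaseCheckpointSaver:
        pass

    mod.BaseCheckpointSaver = BaseCheckpointSaver
    return mod

old_pkg = make_module("checkpoint_v1")  # the copy the pydantic model validates against
new_pkg = make_module("checkpoint_v2")  # the copy SqliteSaver inherits from

class SqliteSaver(new_pkg.BaseCheckpointSaver):
    pass

saver = SqliteSaver()
print(isinstance(saver, new_pkg.BaseCheckpointSaver))  # True
print(isinstance(saver, old_pkg.BaseCheckpointSaver))  # False -> "instance of BaseCheckpointSaver expected"
```

If this is the cause, aligning the langgraph-related package versions so only one copy of the base class is importable would make the check pass.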
### System Info
langchain==0.2.12
langchain-chroma==0.1.2
langchain-community==0.2.11
langchain-core==0.2.28
langchain-google-vertexai==1.0.8
langchain-text-splitters==0.2.2
Platform: MacOS
Python 3.12.3 | Validation Error in Agent Tutorial when calling create_react_agent with SqliteSaver checkpointer | https://api.github.com/repos/langchain-ai/langchain/issues/25137/comments | 3 | 2024-08-07T12:21:02Z | 2024-08-09T11:40:35Z | https://github.com/langchain-ai/langchain/issues/25137 | 2,453,348,729 | 25,137 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.llms import MLXPipeline
from langchain_community.chat_models.mlx import ChatMLX
from langchain.agents import AgentExecutor, load_tools
from langchain_core.prompts.chat import (
ChatPromptTemplate,
HumanMessagePromptTemplate,
SystemMessagePromptTemplate,
)
from langchain.tools.render import render_text_description
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.agents.output_parsers import (
ReActJsonSingleInputOutputParser,
)
system = '''a'''
human = '''{input}
{agent_scratchpad}
'''
def get_custom_prompt():
messages = [
SystemMessagePromptTemplate.from_template(system),
HumanMessagePromptTemplate.from_template(human),
]
input_variables = ["agent_scratchpad", "input", "tool_names", "tools"]
return ChatPromptTemplate(input_variables=input_variables, messages=messages)
llm = MLXPipeline.from_model_id(
model_id="mlx-community/Meta-Llama-3-8B-Instruct-4bit",
)
chat_model = ChatMLX(llm=llm)
prompt = get_custom_prompt()
prompt = prompt.partial(
tools=render_text_description([]),
tool_names=", ".join([t.name for t in []]),
)
chat_model_with_stop = chat_model.bind(stop=["\nObservation"])
agent = (
{
"input": lambda x: x["input"],
"agent_scratchpad": lambda x: format_log_to_str(x["intermediate_steps"]),
}
| prompt
| chat_model_with_stop
| ReActJsonSingleInputOutputParser()
)
# instantiate AgentExecutor
agent_executor = AgentExecutor(agent=agent, tools=[], verbose=True)
agent_executor.invoke(
{
"input": "What is your name?"
}
)
```
### Error Message and Stack Trace (if applicable)
File "/Users/==/.pyenv/versions/hack/lib/python3.11/site-packages/langchain_community/chat_models/mlx.py", line 184, in _stream
text = self.tokenizer.decode(token.item())
^^^^^^^^^^
AttributeError: 'int' object has no attribute 'item'
Uncaught exception. Entering post mortem debugging
### Description
Hi there,
I assume this bug is similar to https://github.com/langchain-ai/langchain/issues/20561. Why? Because if you locally apply the changes from this patch, https://github.com/langchain-ai/langchain/commit/ad48f77e5733a0fd6e027d7fe6feecf6bed035e1 (from line 174 onward), to langchain_community/chat_models/mlx.py, the bug disappears.
Best wishes
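For reference, a minimal sketch of the guard pattern that patch applies (the stub classes below are hypothetical; the real fix lives in `langchain_community/chat_models/mlx.py`): newer mlx versions can yield tokens as plain `int`s rather than array-likes, so `.item()` should only be called when it exists.

```python
class StubTokenizer:
    # Hypothetical stand-in for the real MLX tokenizer.
    def decode(self, token_id: int) -> str:
        return f"<tok:{token_id}>"

class ZeroDimToken:
    # Mimics an array-like token that exposes .item().
    def __init__(self, value: int) -> None:
        self._value = value

    def item(self) -> int:
        return self._value

def decode_token(tokenizer, token) -> str:
    # Only unwrap with .item() when the token actually supports it,
    # avoiding AttributeError on plain ints.
    token_id = token.item() if hasattr(token, "item") else token
    return tokenizer.decode(token_id)

tok = StubTokenizer()
print(decode_token(tok, 42))                # plain int: <tok:42>
print(decode_token(tok, ZeroDimToken(42)))  # array-like: <tok:42>
```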
### System Info
langchain==0.2.12
langchain-community==0.2.11
langchain-core==0.2.28
langchain-experimental==0.0.64
langchain-huggingface==0.0.3
langchain-text-splitters==0.2.2
mac
3.11.9 | Mistype issue using MLX Chat Model via MLXPipeline | https://api.github.com/repos/langchain-ai/langchain/issues/25134/comments | 0 | 2024-08-07T09:27:26Z | 2024-08-07T09:30:04Z | https://github.com/langchain-ai/langchain/issues/25134 | 2,453,012,391 | 25,134 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
"""
You will be asked a question about a dataframe and you will determine the necessary function that should be run to give a response to the question. You won't answer the question, you will only state the name of a function or respond with NONE as I will explain you later.
I will explain you a few functions which you can use if the user asks you to analyze the data. The methods will provide you the necessary analysis and prediction. You must state the method's name and the required parameters to use it. Each method has a dataframe as it's first parameter which will be given to you so you can just state DF for that parameter. Also, if the user's question doesn't state specific metrics, you can pass ['ALL'] as the list of metrics. Your answer must only contain the function name with the parameters.
The first method is: create_prophet_predictions(df, metrics_for_forecasting, periods=28),
It takes 3 arguments, 1st is a dataframe, 2nd is a list of metrics which we want to get the forecast for, and the 3rd, an optional period argument that represents the number of days which we want to extend our dataframe for.
It returns an extended version of the provided initial dataframe by adding the future prediction results. It returns the initial dataframe without making additions if it fails to forecast so there is no error raised in any case. You will use this method if the user wishes to learn about the state of his campaigns' future. The user doesn't have to state a period, you can just choose 2 weeks or a month to demonstrate.
The second method is: calculate_statistics(df, metrics),
It takes 2 arguments, 1st is a dataframe, and 2nd is a list of metrics.
It returns a dictionary of different statistics for each metric provided in the 2nd parameter.
The returned dictionary looks like this:
{'metric': [], 'mean': [], 'median': [], 'std_dev': [], 'variance': [], 'skewness': [], 'kurtosis': [], 'min': [], 'max': [], '25th_percentile': [], '75th_percentile': [], 'trend_slope': [], 'trend_intercept': [], 'r_value': [], 'p_value': [], 'std_err': []}
If any of the keys of this dictionary is asked in the question, this method should be used. Also, if the user asks an overall analysis of his campaigns, this method should be used with metrics parameter of the function as ['ALL'] to comment on specific metrics. These statistics provide a comprehensive overview of the central tendency, dispersion, distribution shape, and trend characteristics of the data, as well as the relationship between variables in regression analysis and some simple statistics like mean, min and max can help you answer questions.
The third method is: feature_importance_analysis(df, target_column, size_column, feature_columns, is_regression=True),
It takes 5 parameters. 1st parameter is the dataframe, 2nd parameter is the column name of the target variable, 3rd parameter is the name of the column which contains the size of our target column, and it is used to adjust the dataframe, 4th parameter is the feature_columns list and it should be the list of features which we want to analyze the importance of, and the 5th parameter is the boolean value representing if our model is a regression model or classification model (True = regression, False = classification)
It uses machine learning algorithms and calculates feature importance of some features provided by you. It also gives information about our audience size and target size. And lastly it gives the single and combined shap values of the given features to determine the contributions of each of them to the feature importance analysis. If the question contains the phrases "audience size" or "target size" or "importance" or if the user wants to know why do the model thinks that some features will impact our results more significantly, it is a very high chance that you will use this function.
```python
analysis_examples = [
{
"question": "Can you analyze my top performing 10 Google Ads campaigns in terms of CTR?",
"answer": "calculate_statistics(DF, ['ALL'])"
},
{
"question": "Can you give me the projection of my campaign's cost and cpm results for the next week?",
"answer": "create_prophet_predictions(DF, ['cost', 'cpm'], 7)"
},
{
"question": "Which metric in my last google ads campaign serves a key role?",
"answer": "feature_importance_analysis(DF, 'revenue', 'cost', ['ctr', 'roas', 'cpc', 'clicks', 'impressions'], True)"
},
{
"question": "Can you give me the projection of my campaign's cost and cpm results for the next week?",
"answer": "create_prophet_predictions(DF, ['cost', 'cpm'], 7)"
},
{
"question": "What is the mean of the cost values of my top performing 10 campaigns based on ROAS values?",
"answer": "calculate_statistics(DF, ['cost'])"
},
]
analysis_example_prompt = ChatPromptTemplate.from_messages(
[
("human", "{question}"),
("ai", "{answer}"),
]
)
analysis_few_shot_prompt = FewShotChatMessagePromptTemplate(
example_prompt=analysis_example_prompt,
examples=analysis_examples,
)
with open("/analysis/guidance_statistics_funcs.txt", "r") as f:
guidance_text = f.read()
analysis_final_prompt = ChatPromptTemplate.from_messages(
[
("system", guidance_text),
analysis_few_shot_prompt,
("human", "{input}"),
]
)
analysis_chain = analysis_final_prompt | ChatOpenAI(model="gpt-4o-mini", temperature=0) | StrOutputParser()
response = analysis_chain.invoke({"input": analysis_sentence})
```
### Error Message and Stack Trace (if applicable)
ErrorMessage: 'Input to ChatPromptTemplate is missing variables {"\'metric\'"}. Expected: ["\'metric\'", \'input\'] Received: [\'input\']'
I couldn't provide the whole stack trace since I run it on a web app. But the exception is raised in the invoke process.
### Description
from langchain_core.prompts import ChatPromptTemplate
The error is caused by my prompt, specifically the guidance text which I passed as the "system" message to the ChatPromptTemplate. I described a dictionary structure to the LLM which a function I will use returns, but the curly braces I provided somehow caused an injection-like problem, causing my chain to expect more inputs than I provided. When I deleted the first key of the dictionary in my prompt, this time it expected the 2nd key of the dictionary as an input to the chain. And once I deleted the curly braces in my system prompt, the issue resolved. So I am certain that this problem is caused by the ChatPromptTemplate object.
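This is not really injection: LangChain's default f-string templates share Python's `str.format` brace semantics, so every unescaped `{...}` in the system text becomes an input variable (the field name is the part before the first `:`, which is why `'metric'` showed up). Doubling the braces (`{{` / `}}`) makes them literal. A stdlib-only sketch of the same parsing:

```python
from string import Formatter

def template_vars(template: str) -> list:
    # Mirror how f-string templates extract variables: the field name is
    # whatever precedes ':' inside an unescaped {...} placeholder.
    return [field for _, field, _, _ in Formatter().parse(template) if field]

broken = "The dict looks like {'metric': [], 'mean': []} for {input}"
print(template_vars(broken))   # ["'metric'", 'input'] -- unintended variable

escaped = "The dict looks like {{'metric': [], 'mean': []}} for {input}"
print(template_vars(escaped))  # ['input']
```

So escaping the braces in `guidance_statistics_funcs.txt` (as the author found) is the expected fix rather than a workaround.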
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:19:05 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T8112
> Python Version: 3.11.9 (v3.11.9:de54cf5be3, Apr 2 2024, 07:12:50) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.2.28
> langchain: 0.2.10
> langchain_community: 0.2.7
> langsmith: 0.1.82
> langchain_chroma: 0.1.1
> langchain_experimental: 0.0.62
> langchain_openai: 0.1.20
> langchain_text_splitters: 0.2.1
> langchainhub: 0.1.20
> langgraph: 0.1.5
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve
| Prompt Template Injection??? | https://api.github.com/repos/langchain-ai/langchain/issues/25132/comments | 2 | 2024-08-07T08:50:27Z | 2024-08-07T17:00:44Z | https://github.com/langchain-ai/langchain/issues/25132 | 2,452,933,304 | 25,132 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_nvidia_ai_endpoints import ChatNVIDIA
from langchain_community.utilities.sql_database import SQLDatabase
from langchain_community.agent_toolkits.sql.base import create_sql_agent
from langchain_community.agent_toolkits.sql.toolkit import SQLDatabaseToolkit
from langchain.agents.agent_types import AgentType
import os
import logging

client = ChatNVIDIA(
    model="meta/llama-3.1-405b-instruct",
    api_key="api_key",
    temperature=0.2,
    top_p=0.7,
    max_tokens=1024,
)

inventory_db_path = os.path.expanduser('~/database.db')
db = SQLDatabase.from_uri(f"sqlite:///{inventory_db_path}")
toolkit = SQLDatabaseToolkit(db=db, llm=client)

agent_executor = create_sql_agent(
    llm=client,
    toolkit=toolkit,
    verbose=True,
)

def handle_conversation(context, user_input):
    try:
        result = agent_executor.run(user_input)
        return result
    except Exception as e:
        logging.error(f"Exception in handle_conversation: {e}")
        return "Error: Exception occurred while processing the request."
```
### Error Message and Stack Trace (if applicable)
Action: sql_db_schema
Action Input: inventory, inband_ping
ObservDEBUG:urllib3.connectionpool:Starting new HTTPS connection (1): integrate.api.nvidia.com:443
DEBUG:urllib3.connectionpool:https://integrate.api.nvidia.com:443 "POST /v1/chat/completions HTTP/11" 200 None
Error: table_names {'inband_ping\nObserv'} not found in databaseIt looks like you're having trouble getting the schema of the 'inventory' table. Let me try a different approach.
### Description
Currently running langchain==0.2.11 with llama3.1 against a SQLite DB to query data from the tables, but I'm running into an issue where the LLM appends "\nObserv" to the table name while searching the database. I tried different LLM models (llama, mistral) and ran into the same issue.
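The truncated `Observ` suggests the ReAct "\nObservation" marker is leaking into the Action Input because generation isn't being stopped at that token. Until that root cause is fixed, one workaround sketch (the helper name is illustrative; it would have to be wired in around the `sql_db_schema` tool input) is to sanitize the table names before the lookup:

```python
def clean_table_names(action_input: str) -> str:
    # Drop a truncated "Observation" marker that the ReAct loop can leave
    # at the end of the Action Input when the LLM isn't stopped in time,
    # then trim whitespace and stray quotes.
    cleaned = action_input.split("\nObserv")[0]
    return cleaned.strip().strip('"').strip("'")

print(clean_table_names("inventory, inband_ping\nObserv"))  # inventory, inband_ping
print(clean_table_names("inventory"))                       # inventory
```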
### System Info
pip freeze | grep langchain
langchain==0.2.11
langchain-community==0.0.20
langchain-core==0.2.28
langchain-nvidia-ai-endpoints==0.2.0
langchain-ollama==0.1.1
langchain-text-splitters==0.2.2
python==3.12.4 | Langchain sqlagent - Error: table_names {'inventory\nObserv'} not found in database | https://api.github.com/repos/langchain-ai/langchain/issues/25122/comments | 2 | 2024-08-07T00:32:28Z | 2024-08-10T12:20:00Z | https://github.com/langchain-ai/langchain/issues/25122 | 2,451,897,631 | 25,122 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
retriever = SelfQueryRetriever.from_llm(
    llm, vectorstore, document_content_description, metadata_field_info, verbose=True
)
```
### Error Message and Stack Trace (if applicable)
contain also doesn't work
ValueError: Received disallowed comparator contain. Allowed comparators are [<Comparator.EQ: 'eq'>, <Comparator.NE: 'ne'>, <Comparator.GT: 'gt'>, <Comparator.GTE: 'gte'>, <Comparator.LT: 'lt'>, <Comparator.LTE: 'lte'>]
### Description
The allowed comparators in SelfQueryRetriever (with the Chroma translator) do not include `contain` or `in`.
### System Info
Python 3.12
langchain 0.2.12
chroma 0.5.5 | SelfQueryRetriever alloowed operators does not allow contain Chroma | https://api.github.com/repos/langchain-ai/langchain/issues/25120/comments | 0 | 2024-08-06T22:51:47Z | 2024-08-06T22:54:22Z | https://github.com/langchain-ai/langchain/issues/25120 | 2,451,813,576 | 25,120 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code should instantiate a new `Chroma` instance from the supplied `List` of `Document`s:
```python
from langchain_chroma import Chroma
from langchain_community.embeddings import GPT4AllEmbeddings
from langchain_core.documents.base import Document
vectorstore = Chroma.from_documents(
documents=[Document(page_content="text", metadata={"source": "local"})],
embedding=GPT4AllEmbeddings(model_name='all-MiniLM-L6-v2.gguf2.f16.gguf'),
)
```
### Error Message and Stack Trace (if applicable)
```
ValidationError Traceback (most recent call last)
Cell In[10], line 7
2 from langchain_community.embeddings import GPT4AllEmbeddings
3 from langchain_core.documents.base import Document
5 vectorstore = Chroma.from_documents(
6 documents=[Document(page_content="text", metadata={"source": "local"})],
----> 7 embedding=GPT4AllEmbeddings(model_name='all-MiniLM-L6-v2.gguf2.f16.gguf'),
8 )
File ~/src/rag/.venv/lib/python3.10/site-packages/pydantic/v1/main.py:341, in BaseModel.__init__(__pydantic_self__, **data)
339 values, fields_set, validation_error = validate_model(__pydantic_self__.__class__, data)
340 if validation_error:
--> 341 raise validation_error
342 try:
343 object_setattr(__pydantic_self__, '__dict__', values)
ValidationError: 1 validation error for GPT4AllEmbeddings
__root__
gpt4all.gpt4all.Embed4All() argument after ** must be a mapping, not NoneType (type=type_error)
```
### Description
The code fragment above is based on the [Document Loading section of the Using local models tutorial](https://python.langchain.com/v0.1/docs/use_cases/question_answering/local_retrieval_qa/#document-loading).
The issue is that #21238 updated `GPT4AllEmbeddings.validate_environment()` to pass `gpt4all_kwargs` through to the `Embed4All` constructor, but did not consider existing (or new) code that does not supply a value for `gpt4all_kwargs` when creating a `GPT4AllEmbeddings`.
The workaround is to set `gpt4all_kwargs` to an empty dict when creating a `GPT4AllEmbeddings`:
```python
vectorstore = Chroma.from_documents(
documents=[Document(page_content="text", metadata={"source": "local"})],
embedding=GPT4AllEmbeddings(model_name='all-MiniLM-L6-v2.gguf2.f16.gguf', gpt4all_kwargs={}),
)
```
The fix, which I shall provide shortly as a PR, is for `GPT4AllEmbeddings.validate_environment()` to pass an empty dict to the `Embed4All` constructor if the incoming `gpt4all_kwargs` is not set:
```python
values["client"] = Embed4All(
model_name=values.get("model_name"),
n_threads=values.get("n_threads"),
device=values.get("device"),
**(values.get("gpt4all_kwargs") or {}),
)
```
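The `or {}` guard matters because `**` unpacking requires a mapping while the unset field holds `None`; `None or {}` falls back to an empty dict, so unpacking degrades to "no extra kwargs". A standalone illustration (the stub below stands in for the real `Embed4All` constructor):

```python
def embed4all_stub(**kwargs):
    # Stand-in for the Embed4All constructor: just records its kwargs.
    return kwargs

gpt4all_kwargs = None  # what the field holds when the caller omits it

# embed4all_stub(**gpt4all_kwargs)  # TypeError: argument after ** must be a mapping, not NoneType
print(embed4all_stub(**(gpt4all_kwargs or {})))  # {} -- safe fallback
```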
### System Info
```
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
> Python Version: 3.10.14 (main, Mar 20 2024, 14:43:31) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.2.28
> langchain: 0.2.12
> langchain_community: 0.2.11
> langsmith: 0.1.84
> langchain_chroma: 0.1.2
> langchain_openai: 0.1.16
> langchain_text_splitters: 0.2.2
> langchainhub: 0.1.20
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
``` | Instantiating GPT4AllEmbeddings with no gpt4all_kwargs argument raises a ValidationError | https://api.github.com/repos/langchain-ai/langchain/issues/25119/comments | 0 | 2024-08-06T22:46:45Z | 2024-08-06T22:49:25Z | https://github.com/langchain-ai/langchain/issues/25119 | 2,451,808,971 | 25,119 |
[
"langchain-ai",
"langchain"
] | ### URL
https://js.langchain.com/v0.2/docs/how_to/message_history/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
There is currently no documentation available on how to implement tool calling with message history in LangChain, or it cannot be found. The existing documentation(https://js.langchain.com/v0.2/docs/how_to/message_history/) provides examples of adding message history, but it does not cover integrating tool calling.
I suggest adding a section that demonstrates how to:
- Implement tool calling within the context of a message history.
- Configure tools to work seamlessly with historical messages.
- Use practical examples to illustrate the setup and usage.
This addition would be highly beneficial for users looking to leverage both features together.
### Idea or request for content:
_No response_ | DOC: Guide for Implementing Tool Calling with Message History in LangChain | https://api.github.com/repos/langchain-ai/langchain/issues/25099/comments | 0 | 2024-08-06T12:28:48Z | 2024-08-06T12:31:25Z | https://github.com/langchain-ai/langchain/issues/25099 | 2,450,764,229 | 25,099 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```py
import pandas as pd
from langchain_core.messages import ToolMessage
class Foo:
def __init__(self, bar: str) -> None:
self.bar = bar
foo = Foo("bar")
msg = ToolMessage(content="OK", artifact=foo, tool_call_id="123")
py_dict = msg.to_json() # ok, it's a dictionary,
data_frame = pd.DataFrame({"name": ["Alice", "Bob"], "age": [17, 19]})
msg = ToolMessage(content="Error", artifact=data_frame, tool_call_id="456")
py_dict = msg.to_json() # error, because DataFrame cannot be evaluated as a bool().
```
### Error Message and Stack Trace (if applicable)
```plain
Traceback (most recent call last):
File "/home/gbaian10/work/my_gpt/issue.py", line 16, in <module>
py_dict = msg.to_json() # error, because DataFrame cannot be evaluated as a bool().
File "/home/gbaian10/.local/lib/python3.10/site-packages/langchain_core/load/serializable.py", line 182, in to_json
lc_kwargs = {
File "/home/gbaian10/.local/lib/python3.10/site-packages/langchain_core/load/serializable.py", line 186, in <dictcomp>
and _is_field_useful(self, k, v)
File "/home/gbaian10/.local/lib/python3.10/site-packages/langchain_core/load/serializable.py", line 260, in _is_field_useful
return field.required is True or value or field.get_default() != value
File "/home/gbaian10/.local/lib/python3.10/site-packages/pandas/core/generic.py", line 1577, in __nonzero__
raise ValueError(
ValueError: The truth value of a DataFrame is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
```
### Description

The issue is caused by directly evaluating the truth value of the object, leading to an exception.
I think pandas df should be a rare exception, as most objects should be able to be evaluated for their value using bool().
But I think it should be possible to access Python objects within the ToolMessage artifact. Right?
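A possible defensive pattern for `_is_field_useful` (a sketch, not the actual library fix) is to treat "truthiness cannot be decided" as "keep the field" instead of propagating the exception; the `Ambiguous` class below stands in for a DataFrame so the sketch stays dependency-free:

```python
class Ambiguous:
    # Mimics pandas.DataFrame.__bool__, which raises instead of answering.
    def __bool__(self) -> bool:
        raise ValueError("The truth value of a DataFrame is ambiguous.")

def is_useful(value) -> bool:
    # Fall back to "useful" when the object refuses a truth-value check.
    try:
        return bool(value)
    except ValueError:
        return True

print(is_useful("OK"))         # True
print(is_useful(""))           # False
print(is_useful(Ambiguous()))  # True instead of raising
```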
### System Info
langchain==0.2.12
langchain-core==0.2.28
pandas==2.2.2
platform==linux
python-version==3.10.12 | An error might occur during execution in _is_field_useful within Serializable | https://api.github.com/repos/langchain-ai/langchain/issues/25095/comments | 1 | 2024-08-06T08:56:23Z | 2024-08-06T15:49:42Z | https://github.com/langchain-ai/langchain/issues/25095 | 2,450,329,128 | 25,095 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.vectorstores import Chroma

for doc in docs:
    vectordb = Chroma.from_documents(
        documents=doc,
        embedding=bge_embeddings)
```
Each round I re-initialize the vectordb, so why do documents from previous rounds still show up in the results? For example:
1) In the first round, I feed a document to Chroma, and the output is 'Document(page_content='工程预算总表(表一)建设项目名称....)'
2) In the second round, I feed another document to Chroma, and the output is '[Document(page_content='设计预算总表的总价值的除税价为452900.05元。......'), Document(page_content='工程预算总表(表一)名称....]'
In the second round I initialized a fresh vectordb, so why does the first round's document content still appear?
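One common cause (an assumption here, since the snippet doesn't show the client setup) is that every `Chroma.from_documents` call without a distinct `collection_name` or client writes into the same underlying collection, so "re-initializing" the Python wrapper object doesn't clear previously added documents. A toy model of that behavior:

```python
class ToyVectorStore:
    # Collections outlive individual wrapper objects, like a shared or
    # persistent Chroma client with the default collection name.
    _collections: dict = {}

    def __init__(self, collection_name: str = "default") -> None:
        self.docs = self._collections.setdefault(collection_name, [])

    @classmethod
    def from_documents(cls, documents, collection_name: str = "default"):
        store = cls(collection_name)
        store.docs.extend(documents)  # adds to whatever is already there
        return store

round1 = ToyVectorStore.from_documents(["doc A"])
round2 = ToyVectorStore.from_documents(["doc B"])  # same default collection
print(round2.docs)                                 # ['doc A', 'doc B']
print(ToyVectorStore.from_documents(["doc C"], "r3").docs)  # ['doc C']
```

If each round should start fresh, giving each round its own `collection_name` (or deleting the previous collection before re-loading) avoids the leak.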
### Error Message and Stack Trace (if applicable)
_No response_
### Description
langchain 0.0.354
### System Info
ubuntu 22
pytorch2.3
python 3.8 | There is a bug for Chroma. | https://api.github.com/repos/langchain-ai/langchain/issues/25089/comments | 4 | 2024-08-06T02:25:23Z | 2024-08-06T16:09:45Z | https://github.com/langchain-ai/langchain/issues/25089 | 2,449,800,643 | 25,089 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code raises an "LLM not supported" error:
```python
from langchain_aws.chat_models.bedrock import ChatBedrock
from langchain_community.llms.loading import load_llm_from_config
llm = ChatBedrock(
model_id='anthropic.claude-3-5-sonnet-20240620-v1:0',
model_kwargs={"temperature": 0.0}
)
config = llm.dict()
load_llm_from_config(config)
```
Something similar happens with `langchain_aws.chat_models.bedrock.ChatBedrockConverse`
### Error Message and Stack Trace (if applicable)
ValueError: Loading amazon_bedrock_chat LLM not supported
### Description
I am trying to use `load_llm_from_config` for a `ChatBedrock` LLM. It seems that `langchain_community.llms.get_type_to_cls_dict` does not include `amazon_bedrock_chat`. Moreover, the dictionary representation does not allow initializing the class as-is.
```python
from langchain_aws.chat_models.bedrock import ChatBedrock
llm = ChatBedrock(
model_id='anthropic.claude-3-5-sonnet-20240620-v1:0',
model_kwargs={"temperature": 0.0}
)
config = llm.dict()
llm_cls = config.pop("_type")
ChatBedrock(**config)
```
Raises
```
ValidationError: 5 validation errors for ChatBedrock
guardrailIdentifier
extra fields not permitted (type=value_error.extra)
guardrailVersion
extra fields not permitted (type=value_error.extra)
stream
extra fields not permitted (type=value_error.extra)
temperature
extra fields not permitted (type=value_error.extra)
trace
extra fields not permitted (type=value_error.extra)
```
# Possible solutions
Change dict representation of ChatBedrock and implement `amazon_bedrock_chat` in `get_type_to_cls_dict`.
Moreover, it seems that langchain-aws is moving to `ChatBedrockConverse`, which will need an additional implementation.
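As a stopgap until loading support lands, one workaround is to split the flat `llm.dict()` output back into constructor kwargs plus `model_kwargs` before re-instantiating. This is only a sketch — the set of top-level fields below is an assumption, so check it against ChatBedrock's actual declared fields:

```python
def rebuild_init_kwargs(config, known_fields=("model_id", "region_name", "model_kwargs")):
    """Split a flat llm.dict() into constructor kwargs for ChatBedrock.

    Keys the class does not declare top-level (e.g. temperature, stream,
    trace) are folded back into model_kwargs instead of being passed as
    top-level fields, which is what triggers the
    'extra fields not permitted' validation errors.
    """
    config = dict(config)
    config.pop("_type", None)
    init = {k: v for k, v in config.items() if k in known_fields}
    extras = {k: v for k, v in config.items() if k not in known_fields}
    init["model_kwargs"] = {**extras, **init.get("model_kwargs", {})}
    return init

cfg = {"_type": "amazon_bedrock_chat",
       "model_id": "anthropic.claude-3-5-sonnet-20240620-v1:0",
       "temperature": 0.0, "stream": None}
print(rebuild_init_kwargs(cfg))
# Then: ChatBedrock(**rebuild_init_kwargs(cfg))
```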
### System Info
Python 3.10.14
langchain-community: 0.2.10
langchain-aws: 0.1.13 | Outdated Bedrock LLM when using load model from config | https://api.github.com/repos/langchain-ai/langchain/issues/25086/comments | 2 | 2024-08-05T22:36:09Z | 2024-08-05T22:53:01Z | https://github.com/langchain-ai/langchain/issues/25086 | 2,449,589,170 | 25,086 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
loader = AzureAIDocumentIntelligenceLoader(api_endpoint=endpoint, api_key=key, file_path=file_path,
api_model="prebuilt-layout", api_version=api_version, analysis_features = ["ocrHighResolution"]
)
documents = loader.load()
for document in documents:
print(f"Page Content: {document.page_content}")
### Error Message and Stack Trace (if applicable)
_No response_
### Description
When trying the same document on the Azure portal (https://documentintelligence.ai.azure.com/studio/layout) with ocrHighResolution enabled, I get the correct OCR result. When the feature is disabled, I see obvious mistakes in the result. With LangChain, I get the same mistakes whether or not I pass the ocrHighResolution analysis feature.
### System Info
langchain==0.2.11
langchain-community==0.2.10
langchain-core==0.2.23
langchain-text-splitters==0.2.2
Platform: MacBook Pro (M2 Pro)
Python version: 3.11.5 | AzureAIDocumentIntelligenceLoader analysis feature ocrHighResolution not making any difference | https://api.github.com/repos/langchain-ai/langchain/issues/25081/comments | 3 | 2024-08-05T22:10:07Z | 2024-08-06T15:00:11Z | https://github.com/langchain-ai/langchain/issues/25081 | 2,449,562,037 | 25,081 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code:
```python
from typing import Any, Type
from langchain.pydantic_v1 import BaseModel, Field
from langchain_core.tools import BaseTool
from langchain_core.utils.function_calling import convert_to_openai_tool
name = "testing"
description = "testing"
def run(some_param, **kwargs):
pass
class ToolSchema(BaseModel, extra="allow"):
some_param: str = Field(default="", description="some_param")
class RunTool(BaseTool):
name = name
description = description
args_schema: Type[BaseModel] = ToolSchema
def _run(
self,
some_param: str = "",
) -> Any:
return run(
some_param=some_param,
**self.metadata,
)
convert_to_openai_tool(RunTool())
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/home/freire/dev/navi-ai-api/app/playground.py", line 34, in <module>
convert_to_openai_tool(RunTool())
File "/home/freire/dev/navi-ai-api/venv/lib/python3.10/site-packages/langchain_core/utils/function_calling.py", line 392, in convert_to_openai_tool
function = convert_to_openai_function(tool)
File "/home/freire/dev/navi-ai-api/venv/lib/python3.10/site-packages/langchain_core/utils/function_calling.py", line 363, in convert_to_openai_function
return cast(Dict, format_tool_to_openai_function(function))
File "/home/freire/dev/navi-ai-api/venv/lib/python3.10/site-packages/langchain_core/_api/deprecation.py", line 168, in warning_emitting_wrapper
return wrapped(*args, **kwargs)
File "/home/freire/dev/navi-ai-api/venv/lib/python3.10/site-packages/langchain_core/utils/function_calling.py", line 282, in format_tool_to_openai_function
if tool.tool_call_schema:
File "/home/freire/dev/navi-ai-api/venv/lib/python3.10/site-packages/langchain_core/tools.py", line 398, in tool_call_schema
return _create_subset_model(
File "/home/freire/dev/navi-ai-api/venv/lib/python3.10/site-packages/langchain_core/utils/pydantic.py", line 252, in _create_subset_model
return _create_subset_model_v1(
File "/home/freire/dev/navi-ai-api/venv/lib/python3.10/site-packages/langchain_core/utils/pydantic.py", line 184, in _create_subset_model_v1
field = model.__fields__[field_name]
KeyError: 'extra_data'
```
### Description
After updating langchain-core to 0.2.27+
Working fine on 0.2.26 or if i remove the `extra="allow"` option
### System Info
```
langchain==0.2.12
langchain-cli==0.0.28
langchain-community==0.2.10
langchain-core==0.2.26
langchain-openai==0.1.20
langchain-text-splitters==0.2.2
langchainhub==0.1.20
```
Linux
python3.10 (tested also on 3.12) | extra="allow" not working after langchain-core==0.2.27 | https://api.github.com/repos/langchain-ai/langchain/issues/25072/comments | 0 | 2024-08-05T20:21:58Z | 2024-08-05T20:47:55Z | https://github.com/langchain-ai/langchain/issues/25072 | 2,449,415,532 | 25,072 |
[
"langchain-ai",
"langchain"
] | ### URL
langchain/cookbook /baby_agi.ipynb
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Is it really possible to reproduce the results with an empty vectorstore as proposed?
I tried with both an empty and a populated vectorstore, but I still can't reproduce the output — or get any results at all!
I tried langchain 0.2 (latest) and 0.1.10, but the required version is not specified.
[baby_agi_help.md](https://github.com/user-attachments/files/16501356/baby_agi_help.md)
Thank you!
### Idea or request for content:
_No response_ | DOC: Could not reproduce notebook output | https://api.github.com/repos/langchain-ai/langchain/issues/25068/comments | 1 | 2024-08-05T18:17:38Z | 2024-08-05T22:59:17Z | https://github.com/langchain-ai/langchain/issues/25068 | 2,449,193,060 | 25,068 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
``` python
class CitationSubModel(TypedDict):
number: int = Field(description="An integer numbering the citation i.e. 1st citation, 2nd citation.")
id: str = Field(description="The identifiying document name or number for a document.")
class FinalAnswerModel(TypedDict):
answer: Annotated[str, ..., "The answer to the user question using the citations"]
citations: Annotated[List[CitationSubModel], ..., "A dictionary that includes the numbering and the id references for the citations to be used to answer"]
model_answer = get_model(state.get("deployment_id","gpt-4o-global"), streaming=True)
model_answer = model_answer.with_structured_output(FinalAnswerModel)
# Create chain using chat_history, prompt_template and the model. Parse results through a simple string parser.
chain = (RunnablePassthrough.assign(chat_history=RunnableLambda(memory.load_memory_variables) | itemgetter("chat_history")) | prompt_template | model_answer)
for chunk in chain.stream({"context": context, "task": state["task"], "citation": citation_instruction, "coding_instructions": coding_instructions}):
print(chunk)
```
### Error Message and Stack Trace (if applicable)
``` python
File "/var/task/chatbot_workflow.py", line 858, in solve
for chunk in chain.stream({"context": context, "task": state["task"], "citation": citation_instruction, "coding_instructions": coding_instructions}):
File "/opt/python/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3253, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "/opt/python/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3240, in transform
yield from self._transform_stream_with_config(
File "/opt/python/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 2053, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
File "/opt/python/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 3202, in _transform
for output in final_pipeline:
File "/opt/python/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1271, in transform
for ichunk in input:
File "/opt/python/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 5267, in transform
yield from self.bound.transform(
File "/opt/python/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1289, in transform
yield from self.stream(final, config, **kwargs)
File "/opt/python/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 373, in stream
raise e
File "/opt/python/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 353, in stream
for chunk in self._stream(messages, stop=stop, **kwargs):
File "/opt/python/lib/python3.10/site-packages/langchain_aws/chat_models/bedrock.py", line 439, in _stream
result = self._generate(
File "/opt/python/lib/python3.10/site-packages/langchain_aws/chat_models/bedrock.py", line 518, in _generate
for chunk in self._stream(messages, stop, run_manager, **kwargs):
File "/opt/python/lib/python3.10/site-packages/langchain_aws/chat_models/bedrock.py", line 439, in _stream
result = self._generate(
File "/opt/python/lib/python3.10/site-packages/langchain_aws/chat_models/bedrock.py", line 518, in _generate
for chunk in self._stream(messages, stop, run_manager, **kwargs):
File "/opt/python/lib/python3.10/site-packages/langchain_aws/chat_models/bedrock.py", line 439, in _stream
### .
### . Error Message Repeats many times
### .
File "/opt/python/lib/python3.10/site-packages/langchain_aws/chat_models/bedrock.py", line 439, in _stream
result = self._generate(
File "/opt/python/lib/python3.10/site-packages/langchain_aws/chat_models/bedrock.py", line 518, in _generate
for chunk in self._stream(messages, stop, run_manager, **kwargs):
File "/opt/python/lib/python3.10/site-packages/langchain_aws/chat_models/bedrock.py", line 439, in _stream
result = self._generate(
File "/opt/python/lib/python3.10/site-packages/langchain_aws/chat_models/bedrock.py", line 514, in _generate
self._get_provider(), "stop_reason"
File "/opt/python/lib/python3.10/site-packages/langchain_aws/llms/bedrock.py", line 594, in _get_provider
if self.model_id.startswith("arn"):
RecursionError: maximum recursion depth exceeded while calling a Python object
Stack (most recent call last):
File "/var/lang/lib/python3.10/threading.py", line 973, in _bootstrap
self._bootstrap_inner()
File "/var/lang/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
self.run()
File "/var/lang/lib/python3.10/threading.py", line 953, in run
self._target(*self._args, **self._kwargs)
File "/var/lang/lib/python3.10/concurrent/futures/thread.py", line 83, in _worker
work_item.run()
File "/var/lang/lib/python3.10/concurrent/futures/thread.py", line 58, in run
result = self.fn(*self.args, **self.kwargs)
File "/opt/python/lib/python3.10/site-packages/langchain_core/runnables/config.py", line 587, in wrapper
return func(*args, **kwargs)
File "/var/task/chatbot_workflow.py", line 204, in wrap_func
result = func(*args, **kwargs)
File "/var/task/chatbot_workflow.py", line 917, in solve
logging.exception(f"Error Streaming Chunk: {e}", exc_info=True, stack_info=True)
```
### Description
I have a custom function, get_model, that returns either a ChatAzureOpenAI or a ChatBedrock object. I then pass the TypedDict schema to the chat object via with_structured_output and stream the response. This works perfectly fine with ChatAzureOpenAI but fails with ChatBedrock.
I am trying to stream the response as structured output: I created a TypedDict class and want to stream while it is generating.
The error above repeats in a loop — my code uses LangGraph, which keeps iterating until it hits the max recursion limit.
I'm not sure what is causing the issue from this error trace. Can anyone help?
### System Info
System Information
------------------
AWS Lambda ARM
Python Version: 3.10
Package Information
-------------------
langchain_core: 0.2.27
langchain: 0.2.11
langchain_community: 0.2.5
langchain_aws: 0.1.13
langchain_openai: 0.1.20
langchainhub: 0.1.14
langgraph: 0.1.19 | ChatBedrock: I am unable to stream when using with_structured_output. I can either stream or I can use with_structured_output. | https://api.github.com/repos/langchain-ai/langchain/issues/25056/comments | 3 | 2024-08-05T14:42:11Z | 2024-08-06T14:28:53Z | https://github.com/langchain-ai/langchain/issues/25056 | 2,448,733,898 | 25,056 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_ollama import ChatOllama
from langchain_ollama.embeddings import OllamaEmbeddings
llama3_1 = ChatOllama(
headers=headers,
base_url="some_host",
model="llama3.1",
temperature=0.001,
)
from langchain.prompts import ChatPromptTemplate
chat_prompt = ChatPromptTemplate.from_messages([
("system", "you are a helpful search assistant"),
])
chain = chat_prompt | llama3_1
chain.invoke({})
# embeddings
embeddings = (
OllamaEmbeddings(
headers=headers,
base_url="my_public_host",
model="mxbai-embed-large",
)
)
### Error Message and Stack Trace (if applicable)
# for chat
Error 401
# for embeddings
ValidationError: 2 validation errors for OllamaEmbeddings
base_url
extra fields not permitted (type=value_error.extra)
headers
extra fields not permitted (type=value_error.extra)
### Description
After I migrated to the new ChatOllama module (langchain-ollama), I am unable to set headers or auth.
I am hosting Ollama behind ngrok publicly, and I need to authenticate the calls.
When using the langchain_community ChatOllama integration, I was able to set those.
This seems similar to base_url, which was only added in the latest version.
If you know of any env var that I can use to fix this (like OLLAMA_HOST) to set auth headers, I'd be very thankful :)
### System Info
langchain==0.2.12
langchain-community==0.2.11
langchain-core==0.2.28
langchain-ollama==0.1.1
langchain-openai==0.1.19
langchain-postgres==0.0.9
langchain-text-splitters==0.2.2
langchainhub==0.1.20
platform: mac
python: 3.12.3 | Unable to set authenticiation (headers or auth) like I used to do in the community ollama integration | https://api.github.com/repos/langchain-ai/langchain/issues/25055/comments | 1 | 2024-08-05T13:28:24Z | 2024-08-05T21:31:33Z | https://github.com/langchain-ai/langchain/issues/25055 | 2,448,566,533 | 25,055 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain.chains import create_history_aware_retriever, create_retrieval_chain
condense_question_system_template = (
"Given a chat history and the latest user question "
"which might reference context in the chat history, "
"formulate a standalone question which can be understood "
"without the chat history. Do NOT answer the question, "
"just reformulate it if needed and otherwise return it as is."
)
condense_question_prompt = ChatPromptTemplate.from_messages(
[
("system", condense_question_system_template),
("placeholder", "{chat_history}"),
("human", "{input}"),
]
)
history_aware_retriever = create_history_aware_retriever(
llm, vectorstore.as_retriever(), condense_question_prompt
)
system_prompt = (
"You are an assistant for question-answering tasks. "
"Use the following pieces of retrieved context to answer "
"the question. If you don't know the answer, say that you "
"don't know. Use three sentences maximum and keep the "
"answer concise."
"\n\n"
"{context}"
)
qa_prompt = ChatPromptTemplate.from_messages(
[
("system", system_prompt),
("placeholder", "{chat_history}"),
("human", "{input}"),
]
)
qa_chain = create_stuff_documents_chain(llm, qa_prompt)
convo_qa_chain = create_retrieval_chain(history_aware_retriever, qa_chain)
convo_qa_chain.invoke(
{
"input": "What are autonomous agents?",
"chat_history": [],
}
)
### Error Message and Stack Trace (if applicable)
No error message.
### Description
I'm migrating my code from the legacy method (ConversationalRetrievalChain.from_llm) to the LCEL method (create_history_aware_retriever, create_stuff_documents_chain and create_retrieval_chain).
In my current design I'm returning the streaming output using AsyncFinalIteratorCallbackHandler().
When I check the result, I observed that the generated condensed question is also part of the returned stream: it first streams the condensed question, then returns the actual answer in one shot at the end.
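If the goal is to surface only the answer tokens, one option (a sketch — it assumes create_retrieval_chain streams dict chunks keyed by 'input'/'context'/'answer', rather than relying on the legacy callback handler) is to filter the stream by key:

```python
def answer_tokens(chunks):
    """Yield only the 'answer' fragments from a retrieval-chain stream."""
    for chunk in chunks:
        if "answer" in chunk:
            yield chunk["answer"]

# Stand-in for convo_qa_chain.stream({...}) output:
demo = [{"input": "q"}, {"context": []}, {"answer": "Hel"}, {"answer": "lo"}]
print("".join(answer_tokens(demo)))  # Hello
```

In real use this would be `for token in answer_tokens(convo_qa_chain.stream({...})): ...`, which skips the intermediate condensed-question output entirely.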
### System Info
langchain=0.2.10 | ConversationRetrievalChain LCEL method in the data when streaming using AsyncFinalIteratorCallbackHandler() | https://api.github.com/repos/langchain-ai/langchain/issues/25045/comments | 1 | 2024-08-05T03:43:26Z | 2024-08-06T01:58:51Z | https://github.com/langchain-ai/langchain/issues/25045 | 2,447,529,411 | 25,045 |
[
"langchain-ai",
"langchain"
] | ### URL
https://api.python.langchain.com/en/latest/embeddings/langchain_huggingface.embeddings.huggingface.HuggingFaceEmbeddings.html#langchain_huggingface.embeddings.huggingface.HuggingFaceEmbeddings
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Currently, the documentation of the cache_folder keyword only mentions the SENTENCE_TRANSFORMERS_HOME environment variable. It appears that the HF_HOME variable is also considered and takes precedence over SENTENCE_TRANSFORMERS_HOME if no cache_folder keyword is provided. I tested this on Linux with current versions of all involved modules.
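The precedence the reporter describes can be sketched as follows (an illustration of the observed behavior only, not the library's actual implementation):

```python
import os

def resolve_cache_folder(cache_folder=None):
    """Observed precedence: explicit kwarg > HF_HOME > SENTENCE_TRANSFORMERS_HOME."""
    return (cache_folder
            or os.environ.get("HF_HOME")
            or os.environ.get("SENTENCE_TRANSFORMERS_HOME"))

os.environ["HF_HOME"] = "/cache/hf"
os.environ["SENTENCE_TRANSFORMERS_HOME"] = "/cache/st"
print(resolve_cache_folder())             # /cache/hf
print(resolve_cache_folder("/explicit"))  # /explicit
```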
### Idea or request for content:
The documentation should be amended to include handling of HF_HOME. | DOC: HuggingFaceEmbeddings support for HF_HOME environment variable | https://api.github.com/repos/langchain-ai/langchain/issues/25038/comments | 1 | 2024-08-04T13:46:36Z | 2024-08-05T21:11:12Z | https://github.com/langchain-ai/langchain/issues/25038 | 2,447,142,608 | 25,038 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```py
from dotenv import load_dotenv
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI
from pydantic.v1 import BaseModel, Field
load_dotenv()
class SampleModel(BaseModel):
numbers: list[int] = Field(min_items=2, max_items=4)
@tool(args_schema=SampleModel)
def foo() -> None:
"""bar"""
return
ChatOpenAI().bind_tools([foo])
```
### Error Message and Stack Trace (if applicable)
ValueError: On field "numbers" the following field constraints are set but not enforced: min_items, max_items.
### Description
In from langchain_core.utils.pydantic import _create_subset_model_v1
```py
def _create_subset_model_v1(
name: str,
model: Type[BaseModel],
field_names: list,
*,
descriptions: Optional[dict] = None,
fn_description: Optional[str] = None,
) -> Type[BaseModel]:
"""Create a pydantic model with only a subset of model's fields."""
from langchain_core.pydantic_v1 import create_model
fields = {}
for field_name in field_names:
field = model.__fields__[field_name]
t = (
# this isn't perfect but should work for most functions
field.outer_type_
if field.required and not field.allow_none
else Optional[field.outer_type_]
)
if descriptions and field_name in descriptions:
field.field_info.description = descriptions[field_name]
fields[field_name] = (t, field.field_info)
rtn = create_model(name, **fields) # type: ignore
rtn.__doc__ = textwrap.dedent(fn_description or model.__doc__ or "")
return rtn
```
As the comment explains, the issue lies in the process of obtaining t.
The pydantic v2 version has the issue raised in #25031 .
### System Info
langchain==0.2.12
langchain-core==0.2.28
langchain-openai==0.1.20
pydantic==2.8.2
platform==windows
python-version==3.12.4 | The tool schema can't apply `min_items` or `max_items` when using BaseModel Field in Pydantic V1. | https://api.github.com/repos/langchain-ai/langchain/issues/25036/comments | 3 | 2024-08-04T11:37:43Z | 2024-08-05T21:14:08Z | https://github.com/langchain-ai/langchain/issues/25036 | 2,447,095,448 | 25,036 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from pydantic import BaseModel, Field
from typing import List
from langchain_core.tools import tool
from langchain_anthropic import ChatAnthropic
from langchain_core.messages import AIMessage
from dotenv import load_dotenv
load_dotenv()
class SampleModel(BaseModel):
numbers: List[int] = Field(description="Favorite numbers", min_length=10, max_length=15)
@tool(args_schema=SampleModel)
def choose_numbers():
"""Choose your favorite numbers"""
pass
model = ChatAnthropic(model_name="claude-3-haiku-20240307", temperature=0)
model = model.bind_tools([choose_numbers], tool_choice="choose_numbers")
result: AIMessage = model.invoke("Hello world!")
print(result.tool_calls[0]["args"])
# Output: {'numbers': [7, 13, 42]}
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
When additional metadata is specified on Pydantic fields, e.g. `min_length` or `max_length`, these aren't serialized into the final output that's sent out to the model. In the attached screenshot from Langsmith, although it carries over the description, the `length` params are missing.
https://smith.langchain.com/public/62060327-e5be-4156-93cb-6960078ec7fb/r
<img width="530" alt="image" src="https://github.com/user-attachments/assets/4fa2a6b6-105b-43c2-957f-f56b84fef10b">
### System Info
```
langchain==0.2.11
langchain-anthropic==0.1.21
langchain-community==0.2.10
langchain-core==0.2.24
langchain-google-genai==1.0.8
langchain-google-vertexai==1.0.7
langchain-openai==0.1.19
langchain-text-splitters==0.2.2
```` | Pydantic field metadata not being serialized in tool calls | https://api.github.com/repos/langchain-ai/langchain/issues/25031/comments | 6 | 2024-08-04T01:34:53Z | 2024-08-05T20:50:16Z | https://github.com/langchain-ai/langchain/issues/25031 | 2,446,706,381 | 25,031 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The following code throws an import error!
```
from langchain_community.document_loaders import DirectoryLoader
loader = DirectoryLoader("path-to-directory-where-pdfs-are-present")
docs = loader.load()
```
while the below one seems to work fine ---
```
from langchain_community.document_loaders import PubMedLoader
loader = PubMedLoader("...")
docs = loader.load()
```
### Error Message and Stack Trace (if applicable)
```
Error loading file /home/ec2-user/sandbox/IntelliFix/satya/CT_sample/CT/Frontier-Maxima/ServiceManuals/5229839-100.pdf
Traceback (most recent call last):
File "/home/ec2-user/sandbox/IntelliFix/satya/rag_eval.py", line 5, in <module>
docs = loader.load()
^^^^^^^^^^^^^
File "/home/ec2-user/anaconda3/envs/langchain/lib/python3.12/site-packages/langchain_community/document_loaders/directory.py", line 117, in load
return list(self.lazy_load())
^^^^^^^^^^^^^^^^^^^^^^
File "/home/ec2-user/anaconda3/envs/langchain/lib/python3.12/site-packages/langchain_community/document_loaders/directory.py", line 182, in lazy_load
yield from self._lazy_load_file(i, p, pbar)
File "/home/ec2-user/anaconda3/envs/langchain/lib/python3.12/site-packages/langchain_community/document_loaders/directory.py", line 220, in _lazy_load_file
raise e
File "/home/ec2-user/anaconda3/envs/langchain/lib/python3.12/site-packages/langchain_community/document_loaders/directory.py", line 210, in _lazy_load_file
for subdoc in loader.lazy_load():
File "/home/ec2-user/anaconda3/envs/langchain/lib/python3.12/site-packages/langchain_community/document_loaders/unstructured.py", line 88, in lazy_load
elements = self._get_elements()
^^^^^^^^^^^^^^^^^^^^
File "/home/ec2-user/anaconda3/envs/langchain/lib/python3.12/site-packages/langchain_community/document_loaders/unstructured.py", line 168, in _get_elements
from unstructured.partition.auto import partition
File "/home/ec2-user/anaconda3/envs/langchain/lib/python3.12/site-packages/unstructured/partition/auto.py", line 78, in <module>
from unstructured.partition.pdf import partition_pdf
File "/home/ec2-user/anaconda3/envs/langchain/lib/python3.12/site-packages/unstructured/partition/pdf.py", line 54, in <module>
from unstructured.partition.pdf_image.analysis.bbox_visualisation import (
File "/home/ec2-user/anaconda3/envs/langchain/lib/python3.12/site-packages/unstructured/partition/pdf_image/analysis/bbox_visualisation.py", line 16, in <module>
from unstructured_inference.inference.layout import DocumentLayout
File "/home/ec2-user/anaconda3/envs/langchain/lib/python3.12/site-packages/unstructured_inference/inference/layout.py", line 15, in <module>
from unstructured_inference.inference.layoutelement import (
File "/home/ec2-user/anaconda3/envs/langchain/lib/python3.12/site-packages/unstructured_inference/inference/layoutelement.py", line 7, in <module>
from layoutparser.elements.layout import TextBlock
File "/home/ec2-user/anaconda3/envs/langchain/lib/python3.12/site-packages/layoutparser/elements/__init__.py", line 16, in <module>
from .layout_elements import (
File "/home/ec2-user/anaconda3/envs/langchain/lib/python3.12/site-packages/layoutparser/elements/layout_elements.py", line 25, in <module>
from cv2 import getPerspectiveTransform as _getPerspectiveTransform
ImportError: libGL.so.1: cannot open shared object file: No such file or directory
```
### Description
The `DirectoryLoader` seems to have a dependency issue. I am trying to load a folder of PDFs but hit `libGL.so.1: cannot open shared object file`.
I've tried uninstalling/reinstalling OpenCV and OpenCV-headless following Stack Overflow discussions, but nothing seems to work!
On the other hand, other loaders like `PubMedLoader` and `WebBaseLoader` seem to work fine (not sure whether they hit this code path!)
**P.S:** Raised an [issue](https://github.com/opencv/opencv/issues/25988) at OpenCV to understand if the issue is at their end.
### System Info
I work on an EC2 instance which has linux background. My Python version is 3.12.4. Other relevant packages that might be useful are:
```
langchain==0.2.11
langchain-community==0.2.1
langchain-core==0.2.26
langchain-openai==0.1.20
langchain-text-splitters==0.2.0
langchainhub==0.1.20
opencv-contrib-python-headless==4.8.0.76
opencv-python==4.10.0.84
opencv-python-headless==4.8.0.76
python-dateutil==2.9.0.post0
python-docx==1.1.2
python-dotenv==1.0.1
python-iso639==2024.4.27
python-magic==0.4.27
python-multipart==0.0.9
python-oxmsg==0.0.1
python-pptx==0.6.23
unstructured==0.15.0
unstructured-client==0.25.1
unstructured-inference==0.7.36
unstructured.pytesseract==0.3.12
``` | [DirectoryLoader] ImportError: libGL.so.1: cannot open shared object file: No such file or directory | https://api.github.com/repos/langchain-ai/langchain/issues/25029/comments | 0 | 2024-08-03T23:11:39Z | 2024-08-03T23:17:51Z | https://github.com/langchain-ai/langchain/issues/25029 | 2,446,648,952 | 25,029 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.embeddings import OllamaEmbeddings
from langchain_ollama import OllamaLLM
embeddings_model = OllamaEmbeddings(base_url = "http://192.168.11.98:9000", model="nomic-embed-text:v1.5", num_ctx=4096)
embeddings_model.embed_query("Test")
## LLM Model
llm_model = OllamaLLM(base_url = "http://192.168.11.98:9000",model="llama3.1:8b",num_ctx = 2048)
llm_model.invoke("Test")
```
```Dockerfile
FROM ubuntu
# Install Prequisites
RUN apt-get update && apt-get install -y build-essential cmake gfortran libcurl4-openssl-dev libssl-dev libxml2-dev python3-dev python3-pip python3-venv
RUN pip install langchain langchain-core langchain-community langchain-experimental langchain-chroma langchain_ollama pandas --break-system-packages
```
### Error Message and Stack Trace (if applicable)
>>> from langchain_community.embeddings import OllamaEmbeddings
>>> from langchain_ollama import OllamaLLM
>>> embeddings_model = OllamaEmbeddings(base_url = "http://192.168.11.98:9000", model="nomic-embed-text:v1.5", num_ctx=4096)
>>> embeddings_model.embed_query("Test")
[0.8171377182006836, 0.7424322366714478, -3.6913845539093018, -0.5350275635719299, 1.98311185836792, -0.08007726818323135, 0.7974349856376648, -0.5946609377861023, 1.4877475500106812, -0.8044648766517639, 0.38856828212738037, 1.0630642175674438, 0.6806553602218628, -0.9530377984046936, -1.4606661796569824, -0.2956351637840271, -0.9512965083122253]
>>>
>>> ## LLM Model
>>> llm_model = OllamaLLM(base_url = "http://192.168.11.98:9000",model="llama3.1:8b",num_ctx = 2048)
>>> llm_model.invoke("Test")
Traceback (most recent call last):
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
yield
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpx/_transports/default.py", line 233, in handle_request
resp = self._pool.handle_request(req)
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py", line 216, in handle_request
raise exc from None
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py", line 196, in handle_request
response = connection.handle_request(
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpcore/_sync/connection.py", line 99, in handle_request
raise exc
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpcore/_sync/connection.py", line 76, in handle_request
stream = self._connect(request)
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpcore/_sync/connection.py", line 122, in _connect
stream = self._network_backend.connect_tcp(**kwargs)
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpcore/_backends/sync.py", line 205, in connect_tcp
with map_exceptions(exc_map):
File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
raise to_exc(exc) from exc
httpcore.ConnectError: [Errno 111] Connection refused
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 346, in invoke
self.generate_prompt(
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 703, in generate_prompt
return self.generate(prompt_strings, stop=stop, callbacks=callbacks, **kwargs)
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 882, in generate
output = self._generate_helper(
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 740, in _generate_helper
raise e
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/langchain_core/language_models/llms.py", line 727, in _generate_helper
self._generate(
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/langchain_ollama/llms.py", line 268, in _generate
final_chunk = self._stream_with_aggregation(
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/langchain_ollama/llms.py", line 236, in _stream_with_aggregation
for stream_resp in self._create_generate_stream(prompt, stop, **kwargs):
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/langchain_ollama/llms.py", line 186, in _create_generate_stream
yield from ollama.generate(
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/ollama/_client.py", line 79, in _stream
with self._client.stream(method, url, **kwargs) as r:
File "/usr/lib/python3.10/contextlib.py", line 135, in __enter__
return next(self.gen)
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpx/_client.py", line 870, in stream
response = self.send(
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpx/_client.py", line 914, in send
response = self._send_handling_auth(
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpx/_client.py", line 942, in _send_handling_auth
response = self._send_handling_redirects(
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpx/_client.py", line 979, in _send_handling_redirects
response = self._send_single_request(request)
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpx/_client.py", line 1015, in _send_single_request
response = transport.handle_request(request)
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpx/_transports/default.py", line 232, in handle_request
with map_httpcore_exceptions():
File "/usr/lib/python3.10/contextlib.py", line 153, in __exit__
self.gen.throw(typ, value, traceback)
File "/root/.virtualenvs/aaveLLM/lib/python3.10/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
raise mapped_exc(message) from exc
httpx.ConnectError: [Errno 111] Connection refused
### Description
I am trying to run the following code in a Python script inside a Docker container.
While the embedding model works fine, the LLM model returns `Connection refused`.
Both work fine from outside the container, and also from inside the container when the endpoint is called directly, e.g. with curl:
```
root@1fec10f8d40e:/# curl http://192.168.11.98:9000/api/generate -d '{
"model": "llama3.1:8b",
"prompt": "Test",
"stream": false
}'
{"model":"llama3.1:8b","created_at":"2024-08-04T03:49:46.282365097Z","response":"It looks like you want to test me. I'm happy to play along!\n\nHow would you like to proceed? Would you like to:\n\nA) Ask a simple question\nB) Provide a statement and ask for feedback\nC) Engage in a conversation on a specific topic\nD) Something else (please specify)\n\nLet me know, and we can get started!","done":true,"done_reason":"stop","context":[128006,882,128007,271,2323,128009,128006,78191,128007,271,2181,5992,1093,499,1390,311,1296,757,13,358,2846,6380,311,1514,3235,2268,4438,1053,499,1093,311,10570,30,19418,499,1093,311,1473,32,8,21069,264,4382,3488,198,33,8,40665,264,5224,323,2610,369,11302,198,34,8,3365,425,304,264,10652,389,264,3230,8712,198,35,8,25681,775,320,31121,14158,696,10267,757,1440,11,323,584,649,636,3940,0],"total_duration":2073589200,"load_duration":55691013,"prompt_eval_count":11,"prompt_eval_duration":32157000,"eval_count":76,"eval_duration":1943850000}
```
I have checked the model names etc. and they are correct, and the endpoint works outside the Python LangChain environment.
The issue appears only when OllamaLLM is run inside the container environment.
I have attached the Dockerfile, cleaned up for reproducing the issue. Attaching to the container with `docker run -it image bash` and running the Python code produces the error.
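To help isolate whether the refusal comes from the network path or from the client configuration, the curl check above can be mirrored from inside the same Python interpreter using only the standard library (a diagnostic sketch; the endpoint and model name are the ones from this report):

```python
import json
import urllib.request

def build_generate_request(base_url: str, model: str, prompt: str) -> urllib.request.Request:
    """Build (but do not send) the same POST the working curl command issues."""
    data = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        f"{base_url}/api/generate",
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_generate_request("http://192.168.11.98:9000", "llama3.1:8b", "Test")
print(req.full_url)  # the URL the request will actually target
# urllib.request.urlopen(req)  # uncomment inside the container to test the network path
```

If this request succeeds from inside the container while `OllamaLLM.invoke` still fails, the network is fine and the problem is in which host the client library actually targets.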
### System Info
pip freeze | grep langchain
langchain==0.2.12
langchain-chroma==0.1.2
langchain-community==0.2.11
langchain-core==0.2.28
langchain-experimental==0.0.64
langchain-ollama==0.1.1
langchain-text-splitters==0.2.2
| OllamaLLM Connection refused from within docker container while OllamaEmbeddings works The base_url is custom and same for both. | https://api.github.com/repos/langchain-ai/langchain/issues/25022/comments | 0 | 2024-08-03T17:14:03Z | 2024-08-04T03:52:38Z | https://github.com/langchain-ai/langchain/issues/25022 | 2,446,508,964 | 25,022 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain_community.document_transformers import BeautifulSoupTransformer
from langchain_core.documents import Document
text="""<a href="https://google.com/"><span>google</span></a>"""
b = BeautifulSoupTransformer()
docs = b.transform_documents(
[Document(text)],
tags_to_extract=["p", "li", "div", "a", "span", "h1", "h2", "h3", "h4", "h5", "h6"],
remove_comments=True
)
print(docs[0].page_content)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Instead of getting the same format as when extracting `<a href="https://google.com/">google</a>`, namely `google (https://google.com/)`, we get just `google`, because of the interior tags.
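For reference, the behaviour this report expects — pairing an anchor's full visible text (collected across nested tags) with its href — can be sketched with the stdlib `html.parser`. This is only an illustration of the desired output, not the library's actual implementation:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the full visible text inside each <a>, even across nested tags."""
    def __init__(self):
        super().__init__()
        self.depth = 0    # nesting level of <a> tags we are currently inside
        self.parts = []   # text fragments of the current link
        self.href = None
        self.links = []   # (text, href) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.depth += 1
            if self.depth == 1:
                self.href = dict(attrs).get("href")
                self.parts = []

    def handle_data(self, data):
        if self.depth:            # any text inside an <a>, regardless of wrappers
            self.parts.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self.depth:
            self.depth -= 1
            if self.depth == 0:
                self.links.append(("".join(self.parts), self.href))

p = LinkExtractor()
p.feed('<a href="https://google.com/"><span>google</span></a>')
print(p.links)  # [('google', 'https://google.com/')]
```

Here the `<span>` wrapper does not hide the anchor text, so the link can still be rendered as `google (https://google.com/)`.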
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.12.3 (tags/v3.12.3:f6650f9, Apr 9 2024, 14:05:25) [MSC v.1938 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.28
> langchain: 0.2.12
> langchain_community: 0.2.11
> langsmith: 0.1.96
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | BeautifulSoup transformer fails to treat links with internal tags the same way | https://api.github.com/repos/langchain-ai/langchain/issues/25018/comments | 0 | 2024-08-03T10:49:51Z | 2024-08-03T10:52:20Z | https://github.com/langchain-ai/langchain/issues/25018 | 2,446,293,298 | 25,018 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from typing import Literal
from langchain_groq import ChatGroq
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
# Data model
class RouteQuery(BaseModel):
"""Route a user query to the most relevant prompt template."""
datasource: Literal["expert_prompt", "summarize_prompt", "normal_QA"] = Field(
...,
description="Given a user question choose which prompt would be most relevant for append to the PromptTemplate",
)
# LLM with function call
llm = ChatGroq(model_name="llama3-groq-8b-8192-tool-use-preview", temperature=0,api_key= "API") ## Replace to real LLMs (Cohere / Groq / OpenAI)
structured_llm = llm.with_structured_output(RouteQuery)
# Prompt
system = """You are an expert at routing a user question to the appropriate prompt template.
Based on the question is referring to, route it to the relevant prompt template. If you can't route , return the RAG_prompt"""
prompt = ChatPromptTemplate.from_messages(
[
("system", system),
("human", "{question}"),
]
)
# Define router
router = prompt | structured_llm
question = """
Explain and compare Young's double-slit experiment in classical mechanics and quantum mechanics. How is the interference phenomenon explained in quantum mechanics?
"""
result = router.invoke({"question": question})
print(result)
```
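For context, what the router's three `Literal` values would feed into downstream can be sketched in plain Python (the `PROMPTS` templates here are hypothetical placeholders, not from the report; the fallback goes to `normal_QA`, since the `RAG_prompt` mentioned in the system prompt is not one of the declared literals):

```python
# Minimal stand-in for the dispatch that would follow the router's decision.
PROMPTS = {
    "expert_prompt": "Answer as a domain expert:\n{question}",
    "summarize_prompt": "Summarize the following:\n{question}",
    "normal_QA": "Answer the question:\n{question}",
}

def dispatch(datasource: str, question: str) -> str:
    # Fall back to the plain QA template when the routed value is unknown.
    if datasource not in PROMPTS:
        datasource = "normal_QA"
    return PROMPTS[datasource].format(question=question)

print(dispatch("summarize_prompt", "What is quantum interference?"))
```

The structured-output call is only useful once `result.datasource` reliably comes back as one of these literals, which is exactly the step that fails below.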
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
InternalServerError                       Traceback (most recent call last)
Cell In[122], line 1
----> 1 router = llm_router.route_prompt("Giải thích hiện tượng biến mất của đạo hàm khi thực hiện huấn luyện mạng RNN")

Cell In[114], line 20
     18 def route_prompt(self, question) :
     19     router = self._format_prompt(question)
---> 20     result = router.invoke({"question": question})
     22     return result.datasource

File ~/miniconda3/envs/end-project/lib/python3.11/site-packages/langchain_core/runnables/base.py:2875, in RunnableSequence.invoke(self, input, config, **kwargs)
   2873     input = step.invoke(input, config, **kwargs)
   2874 else:
-> 2875     input = step.invoke(input, config)
   2876 # finish the root run
   2877 except BaseException as e:

File ~/miniconda3/envs/end-project/lib/python3.11/site-packages/langchain_core/runnables/base.py:5060, in RunnableBindingBase.invoke(self, input, config, **kwargs)
   5054 def invoke(
   5055     self,
   5056     input: Input,
   5057     config: Optional[RunnableConfig] = None,
   5058     **kwargs: Optional[Any],
   5059 ) -> Output:
-> 5060     return self.bound.invoke(
   5061         input,
   5062         self._merge_configs(config),
   5063         **{**self.kwargs, **kwargs},
   5064     )

File ~/miniconda3/envs/end-project/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:274, in BaseChatModel.invoke(self, input, config, stop, **kwargs)
    263 def invoke(
    264     self,
    265     input: LanguageModelInput,
   (...)
    269     **kwargs: Any,
    270 ) -> BaseMessage:
    271     config = ensure_config(config)
    272     return cast(
    273         ChatGeneration,
--> 274         self.generate_prompt(
    275             [self._convert_input(input)],
    276             stop=stop,
    277             callbacks=config.get("callbacks"),
    278             tags=config.get("tags"),
    279             metadata=config.get("metadata"),
    280             run_name=config.get("run_name"),
    281             run_id=config.pop("run_id", None),
    282             **kwargs,
    283         ).generations[0][0],
    284     ).message

File ~/miniconda3/envs/end-project/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:714, in BaseChatModel.generate_prompt(self, prompts, stop, callbacks, **kwargs)
    706 def generate_prompt(
    707     self,
    708     prompts: List[PromptValue],
   (...)
    711     **kwargs: Any,
    712 ) -> LLMResult:
    713     prompt_messages = [p.to_messages() for p in prompts]
--> 714     return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)

File ~/miniconda3/envs/end-project/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:571, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    569 if run_managers:
    570     run_managers[i].on_llm_error(e, response=LLMResult(generations=[]))
--> 571 raise e
    572 flattened_outputs = [
    573     LLMResult(generations=[res.generations], llm_output=res.llm_output)  # type: ignore[list-item]
    574     for res in results
    575 ]
    576 llm_output = self._combine_llm_outputs([res.llm_output for res in results])

File ~/miniconda3/envs/end-project/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:561, in BaseChatModel.generate(self, messages, stop, callbacks, tags, metadata, run_name, run_id, **kwargs)
    558 for i, m in enumerate(messages):
    559     try:
    560         results.append(
--> 561             self._generate_with_cache(
    562                 m,
    563                 stop=stop,
    564                 run_manager=run_managers[i] if run_managers else None,
    565                 **kwargs,
    566             )
    567         )
    568     except BaseException as e:
    569         if run_managers:

File ~/miniconda3/envs/end-project/lib/python3.11/site-packages/langchain_core/language_models/chat_models.py:793, in BaseChatModel._generate_with_cache(self, messages, stop, run_manager, **kwargs)
    791 else:
    792     if inspect.signature(self._generate).parameters.get("run_manager"):
--> 793         result = self._generate(
    794             messages, stop=stop, run_manager=run_manager, **kwargs
    795         )
    796 else:
    797     result = self._generate(messages, stop=stop, **kwargs)

File ~/miniconda3/envs/end-project/lib/python3.11/site-packages/langchain_groq/chat_models.py:472, in ChatGroq._generate(self, messages, stop, run_manager, **kwargs)
    467 message_dicts, params = self._create_message_dicts(messages, stop)
    468 params = {
    469     **params,
    470     **kwargs,
    471 }
--> 472 response = self.client.create(messages=message_dicts, **params)
    473 return self._create_chat_result(response)

File ~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:289, in Completions.create(self, messages, model, frequency_penalty, function_call, functions, logit_bias, logprobs, max_tokens, n, parallel_tool_calls, presence_penalty, response_format, seed, stop, stream, temperature, tool_choice, tools, top_logprobs, top_p, user, extra_headers, extra_query, extra_body, timeout)
    148 def create(
    149     self,
    150     *,
   (...)
    177     timeout: float | httpx.Timeout | None | NotGiven = NOT_GIVEN,
    178 ) -> ChatCompletion | Stream[ChatCompletionChunk]:
    179     """
    180     Creates a model response for the given chat conversation.
   (...)
    287     timeout: Override the client-level default timeout for this request, in seconds
[288](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:288) """
--> [289](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:289) return self._post(
[290](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:290) "/openai/v1/chat/completions",
[291](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:291) body=maybe_transform(
[292](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:292) {
[293](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:293) "messages": messages,
[294](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:294) "model": model,
[295](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:295) "frequency_penalty": frequency_penalty,
[296](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:296) "function_call": function_call,
[297](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:297) "functions": functions,
[298](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:298) "logit_bias": logit_bias,
[299](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:299) "logprobs": logprobs,
[300](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:300) "max_tokens": max_tokens,
[301](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:301) "n": n,
[302](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:302) "parallel_tool_calls": parallel_tool_calls,
[303](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:303) "presence_penalty": presence_penalty,
[304](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:304) "response_format": response_format,
[305](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:305) "seed": seed,
[306](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:306) "stop": stop,
[307](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:307) "stream": stream,
[308](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:308) "temperature": temperature,
[309](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:309) "tool_choice": tool_choice,
[310](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:310) "tools": tools,
[311](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:311) "top_logprobs": top_logprobs,
[312](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:312) "top_p": top_p,
[313](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:313) "user": user,
[314](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:314) },
[315](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:315) completion_create_params.CompletionCreateParams,
[316](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:316) ),
[317](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:317) options=make_request_options(
[318](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:318) extra_headers=extra_headers, extra_query=extra_query, extra_body=extra_body, timeout=timeout
[319](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:319) ),
[320](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:320) cast_to=ChatCompletion,
[321](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:321) stream=stream or False,
[322](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:322) stream_cls=Stream[ChatCompletionChunk],
[323](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/resources/chat/completions.py:323) )
File ~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:1225, in SyncAPIClient.post(self, path, cast_to, body, options, files, stream, stream_cls)
[1211](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:1211) def post(
[1212](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:1212) self,
[1213](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:1213) path: str,
(...)
[1220](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:1220) stream_cls: type[_StreamT] | None = None,
[1221](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:1221) ) -> ResponseT | _StreamT:
[1222](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:1222) opts = FinalRequestOptions.construct(
[1223](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:1223) method="post", url=path, json_data=body, files=to_httpx_files(files), **options
[1224](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:1224) )
-> [1225](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:1225) return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
File ~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:920, in SyncAPIClient.request(self, cast_to, options, remaining_retries, stream, stream_cls)
[911](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:911) def request(
[912](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:912) self,
[913](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:913) cast_to: Type[ResponseT],
(...)
[918](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:918) stream_cls: type[_StreamT] | None = None,
[919](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:919) ) -> ResponseT | _StreamT:
--> [920](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:920) return self._request(
[921](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:921) cast_to=cast_to,
[922](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:922) options=options,
[923](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:923) stream=stream,
[924](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:924) stream_cls=stream_cls,
[925](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:925) remaining_retries=remaining_retries,
[926](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:926) )
File ~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:1018, in SyncAPIClient._request(self, cast_to, options, remaining_retries, stream, stream_cls)
[1015](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:1015) err.response.read()
[1017](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:1017) log.debug("Re-raising status error")
-> [1018](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:1018) raise self._make_status_error_from_response(err.response) from None
[1020](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:1020) return self._process_response(
[1021](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:1021) cast_to=cast_to,
[1022](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:1022) options=options,
(...)
[1025](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:1025) stream_cls=stream_cls,
[1026](https://file+.vscode-resource.vscode-cdn.net/home/justtuananh/AI4TUAN/End-Project/Eval_rag/RAG_langchain_DuyTa/~/miniconda3/envs/end-project/lib/python3.11/site-packages/groq/_base_client.py:1026) )
InternalServerError: Error code: 502 - {'error': {'type': 'internal_server_error', 'code': 'service_unavailable'}}
### Description
I'm just trying to test a model for prompt routing. I'm using the Groq API and have already entered my API key, but the following error is raised.

I also tested with the Groq package directly, and that still works fine:

```python
import os
from groq import Groq

client = Groq(
    api_key=os.environ["GROQ_API_KEY"],
)
chat_completion = client.chat.completions.create(
    messages=[
        {
            "role": "user",
            "content": "Explain the importance of fast language models",
        }
    ],
    model="llama3-8b-8192",
)
print(chat_completion.choices[0].message.content)
```
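Since the 502 is a transient `service_unavailable` response from the Groq backend rather than a client-side bug, one workaround is to retry the call with exponential backoff until the service recovers. Below is a minimal stand-alone sketch of that pattern — the `InternalServerError` class and the flaky endpoint are stubs standing in for the real `groq` exception and API call, not the actual library:

```python
import time

class InternalServerError(Exception):
    """Stub for the 5xx error raised by the API client."""

def retry_on_5xx(call, attempts=3, base_delay=0.01):
    """Invoke `call`, retrying on server errors with exponential backoff."""
    for attempt in range(attempts):
        try:
            return call()
        except InternalServerError:
            if attempt == attempts - 1:
                raise  # out of retries: surface the original error
            time.sleep(base_delay * 2 ** attempt)

# Simulate an endpoint that returns 502 twice, then succeeds.
failures = {"left": 2}

def flaky_completion():
    if failures["left"] > 0:
        failures["left"] -= 1
        raise InternalServerError("Error code: 502 - service_unavailable")
    return "ok"

print(retry_on_5xx(flaky_completion))  # -> ok
```

LangChain runnables expose a similar policy out of the box via `.with_retry()`, which may be the simpler fix inside the chain itself.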
### System Info
langchain==0.2.12
langchain-chroma==0.1.2
langchain-community==0.2.10
langchain-core==0.2.27
langchain-groq==0.1.9
langchain-text-splitters==0.2.2 | Langchain Groq 502 error | https://api.github.com/repos/langchain-ai/langchain/issues/25016/comments | 1 | 2024-08-03T09:23:00Z | 2024-08-05T23:30:49Z | https://github.com/langchain-ai/langchain/issues/25016 | 2,446,260,635 | 25,016 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.output_parsers import OutputFixingParser
from langchain_core.output_parsers import PydanticOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_openai import ChatOpenAI, OpenAI
from langchain.output_parsers import RetryOutputParser
from langchain_core.runnables import RunnableLambda, RunnableParallel
template = """Based on the user question, provide an Action and Action Input for what step should be taken.
{format_instructions}
Question: {query}
Response:"""
class Action(BaseModel):
action: str = Field(description="action to take")
action_input: str = Field(description="input to the action")
parser = PydanticOutputParser(pydantic_object=Action)
prompt = PromptTemplate(
template="Answer the user query.\n{format_instructions}\n{query}\n",
input_variables=["query"],
partial_variables={"format_instructions": parser.get_format_instructions()},
)
completion_chain = prompt | ChatOpenAI(temperature=0) # Should be OpenAI
retry_parser = RetryOutputParser.from_llm(parser=parser, llm=ChatOpenAI(temperature=0)) # Should be OpenAI
main_chain = RunnableParallel(
completion=completion_chain, prompt_value=prompt
) | RunnableLambda(lambda x: retry_parser.parse_with_prompt(**x))
main_chain.invoke({"query": "who is leo di caprios gf?"})
```
### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
ValidationError Traceback (most recent call last)
Cell In[11], line 35
     29 retry_parser = RetryOutputParser.from_llm(parser=parser, llm=ChatOpenAI(temperature=0))
     30 main_chain = RunnableParallel(
     31     completion=completion_chain, prompt_value=prompt
     32 ) | RunnableLambda(lambda x: retry_parser.parse_with_prompt(**x))
---> 35 main_chain.invoke({"query": "who is leo di caprios gf?"})
ValidationError: 1 validation error for Generation
text
str type expected (type=type_error.str)
### Description
The code was copied from the official documentation: https://python.langchain.com/v0.2/docs/how_to/output_parser_retry/

The original code example works, but when I changed the OpenAI model to ChatOpenAI, it failed.

Does the OutputFixingParser only support the OpenAI completion model, and not ChatOpenAI?
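A plausible explanation (an assumption, not a confirmed fix): `OpenAI` returns a plain string, while `ChatOpenAI` returns an `AIMessage`, and `parse_with_prompt` builds a `Generation(text=completion)` that requires a string — hence the `str type expected` validation error. Extracting the text first, e.g. `completion_chain = prompt | ChatOpenAI(temperature=0) | StrOutputParser()`, should satisfy the parser. The mechanism can be shown with plain-Python stand-ins for the LangChain classes:

```python
class AIMessage:
    """Stub for the message object a chat model returns."""
    def __init__(self, content: str):
        self.content = content

def parse_with_prompt(completion, prompt_value):
    # Mirrors the parser's requirement: Generation(text=...) must get a str.
    if not isinstance(completion, str):
        raise TypeError("str type expected (type=type_error.str)")
    return completion.strip()

msg = AIMessage('{"action": "search", "action_input": "leo dicaprio gf"}')

try:
    parse_with_prompt(msg, "prompt")  # chat-model output: fails validation
except TypeError as err:
    print(err)  # -> str type expected (type=type_error.str)

parsed = parse_with_prompt(msg.content, "prompt")  # string content: succeeds
print(parsed)
```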
### System Info
python 3.11.9
langchain 0.2.12
langchain-core 0.2.27 | OutputFixingParser doesn't support ChatOpenAI model (not OpenAI model)? | https://api.github.com/repos/langchain-ai/langchain/issues/24995/comments | 0 | 2024-08-02T19:47:30Z | 2024-08-02T19:50:08Z | https://github.com/langchain-ai/langchain/issues/24995 | 2,445,665,021 | 24,995 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/local_rag/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
I was following the "Build a Local RAG Application" tutorial from the v0.2 docs, and especially followed the Setup steps for installing all the relevant packages:
```python
# Document loading, retrieval methods and text splitting
%pip install -qU langchain langchain_community
# Local vector store via Chroma
%pip install -qU langchain_chroma
# Local inference and embeddings via Ollama
%pip install -qU langchain_ollama
```
I think I followed every step of the tutorial correctly, yet, when I tried to run the next coming steps in the tutorial, I was thrown a `ModuleNotFoundError: No module named 'bs4'` suggesting that we are missing a pip install BeautifulSoup step.
In particular, running the `.load` method from `langchain_community.document_loaders.WebBaseLoader` raises the `ModuleNotFoundError`. Clearly, this method relies on BeautifulSoup.
So either I am missing some install steps in the Setup or a step to install `BeautifulSoup` is canonically missing from the tutorial which we should add for completeness.
An easy fix, of course, is to simply add `pip install beautifulsoup4` somewhere in the setup stage of the tutorial.
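Until the docs add that step, a small preflight cell at the top of the notebook makes the missing dependency obvious before `WebBaseLoader.load()` is ever reached. The module-to-package mapping below covers just the imports the tutorial relies on (note the module is `bs4` but the pip package is `beautifulsoup4`):

```python
import importlib.util

# module name -> pip package name (they differ for BeautifulSoup)
REQUIRED = {
    "bs4": "beautifulsoup4",
    "langchain": "langchain",
    "langchain_community": "langchain_community",
}

def missing_packages(required):
    """Return pip package names whose modules cannot be imported."""
    return [pkg for mod, pkg in required.items()
            if importlib.util.find_spec(mod) is None]

todo = missing_packages(REQUIRED)
if todo:
    print("Run first: pip install -qU " + " ".join(todo))
```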
Cheers,
Salman
### Idea or request for content:
_No response_ | DOC: Naively following "Build a Local RAG Application" in v0.2 docs throws a BeautifulSoup import error | https://api.github.com/repos/langchain-ai/langchain/issues/24991/comments | 0 | 2024-08-02T18:31:12Z | 2024-08-02T18:33:53Z | https://github.com/langchain-ai/langchain/issues/24991 | 2,445,543,496 | 24,991 |
[
"langchain-ai",
"langchain"
] | ### Example Code
```python
async for event in agent.astream_events(
input={...},
config={...},
include_tags=[...],
version="v2"
):
print(event)
```
### Error Message and Stack Trace
```
File /opt/miniconda3/envs/ll-engine/lib/python3.11/site-packages/langchain_core/runnables/base.py:5256, in RunnableBindingBase.astream_events(self, input, config, **kwargs)
   5250 async def astream_events(
   5251     self,
   5252     input: Input,
   5253     config: Optional[RunnableConfig] = None,
   5254     **kwargs: Optional[Any],
   5255 ) -> AsyncIterator[StreamEvent]:
--> 5256 async for item in self.bound.astream_events(
   5257     input, self._merge_configs(config), **{**self.kwargs, **kwargs}
   5258 ):
   5259     yield item

File /opt/miniconda3/envs/ll-engine/lib/python3.11/site-packages/langchain_core/runnables/base.py:1246, in Runnable.astream_events(self, input, config, version, include_names, include_types, include_tags, exclude_names, exclude_types, exclude_tags, **kwargs)
   1241     raise NotImplementedError(
   1242         'Only versions "v1" and "v2" of the schema is currently supported.'
   1243     )
   1245 async with aclosing(event_stream):
--> 1246 async for event in event_stream:
   1247     yield event

File /opt/miniconda3/envs/ll-engine/lib/python3.11/site-packages/langchain_core/tracers/event_stream.py:985, in _astream_events_implementation_v2(runnable, input, config, include_names, include_types, include_tags, exclude_names, exclude_types, exclude_tags, **kwargs)
    980 first_event_sent = True
    981 # This is a work-around an issue where the inputs into the
    982 # chain are not available until the entire input is consumed.
    983 # As a temporary solution, we'll modify the input to be the input
    984 # that was passed into the chain.
--> 985 event["data"]["input"] = input
    986 first_event_run_id = event["run_id"]
    987 yield event

TypeError: list indices must be integers or slices, not str
```
### Description
I'm trying to switch from **astream_events v1 to astream_events v2** in order to use custom events. The code above works perfectly fine with version v1, but throws the error after changing only the `version` parameter.

The documentation says no changes are required in order to switch to the new version.

Has anyone had this issue and resolved it?
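For what it's worth, the failing line is `event["data"]["input"] = input` in `event_stream.py`, which assumes both `event` and `event["data"]` are dicts; the `TypeError` means one of those subscripts hit a list instead. A defensive guard like the stub below (plain dicts, no LangChain dependency — a sketch of a possible upstream fix, not the actual patch) would avoid the crash:

```python
def patch_first_event(event, chain_input):
    """Attach the chain's input to the first event, tolerating odd shapes.

    The tracer's current line is `event["data"]["input"] = input`, which
    raises TypeError when `event["data"]` is a list rather than a dict.
    """
    data = event.get("data") if isinstance(event, dict) else None
    if isinstance(data, dict):
        data["input"] = chain_input
    # otherwise leave the event untouched instead of raising
    return event

ok = {"event": "on_chain_start", "data": {}}
bad = {"event": "on_chain_start", "data": ["not", "a", "dict"]}

print(patch_first_event(ok, {"query": "hi"})["data"])  # -> {'input': {'query': 'hi'}}
patch_first_event(bad, {"query": "hi"})  # no TypeError with the guard
```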
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 24.0.0: Sat Jul 13 00:55:20 PDT 2024; root:xnu-11215.0.165.0.4~50/RELEASE_ARM64_T8112
> Python Version: 3.11.8 (main, Feb 26 2024, 15:36:12) [Clang 14.0.6 ]
Package Information
-------------------
> langchain_core: 0.2.25
> langchain: 0.2.11
> langchain_community: 0.2.10
> langsmith: 0.1.79
> langchain_cli: 0.0.21
> langchain_openai: 0.1.19
> langchain_pinecone: 0.1.3
> langchain_text_splitters: 0.2.2
> langgraph: 0.1.17
> langserve: 0.2.2 | astream_events in version v2 throws: "TypeError: list indices must be integers or slices, not str" | https://api.github.com/repos/langchain-ai/langchain/issues/24987/comments | 2 | 2024-08-02T17:41:05Z | 2024-08-02T18:06:25Z | https://github.com/langchain-ai/langchain/issues/24987 | 2,445,472,929 | 24,987 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.document_loaders import PyPDFLoader

myurl = 'https://fraser.stlouisfed.org/docs/publications/usbudget/usbudget_1923a.pdf'
loader = PyPDFLoader(myurl)
pages = loader.load()
```
### Error Message and Stack Trace (if applicable)
```
....lib/python3.12/site-packages/langchain_community/document_loaders/pdf.py:199: ResourceWarning: unclosed file <_io.BufferedReader name='/var/folders/5n/_zzhgwqd2pqdbk6t3hckrsnh0000gn/T/tmpz1ilifhb/tmp.pdf'>
  blob = Blob.from_data(open(self.file_path, "rb").read(), path=self.web_path)  # type: ignore[attr-defined]
Object allocated at (most recent call last):
  File "/Users/blabla/.local/share/virtualenvs/llm-narrative-restrict-concept-IiXDtsX5/lib/python3.12/site-packages/langchain_community/document_loaders/pdf.py", lineno 199
    blob = Blob.from_data(open(self.file_path, "rb").read(), path=self.web_path)  # type: ignore[attr-defined]
```
### Description
I am trying to use PyPDFLoader for importing pdf files from the internet. Sometimes, I get a warning that slows down the reading of PDFs. Dosu suggested that the warning can be fixed by changing the code in the package. See the discussion https://github.com/langchain-ai/langchain/discussions/24972?notification_referrer_id=NT_kwDOAeiAOrQxMTc4MTAyNzI4MzozMjAxNDM5NA#discussioncomment-10223198
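For illustration, the pattern behind the warning and the suggested context-manager fix can be reproduced with a throwaway file. This is a sketch of the idea only, not the actual `pdf.py` code:

```python
import tempfile

# `open(path, "rb").read()` leaves closing the handle to the garbage
# collector, which is what triggers the ResourceWarning. A `with` block
# closes the file deterministically instead.
with tempfile.NamedTemporaryFile(suffix=".pdf", delete=False) as tmp:
    tmp.write(b"%PDF-1.4 fake content")
    path = tmp.name

with open(path, "rb") as f:  # closed as soon as the block exits
    data = f.read()

print(f.closed, data.startswith(b"%PDF"))  # -> True True
```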
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:16:51 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T8103
> Python Version: 3.12.1 (v3.12.1:2305ca5144, Dec 7 2023, 17:23:38) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.2.27
> langchain: 0.2.12
> langchain_community: 0.2.10
> langsmith: 0.1.96
> langchain_huggingface: 0.0.3
> langchain_openai: 0.1.20
> langchain_text_splitters: 0.2.2
> langchain_weaviate: 0.0.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Fix warning for unclosed document when using PyPDFLoader for URLs | https://api.github.com/repos/langchain-ai/langchain/issues/24973/comments | 0 | 2024-08-02T12:22:05Z | 2024-08-02T12:24:45Z | https://github.com/langchain-ai/langchain/issues/24973 | 2,444,844,534 | 24,973
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```py
from langchain.chat_models import init_chat_model
from langchain.chat_models.base import _check_pkg
_check_pkg("langchain_ollama") # success
_check_pkg("langchain_community") # success
model = init_chat_model("llama3.1:8b", model_provider="ollama")
print(type(model)) # <class 'langchain_community.chat_models.ollama.ChatOllama'>
```
### Description
When I install langchain_ollama and langchain_community at the same time, `init_chat_model` ends up using langchain_community first.
I think this is unreasonable: I installed langchain_ollama precisely because I want it to take priority.
The current code snippet is as follows. When both packages are present, langchain_community overrides langchain_ollama.
```py
elif model_provider == "ollama":
    try:
        _check_pkg("langchain_ollama")
        from langchain_ollama import ChatOllama
    except ImportError:
        pass

    # For backwards compatibility
    try:
        _check_pkg("langchain_community")
        from langchain_community.chat_models import ChatOllama
    except ImportError:
        # If both langchain-ollama and langchain-community aren't available, raise
        # an error related to langchain-ollama
        _check_pkg("langchain_ollama")

    return ChatOllama(model=model, **kwargs)
```
I think the langchain_community import should be moved inside the `except ImportError` branch for langchain_ollama (where the `pass` currently is), so it is only used as a fallback.
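The requested behavior is a plain "first importable wins" priority. A generic sketch of that pattern (illustrative only — `import_first_available` is not a LangChain helper, and `json` stands in for the fallback package here):

```python
import importlib

def import_first_available(candidates):
    """Return the first importable module, so earlier names take priority."""
    for name in candidates:
        try:
            return importlib.import_module(name)
        except ImportError:
            continue
    raise ImportError(f"none of {candidates} is installed")

# The fallback is only consulted because the preferred name is absent:
mod = import_first_available(["definitely_missing_pkg_xyz", "json"])
print(mod.__name__)  # -> json
```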
### System Info
langchain==0.2.12
langchain-community==0.2.10
langchain-core==0.2.27
langchain-ollama==0.1.1
platform==linux
python-version==3.10.12
| The import priority of init_chat_model for the ollama package | https://api.github.com/repos/langchain-ai/langchain/issues/24970/comments | 1 | 2024-08-02T11:34:30Z | 2024-08-02T15:34:46Z | https://github.com/langchain-ai/langchain/issues/24970 | 2,444,768,235 | 24,970 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
athena_loader = AthenaLoader(
    query=f"SELECT 1",
    database="default",
    s3_output_uri="s3://fake-bucket/fake-prefix",
    profile_name=None,
)
```
This code works but the type hinting is incorrect which results in error warnings from type checkers.
### Error Message and Stack Trace (if applicable)

### Description
The Athena loader has code to handle a missing profile, so I think the parameter should be an optional kwarg like this:
```python
profile_name: Optional[str] = None,
```
The code here shows that `None` is actually handled and is a valid input.
https://github.com/langchain-ai/langchain/blob/d7688a4328f5d66f3b274db6e7b024a24b15cc8e/libs/community/langchain_community/document_loaders/athena.py#L62-L67
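A minimal sketch of why `Optional[str] = None` is the right annotation here — `None` is a legitimate value meaning "use default credentials" (the names below are illustrative, not the Athena loader code):

```python
from typing import Optional

def make_session(profile_name: Optional[str] = None) -> str:
    # Mirrors the loader's branching: a profile is only used when given.
    if profile_name is not None:
        return f"Session(profile={profile_name})"
    return "Session(default)"

print(make_session())       # passing nothing is now type-checker clean
print(make_session("dev"))
```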
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
> Python Version: 3.11.4 (main, Mar 26 2024, 16:28:52) [Clang 15.0.0 (clang-1500.3.9.4)]
Package Information
-------------------
> langchain_core: 0.2.25
> langchain: 0.2.11
> langchain_community: 0.2.10
> langsmith: 0.1.94
> langchain_openai: 0.1.19
> langchain_postgres: 0.0.9
> langchain_text_splitters: 0.2.2
> langgraph: 0.1.17
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langserve
| AthenaLoader profile type hinting is incorrect | https://api.github.com/repos/langchain-ai/langchain/issues/24957/comments | 1 | 2024-08-02T04:19:24Z | 2024-08-05T19:46:05Z | https://github.com/langchain-ai/langchain/issues/24957 | 2,443,993,898 | 24,957 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
infinity server:
`docker run -it --gpus all -v ~/llms/:/app/.cache -p 8000:8000 michaelf34/infinity:latest v2 --model-id "/app/.cache/multilingual-e5-large" --port 8000`
```python3
import asyncio

from langchain_community.embeddings import InfinityEmbeddings

async def main():
    infinity_api_url = "http://<URL>:8000"
    embeddings = InfinityEmbeddings(
        model=".cache/multilingual-e5-large", infinity_api_url=infinity_api_url
    )
    query = "Where is Paris?"
    query_result = await embeddings.aembed_query(query)

asyncio.run(main())
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
File "/home/lu-marchenkov@uc.local/RAG_test/local.py", line 30, in <module>
asyncio.run(main())
File "/usr/lib/python3.9/asyncio/runners.py", line 44, in run
return loop.run_until_complete(main)
File "/usr/lib/python3.9/asyncio/base_events.py", line 642, in run_until_complete
return future.result()
File "/home/lu-marchenkov@uc.local/RAG_test/local.py", line 16, in main
query_result = await embeddings.aembed_query(query)
File "/home/lu-marchenkov@uc.local/.local/lib/python3.9/site-packages/langchain_community/embeddings/infinity.py", line 115, in aembed_query
embeddings = await self.aembed_documents([text])
File "/home/lu-marchenkov@uc.local/.local/lib/python3.9/site-packages/langchain_community/embeddings/infinity.py", line 89, in aembed_documents
embeddings = await self.client.aembed(
File "/home/lu-marchenkov@uc.local/.local/lib/python3.9/site-packages/langchain_community/embeddings/infinity.py", line 315, in aembed
*[
File "/home/lu-marchenkov@uc.local/.local/lib/python3.9/site-packages/langchain_community/embeddings/infinity.py", line 316, in <listcomp>
self._async_request(
TypeError: _async_request() got an unexpected keyword argument 'url'
```
### Description
This is a minimal example; in general, some applications using FAISS fail as well, with the same error.
```python3
db = FAISS.load_local("some_vectorstore",
                      embeddings,
                      allow_dangerous_deserialization=True)
retriever = db.as_retriever(search_kwargs={"k": 2})

result = await retriever.ainvoke(query)  # or await db.asimilarity_search(query)
```
Everything works very well with no async.
Thank you!
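The traceback boils down to a keyword-name mismatch between caller and callee. A minimal reproduction of that failure class (the names here are illustrative, not the actual Infinity client code):

```python
import asyncio

async def _async_request(session, uri, params):  # parameter is `uri`, not `url`
    return None

async def caller():
    # Passing `url=` against a signature that has no such parameter:
    await _async_request(session=None, url="http://x", params={})

try:
    asyncio.run(caller())
    err = ""
except TypeError as e:
    err = str(e)

print(err)  # mentions an unexpected keyword argument 'url'
```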
### System Info
langchain==0.2.10
langchain-community==0.2.9
langchain-core==0.2.22
langchain-openai==0.1.17
langchain-text-splitters==0.2.2
Debian 5.10.197-1 (2023-09-29) x86_64 GNU/Linux
Python 3.9.2 | InfinityEmbeddings do not work properly in asynchronous mode (aembed fails with an error) | https://api.github.com/repos/langchain-ai/langchain/issues/24942/comments | 0 | 2024-08-01T16:45:44Z | 2024-08-02T06:41:07Z | https://github.com/langchain-ai/langchain/issues/24942 | 2,442,912,548 | 24,942
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/tutorials/chatbot/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Initial setup has this:
```
model = AzureChatOpenAI(
azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],
)
```
Everything works normally until I get to this section of the documentation:
```
from langchain_core.messages import SystemMessage, trim_messages
trimmer = trim_messages(
max_tokens=65,
strategy="last",
token_counter=model,
include_system=True,
allow_partial=False,
start_on="human",
)
messages = [
SystemMessage(content="you're a good assistant"),
HumanMessage(content="hi! I'm bob"),
AIMessage(content="hi!"),
HumanMessage(content="I like vanilla ice cream"),
AIMessage(content="nice"),
HumanMessage(content="whats 2 + 2"),
AIMessage(content="4"),
HumanMessage(content="thanks"),
AIMessage(content="no problem!"),
HumanMessage(content="having fun?"),
AIMessage(content="yes!"),
]
trimmer.invoke(messages)
```
This fails with an Attribute Error: None has no Attribute startswith
I was able to fix this error by adding the following into my model setup:
```
model = AzureChatOpenAI(
model_name=os.environ["AZURE_OPENAI_MODEL_NAME"],
azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
openai_api_version=os.environ["AZURE_OPENAI_API_VERSION"],
)
```
### Idea or request for content:
_No response_ | DOC: <Issue related to /v0.2/docs/tutorials/chatbot/> - trimmer failing without model_name being filled in | https://api.github.com/repos/langchain-ai/langchain/issues/24928/comments | 0 | 2024-08-01T15:01:47Z | 2024-08-01T15:04:28Z | https://github.com/langchain-ai/langchain/issues/24928 | 2,442,703,142 | 24,928 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.pydantic_v1 import BaseModel, Field
class ModelA(BaseModel):
field_a: str = Field(description='Base class field')
class ModelB(ModelA):
field_b: str = Field(description='Subclass class field')
mytool = tool(func, args_schema=ModelB)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Hi,
I noticed that in the current version of langchain_core, tools using an args_schema have incomplete inputs if the schema is derived from a superclass. That is because as of recently there is a property `tool_call_schema`, which creates a schema with only "non-injected" fields: https://github.com/langchain-ai/langchain/blob/master/libs/core/langchain_core/tools.py#L387
However, it derives the field names to be retained from the `__annotations__` property of the schema, which does not inherit fields of the base class. Hence, all fields from the base class (ModelA in the example above) are deleted. This causes incomplete tool inputs when using schemas that use inheritance.
Is this a regression or should the schemas be used differently?
Thanks!
Valentin
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Mon Jul 15 21:39:34 UTC 2024
> Python Version: 3.11.4 (main, Jul 30 2024, 10:36:58) [GCC 14.1.1 20240522]
Package Information
-------------------
> langchain_core: 0.2.26
> langchain: 0.2.11
> langchain_community: 0.2.10
> langsmith: 0.1.94
> langchain_openai: 0.1.19
> langchain_text_splitters: 0.2.2
> langserve: 0.2.2 | BaseTool's `tool_call_schema` ignores inherited fields of an `args_schema`, causing incomplete tool inputs | https://api.github.com/repos/langchain-ai/langchain/issues/24925/comments | 2 | 2024-08-01T12:44:58Z | 2024-08-02T19:37:14Z | https://github.com/langchain-ai/langchain/issues/24925 | 2,442,363,033 | 24,925 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
AGENT_PROMPT = """
{tool_names}

Valid action values: "Final Answer" or {tools}

Follow this format, example:
Question: the input question you must answer
Thought: you should always think about what to do
Action(Tool): the action to take
Action Input(Tool Input): the input to the action
Observation: the result of the action
Thought: I now know the final answer
Final Answer: the final answer to the original input question
"""

langchain_llm_client = ChatOpenAI(
    model='gpt-4o',
    temperature=0.,
    api_key=OPENAI_API_KEY,
    streaming=True,
    max_tokens=None,
)

@tool
async def test():
    """Test tool"""
    return f'Test Successfully.\n'

tools = [test]

agent = create_tool_calling_agent(langchain_llm_client, tools, AGENT_PROMPT)
agent_executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=False,
    return_intermediate_steps=True
)

async def agent_completion_async(
    agent_executor,
    message: str,
    tools: List = None,
) -> AsyncGenerator:
    """Base on query to decide the tool which should use.
    Response with `async` and `streaming`.
    """
    tool_names = [tool.name for tool in tools]
    async for event in agent_executor.astream_events(
        {
            "input": message,
            "tools": tools,
            "tool_names": tool_names,
            "agent_scratchpad": lambda x: format_to_openai_tool_messages(x["intermediate_steps"]),
        },
        version='v2'
    ):
        kind = event['event']
        if kind == "on_chain_start":
            if (
                event["name"] == "Agent"
            ):
                yield (
                    f"\n### Start Agent: `{event['name']}`, Agent Input: `{event['data'].get('input')}`\n"
                )
        elif kind == "on_chat_model_stream":
            # llm model response
            content = event["data"]["chunk"].content
            if content:
                yield content
        elif kind == "on_tool_start":
            yield (
                f"\n### Start Tool: `{event['name']}`, Tool Input: `{event['data'].get('input')}`\n"
            )
        elif kind == "on_tool_end":
            yield (
                f"\n### Finished Tool: `{event['name']}`, Tool Results: \n"
            )
            if isinstance(event['data'].get('output'), AsyncGenerator):
                async for event_chunk in event['data'].get('output'):
                    yield event_chunk
            else:
                yield (
                    f"`{event['data'].get('output')}`\n"
                )
        elif kind == "on_chain_end":
            if (
                event["name"] == "Agent"
            ):
                yield (
                    f"\n### Finished Agent: `{event['name']}`, Agent Results: \n"
                )
                yield (
                    f"{event['data'].get('output')['output']}\n"
                )

async def main():
    async for response in agent_completion_async(agent_executor, 'use test tool', tools):
        print(response)
```
### Results
```
Question: use test tool
Thought: I should use the test tool to fulfill the user's request.
Action(Tool): test
Action Input(Tool Input): {}
Observation: The test tool has been executed successfully.
Thought: I now know the final answer.
Final Answer: The test tool has been executed successfully.
```
### Error Message and Stack Trace (if applicable)
```
Exception ignored in: <async_generator object AgentExecutorIterator.__aiter__ at 0x0000024953FD6D40>
Traceback (most recent call last):
File "C:\Users\\AppData\Local\Programs\Python\Python311\Lib\site-packages\langchain\agents\agent.py", line 1794, in astream
yield step
RuntimeError: async generator ignored GeneratorExit
```
### Description
When using the agent astream, it sometimes executes successfully, but other times it encounters errors and doesn't execute the tool as expected.
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.11.8 (tags/v3.11.8:db85d51, Feb 6 2024, 22:03:32) [MSC v.1937 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.23
> langchain: 0.2.11
> langchain_community: 0.2.10
> langsmith: 0.1.86
> langchain_anthropic: 0.1.20
> langchain_openai: 0.1.17
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | RuntimeError: async generator ignored GeneratorExit when using agent `astream` | https://api.github.com/repos/langchain-ai/langchain/issues/24914/comments | 0 | 2024-08-01T03:14:40Z | 2024-08-09T14:04:07Z | https://github.com/langchain-ai/langchain/issues/24914 | 2,441,355,256 | 24,914 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
## Issue
To make our retriever integrations as easy to use as possible we need to make sure the docs for them are thorough and standardized. There are two parts to this: updating the retriever docstrings and updating the actual integration docs.
This needs to be done for each retriever integration, ideally with one PR per retriever.
Related to broader issues #21983 and #22005.
## Docstrings
Each retriever class docstring should have the sections shown in the [Appendix](#appendix) below. The sections should have input and output code blocks when relevant.
To build a preview of the API docs for the package you're working on run (from root of repo):
```bash
make api_docs_clean; make api_docs_quick_preview API_PKG=community
```
where `API_PKG=` should be the parent directory that houses the edited package (e.g. "community" for `langchain-community`).
## Doc pages
Each retriever [docs page](https://python.langchain.com/v0.2/docs/integrations/retrievers/) should follow [this template](https://github.com/langchain-ai/langchain/blob/master/libs/cli/langchain_cli/integration_template/docs/retrievers.ipynb).
See example [here](https://github.com/langchain-ai/langchain/blob/master/docs/docs/integrations/retrievers/tavily.ipynb).
You can use the `langchain-cli` to quickly get started with a new integration docs page (run from root of repo):
```bash
poetry run pip install -e libs/cli
poetry run langchain-cli integration create-doc --name "foo-bar" --name-class FooBar --component-type retriever --destination-dir ./docs/docs/integrations/retrievers/
```
where `--name` is the integration package name without the "langchain-" prefix and `--name-class` is the class name without the "Retriever" postfix. This will create a template doc with some autopopulated fields at docs/docs/integrations/retrievers/foo_bar.ipynb.
To build a preview of the docs you can run (from root):
```bash
make docs_clean
make docs_build
cd docs/build/output-new
yarn
yarn start
```
## Appendix
Expected sections for the retriever class docstring.
```python
"""__ModuleName__ retriever.
# TODO: Replace with relevant packages, env vars, etc.
Setup:
    Install ``__package_name__`` and set environment variable
    ``__MODULE_NAME___API_KEY``.

    .. code-block:: bash

        pip install -U __package_name__
        export __MODULE_NAME___API_KEY="your-api-key"

# TODO: Populate with relevant params.
Key init args:
    arg 1: type
        description
    arg 2: type
        description

# TODO: Replace with relevant init params.
Instantiate:
    .. code-block:: python

        from __package_name__ import __ModuleName__Retriever

        retriever = __ModuleName__Retriever(
            # ...
        )

Usage:
    .. code-block:: python

        query = "..."

        retriever.invoke(query)

    .. code-block:: python

        # TODO: Example output.

Use within a chain:
    .. code-block:: python

        from langchain_core.output_parsers import StrOutputParser
        from langchain_core.prompts import ChatPromptTemplate
        from langchain_core.runnables import RunnablePassthrough
        from langchain_openai import ChatOpenAI

        prompt = ChatPromptTemplate.from_template(
            \"\"\"Answer the question based only on the context provided.

        Context: {context}

        Question: {question}\"\"\"
        )

        llm = ChatOpenAI(model="gpt-3.5-turbo-0125")

        def format_docs(docs):
            return "\n\n".join(doc.page_content for doc in docs)

        chain = (
            {"context": retriever | format_docs, "question": RunnablePassthrough()}
            | prompt
            | llm
            | StrOutputParser()
        )

        chain.invoke("...")

    .. code-block:: python

        # TODO: Example output.

"""  # noqa: E501
```
See example [here](https://github.com/langchain-ai/langchain/blob/a24c445e027cfa5893f99f772fc19dd3e4b28b2e/libs/community/langchain_community/retrievers/tavily_search_api.py#L18). | Standardize retriever integration docs | https://api.github.com/repos/langchain-ai/langchain/issues/24908/comments | 0 | 2024-07-31T22:14:31Z | 2024-07-31T22:17:02Z | https://github.com/langchain-ai/langchain/issues/24908 | 2,441,035,254 | 24,908 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
.
.
.
from langchain.agents import AgentExecutor, OpenAIFunctionsAgent, create_openai_functions_agent
from langchain.agents.agent_toolkits import create_retriever_tool
from langchain.agents.openai_functions_agent.agent_token_buffer_memory import AgentTokenBufferMemory
from langchain.retrievers import ContextualCompressionRetriever
from langchain.retrievers.document_compressors import LLMChainExtractor
from langchain.schema.messages import SystemMessage
from langchain_core.prompts.chat import MessagesPlaceholder
from langchain_openai.chat_models import ChatOpenAI
.
.
.

@cl.on_chat_start
async def start():
    memory_key = 'history'
    prompt = OpenAIFunctionsAgent.create_prompt(
        system_message=SystemMessage(content=Cu.get_system_prompt()),
        extra_prompt_messages=[MessagesPlaceholder(variable_name=memory_key)],
    )
    cl.user_session.set('chain',
        AgentExecutor(
            agent=create_openai_functions_agent(__llm, __tools, prompt),
            tools=__tools,
            verbose=__verbose,
            memory=AgentTokenBufferMemory(memory_key=memory_key, llm=__llm),
            return_intermediate_steps=True
        ))
.
.
.

@cl.on_message
async def main(cl_message):
    response = await cl.make_async(__process_message)(cl_message.content)
    .
    .
    .
    await cl.Message(
        content=response['output'],
    ).send()

def __process_message(message):
    .
    .
    .
    else:
        if __single_collection:
            response = __get_response(message)
    .
    .
    .
    return response

def __get_response(message):
    chain = cl.user_session.get('chain')
    cb = cl.LangchainCallbackHandler(
        stream_final_answer=True,
        answer_prefix_tokens=['FINAL', 'ANSWER']
    )
    cb.answer_reached = True
    return chain.invoke(
        {'input': message},
        callbacks=[cb]
    )
```
### Error Message and Stack Trace (if applicable)
```
File "/aiui/app.py", line 148, in __process_message
    response = __get_response(message)
               ^^^^^^^^^^^^^^^^^^^^^^^
File "/aiui/app.py", line 189, in __get_response
    return chain.invoke(
           ^^^^^^^^^^^^^
File "/aiui/venv/lib/python3.12/site-packages/langchain/chains/base.py", line 166, in invoke
    raise e
File "/aiui/venv/lib/python3.12/site-packages/langchain/chains/base.py", line 161, in invoke
    final_outputs: Dict[str, Any] = self.prep_outputs(
                                    ^^^^^^^^^^^^^^^^^^
File "/aiui/venv/lib/python3.12/site-packages/langchain/chains/base.py", line 460, in prep_outputs
    self.memory.save_context(inputs, outputs)
File "/aiui/venv/lib/python3.12/site-packages/langchain/agents/openai_functions_agent/agent_token_buffer_memory.py", line 97, in save_context
    curr_buffer_length = self.llm.get_num_tokens_from_messages(buffer)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/aiui/venv/lib/python3.12/site-packages/langchain_openai/chat_models/base.py", line 877, in get_num_tokens_from_messages
    num_tokens += len(encoding.encode(value))
                      ^^^^^^^^^^^^^^^^^^^^^^
File "/aiui/venv/lib/python3.12/site-packages/tiktoken/core.py", line 116, in encode
    if match := _special_token_regex(disallowed_special).search(text):
                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: expected string or buffer
```
### Description
My application is a ChatBot built for The New School and is currently in the POC stage.
I started to get the error above after upgrading my Langchain libraries.
After debugging the issue, I found the problem in the following class/method:
`langchain/libs/partners/openai/langchain_openai/chat_models/base.py`, method `get_num_tokens_from_messages`.
Changing **line 877** from
`num_tokens += len(encoding.encode(value))`
to
`num_tokens += len(encoding.encode(str(value)))`
fixes the issue.
**Line 875** has this comment:
`# Cast str(value) in case the message value is not a string`
but the cast itself never made it into the code.
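The failure can be reproduced without tiktoken: message `content` is sometimes a list of content blocks rather than a string, and a str-only encoder rejects it, while a `str(...)` cast sidesteps it. This is a sketch only — `encode` below is a stand-in, not tiktoken:

```python
def encode(text):
    """Stand-in for tiktoken's Encoding.encode, which accepts only str."""
    if not isinstance(text, str):
        raise TypeError("expected string or buffer")
    return [ord(c) for c in text]

value = [{"type": "text", "text": "hello"}]  # non-string message content

try:
    encode(value)
    raised = False
except TypeError:
    raised = True

tokens = encode(str(value))  # casting first succeeds
print(raised, len(tokens) > 0)  # -> True True
```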
Please note that, above, I replaced all irrelevant pieces of my code with lines of dots (`.`).
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 22.6.0: Wed Oct 4 21:26:23 PDT 2023; root:xnu-8796.141.3.701.17~4/RELEASE_ARM64_T6000
> Python Version: 3.12.4 (main, Jun 6 2024, 18:26:44) [Clang 15.0.0 (clang-1500.1.0.2.5)]
Package Information
-------------------
> langchain_core: 0.2.25
> langchain: 0.2.11
> langchain_community: 0.2.10
> langsmith: 0.1.94
> langchain_huggingface: 0.0.3
> langchain_openai: 0.1.19
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve
| get_num_tokens_from_messages method in langchain_openai/chat_models/base.py generates "TypeError: expected string or buffer" error | https://api.github.com/repos/langchain-ai/langchain/issues/24901/comments | 0 | 2024-07-31T21:00:06Z | 2024-07-31T21:02:40Z | https://github.com/langchain-ai/langchain/issues/24901 | 2,440,946,660 | 24,901 |
[
"langchain-ai",
"langchain"
] | To make our KV-store integrations as easy to use as possible we need to make sure the docs for them are thorough and standardized. There are two parts to this: updating the KV-store docstrings and updating the actual integration docs.
This needs to be done for each KV-store integration, ideally with one PR per KV-store.
Related to broader issues #21983 and #22005.
## Docstrings
Each KV-store class docstring should have the sections shown in the [Appendix](#appendix) below. The sections should have input and output code blocks when relevant.
To build a preview of the API docs for the package you're working on run (from root of repo):
```shell
make api_docs_clean; make api_docs_quick_preview API_PKG=openai
```
where `API_PKG=` should be the parent directory that houses the edited package (e.g. community, openai, anthropic, huggingface, together, mistralai, groq, fireworks, etc.). This should be quite fast for all the partner packages.
## Doc pages
Each KV-store [docs page](https://python.langchain.com/v0.2/docs/integrations/stores/) should follow [this template](https://github.com/langchain-ai/langchain/blob/master/libs/cli/langchain_cli/integration_template/docs/kv_store.ipynb).
Here is an example: https://python.langchain.com/v0.2/docs/integrations/stores/in_memory/
You can use the `langchain-cli` to quickly get started with a new chat model integration docs page (run from root of repo):
```shell
poetry run pip install -e libs/cli
poetry run langchain-cli integration create-doc --name "foo-bar" --name-class FooBar --component-type kv_store --destination-dir ./docs/docs/integrations/stores/
```
where `--name` is the integration package name without the "langchain-" prefix and `--name-class` is the class name without the "ByteStore" suffix. This will create a template doc with some autopopulated fields at docs/docs/integrations/stores/foo_bar.ipynb.
To build a preview of the docs you can run (from root):
```shell
make docs_clean
make docs_build
cd docs/build/output-new
yarn
yarn start
```
## Appendix
Expected sections for the KV-store class docstring.
```python
"""__ModuleName__ completion KV-store integration.
# TODO: Replace with relevant packages, env vars.
Setup:
    Install ``__package_name__`` and set environment variable
    ``__MODULE_NAME___API_KEY``.

    .. code-block:: bash

        pip install -U __package_name__
        export __MODULE_NAME___API_KEY="your-api-key"

# TODO: Populate with relevant params.
Key init args — client params:
    api_key: Optional[str]
        __ModuleName__ API key. If not passed in will be read from env var __MODULE_NAME___API_KEY.

See full list of supported init args and their descriptions in the params section.

# TODO: Replace with relevant init params.
Instantiate:
    .. code-block:: python

        from __module_name__ import __ModuleName__ByteStore

        kv_store = __ModuleName__ByteStore(
            # api_key="...",
            # other params...
        )

Set keys:
    .. code-block:: python

        kv_pairs = [
            ["key1", "value1"],
            ["key2", "value2"],
        ]

        kv_store.mset(kv_pairs)

    .. code-block:: python

Get keys:
    .. code-block:: python

        kv_store.mget(["key1", "key2"])

    .. code-block:: python

        # TODO: Example output.

Delete keys:
    .. code-block:: python

        kv_store.mdelete(["key1", "key2"])

    .. code-block:: python

"""  # noqa: E501
``` | Standardize KV-Store Docs | https://api.github.com/repos/langchain-ai/langchain/issues/24888/comments | 0 | 2024-07-31T17:28:17Z | 2024-07-31T21:41:15Z | https://github.com/langchain-ai/langchain/issues/24888 | 2,440,545,637 | 24,888 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
### user will read optional_variables as a way to make some of the
### variables in the template optional, as shown in this example. Here
### I intend to make greetings optional. Notice that this is inserted both in
### input_variables, which are mandatory, as well as optional_variables. See the output:
### user has to provide both 'user_input' as well as 'greetings' as keys in the input. Otherwise
### the code breaks. partial_variables works as intended.
template = ChatPromptTemplate([
        ("system", "You are a helpful AI bot. Your name is {bot_name}."),
        ("human", "Hello, how are you doing?"),
        ("ai", "{greetings}, I'm doing well, thanks!"),
        ("human", "{user_input}"),
    ],
    input_variables=['user_input'],
    optional_variables=["greetings"],
    partial_variables={"bot_name": "Monalisa"}
)
print(template)

final_input = {
    "user_input": "What is your name?"
}

try:
    prompt_value = template.invoke(final_input)
except Exception as e:
    print(e)
```

```
input_variables=['greetings', 'user_input'] optional_variables=['greetings'] partial_variables={'bot_name': 'Monalisa'} messages=[SystemMessagePromptTemplate(prompt=PromptTemplate(input_variables=['bot_name'], template='You are a helpful AI bot. Your name is {bot_name}.')), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=[], template='Hello, how are you doing?')), AIMessagePromptTemplate(prompt=PromptTemplate(input_variables=['greetings'], template="{greetings}, I'm doing well, thanks!")), HumanMessagePromptTemplate(prompt=PromptTemplate(input_variables=['user_input'], template='{user_input}'))]
"Input to ChatPromptTemplate is missing variables {'greetings'}. Expected: ['greetings', 'user_input'] Received: ['user_input']"
```
### Error Message and Stack Trace (if applicable)
"Input to ChatPromptTemplate is missing variables {'greetings'}. Expected: ['greetings', 'user_input'] Received: ['user_input']"
### Description
as shown in the above section
[opt-vars-chat-template.pdf](https://github.com/user-attachments/files/16444392/opt-vars-chat-template.pdf)
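Until `optional_variables` is honored by `ChatPromptTemplate`, a workaround consistent with this report (which notes that `partial_variables` works as intended) is to give the optional variable a default. Below is a stdlib-only sketch of that defaulting semantics; it illustrates the intended behavior and is not the langchain API:

```python
# Stdlib sketch: variables declared optional fall back to a default
# before formatting, so the caller may omit them (illustrative only).
def format_prompt(template, inputs, optional=(), defaults=None):
    merged = {name: "" for name in optional}  # optional -> empty default
    merged.update(defaults or {})
    merged.update(inputs)                     # caller-supplied values win
    return template.format(**merged)

text = format_prompt(
    "{bot_name}: {greetings} {user_input}",
    {"user_input": "What is your name?"},
    optional=["greetings"],
    defaults={"bot_name": "Monalisa"},
)
```

With langchain itself, the same effect can presumably be had by also listing the optional variable in `partial_variables` with an empty-string default.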
### System Info
langchain 0.2.11
langchain-community 0.2.10
langchain-core 0.2.25
langchain-experimental 0.0.63
langchain-openai 0.1.17
langchain-text-splitters 0.2.2 | optional_variables argument in ChatPromptTemplate is not effective | https://api.github.com/repos/langchain-ai/langchain/issues/24884/comments | 4 | 2024-07-31T16:01:07Z | 2024-08-05T23:57:46Z | https://github.com/langchain-ai/langchain/issues/24884 | 2,440,401,099 | 24,884 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from langchain.schema import HumanMessage
from langchain_openai import AzureChatOpenAI
llm = AzureChatOpenAI(
base_url="BASE_URL" + "/deployments/" + "gpt-4v",
openai_api_version = "2024-02-01",
api_key="API-KEY"
)
message = HumanMessage(content="""{
"role": "system",
"content": "You are a helpful assistant and can help with identifying or making assumptions about content in images."
},
{
"role": "user",
"content": [
{
"type": "text",
"text": "Describe this picture:"
},
{
"type": "image_url",
"image_url": {
"url": "https://upload.wikimedia.org/wikipedia/commons/thumb/e/e3/Plains_Zebra_Equus_quagga.jpg/800px-Plains_Zebra_Equus_quagga.jpg"
}
}
]
}""")
print(llm.invoke([message]))
```
### Error Message and Stack Trace (if applicable)
This leads to the following error:
<b>
openai.BadRequestError: Error code: 400 - {'error': {'message': '1 validation error for Request\nbody -> logprobs\n extra fields not permitted (type=value_error.extra)', 'type': 'invalid_request_error', 'param': None, 'code': None}}
</b>
### Description
The error only occurs when using langchain-openai>=0.1.17 and can be attributed to the following PR: https://github.com/langchain-ai/langchain/pull/23691
Here, the parameter logprobs is added to requests per default.
However, AzureOpenAI takes issue with this parameter as stated here: https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/chatgpt?tabs=python-new&pivots=programming-language-chat-completions -> "If you set any of these parameters, you get an error."
(Using langchain-openai<=0.1.16 or even adding a # comment in front of the logprobs addition in the site-package file circumvents the issue)
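A stop-gap consistent with the note above is to pin the client below 0.1.17 until a fix ships, e.g. in `requirements.txt`:

```text
langchain-openai<=0.1.16
```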
### System Info
langchain==0.2.11
langchain-core==0.2.25
langchain-mistralai==0.1.11
langchain-openai==0.1.19
langchain-text-splitters==0.2.2 | langchain-openai>=0.1.17 adds logprobs parameter to gpt-4vision requests which leads to an error in AzureOpenAI | https://api.github.com/repos/langchain-ai/langchain/issues/24880/comments | 3 | 2024-07-31T13:38:07Z | 2024-08-09T13:32:43Z | https://github.com/langchain-ai/langchain/issues/24880 | 2,440,087,663 | 24,880 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
## Issue
To make our Embeddings integrations as easy to use as possible we need to make sure the docs for them are thorough and standardized. There are two parts to this: updating the embeddings docstrings and updating the actual integration docs.
This needs to be done for each embeddings integration, ideally with one PR per embedding provider.
Related to broader issues #21983 and #22005.
## Docstrings
Each Embeddings class docstring should have the sections shown in the [Appendix](#appendix) below. The sections should have input and output code blocks when relevant.
To build a preview of the API docs for the package you're working on run (from root of repo):
```bash
make api_docs_clean; make api_docs_quick_preview API_PKG=openai
```
where `API_PKG=` should be the parent directory that houses the edited package (e.g. community, openai, anthropic, huggingface, together, mistralai, groq, fireworks, etc.). This should be quite fast for all the partner packages.
## Doc pages
Each Embeddings [docs page](https://python.langchain.com/v0.2/docs/integrations/text_embedding/) should follow [this template](https://github.com/langchain-ai/langchain/blob/master/libs/cli/langchain_cli/integration_template/docs/text_embedding.ipynb).
- [ ] TODO(Erick): populate a complete example
You can use the `langchain-cli` to quickly get started with a new embeddings integration docs page (run from root of repo):
```bash
poetry run pip install -e libs/cli
poetry run langchain-cli integration create-doc --name "foo-bar" --name-class FooBar --component-type Embeddings --destination-dir ./docs/docs/integrations/text_embedding/
```
where `--name` is the integration package name without the "langchain-" prefix and `--name-class` is the class name without the "Embedding" prefix. This will create a template doc with some autopopulated fields at docs/docs/integrations/text_embedding/foo_bar.ipynb.
To build a preview of the docs you can run (from root):
```bash
make docs_clean
make docs_build
cd docs/build/output-new
yarn
yarn start
```
## Appendix
Expected sections for the Embedding class docstring.
```python
"""__ModuleName__ embedding model integration.
# TODO: Replace with relevant packages, env vars.
Setup:
Install ``__package_name__`` and set environment variable ``__MODULE_NAME___API_KEY``.
.. code-block:: bash
pip install -U __package_name__
export __MODULE_NAME___API_KEY="your-api-key"
# TODO: Populate with relevant params.
Key init args — embedding params:
model: str
Name of __ModuleName__ model to use.
See full list of supported init args and their descriptions in the params section.
# TODO: Replace with relevant init params.
Instantiate:
.. code-block:: python
from __module_name__ import __ModuleName__Embeddings
embed = __ModuleName__Embeddings(
model="...",
# api_key="...",
# other params...
)
Embed single text:
.. code-block:: python
input_text = "The meaning of life is 42"
embed.embed_query(input_text)
.. code-block:: python
# TODO: Example output.
# TODO: Delete if embedding multiple texts isn't supported.
Embed multiple texts:
.. code-block:: python
input_texts = ["Document 1...", "Document 2..."]
embed.embed_documents(input_texts)
.. code-block:: python
# TODO: Example output.
# TODO: Delete if native async isn't supported.
Async:
.. code-block:: python
await embed.aembed_query(input_text)
# multiple:
# await embed.aembed_documents(input_texts)
.. code-block:: python
# TODO: Example output.
"""
``` | Standardize Embeddings Docs | https://api.github.com/repos/langchain-ai/langchain/issues/24856/comments | 2 | 2024-07-31T02:16:56Z | 2024-07-31T22:05:16Z | https://github.com/langchain-ai/langchain/issues/24856 | 2,438,970,707 | 24,856 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
import os
from langchain_openai import AzureChatOpenAI
from langchain.callbacks import get_openai_callback
from langchain_core.tracers.context import collect_runs
from dotenv import load_dotenv
load_dotenv()
model = AzureChatOpenAI(azure_deployment="gpt-4", temperature=0)  # deployment and temperature inferred from the trace output below
with get_openai_callback() as cb:
result = model.invoke(["Hi"])
print(result.response_metadata['model_name'])
print("\n")
with collect_runs() as cb:
result = model.invoke(["Hi"])
print(result.response_metadata['model_name'],"\n")
print(cb.traced_runs[0].extra['invocation_params'])
output
```
gpt-4-turbo-2024-04-09
gpt-4-turbo-2024-04-09
{'model': 'gpt-3.5-turbo', 'azure_deployment': 'gpt-4', 'model_name': 'gpt-3.5-turbo', 'stream': False, 'n': 1, 'temperature': 0.0, '_type': 'azure-openai-chat', 'stop': None}
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Following is the screenshot of the issue

### System Info
langchain = "^0.2.5"
langchain-community = "^0.2.5"
langchain-openai = "^0.1.9" | Error in trace, Trace for AzureChatOpenAI with gpt-4-turbo-2024-04-09 is not correct | https://api.github.com/repos/langchain-ai/langchain/issues/24838/comments | 2 | 2024-07-30T20:09:04Z | 2024-07-31T15:19:06Z | https://github.com/langchain-ai/langchain/issues/24838 | 2,438,595,293 | 24,838 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
def chat_with_search_engine_and_knowledgebase(self, history: list[dict], message: str):
history.append({
"role": "user",
"content": message,
})
self.logger.info(f"收到一个浏览器对话的请求,prompt:{message}")
chat_completion = self.client.chat.completions.create(
messages=history,
model=MODEL_NAME,
stream=False,
tools=['search_internet','search_local_knowledgebase'],
timeout=MODEL_OUT_TIMEOUT,
)
response = chat_completion.choices[0].message.content
self.logger.info(f"模型回答为:{response}")
return response
history = []
message = "请根据知识库,推荐5个我可能喜欢的电影,给出我一个json格式的list,每个元素里面包含一个title和一个reason,title是电影的名字,reason是推荐的原因,推荐原因用一句话说明即可,不要有额外的内容。例如你应该输出:[{"title":"标题","reason":"原因"}]"
```
### Error Message and Stack Trace (if applicable)
Agent stopped due to iteration limit or time limit.
### Description
Given a single input sentence, the model keeps calling the same agent over and over until it fails with "Agent stopped due to iteration limit or time limit."
The prompt used is: 请根据知识库,推荐5个我可能喜欢的电影,给出我一个json格式的list,每个元素里面包含一个title和一个reason,title是电影的名字,reason是推荐的原因,推荐原因用一句话说明即可,不要有额外的内容。例如你应该输出:[{"title":"标题","reason":"原因"}]


### System Info
langchain-chatchat:0.3.1.3
platform: linux
python:3.11.7 | Certain prompts make the model call the same agent repeatedly | https://api.github.com/repos/langchain-ai/langchain/issues/24828/comments | 0 | 2024-07-30T17:25:28Z | 2024-07-30T17:27:59Z | https://github.com/langchain-ai/langchain/issues/24828 | 2,438,327,384 | 24,828
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
## Issue
To make our toolkit integrations as easy to use as possible we need to make sure the docs for them are thorough and standardized. There are two parts to this: updating the toolkit docstrings and updating the actual integration docs.
This needs to be done for each toolkit integration, ideally with one PR per toolkit.
Related to broader issues #21983 and #22005.
## Docstrings
Each toolkit class docstring should have the sections shown in the [Appendix](#appendix) below. The sections should have input and output code blocks when relevant.
To build a preview of the API docs for the package you're working on run (from root of repo):
```bash
make api_docs_clean; make api_docs_quick_preview API_PKG=community
```
where `API_PKG=` should be the parent directory that houses the edited package (e.g. "community" for `langchain-community`).
## Doc pages
Each toolkit [docs page](https://python.langchain.com/v0.2/docs/integrations/toolkits/) should follow [this template](https://github.com/langchain-ai/langchain/blob/master/libs/cli/langchain_cli/integration_template/docs/toolkits.ipynb).
See example [here](https://github.com/langchain-ai/langchain/blob/master/docs/docs/integrations/toolkits/sql_database.ipynb).
You can use the `langchain-cli` to quickly get started with a new integration docs page (run from root of repo):
```bash
poetry run pip install -e libs/cli
poetry run langchain-cli integration create-doc --name "foo-bar" --name-class FooBar --component-type toolkit --destination-dir ./docs/docs/integrations/toolkits/
```
where `--name` is the integration package name without the "langchain-" prefix and `--name-class` is the class name without the "Toolkit" postfix. This will create a template doc with some autopopulated fields at docs/docs/integrations/toolkits/foo_bar.ipynb.
To build a preview of the docs you can run (from root):
```bash
make docs_clean
make docs_build
cd docs/build/output-new
yarn
yarn start
```
## Appendix
Expected sections for the toolkit class docstring.
```python
"""__ModuleName__ toolkit.
# TODO: Replace with relevant packages, env vars, etc.
Setup:
Install ``__package_name__`` and set environment variable ``__MODULE_NAME___API_KEY``.
.. code-block:: bash
pip install -U __package_name__
export __MODULE_NAME___API_KEY="your-api-key"
# TODO: Populate with relevant params.
Key init args:
arg 1: type
description
arg 2: type
description
# TODO: Replace with relevant init params.
Instantiate:
.. code-block:: python
from __package_name__ import __ModuleName__Toolkit
toolkit = __ModuleName__Toolkit(
# ...
)
Tools:
.. code-block:: python
toolkit.get_tools()
.. code-block:: python
# TODO: Example output.
Use within an agent:
.. code-block:: python
from langgraph.prebuilt import create_react_agent
agent_executor = create_react_agent(llm, tools)
example_query = "..."
events = agent_executor.stream(
{"messages": [("user", example_query)]},
stream_mode="values",
)
for event in events:
event["messages"][-1].pretty_print()
.. code-block:: python
# TODO: Example output.
"""
``` | Standardize Toolkit docs | https://api.github.com/repos/langchain-ai/langchain/issues/24820/comments | 0 | 2024-07-30T14:26:32Z | 2024-08-06T18:19:41Z | https://github.com/langchain-ai/langchain/issues/24820 | 2,437,982,131 | 24,820 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```py
import asyncio
from enum import Enum
from dotenv import load_dotenv
from langchain_core.output_parsers import PydanticToolsParser
from langchain_ollama import ChatOllama
from langchain_openai import ChatOpenAI
from pydantic.v1 import BaseModel, Field
load_dotenv()
class DateEnum(str, Enum):
first_day = "2024-10-10 10:00:00"
second_day = "2024-10-11 14:00:00"
third_day = "2024-10-12 14:00:00"
class SelectItem(BaseModel):
"""Confirm the user's choice based on the user's answer."""
item: DateEnum = Field(..., description="Select a date based on user responses")
tools = [SelectItem]
ollama_llm = ChatOllama(model="llama3.1:8b").bind_tools(tools)
openai_llm = ChatOpenAI(model="gpt-4o-mini").bind_tools(tools)
parser = PydanticToolsParser(tools=tools)
chain = ollama_llm | parser
fall_back_chain = openai_llm | parser
with_fallback_chain = chain.with_fallbacks([fall_back_chain])
messages = [
("ai", f"Which day is most convenient for you in {list(DateEnum)}?"),
("human", "30"),
]
async def main():
async for event in with_fallback_chain.astream_events(messages, version="v2"):
print(event) # It will not call fall_back
print("-" * 20)
print(await with_fallback_chain.ainvoke(messages)) # It will call fall_back
asyncio.run(main())
```
### Error Message and Stack Trace (if applicable)


### Description
ChatOllama won't use with_fallbacks when I use astream_events.
But it will use with_fallbacks when I use ainvoke.
My goal is to know which model produced this output.
When I connect PydanticToolsParser behind the model output, I can't seem to know who generated it. (it is hidden in the AIMessage of the intermediate model output).
So I wanted to take out the intermediate result from astream_events to determine who generated it.
Later I found that ChatOllama seems to be unable to call fall_back under astream_events? Is there a better solution?
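The semantics the report expects can be sketched in plain Python. This is an illustration of fallback behavior, not the langchain implementation, and tagging the producer is one way to answer the "which model generated it" question:

```python
# Stdlib sketch: run the primary, fall back on error, and tag the
# output with its producer so the caller can tell who generated it.
def with_fallbacks(primary, fallback):
    def run(x):
        try:
            return ("primary", primary(x))
        except Exception:
            return ("fallback", fallback(x))
    return run

def failing_model(x):
    raise ValueError("primary model rejected the input")

chain = with_fallbacks(failing_model, lambda x: f"answer to {x}")
source, result = chain("30")
```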
### System Info
langchain==0.2.11
langchain-core==0.2.24
langchain-ollama==0.1.0
langchain-openai==0.1.19
platform linux
python version = 3.10.12
| ChatOllama won't use with_fallbacks when I use astream_events. | https://api.github.com/repos/langchain-ai/langchain/issues/24816/comments | 0 | 2024-07-30T12:50:56Z | 2024-08-01T22:47:36Z | https://github.com/langchain-ai/langchain/issues/24816 | 2,437,766,390 | 24,816 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
## Issue
To make our LLM integrations as easy to use as possible we need to make sure the docs for them are thorough and standardized. There are two parts to this: updating the llm docstrings and updating the actual integration docs.
This needs to be done for each LLM integration, ideally with one PR per LLM.
Related to broader issues #21983 and #22005.
## Docstrings
Each LLM class docstring should have the sections shown in the [Appendix](#appendix) below. The sections should have input and output code blocks when relevant.
To build a preview of the API docs for the package you're working on run (from root of repo):
```bash
make api_docs_clean; make api_docs_quick_preview API_PKG=openai
```
where `API_PKG=` should be the parent directory that houses the edited package (e.g. community, openai, anthropic, huggingface, together, mistralai, groq, fireworks, etc.). This should be quite fast for all the partner packages.
## Doc pages
Each LLM [docs page](https://python.langchain.com/v0.2/docs/integrations/llms/) should follow [this template](https://github.com/langchain-ai/langchain/blob/master/libs/cli/langchain_cli/integration_template/docs/llms.ipynb).
- [ ] TODO(Erick): populate a complete example
You can use the `langchain-cli` to quickly get started with a new LLM integration docs page (run from root of repo):
```bash
poetry run pip install -e libs/cli
poetry run langchain-cli integration create-doc --name "foo-bar" --name-class FooBar --component-type LLM --destination-dir ./docs/docs/integrations/llms/
```
where `--name` is the integration package name without the "langchain-" prefix and `--name-class` is the class name without the "LLM" prefix. This will create a template doc with some autopopulated fields at docs/docs/integrations/llms/foo_bar.ipynb.
To build a preview of the docs you can run (from root):
```bash
make docs_clean
make docs_build
cd docs/build/output-new
yarn
yarn start
```
## Appendix
Expected sections for the LLM class docstring.
```python
"""__ModuleName__ completion model integration.
# TODO: Replace with relevant packages, env vars.
Setup:
Install ``__package_name__`` and set environment variable ``__MODULE_NAME___API_KEY``.
.. code-block:: bash
pip install -U __package_name__
export __MODULE_NAME___API_KEY="your-api-key"
# TODO: Populate with relevant params.
Key init args — completion params:
model: str
Name of __ModuleName__ model to use.
temperature: float
Sampling temperature.
max_tokens: Optional[int]
Max number of tokens to generate.
# TODO: Populate with relevant params.
Key init args — client params:
timeout: Optional[float]
Timeout for requests.
max_retries: int
Max number of retries.
api_key: Optional[str]
__ModuleName__ API key. If not passed in will be read from env var __MODULE_NAME___API_KEY.
See full list of supported init args and their descriptions in the params section.
# TODO: Replace with relevant init params.
Instantiate:
.. code-block:: python
from __module_name__ import __ModuleName__LLM
llm = __ModuleName__LLM(
model="...",
temperature=0,
max_tokens=None,
timeout=None,
max_retries=2,
# api_key="...",
# other params...
)
Invoke:
.. code-block:: python
input_text = "The meaning of life is "
llm.invoke(input_text)
.. code-block:: python
# TODO: Example output.
# TODO: Delete if token-level streaming isn't supported.
Stream:
.. code-block:: python
for chunk in llm.stream(input_text):
print(chunk)
.. code-block:: python
# TODO: Example output.
.. code-block:: python
''.join(llm.stream(input_text))
.. code-block:: python
# TODO: Example output.
# TODO: Delete if native async isn't supported.
Async:
.. code-block:: python
await llm.ainvoke(input_text)
# stream:
# async for chunk in llm.astream(input_text):
# batch:
# await llm.abatch([input_text])
.. code-block:: python
# TODO: Example output.
""" # noqa: E501
``` | Standardize LLM Docs | https://api.github.com/repos/langchain-ai/langchain/issues/24803/comments | 0 | 2024-07-30T00:48:34Z | 2024-07-31T16:55:59Z | https://github.com/langchain-ai/langchain/issues/24803 | 2,436,660,709 | 24,803 |
[
"langchain-ai",
"langchain"
] | ### Privileged issue
- [X] I am a LangChain maintainer, or was asked directly by a LangChain maintainer to create an issue here.
### Issue Content
## Issue
To make our vector store integrations as easy to use as possible we need to make sure the docs for them are thorough and standardized. There are two parts to this: updating the vector store docstrings and updating the actual integration docs.
This needs to be done for each VectorStore integration, ideally with one PR per VectorStore.
Related to broader issues #21983 and #22005.
## Docstrings
Each VectorStore class docstring should have the sections shown in the [Appendix](#appendix) below. The sections should have input and output code blocks when relevant.
To build a preview of the API docs for the package you're working on run (from root of repo):
```bash
make api_docs_clean; make api_docs_quick_preview API_PKG=openai
```
where `API_PKG=` should be the parent directory that houses the edited package (e.g. community, openai, anthropic, huggingface, together, mistralai, groq, fireworks, etc.). This should be quite fast for all the partner packages.
## Doc pages
Each VectorStore [docs page](https://python.langchain.com/v0.2/docs/integrations/vectorstores/) should follow [this template](https://github.com/langchain-ai/langchain/blob/master/libs/cli/langchain_cli/integration_template/docs/chat.ipynb). See [ChatOpenAI](https://python.langchain.com/v0.2/docs/integrations/chat/openai/) for an example.
You can use the `langchain-cli` to quickly get started with a new vector store integration docs page (run from root of repo):
```bash
poetry run pip install -e libs/cli
poetry run langchain-cli integration create-doc --name "foo-bar" --name-class FooBar --component-type VectorStore --destination-dir ./docs/docs/integrations/vectorstores/
```
where `--name` is the integration package name without the "langchain-" prefix and `--name-class` is the class name without the "VectorStore" prefix. This will create a template doc with some autopopulated fields at docs/docs/integrations/vectorstores/foo_bar.ipynb.
To build a preview of the docs you can run (from root):
```bash
make docs_clean
make docs_build
cd docs/build/output-new
yarn
yarn start
```
## Appendix
Expected sections for the VectorStore class docstring.
```python
"""__ModuleName__ vector store integration.
Setup:
...
Key init args - indexing params:
...
Key init args - client params:
...
See full list of supported init args and their descriptions in the params section.
Instantiate:
...
Add Documents:
...
Update Documents:
...
Delete Documents:
...
Search:
...
Search with score:
...
Use as Retriever:
...
""" # noqa: E501
``` | Standardize vector store docs | https://api.github.com/repos/langchain-ai/langchain/issues/24800/comments | 0 | 2024-07-30T00:10:32Z | 2024-08-02T15:35:20Z | https://github.com/langchain-ai/langchain/issues/24800 | 2,436,626,924 | 24,800 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
import os
import bs4
from langchain_community.document_loaders import WebBaseLoader,UnstructuredURLLoader,DirectoryLoader,TextLoader
from langchain_text_splitters import RecursiveCharacterTextSplitter,CharacterTextSplitter
# Create a WebBaseLoader or UnstructuredURLLoader instance to load documents from web sources
#directory_path="/opt/aiworkspase/langchain/HOMEwork/zjb/test3"
directory_path="/opt/aiworkspase/langchain/HOMEwork/zjb/articles"
docs=[]
chunk_size=500
chunk_overlap=50
# Load one file's full contents and split them into chunks
def load_text_from_path(file_path):
"""
Load text content from the given file path using TextLoader.
:param file_path: The path to the text file to be loaded.
:return: The content of the text file as a string.
"""
loader = TextLoader(file_path)
document = loader.load()
text_splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size, chunk_overlap=chunk_overlap)
# Split the documents into chunks using the text_splitter
doc_one = text_splitter.split_documents(document)
return doc_one
# Collect every file under a directory
def get_all_files_in_directory(directory_path):
"""
Get all files in the given directory and its subdirectories.
:param directory_path: The path to the directory.
:return: A list of paths to all files in the directory.
"""
all_files = []
for root, dirs, files in os.walk(directory_path):
for file in files:
file_path = os.path.join(root, file)
all_files.append(file_path)
return all_files
# Merge the chunk lists of all files under a directory
def process_files(directory_path):
docs_temp=[]
all_files=get_all_files_in_directory(directory_path)
"""
Process each file in the given list of file paths.
:param file_paths: A list of file paths.
"""
for file_path in all_files:
# 处理每个文件路径,并获取其分片数组
doc=load_text_from_path(file_path)
docs_temp.extend(doc)
return docs_temp
docs.extend(process_files(directory_path))
# Second pass: preprocess the files before loading them into the vector store
from langchain_milvus import Milvus, Zilliz
import time
def split_list(arr, n):
"""
Regroup the array into sub-arrays of n items each, and collect those sub-arrays into one outer array.
:param arr: the original array
:param n: number of items per sub-array
:return: the outer array of sub-arrays
"""
return [arr[i:i + n] for i in range(0, len(arr), n)]
doc_4=split_list(docs, 4)
# Once the chunks are ready, load them in batches
start=0
m=0
for doc_4_item in doc_4:
if m>start:
vectorstore = Milvus.from_documents( # or Zilliz.from_documents
documents=doc_4_item,
collection_name="cyol_zjb_1",
embedding=embeddings,
connection_args={
"uri": "/opt/aiworkspase/langchain/milvus_zjb_500_50_0729.db",
},
drop_old=False, # Drop the old Milvus collection if it exists
)
time.sleep(1)
m=m+1  # m was 3084 on the last run
# Run the retrieval query
from langchain_core.runnables import RunnablePassthrough
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser
#llm = ChatOpenAI(model_name="gpt-3.5-turbo", temperature=0)
retriever = vectorstore_slect.as_retriever()
template = """使用后面给的内容回答提出的问题。
如果给的内容里不知道答案,就说你不知道,不要试图编造答案。
最多使用三句话,并尽可能简洁地回答。
总是在答案的末尾说“谢谢你的提问!”。
给的内容:{context}
问题: {question}
有用的答案:"""
#rag_prompt = PromptTemplate.from_template(template)
rag_prompt = PromptTemplate(
template=template, input_variables=["context", "question"]
)
# 初始化输出解析器,将模型输出转换为字符串
output_parser = StrOutputParser()
rag_chain = (
{"context": retriever, "question": RunnablePassthrough()}
| rag_prompt
| model
|output_parser
)
#print(rag_chain.invoke("'宋宝颖是谁?多介绍一下他。"))尹学芸
#print(rag_chain.invoke("请告诉我孙家栋是干什么的?"))
#print(rag_chain.invoke("'尹学芸是谁?多介绍一下他。"))
print(rag_chain.invoke("'尹学芸"))
### Error Message and Stack Trace (if applicable)
Queries over fewer than 100 chunks return the expected results, but once the collection exceeds about 40,000 chunks they do not: the returned results have almost no similarity to the question.
query = "孙家栋"
vectorstore_slect.similarity_search(query, k=5)
On the large dataset, k needs to be >= 50 before the expected result is returned.
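One consistent reading of the numbers above, offered as an assumption: in a collection of 40,000+ chunks the relevant chunk simply ranks outside a small top-k cut-off, so raising k surfaces it. A stdlib sketch of that effect with toy distances:

```python
# Stdlib sketch: the relevant item exists but ranks ~61st by distance,
# so k=5 misses it while a larger k returns it (toy numbers).
def top_k(distances, k):
    return sorted(distances, key=distances.get)[:k]

distances = {f"doc{i}": float(i) for i in range(100)}
distances["target"] = 59.5  # relevant chunk, mediocre distance
```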
### Description
Could this be a problem with how the embedding model was configured earlier on?
### System Info
from langchain_core.runnables import RunnablePassthrough
from langchain_core.prompts import PromptTemplate
from langchain_core.output_parsers import StrOutputParser | Similarity Search Returns No Useful Results When Using Milvus | https://api.github.com/repos/langchain-ai/langchain/issues/24784/comments | 0 | 2024-07-29T14:59:33Z | 2024-07-29T15:02:15Z | https://github.com/langchain-ai/langchain/issues/24784 | 2,435,662,561 | 24,784
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/how_to/output_parser_retry/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
from langchain.output_parsers import RetryOutputParser, PydanticOutputParser
from langchain_core.prompts import PromptTemplate
from langchain_openai import OpenAI
from pydantic import BaseModel, Field
template = """Based on the user question, provide an name and the gender.
{format_instructions}
Question: {query}
Response:"""
from langchain.output_parsers import YamlOutputParser
class details(BaseModel):
name: str = Field(description="name of the person")
gender: str = Field(description="Gender of the person")
parser = PydanticOutputParser(pydantic_object=details)
prompt = PromptTemplate(template=template, input_variables=["query"], partial_variables={"format_instructions": parser.get_format_instructions()})
llm = OpenAI(temperature=0)  # assumed: the report does not show how llm was created
retry_parser = RetryOutputParser.from_llm(parser=parser, llm=OpenAI(temperature=0), max_retries=1)
from langchain_core.runnables import RunnableLambda, RunnableParallel
completion_chain = prompt | llm
main_chain = RunnableParallel(
completion=completion_chain, prompt_value=prompt
) | RunnableLambda(lambda x: retry_parser.parse_with_prompt(**x))
result = main_chain.invoke({"query":"They called him alice"})
reference link: https://python.langchain.com/v0.2/docs/how_to/output_parser_retry/
error:
ValidationError: 1 validation error for Generation
text
str type expected (type=type_error.str)
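A sketch of one way the docs could answer this: attach the raw completion to the parsing exception so the retry step can read it from the error instead of the caller re-supplying it. Stdlib-only and hypothetical; the exception and field names are illustrative, not the langchain API:

```python
# Stdlib sketch: carry the bad completion on the exception itself so a
# retry step can recover it from the error (names are hypothetical).
class ParseError(Exception):
    def __init__(self, message, llm_output):
        super().__init__(message)
        self.llm_output = llm_output

def strict_parse(text):
    raise ParseError("response did not match the schema", llm_output=text)

try:
    strict_parse("They called him alice")
except ParseError as err:
    bad_response = err.llm_output  # available for a retry prompt
```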
### Idea or request for content:
The retry output parser throws an error. How is the bad response extracted from the error message? Instead of requiring manual bad-response input, it should be passed from the error message, or is there a way to get it from the error message? | DOC: <Issue related to /v0.2/docs/how_to/output_parser_retry/> | https://api.github.com/repos/langchain-ai/langchain/issues/24778/comments | 1 | 2024-07-29T13:38:09Z | 2024-07-29T20:01:19Z | https://github.com/langchain-ai/langchain/issues/24778 | 2,435,464,336 | 24,778
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
``` python
vectorstore = DocumentDBVectorSearch.from_connection_string(
connection_string=connection_string,
namespace=namespace,
embedding=embeddings,
index_name=INDEX_NAME,
)
# calling similarity_search without a filter leaves filter at its default value None, which triggers the error below
docs = vectorstore.similarity_search(query=keyword)
```
### Error Message and Stack Trace (if applicable)
Error message: the match filter must be an expression in an object, full error: {'ok': 0.0, 'code': 15959, 'errmsg': 'the match filter must be an expression in an object', 'operationTime': Timestamp(1722245629, 1)}.
### Description
I am trying to use AWS DocumentDB as a vector database. When I call the similarity_search method on a DocumentDBVectorSearch instance without a filter (query text only), DocumentDB returns the error "the match filter must be an expression in an object". This is because None $match expressions are not supported and have to be removed from the pipeline when filter is None.
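The fix described above amounts to building the aggregation pipeline conditionally: only append a `$match` stage when a filter is actually supplied. A sketch of that pattern (the vector-search stage shown is illustrative, not DocumentDB's exact operator shape):

```python
def build_pipeline(query_vector, k, filter=None):
    """Build the aggregation pipeline, skipping $match entirely when no filter is given."""
    pipeline = []
    if filter:
        # A {"$match": None} stage is exactly what triggers
        # "the match filter must be an expression in an object".
        pipeline.append({"$match": filter})
    # Illustrative vector-search stage; the real operator shape may differ.
    pipeline.append({"$search": {"vectorSearch": {"vector": query_vector, "k": k}}})
    return pipeline


print(len(build_pipeline([0.1, 0.2], 4)))                       # 1 stage, no $match
print(len(build_pipeline([0.1, 0.2], 4, {"source": "a.pdf"})))  # 2 stages
```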
### System Info
langchain==0.2.11
langchain-aws==0.1.6
langchain-cohere==0.1.9
langchain-community==0.2.10
langchain-core==0.2.23
langchain-experimental==0.0.63
langchain-openai==0.1.17
langchain-text-splitters==0.2.2
platform=mac
python=3.12.4 | AWS DocumentDB similarity search does not work when no filter is used. Error msg: "the match filter must be an expression in an object" | https://api.github.com/repos/langchain-ai/langchain/issues/24775/comments | 1 | 2024-07-29T10:11:34Z | 2024-07-29T15:53:43Z | https://github.com/langchain-ai/langchain/issues/24775 | 2,435,009,768 | 24,775 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_community.document_loaders import UnstructuredImageLoader
loader = UnstructuredImageLoader(
"https://photo.16pic.com/00/53/98/16pic_5398252_b.jpg", mode="elements"
)
docs = loader.load()
for doc in docs:
print(doc)
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/urllib3/connection.py", line 203, in _new_conn
sock = connection.create_connection(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/urllib3/util/connection.py", line 85, in create_connection
raise err
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/urllib3/util/connection.py", line 73, in create_connection
sock.connect(sa)
OSError: [Errno 101] Network is unreachable
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/urllib3/connectionpool.py", line 791, in urlopen
response = self._make_request(
^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/urllib3/connectionpool.py", line 492, in _make_request
raise new_e
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/urllib3/connectionpool.py", line 468, in _make_request
self._validate_conn(conn)
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/urllib3/connectionpool.py", line 1097, in _validate_conn
conn.connect()
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/urllib3/connection.py", line 611, in connect
self.sock = sock = self._new_conn()
^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/urllib3/connection.py", line 218, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x77ea228afb50>: Failed to establish a new connection: [Errno 101] Network is unreachable
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/requests/adapters.py", line 667, in send
resp = conn.urlopen(
^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/urllib3/connectionpool.py", line 845, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/urllib3/util/retry.py", line 515, in increment
raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /unstructuredio/yolo_x_layout/resolve/main/yolox_l0.05.onnx (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x77ea228afb50>: Failed to establish a new connection: [Errno 101] Network is unreachable'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1722, in _get_metadata_or_catch_error
metadata = get_hf_file_metadata(url=url, proxies=proxies, timeout=etag_timeout, headers=headers)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1645, in get_hf_file_metadata
r = _request_wrapper(
^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 372, in _request_wrapper
response = _request_wrapper(
^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 395, in _request_wrapper
response = get_session().request(method=method, url=url, **params)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/requests/sessions.py", line 703, in send
r = adapter.send(request, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/huggingface_hub/utils/_http.py", line 66, in send
return super().send(request, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/requests/adapters.py", line 700, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: (MaxRetryError("HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /unstructuredio/yolo_x_layout/resolve/main/yolox_l0.05.onnx (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x77ea228afb50>: Failed to establish a new connection: [Errno 101] Network is unreachable'))"), '(Request ID: 9e333dcd-b659-4be2-ad0b-cfb63b2cc7f9)')
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/jh/liuchao_project/tosql2.py", line 15, in <module>
docs = loader.load()
^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/langchain_core/document_loaders/base.py", line 30, in load
return list(self.lazy_load())
^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/langchain_community/document_loaders/unstructured.py", line 89, in lazy_load
elements = self._get_elements()
^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/langchain_community/document_loaders/image.py", line 33, in _get_elements
return partition_image(filename=self.file_path, **self.unstructured_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/unstructured/documents/elements.py", line 593, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/unstructured/file_utils/filetype.py", line 385, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/unstructured/chunking/dispatch.py", line 74, in wrapper
elements = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/unstructured/partition/image.py", line 103, in partition_image
return partition_pdf_or_image(
^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/unstructured/partition/pdf.py", line 310, in partition_pdf_or_image
elements = _partition_pdf_or_image_local(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/unstructured/utils.py", line 249, in wrapper
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/unstructured/partition/pdf.py", line 564, in _partition_pdf_or_image_local
inferred_document_layout = process_file_with_model(
^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/unstructured_inference/inference/layout.py", line 353, in process_file_with_model
model = get_model(model_name, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/unstructured_inference/models/base.py", line 79, in get_model
model.initialize(**initialize_params)
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/unstructured_inference/utils.py", line 47, in __getitem__
value = evaluate(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/unstructured_inference/utils.py", line 195, in download_if_needed_and_get_local_path
return hf_hub_download(path_or_repo, filename, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/huggingface_hub/utils/_validators.py", line 114, in _inner_fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1221, in hf_hub_download
return _hf_hub_download_to_cache_dir(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1325, in _hf_hub_download_to_cache_dir
_raise_on_head_call_error(head_call_error, force_download, local_files_only)
File "/home/jh/anaconda3/envs/liuchao/lib/python3.11/site-packages/huggingface_hub/file_download.py", line 1826, in _raise_on_head_call_error
raise LocalEntryNotFoundError(
huggingface_hub.utils._errors.LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
### Description
I just load the image and print the documents, but I get this error:
huggingface_hub.utils._errors.LocalEntryNotFoundError: An error happened while trying to locate the file on the Hub and we cannot find the requested files in the local cache. Please check your connection and try again or make sure your Internet connection is on.
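For context: `partition_image` downloads a YOLOX layout model from the Hugging Face Hub on first use, which fails on machines without network access. One workaround is to populate the cache on a connected machine (e.g. copy `~/.cache/huggingface`), then point `huggingface_hub` at it via its standard environment variables before any import that touches the Hub. A sketch, where the cache path is a hypothetical example:

```python
import os

# Hypothetical local cache path; copy ~/.cache/huggingface from a connected machine here.
os.environ["HF_HOME"] = "/home/jh/hf_cache"
# Tell huggingface_hub to use only the local cache and never hit the network.
os.environ["HF_HUB_OFFLINE"] = "1"

print(os.environ["HF_HUB_OFFLINE"])  # 1
```

These must be set before `langchain_community` / `unstructured` trigger the download.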
### System Info
langchain=0.2.9
python 3.11 | about image load bug | https://api.github.com/repos/langchain-ai/langchain/issues/24774/comments | 0 | 2024-07-29T09:59:13Z | 2024-07-29T10:01:49Z | https://github.com/langchain-ai/langchain/issues/24774 | 2,434,983,670 | 24,774 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
from langchain_openai import ChatOpenAI
from httpx import AsyncClient as HttpxAsyncClient
session = HttpxAsyncClient(verify=False)
model = ChatOpenAI(
    streaming=stream,
    verbose=True,
    openai_api_key=base_key,
    openai_api_base=base_url,
    http_async_client=session,
    model_name=llm_name,
    temperature=temperature,
    max_tokens=max_tokens,
    stop=["\n"],
    prompt_template=prompt_comment_template)
### Error Message and Stack Trace (if applicable)
httpx.ConnectError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate (_ssl.c:1007)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/starlette/responses.py", line 260, in wrap
    await func()
  File "/usr/local/lib/python3.10/site-packages/starlette/responses.py", line 249, in stream_response
    async for chunk in self.body_iterator:
  File "/home/chatTestcenter/service/code_explain.py", line 101, in code_chat
    responses = chain.batch(lst, config={"max_concurrency": 3})
  File "/usr/local/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 647, in batch
    return cast(List[Output], list(executor.map(invoke, inputs, configs)))
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 621, in result_iterator
    yield _result_or_cancel(fs.pop())
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 319, in _result_or_cancel
    return fut.result(timeout)
  File "/usr/local/lib/python3.10/concurrent/futures/_base.py", line 458, in result
### Description
I am trying to disable SSL certificate verification by passing the http_async_client param an instance of the httpx.AsyncClient class with verify=False.
But after that, I still see the SSL verification process and it fails.
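One likely explanation (hedged, it depends on the call path): the traceback goes through the synchronous `chain.batch`, and `ChatOpenAI` uses its sync HTTP client for sync calls, so overriding only `http_async_client` leaves the sync path on a default, verifying client. If so, passing both `http_client=httpx.Client(verify=False)` and `http_async_client=httpx.AsyncClient(verify=False)` should help. Under the hood, `verify=False` corresponds to an unverified stdlib SSL context:

```python
import ssl

# What httpx's verify=False amounts to: a context that skips certificate checks.
ctx = ssl.create_default_context()
ctx.check_hostname = False        # must be disabled before verify_mode
ctx.verify_mode = ssl.CERT_NONE   # accept self-signed certificates

print(ctx.verify_mode is ssl.CERT_NONE)  # True
```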
### System Info
langchain: 0.2.11
langchain-community: 0.2.10
langchain-core: 0.2.24
langchain-openai: 0.1.19
langchain-text-splitter: 0.2.2
langsmith: 0.1.93
openai: 1.37.1
python: 3.10 | langchain_openai.ChatOpenAI: client attribute not recognized | https://api.github.com/repos/langchain-ai/langchain/issues/24770/comments | 0 | 2024-07-29T08:20:25Z | 2024-07-29T19:38:51Z | https://github.com/langchain-ai/langchain/issues/24770 | 2,434,768,961 | 24,770 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
# model = SentenceTransformer(config.EMBEDDING_MODEL_NAME)
KG_vector_store = Neo4jVector.from_existing_index(
embedding=SentenceTransformerEmbeddings(model_name = config.EMBEDDING_MODEL_NAME),
url=NEO4J_URI,
username=NEO4J_USERNAME,
password=NEO4J_PASSWORD,
database="neo4j",
index_name=VECTOR_INDEX_NAME,
text_node_property=VECTOR_SOURCE_PROPERTY,
retrieval_query=retrieval_query_extra_text,
)
# Create a retriever from the vector store
retriever_extra_text = KG_vector_store.as_retriever(
search_type="mmr",
search_kwargs={'k': 6, 'fetch_k': 50} #,'lambda_mult': 0.25
)

### Error Message and Stack Trace (if applicable)
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
[<ipython-input-8-569f7332a067>](https://localhost:8080/#) in <cell line: 1>()
----> 1 rag.query("Please describe in detail what is the evidence report about?")['answer']
8 frames
[/content/RAG/KG_for_RAG/src/execute_rag.py](https://localhost:8080/#) in query(self, query)
318 self.init_graph_for_query()
319
--> 320 answer = self.QA_CHAIN.invoke(
321 {"question": query},
322 return_only_outputs=True,
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in invoke(self, input, config, **kwargs)
164 except BaseException as e:
165 run_manager.on_chain_error(e)
--> 166 raise e
167 run_manager.on_chain_end(outputs)
168
[/usr/local/lib/python3.10/dist-packages/langchain/chains/base.py](https://localhost:8080/#) in invoke(self, input, config, **kwargs)
154 self._validate_inputs(inputs)
155 outputs = (
--> 156 self._call(inputs, run_manager=run_manager)
157 if new_arg_supported
158 else self._call(inputs)
[/usr/local/lib/python3.10/dist-packages/langchain/chains/qa_with_sources/base.py](https://localhost:8080/#) in _call(self, inputs, run_manager)
150 )
151 if accepts_run_manager:
--> 152 docs = self._get_docs(inputs, run_manager=_run_manager)
153 else:
154 docs = self._get_docs(inputs) # type: ignore[call-arg]
[/usr/local/lib/python3.10/dist-packages/langchain/chains/qa_with_sources/retrieval.py](https://localhost:8080/#) in _get_docs(self, inputs, run_manager)
47 ) -> List[Document]:
48 question = inputs[self.question_key]
---> 49 docs = self.retriever.invoke(
50 question, config={"callbacks": run_manager.get_child()}
51 )
[/usr/local/lib/python3.10/dist-packages/langchain_core/retrievers.py](https://localhost:8080/#) in invoke(self, input, config, **kwargs)
219 except Exception as e:
220 run_manager.on_retriever_error(e)
--> 221 raise e
222 else:
223 run_manager.on_retriever_end(
[/usr/local/lib/python3.10/dist-packages/langchain_core/retrievers.py](https://localhost:8080/#) in invoke(self, input, config, **kwargs)
212 _kwargs = kwargs if self._expects_other_args else {}
213 if self._new_arg_supported:
--> 214 result = self._get_relevant_documents(
215 input, run_manager=run_manager, **_kwargs
216 )
[/usr/local/lib/python3.10/dist-packages/langchain_core/vectorstores/base.py](https://localhost:8080/#) in _get_relevant_documents(self, query, run_manager)
1255 docs = [doc for doc, _ in docs_and_similarities]
1256 elif self.search_type == "mmr":
-> 1257 docs = self.vectorstore.max_marginal_relevance_search(
1258 query, **self.search_kwargs
1259 )
[/usr/local/lib/python3.10/dist-packages/langchain_core/vectorstores/base.py](https://localhost:8080/#) in max_marginal_relevance_search(self, query, k, fetch_k, lambda_mult, **kwargs)
929 List of Documents selected by maximal marginal relevance.
930 """
--> 931 raise NotImplementedError
932
933 async def amax_marginal_relevance_search(
NotImplementedError:
### Description
MMR raises NotImplementedError in Neo4jVector despite the documentation saying otherwise.
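As a stopgap until the store implements it, maximal marginal relevance is a small algorithm that can be run client-side: fetch `fetch_k` raw results with their embeddings, then iteratively pick the candidate that best trades off similarity to the query against similarity to already-selected documents. A minimal sketch over plain float vectors (assumes non-zero vectors; not the langchain implementation):

```python
import math


def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)


def mmr(query_vec, cand_vecs, k, lambda_mult=0.5):
    """Return indices of k candidates selected by maximal marginal relevance."""
    selected = []
    remaining = list(range(len(cand_vecs)))
    while remaining and len(selected) < k:
        def score(i):
            relevance = cosine(query_vec, cand_vecs[i])
            redundancy = max((cosine(cand_vecs[i], cand_vecs[j]) for j in selected),
                             default=0.0)
            return lambda_mult * relevance - (1 - lambda_mult) * redundancy
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected


cands = [[1.0, 0.0], [0.99, 0.01], [0.0, 1.0]]
print(mmr([1.0, 0.0], cands, k=2, lambda_mult=1.0))  # relevance only -> [0, 1]
print(mmr([1.0, 0.0], cands, k=2, lambda_mult=0.0))  # diversity only -> [0, 2]
```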
### System Info
System Information
------------------
> OS: Linux
> OS Version: #1 SMP PREEMPT_DYNAMIC Thu Jun 27 21:05:47 UTC 2024
> Python Version: 3.10.12 (main, Mar 22 2024, 16:50:05) [GCC 11.4.0]
Package Information
-------------------
> langchain_core: 0.2.24
> langchain: 0.2.11
> langchain_community: 0.2.0
> langsmith: 0.1.93
> langchain_google_genai: 1.0.8
> langchain_openai: 0.1.7
> langchain_text_splitters: 0.2.2 | MMR NotImplimented in Neo4jVector.But the documentation says otherwise with an example implimentation of MMR | https://api.github.com/repos/langchain-ai/langchain/issues/24768/comments | 3 | 2024-07-29T08:12:33Z | 2024-08-08T16:42:17Z | https://github.com/langchain-ai/langchain/issues/24768 | 2,434,753,679 | 24,768 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
The code is picked up from LangChain documentations
[https://python.langchain.com/v0.2/docs/how_to/tools_chain/](https://python.langchain.com/v0.2/docs/how_to/tools_chain/)
```Python
from langchain import hub
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.tools import tool
model = # A mistral model
prompt = ChatPromptTemplate.from_messages(
[
("system", "You are a helpful assistant"),
("placeholder", "{chat_history}"),
("human", "{input}"),
("placeholder", "{agent_scratchpad}"),
]
)
@tool
def multiply(first_int: int, second_int: int) -> int:
"""Multiply two integers together."""
return first_int * second_int
@tool
def add(first_int: int, second_int: int) -> int:
"Add two integers."
return first_int + second_int
@tool
def exponentiate(base: int, exponent: int) -> int:
"Exponentiate the base to the exponent power."
return base**exponent
tools = [multiply, add, exponentiate]
# Construct the tool calling agent
agent = create_tool_calling_agent(model, tools, prompt)
# Create an agent executor by passing in the agent and tools
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
result = agent_executor.invoke(
{
"input": "Take 3 to the fifth power and multiply that by the sum of twelve and three, then square the whole result"
}
)
```
### Error Message and Stack Trace (if applicable)
TypeError: Object of type StructuredTool is not JSON serializable
### Description
I am trying to run the sample code in [https://python.langchain.com/v0.2/docs/how_to/tools_chain/](https://python.langchain.com/v0.2/docs/how_to/tools_chain/) to call an agent equipped with tools. I see two problems:
- If I run the code as it is, it generates the error "Object of type StructuredTool is not JSON serializable".
- If I create the agent with an empty tools list (i.e., tools=[]) it generates a response. However, that is not supposed to be the right way of creating agents, as far as I understand. Besides, the answer with the mistral7b model is very inaccurate. Even in the example provided in the link above, the answer seems to be different and wrong when checking the [langSmith run](https://smith.langchain.com/public/eeeb27a4-a2f8-4f06-a3af-9c983f76146c/r?runtab=0).
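For context, this error typically appears when tool objects end up in a JSON request body verbatim; the pattern langchain applies internally (e.g. `convert_to_openai_tool`) is to flatten each tool into a plain dict schema first. A stdlib sketch of that pattern with a hypothetical stand-in tool class (not langchain's `StructuredTool`):

```python
import json


# Stand-in for a StructuredTool: just a name, description, and arg schema.
class FakeTool:
    def __init__(self, name, description, args):
        self.name, self.description, self.args = name, description, args


def tool_to_openai_dict(tool):
    """Flatten a tool object into the plain-dict function schema an API expects."""
    return {
        "type": "function",
        "function": {
            "name": tool.name,
            "description": tool.description,
            "parameters": {"type": "object", "properties": tool.args},
        },
    }


tools = [FakeTool("multiply", "Multiply two integers.",
                  {"first_int": {"type": "integer"}, "second_int": {"type": "integer"}})]
payload = json.dumps([tool_to_openai_dict(t) for t in tools])  # serializes cleanly
print(payload[:40])
```

If the model wrapper in use does this conversion itself, the error points at the wrapper's serialization path rather than the tools.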
### System Info
langchain-core==0.1.52
langchain==0.1.16
| Langchain agent with tools generates "StructuredTool is not JSON serializable" | https://api.github.com/repos/langchain-ai/langchain/issues/24766/comments | 2 | 2024-07-29T07:50:07Z | 2024-07-29T21:30:32Z | https://github.com/langchain-ai/langchain/issues/24766 | 2,434,709,913 | 24,766 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
``` python
llm = AzureChatOpenAI(
azure_endpoint=api_base,
deployment_name=engine,
model_name=engine,
api_key=key,
api_version=api_version,
temperature=0
) # it's a chat-gpt4o deployed in Azure
def multiply2(a: int, b: int) -> int:
"""Multiplies a and b."""
print('----in multi---')
return a * b
tools = [multiply2]
llm_with_tools = llm.bind_tools(tools)
query = "what's the next integer after 109382*381001?"
r1=llm.invoke(query) # not using tools
print(r1)
print('1------------------')
r2=llm_with_tools.invoke(query)
print(r2)
```
### Error Message and Stack Trace (if applicable)
None
### Description
the content of r1 is
```
To find the next integer after the product of 109382 and 381001, we first need to calculate the product:\n\n\\[ 109382 \\times 381001 = 41632617482 \\]\n\nThe next integer after 41632617482 is:\n\n\\[ 41632617482 + 1 = 41632617483 \\]\n\nSo, the next integer after \\( 109382 \\times 381001 \\) is 41632617483.
```
while r2 is:
```
content='' additional_kwargs={'tool_calls': [{'id': '……', 'function': {'arguments': '{"a":109382,"b":381001}', 'name': 'multiply2'}, 'type': 'function'}]} response_metadata={'token_usage': {'completion_tokens': 20, 'prompt_tokens': 61, 'total_tokens': 81}, 'model_name': 'gpt-4o-2024-05-13', 'system_fingerprint': '……', 'prompt_filter_results': [{'prompt_index': 0, 'content_filter_results': {'hate': {'filtered': False, 'severity': 'safe'}, 'jailbreak': {'filtered': False, 'detected': False}, 'self_harm': {'filtered': False, 'severity': 'safe'}, 'sexual': {'filtered': False, 'severity': 'safe'}, 'violence': {'filtered': False, 'severity': 'safe'}}}], 'finish_reason': 'tool_calls', 'logprobs': None, 'content_filter_results': {}} id='……' tool_calls=[{'name': 'multiply2', 'args': {'a': 109382, 'b': 381001}, 'id': '……', 'type': 'tool_call'}] usage_metadata={'input_tokens': 61, 'output_tokens': 20, 'total_tokens': 81}
```
Why is the content empty?
Also, the log inside multiply2() is not printed.
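This is most likely expected behavior rather than a bug: with `finish_reason: 'tool_calls'`, the model is asking the caller to run the tool instead of answering directly, so `content` is empty, and `bind_tools` only advertises the tools to the model, it never executes them, which is why the print inside `multiply2` never fires. The caller (or an agent loop) has to dispatch each entry in `tool_calls` and send the result back. A minimal sketch of the dispatch step, reusing the `tool_calls` shape from the response above:

```python
def multiply2(a: int, b: int) -> int:
    """Multiplies a and b."""
    print('----in multi---')  # fires only when *we* execute the tool
    return a * b


TOOLS = {"multiply2": multiply2}

# Shape taken from AIMessage.tool_calls in the response above (id shortened).
tool_calls = [{"name": "multiply2", "args": {"a": 109382, "b": 381001}, "id": "call_1"}]

results = {tc["id"]: TOOLS[tc["name"]](**tc["args"]) for tc in tool_calls}
print(results["call_1"])
```

The results would then be sent back to the model as tool messages so it can produce the final answer.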
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.22631
> Python Version: 3.12.3 | packaged by conda-forge | (main, Apr 15 2024, 18:20:11) [MSC v.1938 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.24
> langchain: 0.2.6
> langchain_community: 0.2.6
> langsmith: 0.1.77
> langchain_experimental: 0.0.63
> langchain_openai: 0.1.14
> langchain_text_splitters: 0.2.1
> langchainhub: 0.1.20
> langgraph: 0.1.11
> langserve: 0.2.2 | gpt4o in azure returning empty content when using tools | https://api.github.com/repos/langchain-ai/langchain/issues/24765/comments | 3 | 2024-07-29T07:31:07Z | 2024-07-31T20:39:54Z | https://github.com/langchain-ai/langchain/issues/24765 | 2,434,674,361 | 24,765 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_community.storage import SQLStore
from langchain.embeddings.cache import CacheBackedEmbeddings
from langchain_community.embeddings import DeterministicFakeEmbedding
sql_store = SQLStore(namespace="some_ns",
db_url='sqlite:///embedding_store.db')
# Note - it is required to create the schema first
sql_store.create_schema()
# Using DeterministicFakeEmbedding
# and sql_store
cache_backed_embeddings = CacheBackedEmbeddings(
underlying_embeddings=DeterministicFakeEmbedding(size=128),
document_embedding_store=sql_store
)
# The execution of this complains because
# embed_documents returns list[list[float]]
# whereas the cache store is expecting bytes (LargeBinary)
cache_backed_embeddings.embed_documents(['foo', 'bar'])
```
You can reproduce the issue using this notebook
https://colab.research.google.com/drive/1mLCGRbdWGBOgpdSTxK9qtDL7JbeKT4j2?usp=sharing
### Error Message and Stack Trace (if applicable)
TypeError: memoryview: a bytes-like object is required, not 'list'
The above exception was the direct cause of the following exception:
StatementError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/sqlalchemy/sql/sqltypes.py](https://localhost:8080/#) in process(value)
891 def process(value):
892 if value is not None:
--> 893 return DBAPIBinary(value)
894 else:
895 return None
StatementError: (builtins.TypeError) memoryview: a bytes-like object is required, not 'list'
[SQL: INSERT INTO langchain_key_value_stores (namespace, "key", value) VALUES (?, ?, ?)]
[parameters: [{'key': 'foo', 'namespace': 'some_ns', 'value': [-2.336764233933763, 0.27510289545940503, -0.7957597268194339, 2.528701058861104, -0.15510189915015854 ... (2403 characters truncated) ... 2, 1.1312065514444096, -0.49558882193160414, -0.06710991747197836, -0.8768019783331409, 1.2976620676496629, -0.7436590792948876, -0.9567656775129801]}, {'key': 'bar', 'namespace': 'some_ns', 'value': [1.1438074881297355, -1.162000219732062, -0.5320296411623279, -0.04450529917299604, -2.210793183255032 ... (2391 characters truncated) ... 199, -1.4820970212122928, 0.36170213573657495, -0.10575371799110189, -0.881757661512149, -0.1130288120425299, 0.07494672180577358, 2.013154033982629]}]]
### Description
I am trying to use `CacheBackedEmbeddings` with `SQLStore`.
The `embed_documents` method of `Embeddings` returns `list[list[float]]`, whereas the SQLStore schema expects the value to be `bytes` (LargeBinary).
### System Info
langchain==0.2.11
langchain-community==0.2.10
langchain-core==0.2.24
langchain-text-splitters==0.2.2 | `langchain_community.storage.SQLStore` does not work `langchain.embeddings.cache.CacheBackedEmbeddings` | https://api.github.com/repos/langchain-ai/langchain/issues/24758/comments | 1 | 2024-07-28T20:45:42Z | 2024-07-28T21:49:25Z | https://github.com/langchain-ai/langchain/issues/24758 | 2,434,104,052 | 24,758 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/callbacks/streamlit/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
The `langchain_community.callbacks.StreamlitCallbackHandler` just include example with langchain, but there are not equivalent example for langgraph workflows.
Naive attempts to use `langchain_community.callbacks.StreamlitCallbackHandler` with langgraph can easily result in the following error:
```
Error in StreamlitCallbackHandler.on_llm_end callback: RuntimeError('Current LLMThought is unexpectedly None!')
```
See [this Stack Overflow post](https://stackoverflow.com/questions/78015804/how-to-use-streamlitcallbackhandler-with-langgraph) for more info.
So, it would be helpful to include more support for `StreamlitCallbackHandler` and langgraph.
### Idea or request for content:
Given that users would like to generate more complex langgraph agents in streamlit apps (e.g., multi-agent workflows), it would be helpful to include more docs on this topic, such as how to properly use `StreamlitCallbackHandler` (or an equivalent) with langgraph. | DOC: using StreamlitCallbackHandler (or equivalent) with langgraph | https://api.github.com/repos/langchain-ai/langchain/issues/24757/comments | 0 | 2024-07-28T19:06:08Z | 2024-07-28T19:08:38Z | https://github.com/langchain-ai/langchain/issues/24757 | 2,434,070,886 | 24,757 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I found that execution via LlamaCpp in langchain_community.llms.llamacpp is much slower than Llama in llama_cpp (by 2-3 times across >10 experiments):
1. Llama in llama_cpp
-- 1784 token per second
<img width="1010" alt="Screenshot 2024-07-29 at 12 37 02 AM" src="https://github.com/user-attachments/assets/ee4ebdbd-1e00-4e62-9f4d-072091c93485">
2. LlamaCpp in langchain_community.llms.llamacpp
-- 560 token per second
<img width="1019" alt="Screenshot 2024-07-29 at 12 37 07 AM" src="https://github.com/user-attachments/assets/b05e473f-3d34-4e4f-8be3-779e8459dd90">
Do I have wrong settings, or is it a bug?
### Error Message and Stack Trace (if applicable)
_No response_
### Description
```python
# 1.
from llama_cpp import Llama

llm = Llama(
    model_path="/Users/marcus/Downloads/data_science/llama-all/llama3.1/Meta-Llama-3.1-8B-Instruct/Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf",
    n_gpu_layers=-1,  # Uncomment to use GPU acceleration
    # seed=1337,  # Uncomment to set a specific seed
    # n_ctx=2048,  # Uncomment to increase the context window
    n_ctx=8096,
)

res = llm.create_chat_completion(
    messages=[
        {
            "role": "system",
            "content": """You are a helpful Assistant."""
        },
        {
            "role": "user",
            "content": "Write a bubble sort in python"
        }
    ],
    temperature=0.0,
)

# 2.
from langchain_community.llms.llamacpp import LlamaCpp
from langchain_core.prompts import ChatPromptTemplate

n_gpu_layers = -1  # The number of layers to put on the GPU. The rest will be on the CPU. If you don't know how many layers there are, you can use -1 to move all to GPU.
n_batch = 512

llm = LlamaCpp(
    model_path="/Users/marcus/Downloads/data_science/llama-all/llama3.1/Meta-Llama-3.1-8B-Instruct/Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf",
    n_ctx=8096,
    n_gpu_layers=n_gpu_layers,
    f16_kv=True,
    temperature=0,
    n_batch=n_batch,
)

question = """Write a bubble sort in python"""
system = "You are a helpful assistant."
human = "{text}"
prompt = ChatPromptTemplate.from_messages([("system", system), ("user", human)])
res = (prompt | llm).invoke(question)
```
Do I have wrong settings, or is it a bug?
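For what it's worth, when comparing the two code paths it helps to measure tokens/second the same way on both sides. A minimal, framework-free timing helper along these lines keeps the comparison fair (all names here are illustrative):

```python
import time

def tokens_per_second(generate, n_runs=3):
    """Time a zero-arg callable that returns a token count; report mean tokens/sec."""
    rates = []
    for _ in range(n_runs):
        start = time.perf_counter()
        n_tokens = generate()
        rates.append(n_tokens / (time.perf_counter() - start))
    return sum(rates) / len(rates)

# Stand-in workload so the helper can be exercised without a model:
rate = tokens_per_second(lambda: sum(1 for _ in range(100_000)) and 100)
```

Passing each backend's generation call as the `generate` callable removes any difference in how the two libraries report their own timings.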
### System Info
python = "3.11.3"
langchain = "^0.2.11"
llama-cpp-python = "^0.2.83"
model = Meta-Llama-3.1-8B-Instruct-Q4_K_M.gguf | Huge performance differences between llama_cpp_python and langchain_community.llms.llamacpp | https://api.github.com/repos/langchain-ai/langchain/issues/24756/comments | 0 | 2024-07-28T16:53:09Z | 2024-07-28T16:55:40Z | https://github.com/langchain-ai/langchain/issues/24756 | 2,434,023,675 | 24,756 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```
from dotenv import load_dotenv, find_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.pydantic_v1 import BaseModel, Field
from langchain_core.output_parsers import PydanticOutputParser
from langchain.output_parsers import OutputFixingParser
from langchain.prompts import PromptTemplate

_ = load_dotenv(find_dotenv())

llm = ChatOpenAI(model="gpt-4o")

##############################
### Auto-Fixing Parser
##############################

class Date(BaseModel):
    year: int = Field(description="Year")
    month: int = Field(description="Month")
    day: int = Field(description="Day")
    era: str = Field(description="BC or AD")

prompt_template = """
Extract the date within user input.

{format_instructions}

User Input:
{query}
"""

parser = PydanticOutputParser(pydantic_object=Date)
new_parser = OutputFixingParser.from_llm(
    parser=parser,
    llm=llm
)

template = PromptTemplate(
    template=prompt_template,
    input_variables=["query"],
    partial_variables={"format_instructions": parser.get_format_instructions()}
)

query = "Sunny weather on April 6 2023"
prompt = template.format_prompt(query=query)
response = llm.invoke(prompt.to_messages())
incorrect_output = response.content.replace("4", "April")

print("====Incorrect output=====")
print(incorrect_output)

try:
    response = parser.parse(incorrect_output)
except Exception as e:
    print("===Exception===")
    print(e)

print("===Parsing using outputfixingparser===")
date = new_parser.parse(incorrect_output)
print(date.json())
```
### Error Message and Stack Trace (if applicable)
```
Traceback (most recent call last):
  File "/Users/selinali/Documents/dev/zhihu-ai-engineer-lecture-notes/practise/08-langchain/run2.py", line 60, in <module>
    date = new_parser.parse(incorrect_output)
  File "/Users/selinali/Documents/dev/zhihu-ai-engineer-lecture-notes/.venv/lib/python3.10/site-packages/langchain/output_parsers/fix.py", line 69, in parse
    return self.parser.parse(completion)
  File "/Users/selinali/Documents/dev/zhihu-ai-engineer-lecture-notes/.venv/lib/python3.10/site-packages/langchain_core/output_parsers/pydantic.py", line 77, in parse
    return super().parse(text)
  File "/Users/selinali/Documents/dev/zhihu-ai-engineer-lecture-notes/.venv/lib/python3.10/site-packages/langchain_core/output_parsers/json.py", line 98, in parse
    return self.parse_result([Generation(text=text)])
  File "/Users/selinali/Documents/dev/zhihu-ai-engineer-lecture-notes/.venv/lib/python3.10/site-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for Generation
text
  str type expected (type=type_error.str)
```
### Description
I am trying to test using OutputFixingParser by following a tutorial. But it gave me exception as shown in the Stack Trace.
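For context, the retry idea behind the fixing parser can be sketched without any LangChain dependency (this is a toy stand-in, not the library's implementation):

```python
def fixing_parse(parse, fix, text, max_retries=1):
    """Toy version of the retry loop behind an output-fixing parser (names illustrative)."""
    for _ in range(max_retries + 1):
        try:
            return parse(text)
        except ValueError as err:
            text = fix(text, str(err))  # in the real parser this step would call an LLM
    raise ValueError("could not fix output")

# Stand-in parse/fix pair mimicking the "April" -> 4 repair from the example above:
result = fixing_parse(
    parse=int,
    fix=lambda text, err: text.replace("April", "4"),
    text="April",
)
# result == 4
```

The exception in my trace happens before any such retry runs, which is why I suspect the bug is in how the inner parser is invoked rather than in the retry logic itself.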
### System Info
python = "^3.10"
langchain = "^0.2.11"
langchain-openai = "^0.1.19" | OutputFixingParser not working | https://api.github.com/repos/langchain-ai/langchain/issues/24753/comments | 2 | 2024-07-28T12:50:21Z | 2024-07-29T20:01:55Z | https://github.com/langchain-ai/langchain/issues/24753 | 2,433,917,153 | 24,753 |
[
"langchain-ai",
"langchain"
] | ### URL
https://docs.smith.langchain.com/old/tracing/faq/langchain_specific_guides
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Most of my langsmith traces end up with many nested items that are challenging to untangle and understand. Here, I've already added a `name` parameter to many of the Runnable subclasses that I know that have it, and yet it's quite difficult to see what's going on:

As a LangChain, LangServe and LangSmith pro user, I expect the docs to contain a basic example of how to rename the components of a non-trivial chain so that their business intent is transparent.
### Idea or request for content:
1. Please create a runnable example of a non-trivial chain with at least 100 trace steps that shows how to rename the runs [UPDATE: and traces] in the tree browser in langsmith.
2. Please explicitly mention the LCEL Runnables that take a `name` parameter and those that do not, and also explicitly mention whether there are any `.with_config()` invocations that can substitute for compound chains (for example, I expected `(chain_a | chain_b).with_config(name="chain_a_and_b")` to name the chain in langsmith, but it did not) | DOC: Sample python which customizes the trace names of the runnables in the chain | https://api.github.com/repos/langchain-ai/langchain/issues/24752/comments | 3 | 2024-07-28T10:25:44Z | 2024-08-04T10:11:08Z | https://github.com/langchain-ai/langchain/issues/24752 | 2,433,861,694 | 24,752 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import os

from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint

os.environ['HUGGINGFACEHUB_API_TOKEN'] = 'xxxxxxxxx'

llm = HuggingFaceEndpoint(
    repo_id="microsoft/Phi-3-mini-4k-instruct",
    task="text-generation",
    max_new_tokens=512,
    do_sample=False,
    repetition_penalty=1.03,
)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "/Users/mac/langchain/test.py", line 18, in <module>
llm = HuggingFaceEndpoint(
File "/opt/anaconda3/envs/langchain/lib/python3.9/site-packages/pydantic/v1/main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for HuggingFaceEndpoint
__root__
Did not find endpoint_url, please add an environment variable `HF_INFERENCE_ENDPOINT` which contains it, or pass `endpoint_url` as a named parameter. (type=value_error)
### Description
I am trying to initialize the `HuggingFaceEndpoint`, but despite passing the correct `repo_id`, I am encountering an error. I have identified the bug: even though I provide the `repo_id`, the `HuggingFaceEndpoint` validation always checks for the `endpoint_url`, which is incorrect. If the `repo_id` is passed, it should not be checking for the `endpoint_url`. I will create a PR to fix this issue.
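The intended check can be sketched as a plain function (a hypothetical helper for illustration, not the actual pydantic root validator):

```python
def resolve_endpoint(endpoint_url=None, repo_id=None):
    """Hypothetical sketch: require exactly one of endpoint_url / repo_id."""
    if endpoint_url is None and repo_id is None:
        raise ValueError("Provide either `endpoint_url` or `repo_id`.")
    if endpoint_url is not None and repo_id is not None:
        raise ValueError("Provide only one of `endpoint_url` and `repo_id`.")
    if endpoint_url is not None:
        return endpoint_url
    # Assumed URL scheme of the serverless Inference API:
    return f"https://api-inference.huggingface.co/models/{repo_id}"
```

With logic like this, passing only `repo_id` would never trip an `endpoint_url` check.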
### System Info
Package Information
-------------------
> langchain_core: 0.2.24
> langchain: 0.2.11
> langchain_community: 0.2.10
> langsmith: 0.1.93
> langchain_text_splitters: 0.2.0
> langchainhub: 0.1.17 | HuggingFaceEndpoint `Endpoint URL` validation Error | https://api.github.com/repos/langchain-ai/langchain/issues/24742/comments | 4 | 2024-07-27T14:24:11Z | 2024-08-05T00:53:59Z | https://github.com/langchain-ai/langchain/issues/24742 | 2,433,501,157 | 24,742 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
embeddings = AzureOpenAIEmbeddings(
    azure_endpoint=azure_endpoint,
    openai_api_version=openai_api_version,
    openai_api_key=openai_api_key,
    openai_api_type=openai_api_type,
    deployment=deployment,
    chunk_size=1,
)

vectorstore = AzureSearch(
    azure_search_endpoint=azure_search_endpoint,
    azure_search_key=azure_search_key,
    index_name=index_name,
    embedding_function=embeddings.embed_query,
)

system_message_prompt = SystemMessagePromptTemplate.from_template(system_prompt)
human_message_prompt = HumanMessagePromptTemplate.from_template(human_template)
chat_prompt = ChatPromptTemplate.from_messages(
    [system_message_prompt, human_message_prompt]
)

doc_chain = load_qa_chain(
    conversation_llm, chain_type="stuff", prompt=chat_prompt, callback_manager=default_manager
)

conversation_chain = ConversationalRetrievalChain(
    retriever=vectorstore.as_retriever(
        search_type="similarity_score_threshold",
        k=rag_top_k,
        search_kwargs={"score_threshold": rag_score_threshold},
    ),
    combine_docs_chain=doc_chain,
    question_generator=question_generator,
    return_source_documents=True,
    callback_manager=default_manager,
    rephrase_question=False,
    memory=memory,
    max_tokens_limit=max_retrieval_tokens,
)

result = await conversation_chain.ainvoke({"question": question, "chat_history": chat_history})
```
### Error Message and Stack Trace (if applicable)
TypeError("'AsyncSearchItemPaged' object is not iterable")
Traceback (most recent call last):
File "/Users/crobert/Code/Higher Bar AI/almitra-pilot-be/venv/lib/python3.10/site-packages/langchain/chains/base.py", line 208, in ainvoke
await self._acall(inputs, run_manager=run_manager)
File "/Users/crobert/Code/Higher Bar AI/almitra-pilot-be/venv/lib/python3.10/site-packages/langchain/chains/conversational_retrieval/base.py", line 212, in _acall
docs = await self._aget_docs(new_question, inputs, run_manager=_run_manager)
File "/Users/crobert/Code/Higher Bar AI/almitra-pilot-be/venv/lib/python3.10/site-packages/langchain/chains/conversational_retrieval/base.py", line 410, in _aget_docs
docs = await self.retriever.ainvoke(
File "/Users/crobert/Code/Higher Bar AI/almitra-pilot-be/venv/lib/python3.10/site-packages/langchain_core/retrievers.py", line 280, in ainvoke
raise e
File "/Users/crobert/Code/Higher Bar AI/almitra-pilot-be/venv/lib/python3.10/site-packages/langchain_core/retrievers.py", line 273, in ainvoke
result = await self._aget_relevant_documents(
File "/Users/crobert/Code/Higher Bar AI/almitra-pilot-be/venv/lib/python3.10/site-packages/langchain_community/vectorstores/azuresearch.py", line 1590, in _aget_relevant_documents
await self.vectorstore.asimilarity_search_with_relevance_scores(
File "/Users/crobert/Code/Higher Bar AI/almitra-pilot-be/venv/lib/python3.10/site-packages/langchain_community/vectorstores/azuresearch.py", line 663, in asimilarity_search_with_relevance_scores
result = await self.avector_search_with_score(query, k=k, **kwargs)
File "/Users/crobert/Code/Higher Bar AI/almitra-pilot-be/venv/lib/python3.10/site-packages/langchain_community/vectorstores/azuresearch.py", line 750, in avector_search_with_score
return _results_to_documents(results)
File "/Users/crobert/Code/Higher Bar AI/almitra-pilot-be/venv/lib/python3.10/site-packages/langchain_community/vectorstores/azuresearch.py", line 1623, in _results_to_documents
docs = [
TypeError: 'AsyncSearchItemPaged' object is not iterable
### Description
[This commit](https://github.com/langchain-ai/langchain/commit/ffe6ca986ee5b439e85c82781c1d8ce3578a3e88) for issue #24064 caused a regression in async support. After that commit, `avector_search_with_score()` calls `_asimple_search()`, which uses `async with self.async_client`, and then tries to call `_results_to_documents()` with the results, but that raises "TypeError: 'AsyncSearchItemPaged' object is not iterable" because it iterates the `AsyncSearchItemPaged` over a closed HTTP connection (the connection was closed at the end of the `_asimple_search()` `with` block).
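The failure mode can be reproduced without Azure at all: an async iterator has to be drained before the context manager that owns its connection exits. A toy stand-in (not the real SDK):

```python
import asyncio
from contextlib import asynccontextmanager

@asynccontextmanager
async def fake_client():
    """Toy stand-in for a client whose results are only readable while it is open."""
    state = {"open": True}

    async def search():
        for item in ("a", "b"):
            if not state["open"]:
                raise RuntimeError("connection closed")
            yield item

    try:
        yield search
    finally:
        state["open"] = False

async def consume_inside():
    # Drain the async iterator while the client is still open.
    async with fake_client() as search:
        return [item async for item in search()]

async def consume_outside():
    # Keep the iterator past the `async with` block, as the regressed code path does.
    async with fake_client() as search:
        results = search()
    return [item async for item in results]  # iterating now fails: the client is closed

items = asyncio.run(consume_inside())
```

`consume_inside()` returns both items; `consume_outside()` raises, which is the same shape of failure as the `_results_to_documents()` call above.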
The original async PR #22075 seemed to have the right idea: the async results need to be handled within the `with` block. Looking at that code, it looks like it should probably work. However, if I roll back to 0.2.7, I run into the "KeyError('content_vector')" that triggered issue #24064. For the moment, I've gotten things running by overriding AzureSearch as follows:
```python
class ExtendedAzureSearch(AzureSearch):
    """Extended AzureSearch class with patch to fix async support."""

    async def _asimple_search_docs(
        self,
        embedding: List[float],
        text_query: str,
        k: int,
        *,
        filters: Optional[str] = None,
        **kwargs: Any,
    ) -> List[Tuple[Document, float]]:
        """Perform vector or hybrid search in the Azure search index.

        Args:
            embedding: A vector embedding to search in the vector space.
            text_query: A full-text search query expression;
                Use "*" or omit this parameter to perform only vector search.
            k: Number of documents to return.
            filters: Filtering expression.
        Returns:
            Matching documents with scores
        """
        from azure.search.documents.models import VectorizedQuery

        async with self.async_client as async_client:
            results = await async_client.search(
                search_text=text_query,
                vector_queries=[
                    VectorizedQuery(
                        vector=np.array(embedding, dtype=np.float32).tolist(),
                        k_nearest_neighbors=k,
                        fields=FIELDS_CONTENT_VECTOR,
                    )
                ],
                filter=filters,
                top=k,
                **kwargs,
            )
            docs = [
                (
                    Document(
                        page_content=result.pop(FIELDS_CONTENT),
                        metadata=json.loads(result[FIELDS_METADATA])
                        if FIELDS_METADATA in result
                        else {
                            key: value
                            for key, value in result.items()
                            if key != FIELDS_CONTENT_VECTOR
                        },
                    ),
                    float(result["@search.score"]),
                )
                async for result in results
            ]
            return docs

    # AP-254 - This version of avector_search_with_score() calls
    # _asimple_search_docs() instead of _asimple_search() followed by
    # _results_to_documents(results), because _asimple_search() uses
    # `async with self.async_client`, which closes the paging connection on
    # return, so the results are no longer available for _results_to_documents()
    # (triggering "TypeError: 'AsyncSearchItemPaged' object is not iterable").
    async def avector_search_with_score(
        self,
        query: str,
        k: int = 4,
        filters: Optional[str] = None,
        **kwargs: Any,
    ) -> List[Tuple[Document, float]]:
        """Return docs most similar to query.

        Args:
            query (str): Text to look up documents similar to.
            k (int, optional): Number of Documents to return. Defaults to 4.
            filters (str, optional): Filtering expression. Defaults to None.
        Returns:
            List[Tuple[Document, float]]: List of Documents most similar
            to the query and score for each
        """
        embedding = await self._aembed_query(query)
        return await self._asimple_search_docs(
            embedding, "", k, filters=filters, **kwargs
        )
```
### System Info
System Information
------------------
> OS: Darwin
> OS Version: Darwin Kernel Version 23.5.0: Wed May 1 20:12:58 PDT 2024; root:xnu-10063.121.3~5/RELEASE_ARM64_T6000
> Python Version: 3.10.9 (v3.10.9:1dd9be6584, Dec 6 2022, 14:37:36) [Clang 13.0.0 (clang-1300.0.29.30)]
Package Information
-------------------
> langchain_core: 0.2.9
> langchain: 0.2.11
> langchain_community: 0.2.10
> langsmith: 0.1.81
> langchain_aws: 0.1.7
> langchain_openai: 0.1.8
> langchain_text_splitters: 0.2.2
> langchainplus_sdk: 0.0.21
> langgraph: 0.1.14 | AzureSearch.avector_search_with_score() triggers "TypeError: 'AsyncSearchItemPaged' object is not iterable" when calling _results_to_documents() | https://api.github.com/repos/langchain-ai/langchain/issues/24740/comments | 4 | 2024-07-27T11:33:30Z | 2024-08-08T11:14:21Z | https://github.com/langchain-ai/langchain/issues/24740 | 2,433,439,253 | 24,740 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
import asyncio

from langchain_core.language_models.base import LanguageModelInput
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_ollama import OllamaLLM

model = OllamaLLM(model="qwen2:0.5b", repeat_penalty=1.1, top_k=10, temperature=0.8, top_p=0.5)

input = [
    SystemMessage(content="Some system content..."),
    HumanMessage(content="Some user content..."),
]

async def stream_response(input: LanguageModelInput):
    async for chunk in model.astream(input):
        print(f"{chunk=}")

asyncio.run(stream_response(input))
```
### Error Message and Stack Trace (if applicable)
Every response chunk is empty.
```python
chunk=''
chunk=''
chunk=''
...
chunk=''
```
### Description
Asynchronous streaming via the `.astream(...)` instance method always returns an empty string for each chunk of the model response. This happens because the response content is stored under an unexpected key and is therefore never extracted.
Checked for models `qwen2:0.5b`, `qwen2:1.5b`, and `llama3.1:8b` using Ollama 0.3.0.
Changing [.astream source code](https://github.com/langchain-ai/langchain/blob/152427eca13da070cc03f3f245a43bff312e43d1/libs/partners/ollama/langchain_ollama/llms.py#L332) from
```python
chunk = GenerationChunk(
    text=(
        stream_resp["message"]["content"]
        if "message" in stream_resp
        else ""
    ),
    generation_info=(
        dict(stream_resp) if stream_resp.get("done") is True else None
    ),
)
```
to
```python
chunk = GenerationChunk(
    text=(
        stream_resp["message"]["content"]
        if "message" in stream_resp
        else stream_resp.get("response", "")
    ),
    generation_info=(
        dict(stream_resp) if stream_resp.get("done") is True else None
    ),
)
```
resolves this issue.
Synchronous version of this method works fine.
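The effect of the one-line change can be checked with plain dictionaries (the payload shapes below are assumptions about what the two Ollama endpoints stream):

```python
# Assumed chunk shapes: /api/chat nests the text under "message",
# while /api/generate puts it under "response".
chat_chunk = {"message": {"content": "Hello"}}
generate_chunk = {"response": "Hello"}

def extract_before(chunk):
    return chunk["message"]["content"] if "message" in chunk else ""

def extract_after(chunk):
    return chunk["message"]["content"] if "message" in chunk else chunk.get("response", "")

before = extract_before(generate_chunk)  # "" - the empty chunks reported above
after = extract_after(generate_chunk)
```

With the fallback to `"response"`, the `/api/generate`-style chunk yields its text instead of an empty string.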
### System Info
langchain==0.2.11
langchain-core==0.2.23
langchain-ollama==0.1.0
langchain-openai==0.1.17
langchain-text-splitters==0.2.2 | [langchain-ollama] `.astream` does not extract model response content | https://api.github.com/repos/langchain-ai/langchain/issues/24737/comments | 2 | 2024-07-27T06:36:23Z | 2024-07-29T20:00:59Z | https://github.com/langchain-ai/langchain/issues/24737 | 2,433,305,218 | 24,737 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_huggingface import ChatHuggingFace, HuggingFaceEndpoint
llm = HuggingFaceEndpoint(
    endpoint_url="http://10.165.9.23:9009",
    task="text-generation",
    max_new_tokens=10,
    do_sample=False,
    temperature=0.8,
)

res = llm.invoke("Hugging Face is")
print(res)
print('-------------------')

llm_engine_hf = ChatHuggingFace(llm=llm, model_id="meta-llama/Meta-Llama-3-8B-Instruct")
res = llm_engine_hf.invoke("Hugging Face is")
print(res)
```
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Using ChatHuggingFace with the llm being a HuggingFaceEndpoint got a 422 "Unprocessable Entity" error from the huggingface-hub inference client `post` function when using the latest versions of langchain and huggingface-hub 0.24.3. After downgrading to the following versions, I got the code to run.
The working versions of the packages:
huggingface_hub==0.24.0
langchain==0.2.9
langchain-core==0.2.21
langchain-huggingface==0.0.3
langchain_community==0.2.7
### System Info
The following versions are what caused problems
langchain-community==0.0.38
langchain-core==0.2.19
langchain-huggingface==0.0.3
langchain-openai==0.1.16
huggingface_hub==0.24.3 | Chathuggingface 422 error | https://api.github.com/repos/langchain-ai/langchain/issues/24720/comments | 1 | 2024-07-26T16:48:47Z | 2024-08-02T04:14:05Z | https://github.com/langchain-ai/langchain/issues/24720 | 2,432,601,071 | 24,720 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.callbacks.manager import CallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain_community.llms import LlamaCpp

model_path = "/AI/language-models/Llama-3-Taiwan-8B-Instruct.Q5_K_M.gguf"

llm = LlamaCpp(
    model_path=model_path,
    n_gpu_layers=100,
    n_batch=512,
    n_ctx=2048,
    f16_kv=True,
    max_tokens=2048,
    callback_manager=CallbackManager([StreamingStdOutCallbackHandler()]),
    verbose=True,
)
```
### Error Message and Stack Trace (if applicable)
Traceback (most recent call last):
File "d:\program\python\ddavid-langchain\ddavid_langchain\start.py", line 15, in <module>
llm = LlamaCpp(
^^^^^^^^^
File "D:\miniconda3\envs\ddavid-langchain\Lib\site-packages\pydantic\v1\main.py", line 341, in __init__
raise validation_error
pydantic.v1.error_wrappers.ValidationError: 1 validation error for LlamaCpp
__root__
Could not load Llama model from path: /AI/language-models/Llama-3-Taiwan-8B-Instruct.Q5_K_M.gguf. Received error exception: access violation reading 0x0000000000000000 (type=value_error)
Exception ignored in: <function Llama.__del__ at 0x00000247FF061120>
Traceback (most recent call last):
File "D:\miniconda3\envs\ddavid-langchain\Lib\site-packages\llama_cpp\llama.py", line 2089, in __del__
AttributeError: 'Llama' object has no attribute '_lora_adapter'
### Description
I installed brand-new LangChain + llama-cpp-python under Python 3.10.14 on Windows 11. It worked well for several days, until I upgraded llama-cpp-python from 0.2.82 to 0.2.83. After the upgrade, the error "AttributeError: 'Llama' object has no attribute '_lora_adapter'" appeared.
I tried installing again in a new env under Python 3.11.9, but still ran into the same error.
I'm not 100% sure that the llama-cpp-python version causes this error, because I haven't yet tried llama-cpp-python 0.2.82 again.
### System Info
langchain==0.2.11
langchain-community==0.2.10
langchain-core==0.2.23
langchain-text-splitters==0.2.2
llama_cpp_python==0.2.83
Windows 11
Python 3.10.14 / Python 3.11.9
Install options:
$Env:CMAKE_ARGS="-DGGML_CUDA=on"
$Env:FORCE_CMAKE=1
| Get AttributeError: 'Llama' object has no attribute '_lora_adapter' with llama cpp | https://api.github.com/repos/langchain-ai/langchain/issues/24718/comments | 3 | 2024-07-26T15:33:21Z | 2024-07-31T14:53:35Z | https://github.com/langchain-ai/langchain/issues/24718 | 2,432,483,863 | 24,718 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [x] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
I have the following code for building a RAG Chatbot (using [this](https://python.langchain.com/v0.2/docs/how_to/streaming/) example):
```
from langchain_openai import OpenAIEmbeddings, ChatOpenAI
from langchain.chains import create_retrieval_chain
from langchain.chains.history_aware_retriever import create_history_aware_retriever
from langchain.chains.combine_documents import create_stuff_documents_chain

vectordb = FAISS.load_local(persist_directory, embedding, index_name, allow_dangerous_deserialization=True)
retriever = vectordb.as_retriever()
llm = ChatOpenAI()

....
prompt = {.....}
....

question_answer_chain = create_stuff_documents_chain(llm, prompt, output_parser=parser)
rag_chain = create_retrieval_chain(history_aware_retriever, question_answer_chain)

conversational_rag_chain = RunnableWithMessageHistory(
    rag_chain,
    get_session_history,
    input_messages_key="input",
    history_messages_key="chat_history",
    output_messages_key="answer",
)

while True:
    query = input("Ask a question: ")
    for chunk in conversational_rag_chain.stream(
        {"input": query},
        config={
            "configurable": {
                "session_id": "demo_1"
            }
        },
    ):
        if answer_chunk := chunk.get("answer"):
            print(f"{answer_chunk}", end="", flush=True)
    print()
```
### Error Message and Stack Trace (if applicable)
```
Ask a question: How many colors are in rainbow?
Error in RootListenersTracer.on_chain_end callback: KeyError('answer')
Error in callback coroutine: KeyError('answer')
A rainbow typically has seven colors, which are: Red, Orange, Yellow, Green, Blue, Indigo, Violet.</s>
Ask a question:
```
### Description
Hi,
I am trying to get the answer as a stream. The problem is that whenever `conversational_rag_chain.stream()` is invoked with an `input`, it emits the following errors:
`Error in RootListenersTracer.on_chain_end callback: KeyError('answer')`
`Error in callback coroutine: KeyError('answer')`
and then the output prints as intended.
My question is: how can I solve this? I have already set `output_messages_key="answer"` on the `conversational_rag_chain`, so am I doing something wrong, or is it a bug?
Any little discussion or help is welcome. Thanks in advance.
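As a side note, the `.get("answer")` guard in my loop is there because the streamed chunks do not all carry the same key; with stand-in chunks (shapes assumed for illustration):

```python
# Simplified chunk sequence of the kind a retrieval chain streams:
chunks = [
    {"input": "How many colors are in rainbow?"},
    {"context": ["<retrieved docs>"]},
    {"answer": "A rainbow typically has "},
    {"answer": "seven colors."},
]

# Guarding with `"answer" in chunk` (or .get()) tolerates chunks without the key:
answer = "".join(c["answer"] for c in chunks if "answer" in c)
```

So the printed output is fine; it is only the callback machinery that seems to assume every chunk has an `answer` key.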
### System Info
System Information
------------------
> OS: Linux
> OS Version: #39-Ubuntu SMP PREEMPT_DYNAMIC Fri Jul 5 21:49:14 UTC 2024
> Python Version: 3.12.3 (main, Apr 10 2024, 05:33:47) [GCC 13.2.0]
Package Information
-------------------
> langchain_core: 0.2.23
> langchain: 0.2.11
> langchain_community: 0.2.10
> langsmith: 0.1.93
> langchain_chroma: 0.1.2
> langchain_openai: 0.1.17
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | Error in RootListenersTracer.on_chain_end callback: KeyError('answer') while streaming a RAG Chain Output | https://api.github.com/repos/langchain-ai/langchain/issues/24713/comments | 24 | 2024-07-26T13:26:39Z | 2024-08-10T16:50:23Z | https://github.com/langchain-ai/langchain/issues/24713 | 2,432,241,854 | 24,713 |
[
"langchain-ai",
"langchain"
] | ### URL
https://python.langchain.com/v0.2/docs/integrations/vectorstores/pgvector/#drop-tables
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
Missing some functions
### Idea or request for content:
I need to inspect the vectors created by the embedding model and stored in the pgvector instance, and I also need to persist the instance (or the vectors) in the vector database. Thanks | DOC: PGvector instance content and persistence | https://api.github.com/repos/langchain-ai/langchain/issues/24708/comments | 0 | 2024-07-26T10:05:54Z | 2024-07-26T10:08:24Z | https://github.com/langchain-ai/langchain/issues/24708 | 2,431,893,558 | 24,708
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Description
In the code below I am not getting any option to use streaming, so kindly suggest how I can implement it.
This code is in a node of a State Agent:
model = ChatGoogleGenerativeAI(model="gemini-pro", convert_system_message_to_human=True, temperature=.20)
runnable = chat_message_prompt | model
with_message_history = RunnableWithMessageHistory(
    runnable,
    get_session_history,
    input_messages_key="input",
    history_messages_key="history"
)
print("********PROMPT TEST********", chat_message_prompt, "*******************")
response = with_message_history.invoke(
    {"ability": "teaching", "input": prompt},
    config={"configurable": {"session_id": phone_number}},
)
print("******* RESPONSE FROM GEMINI PRO = ", response.content, "*******")
answer = [response.content]
| How we can use Streaming with ChatGoogleGenerativeAI along with message history | https://api.github.com/repos/langchain-ai/langchain/issues/24706/comments | 2 | 2024-07-26T09:29:03Z | 2024-07-29T07:37:15Z | https://github.com/langchain-ai/langchain/issues/24706 | 2,431,823,094 | 24,706 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
When running:
```python
from langchain_ollama import ChatOllama
llm = ChatOllama(
    model=MODEL_NAME,
    base_url=BASE_URL,
    seed=42
)
```
The parameters `base_url` and `seed` get ignored. Reviewing the code of this class, I see that its definition is missing these attributes.
### Error Message and Stack Trace (if applicable)
_No response_
### Description
Regarding `seed`: in [PR 249](https://github.com/rasbt/LLMs-from-scratch/issues/249) in ollama, this feature was added to allow reproducibility of experiments.
Regarding `base_url`: since Ollama allows us to host LLMs on our own servers, we need to be able to specify the URL of the server.
Also, `OllamaFunctions` from the `langchain_experimental` package does provide support for this.
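If the class definition really is missing these fields, the silent behavior is consistent with a model that drops unknown keyword arguments; a toy stand-in (not the actual pydantic machinery):

```python
class StandInModel:
    """Toy stand-in: keep declared fields, silently drop unknown kwargs (like extra='ignore')."""
    _fields = {"model", "temperature"}

    def __init__(self, **kwargs):
        for name, value in kwargs.items():
            if name in self._fields:
                setattr(self, name, value)
            # base_url / seed would fall through here without any error

llm = StandInModel(model="qwen2:0.5b", base_url="http://myhost:11434", seed=42)
silently_dropped = not hasattr(llm, "base_url")
```

This is why the call above "succeeds" while the parameters have no effect, which is arguably worse than raising an error.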
### System Info
langchain==0.2.11
langchain-chroma==0.1.2
langchain-community==0.2.10
langchain-core==0.2.23
langchain-experimental==0.0.63
langchain-groq==0.1.6
langchain-ollama==0.1.0
langchain-text-splitters==0.2.2 | ChatOllama is missing the parameters seed and base_url | https://api.github.com/repos/langchain-ai/langchain/issues/24703/comments | 9 | 2024-07-26T08:27:42Z | 2024-07-30T15:02:00Z | https://github.com/langchain-ai/langchain/issues/24703 | 2,431,706,599 | 24,703 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Following code
```
from langchain_unstructured import UnstructuredLoader
loader = UnstructuredLoader(
file_name,
file=io.BytesIO(content),
partition_via_api=True,
server_url=get_from_env("url", "UNSTRUCTURED_ENDPOINT"),
)
for document in loader.lazy_load():
print("=" * 50)
print(document)
```
does not work, because I cannot pass `file_name` and the file content at the same time.
The file content exists only in memory, so I cannot load it from a file on disk.
If I omit `file_name` (because the constructor forbids passing both), the API does not work either, because the file type is unknown.
### Error Message and Stack Trace (if applicable)
Both file and file_name given:
```
File "/opt/initai_copilot/experiments/langchain_ext/document_loaders/unstructured_tests.py", line 35, in <module>
loader = UnstructuredLoader(
File "/usr/lib/python3/dist-packages/langchain_unstructured/document_loaders.py", line 96, in __init__
raise ValueError("file_path and file cannot be defined simultaneously.")
ValueError: file_path and file cannot be defined simultaneously.
```
No file_name given:
```
File "/usr/lib/python3/dist-packages/langchain_unstructured/document_loaders.py", line 150, in lazy_load
yield from load_file(f=self.file, f_path=self.file_path)
File "/usr/lib/python3/dist-packages/langchain_unstructured/document_loaders.py", line 185, in lazy_load
else self._elements_json
File "/usr/lib/python3/dist-packages/langchain_unstructured/document_loaders.py", line 202, in _elements_json
return self._elements_via_api
File "/usr/lib/python3/dist-packages/langchain_unstructured/document_loaders.py", line 231, in _elements_via_api
response = client.general.partition(req) # type: ignore
File "/usr/lib/python3/dist-packages/unstructured_client/general.py", line 100, in partition
raise errors.SDKError('API error occurred', http_res.status_code, http_res.text, http_res)
unstructured_client.models.errors.sdkerror.SDKError: API error occurred: Status 400
{"detail":"File type None is not supported."}
```
### Description
See above.
There are two problems with the `file` parameter (in-memory content):
* Without a given file name, the API partition mode does not work, because the file type cannot be inferred.
* With a given file name, the constructor does not allow both parameters.
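A possible workaround (my own suggestion, not an official API) until both parameters are accepted: spill the in-memory bytes to a named temporary file whose suffix encodes the file type, and pass that path as `file_path` instead of using `file`:

```python
import tempfile

def bytes_to_temp_path(content: bytes, suffix: str) -> str:
    """Persist in-memory bytes so a real path with a recognizable
    extension can be handed to the loader as file_path."""
    tmp = tempfile.NamedTemporaryFile(delete=False, suffix=suffix)
    try:
        tmp.write(content)
    finally:
        tmp.close()
    return tmp.name

# Hypothetical usage with the loader from the example above:
# path = bytes_to_temp_path(content, suffix=".pdf")
# loader = UnstructuredLoader(path, partition_via_api=True, server_url=...)
```

The temporary file should be deleted after loading; this only sidesteps the constructor restriction, it does not fix it.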
### System Info
Not relevant | langchain_unstructured.UnstructuredLoader in api-partition-mode with given file-content also needs file-name | https://api.github.com/repos/langchain-ai/langchain/issues/24701/comments | 0 | 2024-07-26T07:42:10Z | 2024-07-26T07:44:48Z | https://github.com/langchain-ai/langchain/issues/24701 | 2,431,630,922 | 24,701 |
[
"langchain-ai",
"langchain"
] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain_groq import ChatGroq
from langchain_community.tools.ddg_search import DuckDuckGoSearchRun
from langchain.prompts import ChatPromptTemplate
from langchain.agents import create_tool_calling_agent, AgentExecutor

llm = ChatGroq(temperature=0, model_name="llama-3.1-70b-versatile", api_key="", streaming=True)
ddg_search = DuckDuckGoSearchRun()
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful Search Assistant"),
    ("human", "{input}"),
    ("placeholder", "{agent_scratchpad}"),
])
tools = [ddg_search]
search_agent = create_tool_calling_agent(llm, tools, prompt)
search_agent_executor = AgentExecutor(agent=search_agent, tools=tools, verbose=False, handle_parsing_errors=True)

async for event in search_agent_executor.astream_events(
    {"input": "who is narendra modi"}, version="v1"
):
    kind = event["event"]
    if kind == "on_chat_model_stream":
        content = event["data"]["chunk"].content
        if content:
            print(content, end="", flush=True)
```
### Error Message and Stack Trace (if applicable)
```
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[37], line 1
----> 1 async for event in search_agent_executor.astream_events(
      2     {"input": "who is narendra modi"}, version="v1"
      3 ):
      4     kind = event["event"]

File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\runnables\base.py:1246, in Runnable.astream_events(self, input, config, version, include_names, include_types, include_tags, exclude_names, exclude_types, exclude_tags, **kwargs)
   1245 async with aclosing(event_stream):
-> 1246     async for event in event_stream:
   1247         yield event

File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\tracers\event_stream.py:778, in _astream_events_implementation_v1(runnable, input, config, include_names, include_types, include_tags, exclude_names, exclude_types, exclude_tags, **kwargs)
--> 778 async for log in _astream_log_implementation(
            runnable, input, config=config, stream=stream,
            diff=True, with_streamed_output_list=True, **kwargs):

File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\tracers\log_stream.py:670, in _astream_log_implementation(runnable, input, config, stream, diff, with_streamed_output_list, **kwargs)
    669 try:
--> 670     await task
    671 except asyncio.CancelledError:

File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\tracers\log_stream.py:624, in _astream_log_implementation.<locals>.consume_astream()
--> 624 async for chunk in runnable.astream(input, config, **kwargs):

File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain\agents\agent.py:1793, in AgentExecutor.astream(self, input, config, **kwargs)
   1782 iterator = AgentExecutorIterator(self, input, ..., **kwargs)
-> 1793 async for step in iterator:
   1794     yield step

File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain\agents\agent_iterator.py:266, in AgentExecutorIterator.__aiter__(self)
--> 266 async for chunk in self.agent_executor._aiter_next_step(
            self.name_to_tool_map, self.color_mapping, self.inputs,
            self.intermediate_steps, run_manager):

File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain\agents\agent.py:1483, in AgentExecutor._aiter_next_step(self, name_to_tool_map, color_mapping, inputs, intermediate_steps, run_manager)
   1482 # Call the LLM to see what to do.
-> 1483 output = await self.agent.aplan(
            intermediate_steps,
            callbacks=run_manager.get_child() if run_manager else None,
            **inputs)

File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain\agents\agent.py:619, in RunnableMultiActionAgent.aplan(self, intermediate_steps, callbacks, **kwargs)
    612 if self.stream_runnable:
--> 619     async for chunk in self.runnable.astream(
                inputs, config={"callbacks": callbacks}):

File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\runnables\base.py:3278, in RunnableSequence.astream(self, input, config, **kwargs)
-> 3278 async for chunk in self.atransform(input_aiter(), config, **kwargs):

File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\runnables\base.py:3261, in RunnableSequence.atransform(self, input, config, **kwargs)
-> 3261 async for chunk in self._atransform_stream_with_config(
            input, self._atransform,
            patch_config(config, run_name=(config or {}).get("run_name") or self.name),
            **kwargs):

File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\runnables\base.py:2160, in Runnable._atransform_stream_with_config(self, input, transformer, config, run_type, **kwargs)
   2159 if accepts_context(asyncio.create_task):
-> 2160     chunk: Output = await asyncio.create_task(
                py_anext(iterator), context=context)

File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\tracers\log_stream.py:258, in LogStreamCallbackHandler.tap_output_aiter(self, run_id, output)
--> 258 async for chunk in output:

File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\runnables\base.py:3231, in RunnableSequence._atransform(self, input, run_manager, config, **kwargs)
-> 3231 async for output in final_pipeline:
   3232     yield output

File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\runnables\base.py:1313, in Runnable.atransform(self, input, config, **kwargs)
-> 1313 async for ichunk in input:

File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\runnables\base.py:5276, in RunnableBindingBase.atransform(self, input, config, **kwargs)
-> 5276 async for item in self.bound.atransform(
            input, self._merge_configs(config), **{**self.kwargs, **kwargs}):
   5281     yield item

File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\runnables\base.py:1331, in Runnable.atransform(self, input, config, **kwargs)
   1330 if got_first_val:
-> 1331     async for output in self.astream(final, config, **kwargs):

File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\language_models\chat_models.py:439, in BaseChatModel.astream(self, input, config, stop, **kwargs)
    435 await run_manager.on_llm_error(
            e, response=LLMResult(generations=[[generation]] if generation else []))
--> 439 raise e

File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_core\language_models\chat_models.py:417, in BaseChatModel.astream(self, input, config, stop, **kwargs)
--> 417 async for chunk in self._astream(messages, stop=stop, **kwargs):

File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_groq\chat_models.py:582, in ChatGroq._astream(self, messages, stop, run_manager, **kwargs)
    578 if "tools" in kwargs:
    579     response = await self.async_client.create(
    580         messages=message_dicts, **{**params, **kwargs})
--> 582 chat_result = self._create_chat_result(response)

File d:\Learning\Groq-Tool-Calling\.venv\Lib\site-packages\langchain_groq\chat_models.py:665, in ChatGroq._create_chat_result(self, response)
    664 if not isinstance(response, dict):
--> 665     response = response.dict()
    666 token_usage = response.get("usage", {})

AttributeError: 'AsyncStream' object has no attribute 'dict'
```
### Description
langchain Version: 0.2.11
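A minimal, self-contained sketch of the failure mode (both names below are my own stand-ins, not groq's or LangChain's real classes): `_create_chat_result` assumes the response is either a dict or a pydantic-style object with `.dict()`, so a stream wrapper without that method raises exactly this error.

```python
# Stand-in for groq's AsyncStream, which exposes no .dict() method.
class FakeAsyncStream:
    pass

# Simplified shape of the _create_chat_result logic from the traceback above.
def create_chat_result(response):
    if not isinstance(response, dict):
        response = response.dict()  # AttributeError for stream objects
    return response

try:
    create_chat_result(FakeAsyncStream())
    raised = False
except AttributeError as exc:
    raised = "dict" in str(exc)

assert raised  # the stream object reproduces the AttributeError
assert create_chat_result({"choices": []}) == {"choices": []}
```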
### System Info
System Information
------------------
> OS: Windows
> OS Version: 10.0.19045
> Python Version: 3.11.4 (tags/v3.11.4:d2340ef, Jun 7 2023, 05:45:37) [MSC v.1934 64 bit (AMD64)]
Package Information
-------------------
> langchain_core: 0.2.23
> langchain: 0.2.11
> langchain_community: 0.2.10
> langsmith: 0.1.93
> langchain_cohere: 0.1.9
> langchain_experimental: 0.0.63
> langchain_groq: 0.1.6
> langchain_openai: 0.1.17
> langchain_text_splitters: 0.2.2
Packages not installed (Not Necessarily a Problem)
--------------------------------------------------
The following packages were not found:
> langgraph
> langserve | agent_executor.astream_events does not work with ChatGroq | https://api.github.com/repos/langchain-ai/langchain/issues/24699/comments | 1 | 2024-07-26T06:03:59Z | 2024-07-26T15:32:26Z | https://github.com/langchain-ai/langchain/issues/24699 | 2,431,489,026 | 24,699 |
["langchain-ai", "langchain"] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
Please see https://github.com/langchain-ai/langchain/issues/10864
### Error Message and Stack Trace (if applicable)
Please see https://github.com/langchain-ai/langchain/issues/10864
### Description
Negative similarity scores.
Multiple users have reported negative similarity scores with various models. Can we please reopen https://github.com/langchain-ai/langchain/issues/10864 ? Thanks.
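For context, a hedged sketch (an illustration of the common `1.0 - cosine_distance` convention, not necessarily the exact code path in every vector store) of how a distance-to-relevance conversion can legitimately yield negative scores:

```python
# Cosine distance lies in [0, 2]; converting it to a relevance score with
# `1.0 - distance` therefore produces negative values for near-opposite
# vectors, which a similarity_score_threshold retriever then surfaces.
def cosine_relevance(distance: float) -> float:
    return 1.0 - distance

assert cosine_relevance(0.2) > 0   # similar vectors: positive relevance
assert cosine_relevance(1.7) < 0   # dissimilar vectors: negative relevance
```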
### System Info
Please see https://github.com/langchain-ai/langchain/issues/10864 | When search_type="similarity_score_threshold, retriever returns negative scores (duplicate) | https://api.github.com/repos/langchain-ai/langchain/issues/24698/comments | 0 | 2024-07-26T05:23:15Z | 2024-07-29T10:18:10Z | https://github.com/langchain-ai/langchain/issues/24698 | 2,431,447,825 | 24,698 |
["langchain-ai", "langchain"] | ### Checked other resources
- [X] I added a very descriptive title to this issue.
- [X] I searched the LangChain documentation with the integrated search.
- [X] I used the GitHub search to find a similar question and didn't find it.
- [X] I am sure that this is a bug in LangChain rather than my code.
- [X] The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).
### Example Code
```python
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain_community.document_loaders import WebBaseLoader
from langchain_community.vectorstores import Chroma
from langchain_openai import OpenAIEmbeddings

urls = [
    "https://lilianweng.github.io/posts/2023-06-23-agent/",
    "https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/",
    "https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm/",
]

docs = [WebBaseLoader(url).load() for url in urls]
docs_list = [item for sublist in docs for item in sublist]

text_splitter = RecursiveCharacterTextSplitter.from_tiktoken_encoder(
    chunk_size=250, chunk_overlap=0
)
doc_splits = text_splitter.split_documents(docs_list)

# Add to vectorDB
vectorstore = Chroma.from_documents(
    documents=doc_splits,
    collection_name="rag-chroma",
    embedding=OpenAIEmbeddings(),
)
retriever = vectorstore.as_retriever()
```
### Error Message and Stack Trace (if applicable)
```plain
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
Cell In[2], line 21
     18 doc_splits = text_splitter.split_documents(docs_list)
     20 # Add to vectorDB
---> 21 vectorstore = Chroma.from_documents(
     22     documents=doc_splits,
     23     collection_name="rag-chroma",
     24     embedding=OpenAIEmbeddings(),
     25 )
     26 retriever = vectorstore.as_retriever()

File ~\AppData\Roaming\Python\Python311\site-packages\langchain_community\vectorstores\chroma.py:878, in Chroma.from_documents(cls, documents, embedding, ids, collection_name, persist_directory, client_settings, client, collection_metadata, **kwargs)
    876 texts = [doc.page_content for doc in documents]
    877 metadatas = [doc.metadata for doc in documents]
--> 878 return cls.from_texts(
    879     texts=texts,
    880     embedding=embedding,
    881     metadatas=metadatas,
    882     ids=ids,
    883     collection_name=collection_name,
    884     persist_directory=persist_directory,
    885     client_settings=client_settings,
    886     client=client,
    887     collection_metadata=collection_metadata,
...
---> 99 if key in self.model_fields:
    100     return getattr(self, key)
    101 return None

AttributeError: 'Collection' object has no attribute 'model_fields'
```
### Description
I just copied the code from the Self-RAG documentation example.
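A hedged illustration of what this error pattern usually means (the classes below are stand-ins, not chromadb's real ones): `model_fields` is a pydantic-v2-style attribute, so code that probes `key in obj.model_fields` fails on objects that do not follow that layout — i.e., a version mismatch between the caller and the installed chromadb/pydantic.

```python
class V2StyleCollection:
    # pydantic v2 models expose a `model_fields` mapping like this.
    model_fields = {"name": None}

class V1StyleCollection:
    # older-style objects have no `model_fields` attribute at all.
    pass

def probe(obj, key):
    return key in obj.model_fields

assert probe(V2StyleCollection(), "name") is True

try:
    probe(V1StyleCollection(), "name")
    raised = False
except AttributeError:
    raised = True
assert raised  # mirrors: 'Collection' object has no attribute 'model_fields'
```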
### System Info
OS: Windows
OS Version: 10.0.22631
Python Version: 3.11.7 | packaged by Anaconda, Inc. | (main, Dec 15 2023, 18:05:47) [MSC v.1916 64 bit (AMD64)]
langchain_core: 0.2.23
langchain: 0.2.11
langchain_community: 0.2.10
langsmith: 0.1.82
langchain_cohere: 0.1.9
langchain_experimental: 0.0.59
langchain_openai: 0.1.17
langchain_text_splitters: 0.2.0
langchainhub: 0.1.20
langgraph: 0.1.14 | AttributeError: 'Collection' object has no attribute 'model_fields' | https://api.github.com/repos/langchain-ai/langchain/issues/24696/comments | 5 | 2024-07-26T03:06:20Z | 2024-08-02T07:44:15Z | https://github.com/langchain-ai/langchain/issues/24696 | 2,431,319,999 | 24,696 |
["langchain-ai", "langchain"] | ### URL
https://python.langchain.com/v0.1/docs/templates/neo4j-advanced-rag/
### Checklist
- [X] I added a very descriptive title to this issue.
- [X] I included a link to the documentation page I am referring to (if applicable).
### Issue with current documentation:
When setting this template up for the first time, before ingesting any data, I run into the error:
`ValueError: The specified vector index name does not exist. Make sure to check if you spelled it correctly`.
Is an existing index a prerequisite? Can the doc clarify this?
### Idea or request for content:
_No response_ | DOC: Templates/neo4j-advanced-rag assumes index already exists | https://api.github.com/repos/langchain-ai/langchain/issues/24688/comments | 1 | 2024-07-25T21:06:42Z | 2024-07-26T02:36:02Z | https://github.com/langchain-ai/langchain/issues/24688 | 2,430,977,702 | 24,688 |
["langchain-ai", "langchain"] | Unfortunately this function fails for pydantic v1 models that use `Annotated` with `Field`, e.g.
```python
from typing import Annotated

from langchain_core import pydantic_v1
from langchain_core.pydantic_v1 import BaseModel
from langchain_core.utils.pydantic import _create_subset_model_v1

class InputModel(BaseModel):
    query: Annotated[str, pydantic_v1.Field(description="Hello World")]

_create_subset_model_v1("test", InputModel, InputModel.__annotations__.keys())
```
This produces the following error:
```plain
ValueError: cannot specify `Annotated` and value `Field`s together for 'query'
```
_Originally posted by @tdiggelm in https://github.com/langchain-ai/langchain/pull/24418#discussion_r1691736664_
| `langchain_core.utils.pydantic._create_subset_model_v1` fails for pydantic v1 models that use `Annotated` with `Field`, e.g. | https://api.github.com/repos/langchain-ai/langchain/issues/24676/comments | 2 | 2024-07-25T16:05:21Z | 2024-07-26T05:42:36Z | https://github.com/langchain-ai/langchain/issues/24676 | 2,430,427,140 | 24,676 |