Tokenizer problem after the November 13 update

#5
by Restodecoca - opened

Yesterday the model was working perfectly, but today deployment started failing. I tried several things to resolve the problem, including re-downloading tokenizer.json directly with the Hugging Face CLI, but it didn't help. Based on the error log below, the failure happens while parsing the tokenizer configuration (`get_tokenizer_config` raises a `JSONDecodeError`), so it looks like after the latest update the tokenizer config file is corrupted or is no longer valid JSON.

2025-11-13 13:29:54.318021: I tensorflow/core/util/port.cc:153] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2025-11-13 13:29:54.334983: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:467] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1763040594.356211    1082 cuda_dnn.cc:8579] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1763040594.362728    1082 cuda_blas.cc:1407] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
W0000 00:00:1763040594.378874    1082 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1763040594.378900    1082 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1763040594.378902    1082 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
W0000 00:00:1763040594.378904    1082 computation_placer.cc:177] computation placer already registered. Please check linkage and avoid linking the same target more than once.
2025-11-13 13:29:54.383777: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 AVX512F AVX512_VNNI FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
(APIServer pid=1082) INFO 11-13 13:30:08 [api_server.py:1839] vLLM API server version 0.11.0
(APIServer pid=1082) INFO 11-13 13:30:08 [utils.py:233] non-default args: {'model_tag': 'numind/NuExtract-2.0-8B', 'chat_template_content_format': 'openai', 'model': 'numind/NuExtract-2.0-8B', 'trust_remote_code': True, 'max_model_len': 32768, 'hf_token': '<redacted>'}
(APIServer pid=1082) The argument `trust_remote_code` is to be used with Auto classes. It has no effect here and is ignored.
(APIServer pid=1082) INFO 11-13 13:30:26 [model.py:547] Resolved architecture: Qwen2_5_VLForConditionalGeneration
(APIServer pid=1082) `torch_dtype` is deprecated! Use `dtype` instead!
(APIServer pid=1082) INFO 11-13 13:30:26 [model.py:1510] Using max model len 32768
(APIServer pid=1082) INFO 11-13 13:30:28 [scheduler.py:205] Chunked prefill is enabled with max_num_batched_tokens=2048.
(APIServer pid=1082) Traceback (most recent call last):
(APIServer pid=1082)   File "/usr/local/bin/vllm", line 10, in <module>
(APIServer pid=1082)     sys.exit(main())
(APIServer pid=1082)              ^^^^^^
(APIServer pid=1082)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/cli/main.py", line 54, in main
(APIServer pid=1082)     args.dispatch_function(args)
(APIServer pid=1082)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/cli/serve.py", line 57, in cmd
(APIServer pid=1082)     uvloop.run(run_server(args))
(APIServer pid=1082)   File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 96, in run
(APIServer pid=1082)     return __asyncio.run(
(APIServer pid=1082)            ^^^^^^^^^^^^^^
(APIServer pid=1082)   File "/usr/lib/python3.12/asyncio/runners.py", line 195, in run
(APIServer pid=1082)     return runner.run(main)
(APIServer pid=1082)            ^^^^^^^^^^^^^^^^
(APIServer pid=1082)   File "/usr/lib/python3.12/asyncio/runners.py", line 118, in run
(APIServer pid=1082)     return self._loop.run_until_complete(task)
(APIServer pid=1082)            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1082)   File "uvloop/loop.pyx", line 1518, in uvloop.loop.Loop.run_until_complete
(APIServer pid=1082)   File "/usr/local/lib/python3.12/dist-packages/uvloop/__init__.py", line 48, in wrapper
(APIServer pid=1082)     return await main
(APIServer pid=1082)            ^^^^^^^^^^
(APIServer pid=1082)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1884, in run_server
(APIServer pid=1082)     await run_server_worker(listen_address, sock, args, **uvicorn_kwargs)
(APIServer pid=1082)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 1902, in run_server_worker
(APIServer pid=1082)     async with build_async_engine_client(
(APIServer pid=1082)                ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1082)   File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=1082)     return await anext(self.gen)
(APIServer pid=1082)            ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1082)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 180, in build_async_engine_client
(APIServer pid=1082)     async with build_async_engine_client_from_engine_args(
(APIServer pid=1082)                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1082)   File "/usr/lib/python3.12/contextlib.py", line 210, in __aenter__
(APIServer pid=1082)     return await anext(self.gen)
(APIServer pid=1082)            ^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1082)   File "/usr/local/lib/python3.12/dist-packages/vllm/entrypoints/openai/api_server.py", line 225, in build_async_engine_client_from_engine_args
(APIServer pid=1082)     async_llm = AsyncLLM.from_vllm_config(
(APIServer pid=1082)                 ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1082)   File "/usr/local/lib/python3.12/dist-packages/vllm/utils/__init__.py", line 1572, in inner
(APIServer pid=1082)     return fn(*args, **kwargs)
(APIServer pid=1082)            ^^^^^^^^^^^^^^^^^^^
(APIServer pid=1082)   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/async_llm.py", line 207, in from_vllm_config
(APIServer pid=1082)     return cls(
(APIServer pid=1082)            ^^^^
(APIServer pid=1082)   File "/usr/local/lib/python3.12/dist-packages/vllm/v1/engine/async_llm.py", line 114, in __init__
(APIServer pid=1082)     self.tokenizer = init_tokenizer_from_configs(
(APIServer pid=1082)                      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1082)   File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/tokenizer.py", line 286, in init_tokenizer_from_configs
(APIServer pid=1082)     return get_tokenizer(
(APIServer pid=1082)            ^^^^^^^^^^^^^^
(APIServer pid=1082)   File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/tokenizer.py", line 238, in get_tokenizer
(APIServer pid=1082)     raise e
(APIServer pid=1082)   File "/usr/local/lib/python3.12/dist-packages/vllm/transformers_utils/tokenizer.py", line 217, in get_tokenizer
(APIServer pid=1082)     tokenizer = AutoTokenizer.from_pretrained(
(APIServer pid=1082)                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1082)   File "/usr/local/lib/python3.12/dist-packages/transformers/models/auto/tokenization_auto.py", line 1073, in from_pretrained
(APIServer pid=1082)     tokenizer_config = get_tokenizer_config(pretrained_model_name_or_path, **kwargs)
(APIServer pid=1082)                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1082)   File "/usr/local/lib/python3.12/dist-packages/transformers/models/auto/tokenization_auto.py", line 927, in get_tokenizer_config
(APIServer pid=1082)     result = json.load(reader)
(APIServer pid=1082)              ^^^^^^^^^^^^^^^^^
(APIServer pid=1082)   File "/usr/lib/python3.12/json/__init__.py", line 293, in load
(APIServer pid=1082)     return loads(fp.read(),
(APIServer pid=1082)            ^^^^^^^^^^^^^^^^
(APIServer pid=1082)   File "/usr/lib/python3.12/json/__init__.py", line 346, in loads
(APIServer pid=1082)     return _default_decoder.decode(s)
(APIServer pid=1082)            ^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1082)   File "/usr/lib/python3.12/json/decoder.py", line 338, in decode
(APIServer pid=1082)     obj, end = self.raw_decode(s, idx=_w(s, 0).end())
(APIServer pid=1082)                ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1082)   File "/usr/lib/python3.12/json/decoder.py", line 354, in raw_decode
(APIServer pid=1082)     obj, end = self.scan_once(s, idx)
(APIServer pid=1082)                ^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=1082) json.decoder.JSONDecodeError: Expecting ',' delimiter: line 199 column 37 (char 4467)
Restodecoca changed discussion status to closed
