"model_type": "lighton_ocr"

#7

"model_type" is not "mistral3"

ValueError: The checkpoint you are trying to load has model type lighton_ocr
but Transformers does not recognize this architecture.
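The ValueError above comes from transformers' config lookup: AutoConfig maps the checkpoint's "model_type" string to a registered config class and fails when the string is unknown to the installed version. A minimal sketch of that mechanism (the registry dict and function name here are illustrative, not transformers' actual internals):

```python
# Illustrative model_type -> config-class registry, in the spirit of
# transformers' AutoConfig dispatch (simplified; stand-in class only).
class Mistral3Config:
    model_type = "mistral3"

CONFIG_MAPPING = {"mistral3": Mistral3Config}

def config_class_for(model_type):
    """Resolve a config class from a checkpoint's "model_type" string."""
    if model_type not in CONFIG_MAPPING:
        # An older transformers has no entry for "lighton_ocr", hence the error.
        raise ValueError(
            f"The checkpoint you are trying to load has model type {model_type} "
            "but Transformers does not recognize this architecture."
        )
    return CONFIG_MAPPING[model_type]
```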

"lighton_ocr" is now supported by transformers==5.0.0rc3


"lighton_ocr" is now supported by transformers==5.0.0rc3

Still getting an error:

(APIServer pid=56139)   File "/mnt/work/miniconda3/envs/vllm/lib/python3.11/site-packages/vllm/multimodal/registry.py", line 261, in _create_processing_info
(APIServer pid=56139)     return factories.info(ctx)
(APIServer pid=56139)            ^^^^^^^^^^^^^^^^^^^
(APIServer pid=56139)   File "/mnt/work/miniconda3/envs/vllm/lib/python3.11/site-packages/vllm/model_executor/models/mistral3.py", line 342, in _build_mistral3_info
(APIServer pid=56139)     hf_config = ctx.get_hf_config(Mistral3Config)
(APIServer pid=56139)                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
(APIServer pid=56139)   File "/mnt/work/miniconda3/envs/vllm/lib/python3.11/site-packages/vllm/multimodal/processing.py", line 1132, in get_hf_config
(APIServer pid=56139)     raise TypeError(
(APIServer pid=56139) TypeError: Invalid type of HuggingFace config. Expected type: <class 'transformers.models.mistral3.configuration_mistral3.Mistral3Config'>, but found type: <class 'transformers.models.lighton_ocr.configuration_lighton_ocr.LightOnOcrConfig'>
(vllm) admin@8-n01:/mnt/work/xiakj/hf$ pip list|grep transformers
transformers 5.0.0rc3
(vllm) admin@8-n01:/mnt/work/xiakj/hf$ vi LightOnOCR-2-1B/config.json
(vllm) admin@8-n01:/mnt/work/xiakj/hf$ cat LightOnOCR-2-1B/config.json |grep lighton_ocr
"model_type": "lighton_ocr",
(vllm) admin@8-n01:/mnt/work/xiakj/hf$ cat LightOnOCR-2-1B/config.json
{
  "architectures": [
    "LightOnOCRForConditionalGeneration"
  ],
  "dtype": "bfloat16",
  "eos_token_id": 151645,
  "image_token_id": 151655,
  "model_type": "lighton_ocr",
  "multimodal_projector_bias": false,
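The TypeError in the log comes from a strict isinstance check: vLLM's mistral3 model code requests a Mistral3Config, but with "model_type" set to "lighton_ocr", transformers 5.0.0rc3 resolves the checkpoint to a LightOnOcrConfig instead. A self-contained sketch of that check (stand-in classes and a simplified signature, not vLLM's real code):

```python
# Stand-in config classes; the real ones live in transformers.
class Mistral3Config: ...
class LightOnOcrConfig: ...

def get_hf_config(hf_config, expected_type):
    """Sketch of the validation that raises in vllm/multimodal/processing.py."""
    if not isinstance(hf_config, expected_type):
        raise TypeError(
            "Invalid type of HuggingFace config. "
            f"Expected type: {expected_type}, but found type: {type(hf_config)}"
        )
    return hf_config
```

So even with a transformers version that knows "lighton_ocr", the load fails until vLLM's model code accepts the new config class.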

Oh... vLLM hasn't caught up with transformers==5.0.0rc3 yet. And I've just found another bug in transformers==5.0.0rc3. Sorry, but it is too early for us to change "model_type" to "lighton_ocr"; I will wait for the official releases of transformers 5.0.0 and vLLM.
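Until vLLM supports the new model type, a local workaround is to edit the checkpoint's config.json back to the old value. A hypothetical helper for that (not an official tool; changing "model_type" can confuse other loaders, so keep a backup of the original file):

```python
import json
from pathlib import Path

def set_model_type(config_path, model_type):
    """Rewrite the "model_type" field of a HF config.json in place."""
    path = Path(config_path)
    cfg = json.loads(path.read_text())
    cfg["model_type"] = model_type
    path.write_text(json.dumps(cfg, indent=2) + "\n")
    return cfg["model_type"]

# e.g. set_model_type("LightOnOCR-2-1B/config.json", "mistral3")
```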

KoichiYasuoka changed pull request status to closed


OK.
