Vision support?

#1
by PrathmeshZ - opened

It does not have an mmproj file, so does it support vision tasks? I wanted to use it with llama.cpp.

```
Attempting mmproj conversion: python3 /home/mahadeva/code/models/llama.cpp/convert_hf_to_gguf.py /home/mahadeva/.cache/huggingface/hub/models--nvidia--Eagle2-1B/snapshots/508bc72fb1a946db3d5c1ebca50165079afef782 --outfile /home/mahadeva/code/models/Eagle2-1B/Eagle2-1B-f32.mmproj --model-name Eagle2-1B --mmproj --outtype f32
mmproj conversion failed for f32: INFO:hf-to-gguf:Loading model: 508bc72fb1a946db3d5c1ebca50165079afef782
WARNING:hf-to-gguf:Failed to load model config from /home/mahadeva/.cache/huggingface/hub/models--nvidia--Eagle2-1B/snapshots/508bc72fb1a946db3d5c1ebca50165079afef782: The repository /home/mahadeva/.cache/huggingface/hub/models--nvidia--Eagle2-1B/snapshots/508bc72fb1a946db3d5c1ebca50165079afef782 contains custom code which must be executed to correctly load the model. You can inspect the repository content at /home/mahadeva/.cache/huggingface/hub/models--nvidia--Eagle2-1B/snapshots/508bc72fb1a946db3d5c1ebca50165079afef782 .
You can inspect the repository content at https://hf.co//home/mahadeva/.cache/huggingface/hub/models--nvidia--Eagle2-1B/snapshots/508bc72fb1a946db3d5c1ebca50165079afef782.
Please pass the argument trust_remote_code=True to allow custom code to be run.
WARNING:hf-to-gguf:Trying to load config.json instead
INFO:hf-to-gguf:Model architecture: Eagle2_5_VLForConditionalGeneration
ERROR:hf-to-gguf:Model Eagle2_5_VLForConditionalGeneration is not supported
```

The same conversion failure (`ERROR:hf-to-gguf:Model Eagle2_5_VLForConditionalGeneration is not supported`) occurred for the f16, bf16, and q8_0 output types.
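The error is not about quantization type: `convert_hf_to_gguf.py` reads the model's `config.json` and dispatches on the architecture name, so every outtype fails the same way once the architecture is unknown. A minimal sketch of that dispatch logic, with a purely illustrative `SUPPORTED_ARCHITECTURES` set (the real registry lives inside the converter script):

```python
import json

# Hypothetical config.json contents mimicking what the Eagle2-1B snapshot reports.
config_json = """
{
  "architectures": ["Eagle2_5_VLForConditionalGeneration"],
  "model_type": "eagle_2_5_vl"
}
"""

# Illustrative subset of architectures the converter recognizes, NOT the real list.
SUPPORTED_ARCHITECTURES = {
    "LlamaForCausalLM",
    "Qwen2ForCausalLM",
    "Qwen2VLForConditionalGeneration",
}

def check_support(config_text: str) -> str:
    """Return a status message for the first architecture listed in config.json."""
    arch = json.loads(config_text)["architectures"][0]
    if arch in SUPPORTED_ARCHITECTURES:
        return f"{arch} is supported"
    return f"{arch} is not supported"

print(check_support(config_json))
# Eagle2_5_VLForConditionalGeneration is not supported
```

Until a mapping for `Eagle2_5_VLForConditionalGeneration` is added to llama.cpp's converter, no choice of `--outtype` will get past this check.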

I updated to the latest llama.cpp and it's still not supported. The best way forward is to open a request to add support for Eagle2_5_VLForConditionalGeneration on the llama.cpp GitHub tracker (https://github.com/ggml-org/llama.cpp/issues); that might speed things up a little.

Opened an issue: https://github.com/ggml-org/llama.cpp/issues/16704. Let's hope for the best.

PrathmeshZ changed discussion status to closed
