lingshu-medical-mllm/Lingshu-I-8B
May I request quants for https://huggingface.co/lingshu-medical-mllm/Lingshu-I-8B
(SOTA performance on medical VQA tasks)
It's queued!
You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#Lingshu-I-8B-GGUF for quants to appear.
Thank you!
Ah, too bad. It seems the model isn't supported; the status says: "error/1 ValueError Can not map tensor"
INFO:hf-to-gguf:blk.6.attn_output.weight, torch.bfloat16 --> BF16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.6.attn_q.bias, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.6.attn_q.weight, torch.bfloat16 --> BF16, shape = {3584, 3584}
INFO:hf-to-gguf:blk.6.attn_v.bias, torch.bfloat16 --> F32, shape = {512}
INFO:hf-to-gguf:blk.6.attn_v.weight, torch.bfloat16 --> BF16, shape = {3584, 512}
Traceback (most recent call last):
  File "/llmjob/llama.cpp-nico/convert_hf_to_gguf.py", line 12009, in <module>
    main()
  File "/llmjob/llama.cpp-nico/convert_hf_to_gguf.py", line 12003, in main
    model_instance.write()
  File "/llmjob/llama.cpp-nico/convert_hf_to_gguf.py", line 698, in write
    self.prepare_tensors()
  File "/llmjob/llama.cpp-nico/convert_hf_to_gguf.py", line 554, in prepare_tensors
    for new_name, data_torch in (self.modify_tensors(data_torch, name, bid)):
  File "/llmjob/llama.cpp-nico/convert_hf_to_gguf.py", line 3568, in modify_tensors
    yield from super().modify_tensors(data_torch, name, bid)
  File "/llmjob/llama.cpp-nico/convert_hf_to_gguf.py", line 516, in modify_tensors
    return [(self.map_tensor_name(name), data_torch)]
  File "/llmjob/llama.cpp-nico/convert_hf_to_gguf.py", line 508, in map_tensor_name
    raise ValueError(f"Can not map tensor {name!r}")
ValueError: Can not map tensor 'vision_tower.embeddings.cls_token'
Yeah, it seems it's not supported: the converter probably doesn't expect the vision block, so the vision_tower tensors have no GGUF mapping.
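For context, the converter translates each Hugging Face tensor name into a GGUF tensor name and raises exactly this ValueError when a name has no known mapping. A minimal sketch of that behavior (illustrative only; the real script uses gguf's TensorNameMap with a far larger table, and the entries below are a hypothetical, abbreviated subset):

```python
import re

# Hypothetical, abbreviated mapping table for a text-only Qwen2-style model.
# Note there is no entry for any "vision_tower.*" tensor.
TENSOR_MAP = {
    "model.layers.{bid}.self_attn.q_proj.weight": "blk.{bid}.attn_q.weight",
    "model.layers.{bid}.self_attn.v_proj.weight": "blk.{bid}.attn_v.weight",
    "model.layers.{bid}.self_attn.o_proj.weight": "blk.{bid}.attn_output.weight",
}

def map_tensor_name(name: str) -> str:
    """Return the GGUF name for a HF tensor name, or raise like the converter."""
    for pattern, target in TENSOR_MAP.items():
        # Turn "{bid}" into a capture group for the block index.
        regex = "^" + re.escape(pattern).replace(r"\{bid\}", r"(\d+)") + "$"
        m = re.match(regex, name)
        if m:
            return target.replace("{bid}", m.group(1))
    raise ValueError(f"Can not map tensor {name!r}")

print(map_tensor_name("model.layers.6.self_attn.q_proj.weight"))
# A vision-tower tensor has no mapping, so it fails just like in the log above:
try:
    map_tensor_name("vision_tower.embeddings.cls_token")
except ValueError as e:
    print(e)
```

So until llama.cpp's converter grows a mapping for this model's vision tower (or the conversion is run with the vision block stripped), the error is expected.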