https://huggingface.co/VIDraft/Qwen3-R1984-30B-A3B
#1418 opened by cocorang
mradermacher was faster and has already queued it as a highly anticipated model.
You can check progress at http://hf.tst.eu/status.html, or regularly check the model
summary page at https://hf.tst.eu/model#Qwen3-R1984-30B-A3B-GGUF for quants to appear.
["https://huggingface.co/VIDraft/Qwen3-R1984-30B-A3B",["worker","+cork","s","0"],1749369186],
https://huggingface.co/VIDraft/Qwen3-R1984-30B-A3B already in llmjob.submit.txt
It turns out this model failed in the past with the following error:
Qwen3-R1984-30B-A3B INFO:hf-to-gguf:blk.3.ffn_gate_inp.weight, torch.bfloat16 --> F32, shape = {2048, 128}
Qwen3-R1984-30B-A3B INFO:hf-to-gguf:blk.3.attn_k_norm.weight, torch.bfloat16 --> F32, shape = {128}
Qwen3-R1984-30B-A3B INFO:hf-to-gguf:blk.3.attn_k.weight, torch.bfloat16 --> F16, shape = {2048, 512}
Qwen3-R1984-30B-A3B INFO:hf-to-gguf:blk.3.attn_output.weight, torch.bfloat16 --> F16, shape = {4096, 2048}
Qwen3-R1984-30B-A3B INFO:hf-to-gguf:blk.3.attn_q_norm.weight, torch.bfloat16 --> F32, shape = {128}
Qwen3-R1984-30B-A3B INFO:hf-to-gguf:blk.3.attn_q.weight, torch.bfloat16 --> F16, shape = {2048, 4096}
Qwen3-R1984-30B-A3B INFO:hf-to-gguf:blk.3.attn_v.weight, torch.bfloat16 --> F16, shape = {2048, 512}
Qwen3-R1984-30B-A3B INFO:hf-to-gguf:gguf: loading model part 'model-00001-of-00025.safetensors'
Qwen3-R1984-30B-A3B INFO:hf-to-gguf:token_embd.weight, torch.float32 --> F16, shape = {2048, 151936}
Qwen3-R1984-30B-A3B Traceback (most recent call last):
Qwen3-R1984-30B-A3B File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 6533, in <module>
Qwen3-R1984-30B-A3B main()
Qwen3-R1984-30B-A3B File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 6527, in main
Qwen3-R1984-30B-A3B model_instance.write()
Qwen3-R1984-30B-A3B File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 403, in write
Qwen3-R1984-30B-A3B self.prepare_tensors()
Qwen3-R1984-30B-A3B File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 3051, in prepare_tensors
Qwen3-R1984-30B-A3B super().prepare_tensors()
Qwen3-R1984-30B-A3B File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 365, in prepare_tensors
Qwen3-R1984-30B-A3B self.gguf_writer.add_tensor(new_name, data, raw_dtype=data_qtype)
Qwen3-R1984-30B-A3B File "/llmjob/llama.cpp/gguf-py/gguf/gguf_writer.py", line 386, in add_tensor
Qwen3-R1984-30B-A3B self.add_tensor_info(name, shape, tensor.dtype, tensor.nbytes, raw_dtype=raw_dtype)
Qwen3-R1984-30B-A3B File "/llmjob/llama.cpp/gguf-py/gguf/gguf_writer.py", line 337, in add_tensor_info
Qwen3-R1984-30B-A3B raise ValueError(f'Duplicated tensor name {name!r}')
Qwen3-R1984-30B-A3B ValueError: Duplicated tensor name 'token_embd.weight'
Qwen3-R1984-30B-A3B yes: standard output: Broken pipe
Qwen3-R1984-30B-A3B job finished, status 1
Qwen3-R1984-30B-A3B job-done<0 Qwen3-R1984-30B-A3B noquant 1>
Qwen3-R1984-30B-A3B
Qwen3-R1984-30B-A3B NAME: Qwen3-R1984-30B-A3B
Qwen3-R1984-30B-A3B TIME: Mon Jun 9 01:15:34 2025
Qwen3-R1984-30B-A3B WORKER: nico1
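The failure above comes from `gguf_writer.add_tensor_info` refusing to register the same tensor name twice: conversion produced `token_embd.weight` a second time (typically when two source tensors, e.g. a tied `lm_head.weight` and `model.embed_tokens.weight`, both map to the same GGUF name). A minimal sketch of that guard — not the actual llama.cpp source, just the behavior the traceback shows:

```python
# Hedged sketch of the duplicate-name check seen in the traceback:
# each tensor name may be registered exactly once per GGUF file.

class TensorRegistry:
    """Stand-in for the name bookkeeping in GGUFWriter.add_tensor_info."""

    def __init__(self):
        self._names = set()

    def add_tensor_info(self, name):
        # Reject a name that was already registered, as gguf_writer does.
        if name in self._names:
            raise ValueError(f"Duplicated tensor name {name!r}")
        self._names.add(name)

reg = TensorRegistry()
reg.add_tensor_info("token_embd.weight")       # first shard registers it
try:
    reg.add_tensor_info("token_embd.weight")   # second registration, as in the failed job
except ValueError as e:
    print(e)  # Duplicated tensor name 'token_embd.weight'
```

In practice this usually means the upload contains overlapping source tensors across shards (or tied embeddings exported twice), so the model repo itself needs fixing before conversion can succeed.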