quant-hinny-coder-6.7b-java

#1424
by Netsnake - opened

hinny-coder/quant-hinny-coder-6.7b-java

It's queued! :D

You can check for progress at http://hf.tst.eu/status.html or regularly check the model
summary page at https://hf.tst.eu/model#quant-hinny-coder-6.7b-java-GGUF for quants to appear.

This model unfortunately failed with the following error which, given the LlamaForCausalLM architecture, was somewhat expected. Most LlamaForCausalLM models use a custom architecture that is not llama.cpp-compatible, since aside from the leaked LLaMA v1 models, LlamaForCausalLM is not an architecture used in any popular foundation model as far as I'm aware.

INFO:hf-to-gguf:gguf: loading model weight map from 'model.safetensors.index.json'
INFO:hf-to-gguf:gguf: loading model part 'model-00001-of-00002.safetensors'
INFO:hf-to-gguf:token_embd.weight,           torch.float16 --> F16, shape = {4096, 32256}
INFO:hf-to-gguf:blk.0.attn_norm.weight,      torch.float16 --> F32, shape = {4096}
Traceback (most recent call last):
  File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 9178, in <module>
    main()
  File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 9172, in main
    model_instance.write()
  File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 439, in write
    self.prepare_tensors()
  File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 2265, in prepare_tensors
    super().prepare_tensors()
  File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 300, in prepare_tensors
    for new_name, data_torch in (self.modify_tensors(data_torch, name, bid)):
  File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 2232, in modify_tensors
    return [(self.map_tensor_name(name), data_torch)]
  File "/llmjob/llama.cpp/convert_hf_to_gguf.py", line 259, in map_tensor_name
    raise ValueError(f"Can not map tensor {name!r}")
ValueError: Can not map tensor 'model.layers.0.mlp.down_proj.SCB'
job finished, status 1
job-done<0 quant-hinny-coder-6.7b-java noquant 1>

error/1 ValueError Can not map tensor '
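The `.SCB` suffix on the failing tensor suggests the checkpoint was saved with bitsandbytes 8-bit quantization state baked in, which `convert_hf_to_gguf.py` has no mapping for. A minimal sketch of how one could screen an index for such tensors before queueing a conversion (the `.weight_format` suffix and the helper names are assumptions; only `.SCB` is confirmed by the traceback above):

```python
import json

# Suffixes that look like bitsandbytes quantization state rather than real
# weights. ".SCB" appears in the traceback above; ".weight_format" is an
# additional guess, not confirmed by this log.
BNB_SUFFIXES = (".SCB", ".weight_format")

def find_unmappable_tensors(weight_map):
    """Return tensor names that convert_hf_to_gguf.py would likely
    fail to map to GGUF tensor names."""
    return [name for name in weight_map if name.endswith(BNB_SUFFIXES)]

def check_index(index_path):
    """Load a 'model.safetensors.index.json' file and report
    suspicious tensor names from its weight map."""
    with open(index_path) as f:
        weight_map = json.load(f)["weight_map"]
    return find_unmappable_tensors(weight_map)

# The .SCB tensor name below is taken verbatim from the traceback.
weight_map = {
    "model.layers.0.mlp.down_proj.weight": "model-00001-of-00002.safetensors",
    "model.layers.0.mlp.down_proj.SCB": "model-00001-of-00002.safetensors",
}
print(find_unmappable_tensors(weight_map))
# → ['model.layers.0.mlp.down_proj.SCB']
```

If such tensors turn up, the usual fix would be to re-upload the model with full-precision (fp16/bf16) weights rather than a bitsandbytes-quantized checkpoint.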
