How to use from Docker Model Runner

docker model run hf.co/invalid-coder/test-GGUF:Q2_K
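As a minimal sketch (assuming Docker Desktop with Model Runner enabled; the prompt text is only an illustration), you can pull the Q2_K GGUF first and then either send a one-shot prompt or start an interactive chat:

# Pull the Q2_K GGUF from Hugging Face via Docker Model Runner
docker model pull hf.co/invalid-coder/test-GGUF:Q2_K

# One-shot prompt; running the same command without a prompt starts an interactive chat
docker model run hf.co/invalid-coder/test-GGUF:Q2_K "Summarize what model merging is."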


This is a test version for pruning. It is a base model that will be pruned and quantized for on-device use.

I used mergekit (https://github.com/cg123/mergekit) to merge two models (a hedged command sketch follows the list):

https://huggingface.co/jeonsworld/CarbonVillain-en-10.7B-v2
https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct-DPO-v2
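The exact merge method and parameters are not published here, so the config below is only an assumption illustrating a typical SLERP merge of the two SOLAR-based 10.7B models with mergekit's YAML-driven CLI:

# Hypothetical mergekit config and merge run; method, layer count, and t value are assumptions
cat > merge-config.yml <<'EOF'
slices:
  - sources:
      - model: jeonsworld/CarbonVillain-en-10.7B-v2
        layer_range: [0, 48]
      - model: kyujinpy/Sakura-SOLAR-Instruct-DPO-v2
        layer_range: [0, 48]
merge_method: slerp
base_model: kyujinpy/Sakura-SOLAR-Instruct-DPO-v2
parameters:
  t: 0.5
dtype: float16
EOF

pip install mergekit
mergekit-yaml merge-config.yml ./merged-model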

I used GGUF quantization.
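A rough sketch of that step with llama.cpp, assuming the merged checkpoint sits in ./merged-model (script and binary names follow recent llama.cpp releases; older ones used convert.py and ./quantize):

# Convert the merged HF checkpoint to GGUF, then quantize to Q2_K (2-bit)
python convert_hf_to_gguf.py ./merged-model --outfile test-f16.gguf --outtype f16
./llama-quantize test-f16.gguf test-Q2_K.gguf Q2_K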
