Hi! I'm back. School has been a b- and a half and I've been really out of it because of the weather, but I'm back. This is my next attempt at doing CPT and SFT on a model. It's not good, but I think I'm making progress. Also, if you saw this when it first came out, no you didn't: I had that run going overnight, and the grad_norm had apparently climbed into the few hundred thousands, so it was basically stuck for over half the run. This one is still bad, but not as bad. Feel free to tweak this to make it better, and please let me know what you changed and how well it went!
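
For anyone poking at the training side of this: a runaway grad_norm like the one described above is usually tamed with gradient clipping. Here's a minimal, illustrative PyTorch sketch (a tiny linear layer standing in for the real model, with a deliberately huge loss) — not the actual training code for this upload:

```python
import torch

# Tiny stand-in model; the real run used SmolLM2-1.7B (everything here is illustrative).
model = torch.nn.Linear(8, 8)
loss = model(torch.randn(4, 8)).pow(2).sum() * 1e6  # deliberately enormous loss
loss.backward()

# clip_grad_norm_ returns the total norm *before* clipping and rescales
# gradients in place so their norm is at most max_norm.
pre = torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
post = torch.sqrt(sum(p.grad.pow(2).sum() for p in model.parameters()))
print(f"grad norm before clipping: {pre:.1f}, after: {post:.4f}")
```

In Transformers/TRL trainers the same effect comes from the `max_grad_norm` training argument (default 1.0), so a grad_norm in the hundreds of thousands suggests checking whether clipping was actually in effect.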

Uploaded finetuned model

  • Developed by: DrRiceIO7
  • License: apache-2.0
  • Finetuned from model: DrRiceIO7/SmolLM2-1.7B-CPT-Merged

This Llama-architecture model was trained 2x faster with Unsloth and Hugging Face's TRL library.
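
If you want to take up the invitation above and tweak the run, the knobs most relevant to the grad_norm blow-up live in TRL's `SFTConfig`. The values below are illustrative guesses for a config fragment, not the settings actually used for this model:

```python
from trl import SFTConfig

# Hypothetical settings -- NOT the ones used for this upload.
config = SFTConfig(
    output_dir="smollm2-sft",         # illustrative path
    learning_rate=2e-5,
    warmup_ratio=0.03,                # warmup helps avoid early grad spikes
    max_grad_norm=1.0,                # gradient clipping; keeps grad_norm bounded
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    logging_steps=10,                 # watch grad_norm in the training logs
)
```

Lowering the learning rate, adding warmup, or tightening `max_grad_norm` are the usual first moves when grad_norm starts climbing mid-run.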

  • Format: Safetensors
  • Model size: 2B params
  • Tensor type: F16

Model tree for DrRiceIO7/SmolLM2-1.7B-SFT-Merged

  • Finetuned from: DrRiceIO7/SmolLM2-1.7B-CPT-Merged → this model