# gemma2b-coding-gpt4o-100k

Dataset: [llama-duo/synth_coding_dataset_dedup](https://huggingface.co/datasets/llama-duo/synth_coding_dataset_dedup)
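For reference, the training dataset can be pulled with the `datasets` library. A minimal sketch; the available splits and columns are not documented in this card, so inspect the returned object to confirm them:

```python
from datasets import load_dataset

# Load the deduplicated synthetic coding dataset used for this fine-tune.
ds = load_dataset("llama-duo/synth_coding_dataset_dedup")
print(ds)  # prints the available splits and column names
```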
How to use llama-duo/gemma2b-coding-gpt4o-100k with PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then attach the fine-tuned adapter weights on top.
# Note: google/gemma-2b is gated on the Hub, so authentication may be required.
base_model = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
model = PeftModel.from_pretrained(base_model, "llama-duo/gemma2b-coding-gpt4o-100k")
```

This model is a fine-tuned version of [google/gemma-2b](https://huggingface.co/google/gemma-2b) on the llama-duo/synth_coding_dataset_dedup dataset. It achieves the following results on the evaluation set:

- Loss: 1.6825
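To actually generate with the loaded adapter, something like the following should work. The tokenizer comes from the base checkpoint; the prompt and decoding settings here are illustrative assumptions, not values from this card:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-2b", torch_dtype=torch.bfloat16
)
model = PeftModel.from_pretrained(base_model, "llama-duo/gemma2b-coding-gpt4o-100k")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-2b")

# Illustrative coding prompt; the dataset's actual prompt format is not
# documented in this card.
prompt = "Write a Python function that checks whether a string is a palindrome."
inputs = tokenizer(prompt, return_tensors="pt")

# max_new_tokens is an arbitrary choice for this sketch.
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```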
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

### Training results
| Training Loss | Epoch | Step | Validation Loss |
|---|---|---|---|
| 0.6871 | 0.9979 | 235 | 1.4013 |
| 0.6707 | 2.0 | 471 | 1.3993 |
| 0.6047 | 2.9979 | 706 | 1.4091 |
| 0.5773 | 4.0 | 942 | 1.4428 |
| 0.5548 | 4.9979 | 1177 | 1.4904 |
| 0.5409 | 6.0 | 1413 | 1.5480 |
| 0.5151 | 6.9979 | 1648 | 1.6102 |
| 0.4987 | 8.0 | 1884 | 1.6578 |
| 0.4875 | 8.9979 | 2119 | 1.6813 |
| 0.4904 | 9.9788 | 2350 | 1.6825 |
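Since the checkpoint ships as a PEFT adapter, it can be folded into the base weights for standalone deployment. A minimal sketch, assuming a LoRA-style adapter (which `merge_and_unload` requires; the card does not state the adapter type), with a hypothetical output path:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

base_model = AutoModelForCausalLM.from_pretrained("google/gemma-2b")
model = PeftModel.from_pretrained(base_model, "llama-duo/gemma2b-coding-gpt4o-100k")

# Fold the adapter deltas into the base weights; the result is a plain
# transformers model with no runtime dependency on peft.
merged = model.merge_and_unload()
merged.save_pretrained("gemma2b-coding-merged")  # hypothetical output path
```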