Dataset Viewer
Auto-converted to Parquet
Columns:

| Column | Type | Values |
| --- | --- | --- |
| Model_name | stringclasses | 2 values |
| Train_size | int64 | 100 |
| Test_size | int64 | 10 |
| arg | dict | training arguments |
| lora | list | 3–8 target modules |
| Parameters | int64 | 409M–912M |
| Trainable_parameters | int64 | 1.77M–161M |
| r | int64 | 8 |
| Memory Allocation | stringclasses | 2 values |
| Training Time | stringclasses | 2 values |
| Performance | dict | evaluation metrics |
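The row schema above can be sketched as a Python `TypedDict`. This is an illustration only: field names copy the viewer's column labels, and the concrete Python types are assumptions inferred from the two example rows below.

```python
from typing import TypedDict

# Functional TypedDict syntax, because two column names contain spaces.
# Types are assumptions based on the example rows, not a published schema.
Row = TypedDict("Row", {
    "Model_name": str,
    "Train_size": int,
    "Test_size": int,
    "arg": dict,                  # training arguments (learning_rate, seed, ...)
    "lora": list,                 # LoRA target modules, 3-8 entries
    "Parameters": int,
    "Trainable_parameters": int,
    "r": int,                     # LoRA rank
    "Memory Allocation": float,
    "Training Time": float,
    "Performance": dict,          # accuracy, f1_macro, f1_weighted, precision, recall
})

# Example instance built from the facebook/bart-large-mnli row:
example: Row = {
    "Model_name": "facebook/bart-large-mnli",
    "Train_size": 100,
    "Test_size": 10,
    "arg": {"learning_rate": 0.00005, "seed": 3407},
    "lora": ["k_proj", "q_proj", "v_proj"],
    "Parameters": 409_124_878,
    "Trainable_parameters": 1_769_472,
    "r": 8,
    "Memory Allocation": 1950.61,
    "Training Time": 14.32,
    "Performance": {"accuracy": 0.1},
}
```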
Row 1: facebook/bart-large-mnli

- Train_size: 100
- Test_size: 10
- arg: { "auto_find_batch_size": true, "gradient_accumulation_steps": 4, "learning_rate": 0.00005, "logging_steps": 1, "lr_scheduler_type": "linear", "num_train_epochs": 1, "optim": "adamw_8bit", "output_dir": "outputs", "per_device_train_batch_size": null, "report_to": "none", "save_strategy": "no", "save_total_limit": 0, "seed": 3407, "warmup_steps": 5, "weight_decay": 0.01 }
- lora: [ "k_proj", "q_proj", "v_proj" ]
- Parameters: 409,124,878
- Trainable_parameters: 1,769,472
- r: 8
- Memory Allocation: 1950.61
- Training Time: 14.32
- Performance: { "accuracy": 0.1, "f1_macro": 0.027777777777777776, "f1_weighted": 0.05, "precision": 0.018518518518518517, "recall": 0.05555555555555555 }
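From the parameter counts in this row, the fraction of the model that LoRA actually trains follows from a one-line computation (a sketch; both totals are copied from the row above):

```python
total = 409_124_878    # Parameters (facebook/bart-large-mnli row)
trainable = 1_769_472  # Trainable_parameters

pct = 100 * trainable / total
print(f"{pct:.2f}% of parameters are trainable")  # → 0.43%
```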
Row 2: unsloth/Qwen3-0.6B-unsloth-bnb-4bit

- Train_size: 100
- Test_size: 10
- arg: { "auto_find_batch_size": null, "gradient_accumulation_steps": 4, "learning_rate": 0.00002, "logging_steps": 10, "lr_scheduler_type": "linear", "num_train_epochs": 1, "optim": "adamw_8bit", "output_dir": "outputs", "per_device_train_batch_size": 8, "report_to": "none", "save_strategy": null, "save_total_limit": null, "seed": 3407, "warmup_steps": 5, "weight_decay": 0.01 }
- lora: [ "down_proj", "gate_proj", "k_proj", "lm_head", "o_proj", "q_proj", "up_proj", "v_proj" ]
- Parameters: 912,261,120
- Trainable_parameters: 160,628,736
- r: 8
- Memory Allocation: 3303.32
- Training Time: 59.48
- Performance: { "accuracy": 0, "f1_macro": 0, "f1_weighted": 0, "precision": 0, "recall": 0 }
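The arg dicts are plain JSON, so they parse with the standard library. For the Qwen3 row, the effective batch size per optimizer step (assuming a single device) follows from per_device_train_batch_size × gradient_accumulation_steps:

```python
import json

# arg fields copied from the unsloth/Qwen3-0.6B row above,
# trimmed here to just the fields this sketch uses.
arg = json.loads(
    '{ "auto_find_batch_size": null, '
    '"gradient_accumulation_steps": 4, '
    '"learning_rate": 0.00002, '
    '"per_device_train_batch_size": 8, '
    '"seed": 3407 }'
)

effective_batch = arg["per_device_train_batch_size"] * arg["gradient_accumulation_steps"]
print(effective_batch)  # → 32
```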
Downloads last month: 4