Dataset Viewer (auto-converted to Parquet)
Column schema:

| Column | Type | Value summary |
| --- | --- | --- |
| Model_name | stringclasses | 1 value |
| Train_size | int64 | 50.8k |
| Test_size | int64 | 12.7k |
| arg | dict | — |
| lora | list | length 4 |
| Parameters | int64 | 1.56B – 1.57B |
| Trainable_parameters | int64 | 15.5M – 31M |
| r | int64 | 16 – 32 |
| Memory Allocation | stringclasses | 2 values |
| Training Time | stringclasses | 2 values |
| accuracy | float64 | ~0.9 |
| f1_macro | float64 | ~0.9 |
| f1_weighted | float64 | ~0.9 |
| precision | float64 | ~0.9 |
| recall | float64 | ~0.9 |
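The `arg` dict and `lora` list recorded per row map naturally onto Hugging Face `transformers` and `peft` configuration objects. A minimal sketch under that assumption — the dataset does not record the library used, the task type, `lora_alpha`, dropout, or an output directory, so those are placeholders:

```python
# Sketch only: reconstructs the recorded hyperparameters as
# transformers/peft config objects. task_type, lora_alpha, and
# output_dir are NOT recorded in the dataset and are assumptions.
from peft import LoraConfig
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="qwen2-lora-run",       # placeholder, not in the dataset
    learning_rate=5e-5,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,     # effective batch size: 8 * 4 = 32
    warmup_steps=5,
    weight_decay=0.01,
    max_grad_norm=1.0,
    optim="adamw_8bit",
)

lora_config = LoraConfig(
    r=16,                              # the second run uses r=32
    target_modules=["down_proj", "gate_proj", "o_proj", "up_proj"],
    task_type="SEQ_CLS",               # assumption, suggested by the classification metrics
)
```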
The dataset holds two fine-tuning runs of Qwen/Qwen2-1.5B whose configurations differ only in the LoRA rank `r` (16 vs. 32):

| Field | r = 16 run | r = 32 run |
| --- | --- | --- |
| Model_name | Qwen/Qwen2-1.5B | Qwen/Qwen2-1.5B |
| Train_size | 50,775 | 50,775 |
| Test_size | 12,652 | 12,652 |
| lora (target modules) | down_proj, gate_proj, o_proj, up_proj | down_proj, gate_proj, o_proj, up_proj |
| Parameters | 1,558,791,680 | 1,574,274,560 |
| Trainable_parameters | 15,502,848 | 30,985,728 |
| r | 16 | 32 |
| Memory Allocation | 1830.97 | 1170.74 |
| Training Time | 2430.59 | 2475.69 |
| accuracy | 0.90215 | 0.904205 |
| f1_macro | 0.897806 | 0.899939 |
| f1_weighted | 0.902341 | 0.904472 |
| precision | 0.898806 | 0.900677 |
| recall | 0.897009 | 0.899533 |

Both runs share the same training arguments (`arg`):

```json
{
  "adafactor": false,
  "adam_beta1": 0.9,
  "adam_beta2": 0.999,
  "adam_epsilon": 1e-8,
  "bf16": false,
  "fp16": false,
  "fp16_opt_level": "O1",
  "gradient_accumulation_steps": 4,
  "half_precision_backend": "auto",
  "label_smoothing_factor": 0,
  "learning_rate": 0.00005,
  "lr_scheduler_type": "linear",
  "max_grad_norm": 1,
  "max_steps": -1,
  "n_gpu": 1,
  "num_train_epochs": 1,
  "optim": "adamw_8bit",
  "optim_args": "Not have",
  "per_device_eval_batch_size": 8,
  "per_device_train_batch_size": 8,
  "warmup_ratio": 0,
  "warmup_steps": 5,
  "weight_decay": 0.01
}
```
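The reported trainable-parameter counts are internally consistent with Qwen2-1.5B's published dimensions (hidden size 1536, MLP intermediate size 8960, 28 layers) plus a small classification head. A sanity check — the 13-label head is inferred here from the leftover 19,968 = 1536 × 13 parameters, not recorded in the dataset:

```python
# Sanity-check Trainable_parameters from the LoRA setup.
# Assumed Qwen2-1.5B dimensions: hidden=1536, intermediate=8960, 28 layers.
# The 13-class head (1536 * 13 = 19,968 params) is inferred, not recorded.
HIDDEN, INTERMEDIATE, LAYERS = 1536, 8960, 28

def lora_params(r: int) -> int:
    """LoRA adds r * (d_in + d_out) parameters per adapted linear layer."""
    per_layer = (
        r * (HIDDEN + INTERMEDIATE)    # gate_proj: 1536 -> 8960
        + r * (HIDDEN + INTERMEDIATE)  # up_proj:   1536 -> 8960
        + r * (INTERMEDIATE + HIDDEN)  # down_proj: 8960 -> 1536
        + r * (HIDDEN + HIDDEN)        # o_proj:    1536 -> 1536
    )
    return per_layer * LAYERS

HEAD = HIDDEN * 13  # inferred classification head

print(lora_params(16) + HEAD)  # 15502848 — matches the r=16 row
print(lora_params(32) + HEAD)  # 30985728 — matches the r=32 row
```

The total-parameter delta between the rows, 1,574,274,560 − 1,558,791,680 = 15,482,880, likewise equals `lora_params(32) - lora_params(16)`, i.e. the extra rank-32 adapter weights alone.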