Dataset Viewer
Auto-converted to Parquet
The viewer reports the following columns (type and observed value range):

| Column | Type | Observed values |
| --- | --- | --- |
| Model_name | string | 1 distinct value |
| Train_size | int64 | 50 |
| Test_size | int64 | 50 |
| arg | dict | training arguments |
| lora | list | length 3 |
| Parameters | int64 | 5.6B – 5.66B |
| Trainable_parameters | int64 | 6.9M – 61.7M |
| r | int64 | 8 |
| Memory Allocation | string | 2 distinct values |
| Training Time | string | 2 distinct values |
| accuracy | float64 | 0 – 0.04 |
| f1_macro | float64 | 0 – 0.02 |
| f1_weighted | float64 | 0 – 0.03 |
| precision | float64 | 0 – 0.02 |
| recall | float64 | 0 – 0.04 |
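These columns can be inspected directly with the `datasets` library. A minimal sketch, assuming the usual auto-converted Parquet layout; the repo id `username/t5gemma-lora-benchmark` is a hypothetical placeholder, since the dataset's actual name does not appear on this page.

```python
from datasets import load_dataset

# Hypothetical repo id: the dataset's real name is not shown on this page.
ds = load_dataset("username/t5gemma-lora-benchmark", split="train")

# Each row records one LoRA fine-tuning run of the same base model.
for run in ds:
    print(run["Model_name"], "r =", run["r"])
    print("LoRA targets:", run["lora"])                    # ["k_proj", "q_proj", "v_proj"]
    print("Trainable params:", run["Trainable_parameters"])
    print("Optimizer:", run["arg"]["optim"])               # "adamw_8bit"
```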
Both rows record LoRA fine-tuning runs of `google/t5gemma-2b-2b-prefixlm-it` with 50 training and 50 test examples. The two runs share the same LoRA rank (r = 8), the same target modules (`["k_proj", "q_proj", "v_proj"]`), and identical training arguments:

```json
{
  "adafactor": false,
  "adam_beta1": 0.9,
  "adam_beta2": 0.999,
  "adam_epsilon": 1e-8,
  "bf16": false,
  "fp16": false,
  "fp16_opt_level": "O1",
  "gradient_accumulation_steps": 4,
  "half_precision_backend": "auto",
  "label_smoothing_factor": 0,
  "learning_rate": 0.00005,
  "lr_scheduler_type": "linear",
  "max_grad_norm": 1,
  "max_steps": -1,
  "n_gpu": 2,
  "num_train_epochs": 1,
  "optim": "adamw_8bit",
  "optim_args": "Not have",
  "per_device_eval_batch_size": 8,
  "per_device_train_batch_size": 8,
  "warmup_ratio": 0,
  "warmup_steps": 5,
  "weight_decay": 0.01
}
```

The runs differ in total and trainable parameter counts and in their results (Memory Allocation and Training Time are stored as strings, without recorded units):

| Run | Parameters | Trainable parameters | Memory Allocation | Training Time | accuracy | f1_macro | f1_weighted | precision | recall |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 5,603,789,596 | 6,901,262 (≈0.12%) | 5998.11 | 6.69 | 0 | 0 | 0 | 0 | 0 |
| 2 | 5,658,549,518 | 61,661,184 (≈1.09%) | 5892.86 | 7.03 | 0.04 | 0.020308 | 0.02549 | 0.015079 | 0.044643 |
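The `arg`, `lora`, and `r` fields map one-to-one onto standard `transformers` and `peft` options. Below is a minimal sketch of how one of these runs could be set up, assuming a conventional PEFT + Trainer stack; the dataset does not record the actual training script, so this is an illustration rather than the authors' code.

```python
from transformers import AutoModelForSeq2SeqLM, TrainingArguments
from peft import LoraConfig, get_peft_model

# T5Gemma is an encoder-decoder model, so the seq2seq auto class applies.
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5gemma-2b-2b-prefixlm-it")

# LoRA settings taken from the `lora` and `r` columns above.
lora_config = LoraConfig(
    r=8,
    target_modules=["k_proj", "q_proj", "v_proj"],
    task_type="SEQ_2_SEQ_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # compare against the Trainable_parameters column

# Training arguments mirroring the `arg` dict; output_dir is a placeholder,
# since the dataset does not record one.
training_args = TrainingArguments(
    output_dir="t5gemma-lora-out",
    learning_rate=5e-5,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    warmup_steps=5,
    weight_decay=0.01,
    max_grad_norm=1.0,
    optim="adamw_8bit",
)
```

With the recorded `n_gpu` of 2, a per-device batch size of 8, and gradient accumulation of 4, the effective training batch size works out to 2 × 8 × 4 = 64.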
README.md exists but content is empty.
Downloads last month: 4