| Field | Type | Value |
|---|---|---|
| Model_name | string | Alibaba-NLP/E2Rank-4B |
| Train_size | int64 | 1,000 |
| Test_size | int64 | 32 |
| arg | dict | see below |
| lora | list | see below |
| Parameters | int64 | 4,286,092,288 |
| Trainable_parameters | int64 | 264,274,432 |
| r | int64 | 128 |
| Memory Allocation | string | 19592.38 |
| Training Time | string | 418.56 |
| accuracy | float64 | 0.25 |
| f1_macro | float64 | 0.195482 |
| f1_weighted | float64 | 0.224454 |
| precision | float64 | 0.202015 |
| recall | float64 | 0.253846 |

`arg` (training arguments):

```json
{
  "adafactor": false,
  "adam_beta1": 0.9,
  "adam_beta2": 0.999,
  "adam_epsilon": 1e-8,
  "bf16": false,
  "fp16": false,
  "fp16_opt_level": "O1",
  "gradient_accumulation_steps": 4,
  "half_precision_backend": "auto",
  "label_smoothing_factor": 0.1,
  "learning_rate": 0.00005,
  "lr_scheduler_type": "linear",
  "max_grad_norm": 1,
  "max_steps": 30,
  "n_gpu": 1,
  "num_train_epochs": 0,
  "optim": "adamw_8bit",
  "optim_args": "Not have",
  "per_device_eval_batch_size": 8,
  "per_device_train_batch_size": 8,
  "warmup_ratio": 0,
  "warmup_steps": 5,
  "weight_decay": 0.01
}
```

`lora` (target modules):

```json
["down_proj", "gate_proj", "k_proj", "o_proj", "q_proj", "up_proj", "v_proj"]
```
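The `arg` dict and `lora` list correspond to Hugging Face `transformers.TrainingArguments` and `peft.LoraConfig`. A minimal sketch of how they might be assembled is below; `output_dir` and any values not listed in the table (e.g. `lora_alpha`, `lora_dropout`, left at `peft` defaults) are assumptions, not reported results:

```python
from transformers import TrainingArguments
from peft import LoraConfig

# Training arguments reproduced from the `arg` column.
training_args = TrainingArguments(
    output_dir="e2rank-4b-lora",  # assumption: not given in the table
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    max_steps=30,
    warmup_steps=5,
    weight_decay=0.01,
    label_smoothing_factor=0.1,
    max_grad_norm=1.0,
    lr_scheduler_type="linear",
    optim="adamw_8bit",  # 8-bit AdamW; requires bitsandbytes
)

# LoRA configuration from the `lora` and `r` columns; alpha and dropout
# are not reported in the table, so peft's defaults apply.
lora_config = LoraConfig(
    r=128,
    target_modules=[
        "down_proj", "gate_proj", "k_proj", "o_proj",
        "q_proj", "up_proj", "v_proj",
    ],
)
```

With `gradient_accumulation_steps=4` and a per-device batch size of 8, the effective batch size on a single GPU is 32, and `max_steps=30` caps training regardless of `num_train_epochs`.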
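As a quick sanity check on the parameter counts reported above, the LoRA-trainable share of the full model can be computed directly from the two columns:

```python
# Parameter counts taken from the table above.
total_params = 4_286_092_288      # "Parameters" column
trainable_params = 264_274_432    # "Trainable_parameters" column

fraction = trainable_params / total_params
print(f"trainable fraction: {fraction:.2%}")  # about 6.17% of the model
```

At rank r=128 across seven target projections, roughly 6% of the 4B parameters are updated during fine-tuning.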