videomae_kinetics_wlasl_2000_20ep_coR

This model is a fine-tuned version of MCG-NJU/videomae-base-finetuned-kinetics on an unspecified dataset (the model name suggests WLASL-2000, a word-level American Sign Language recognition benchmark). It achieves the following results on the evaluation set:

  • Loss: 2.5193
  • Accuracy: 0.4382
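
For quick orientation, here is a minimal inference sketch for this checkpoint. It assumes the repo id Shawon16/videomae_kinetics_wlasl_2000_20ep_coR and VideoMAE's default input of 16 RGB frames at 224x224; the random video below is only a stand-in for real preprocessed frames.

```python
import numpy as np
import torch
from transformers import VideoMAEImageProcessor, VideoMAEForVideoClassification

repo_id = "Shawon16/videomae_kinetics_wlasl_2000_20ep_coR"
processor = VideoMAEImageProcessor.from_pretrained(repo_id)
model = VideoMAEForVideoClassification.from_pretrained(repo_id)
model.eval()

# VideoMAE's default configuration expects 16 frames, each an
# H x W x 3 uint8 array. Random data stands in for a real clip here.
video = list(np.random.randint(0, 256, (16, 224, 224, 3), dtype=np.uint8))

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_class = logits.argmax(-1).item()
print(model.config.id2label[predicted_class])
```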

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a TrainingArguments sketch reproducing them follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 8
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • training_steps: 35720
  • mixed_precision_training: Native AMP
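
The list above maps onto Hugging Face TrainingArguments roughly as follows. This is a hedged reconstruction: output_dir and any evaluation/logging settings are assumptions, not values stated on this card.

```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above;
# output_dir is an assumption, not taken from the card.
args = TrainingArguments(
    output_dir="videomae_kinetics_wlasl_2000_20ep_coR",
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 2 * 4 = 8
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    max_steps=35720,
    fp16=True,  # "Native AMP" mixed-precision training
)
```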

Training results

| Training Loss | Epoch   | Step  | Validation Loss | Accuracy |
|:-------------:|:-------:|:-----:|:---------------:|:--------:|
| 30.4461       | 0.05    | 1786  | 7.5731          | 0.0020   |
| 28.8085       | 1.0500  | 3572  | 6.7152          | 0.0240   |
| 24.6341       | 2.0500  | 5358  | 5.7700          | 0.0827   |
| 20.2204       | 3.0500  | 7145  | 4.9529          | 0.1581   |
| 16.0475       | 4.05    | 8931  | 4.2523          | 0.2296   |
| 12.2515       | 5.0500  | 10717 | 3.6646          | 0.3044   |
| 9.0329        | 6.0500  | 12503 | 3.2212          | 0.3488   |
| 6.4841        | 7.0500  | 14290 | 2.9584          | 0.3739   |
| 4.5533        | 8.05    | 16076 | 2.7372          | 0.3996   |
| 3.1616        | 9.0500  | 17862 | 2.6446          | 0.4119   |
| 2.219         | 10.0500 | 19648 | 2.5457          | 0.4254   |
| 1.5975        | 11.0500 | 21435 | 2.5348          | 0.4201   |
| 1.2009        | 12.05   | 23221 | 2.5232          | 0.4270   |
| 0.9648        | 13.0500 | 25007 | 2.5305          | 0.4277   |
| 0.7877        | 14.0500 | 26793 | 2.5389          | 0.4231   |
| 0.6812        | 15.0500 | 28580 | 2.5198          | 0.4349   |
| 0.5795        | 16.05   | 30366 | 2.5196          | 0.4331   |
| 0.5066        | 17.0500 | 32152 | 2.5130          | 0.4336   |
| 0.453         | 18.0500 | 33938 | 2.5175          | 0.4385   |
| 0.3928        | 19.0499 | 35720 | 2.5193          | 0.4382   |

Framework versions

  • Transformers 4.46.1
  • Pytorch 2.5.1+cu124
  • Datasets 3.1.0
  • Tokenizers 0.20.1