---
library_name: transformers
license: other
base_model: multimodal_qwen2.5_7b_model
tags:
  - llama-factory
  - full
  - generated_from_trainer
model-index:
  - name: sft_7b_stage1_abl_lr
    results: []
---

# sft_7b_stage1_abl_lr

This model is a fine-tuned version of multimodal_qwen2.5_7b_model on the objaverse, procthor, structured3d, shapenet, arkitscenes, hm3d, 3dfuture, scannet, multiscan, and 3rscan datasets. It achieves the following results on the evaluation set:

- Loss: 2.3730

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- total_eval_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
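The settings above imply an effective batch size of 8 × 4 × 8 = 256, and a learning rate that warms up linearly over the first 10% of steps before decaying along a cosine curve. The following is a minimal illustrative sketch of that arithmetic and schedule, not the actual training code (LLaMA-Factory delegates scheduling to the Transformers trainer internally):

```python
import math

# Effective batch size, as listed in the hyperparameters above:
# per-device batch (8) x GPUs (4) x gradient-accumulation steps (8) = 256
train_batch_size = 8
num_devices = 4
gradient_accumulation_steps = 8
total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
assert total_train_batch_size == 256

def cosine_lr_with_warmup(step, total_steps, base_lr=2e-05, warmup_ratio=0.1):
    """Learning rate at a given optimizer step: linear warmup over the
    first warmup_ratio of training, then cosine decay toward zero.
    Mirrors lr_scheduler_type=cosine with lr_scheduler_warmup_ratio=0.1."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))
```

For example, with 1000 total steps the learning rate rises from 0 to 2e-05 over the first 100 steps, then decays back to 0 by the final step.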

### Training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 3.1652        | 0.1471 | 1000  | 2.9991          |
| 2.8946        | 0.2942 | 2000  | 2.8477          |
| 2.8231        | 0.4413 | 3000  | 2.7286          |
| 2.6174        | 0.5884 | 4000  | 2.6507          |
| 2.619         | 0.7356 | 5000  | 2.5940          |
| 2.6865        | 0.8827 | 6000  | 2.5522          |
| 2.4415        | 1.0297 | 7000  | 2.5223          |
| 2.4281        | 1.1768 | 8000  | 2.4910          |
| 2.3066        | 1.3239 | 9000  | 2.4667          |
| 2.3294        | 1.4712 | 10000 | 2.4428          |
| 2.346         | 1.6183 | 11000 | 2.4197          |
| 2.3633        | 1.7654 | 12000 | 2.3990          |
| 2.3653        | 1.9125 | 13000 | 2.3790          |
| 1.9338        | 2.0597 | 14000 | 2.3935          |
| 1.9587        | 2.2068 | 15000 | 2.3912          |
| 1.9198        | 2.3539 | 16000 | 2.3836          |
| 1.9226        | 2.5011 | 17000 | 2.3771          |
| 1.9083        | 2.6482 | 18000 | 2.3730          |
| 1.9092        | 2.7953 | 19000 | 2.3703          |
| 1.8372        | 2.9424 | 20000 | 2.3699          |

### Framework versions

- Transformers 4.51.3
- Pytorch 2.4.1
- Datasets 3.2.0
- Tokenizers 0.21.0