Reproduction question

#16
by fengwei - opened

We attempted to reproduce the 1.2B MiniCPM model using your publicly released Chinese dataset (120B tokens) and the tokenizer from the 1.2B model. However, the reproduction results were significantly suboptimal: a CMMLU score of only 25%, which is roughly random chance on a four-option multiple-choice benchmark, suggesting the model failed to learn effectively.

Could you kindly help us identify potential issues or share the training hyperparameters used for your model?


Our training hyperparameters:

```yaml
# train
per_device_train_batch_size: 32
gradient_accumulation_steps: 1
learning_rate: 1.0e-3
num_train_epochs: 1.0
bf16: true
ddp_timeout: 180000000
resume_from_checkpoint: output/MiniCPM-1B-pt-fineweb-zh-20250611
max_grad_norm: 1.0
seed: 6198

# optimizer
optim: adamw_torch
weight_decay: 0.1
adam_epsilon: 1.0e-8
adam_beta1: 0.9
adam_beta2: 0.95

# scheduler
lr_scheduler_type: cosine
warmup_steps: 1000
```
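
Equivalently, a minimal sketch of these settings expressed as Hugging Face `TrainingArguments`, assuming a stock `transformers` Trainer (the actual launcher may differ):

```python
# Minimal sketch: the YAML above mapped onto transformers TrainingArguments.
# Assumes a stock Trainer; the real training stack may differ.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="output/MiniCPM-1B-pt-fineweb-zh-20250611",
    per_device_train_batch_size=32,
    gradient_accumulation_steps=1,
    learning_rate=1.0e-3,
    num_train_epochs=1.0,
    bf16=True,
    ddp_timeout=180000000,
    max_grad_norm=1.0,
    seed=6198,
    optim="adamw_torch",
    weight_decay=0.1,
    adam_epsilon=1.0e-8,
    adam_beta1=0.9,
    adam_beta2=0.95,
    lr_scheduler_type="cosine",
    warmup_steps=1000,
)
# resume_from_checkpoint is passed to Trainer.train(), not TrainingArguments:
# trainer.train(resume_from_checkpoint="output/MiniCPM-1B-pt-fineweb-zh-20250611")
```

Note that with `gradient_accumulation_steps: 1`, the global batch size is 32 sequences per GPU times the number of GPUs, which together with the 1e-3 peak learning rate affects training stability.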

OpenBMB org

Which evaluation repo do you use? We use Lighteval; evaluating with OpenCompass always gives us 25% as well.
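
A CMMLU score of ~25% is chance level on four-option questions, so how the harness scores answers matters: Lighteval-style harnesses typically rank the log-likelihood of each candidate answer, while generation-based pipelines must parse a letter out of free-form output and fall back to chance when parsing fails. A minimal sketch of log-likelihood scoring (the checkpoint, question, and choices below are toy placeholders, not the actual setup):

```python
# Sketch of log-likelihood multiple-choice scoring, the general approach used
# by harnesses such as Lighteval. Checkpoint and question are placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "openbmb/MiniCPM-1B-sft-bf16"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype=torch.bfloat16, trust_remote_code=True
).eval()

question = "中国的首都是"                     # toy stand-in for a CMMLU item
choices = ["北京", "上海", "广州", "深圳"]    # the four answer options

scores = []
for choice in choices:
    prefix_ids = tokenizer(question, return_tensors="pt").input_ids
    full_ids = tokenizer(question + choice, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Sum the log-probability of the choice tokens given the question prefix.
    # (Assumes prefix tokenization is stable under concatenation; real
    # harnesses handle the token boundary more carefully.)
    choice_len = full_ids.shape[1] - prefix_ids.shape[1]
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, 1:]
    token_lp = log_probs[torch.arange(targets.numel()), targets]
    scores.append(token_lp[-choice_len:].sum().item())

prediction = scores.index(max(scores))  # index of the highest-scoring option
print(choices[prediction])
```

A well-trained model separates the options cleanly under this scoring even when its free-form generations are hard to parse, which is one way two harnesses can report very different numbers for the same checkpoint.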

fengwei changed discussion status to closed
