Finetuning scripts

#5 opened by johnlockejrr

Do you intend to release the scripts or the recipe for finetuning the model? Open-source models without finetuning scripts are dead, even good ones; look at Nemotron OCR v1.

No problem.

$ nemo-ocr train --model_config configs/catmus.yaml
╭────────── Finetune ──────────╮
│ Extended charset: +111 chars │
╰──────────────────────────────╯
[WARN] Missing keys: ['stem.weight', 'stem.bias', 'classifier.weight', 'classifier.bias']
Using bfloat16 Automatic Mixed Precision (AMP)
Trainer already configured with model summary callbacks: [<class 'nemo_ocr.train.callbacks.DelayedRichModelSummary'>]. Skipping setting a default `ModelSummary` callback.
GPU available: True (cuda), used: True
TPU available: False, using: 0 TPU cores
💡 Tip: For seamless cloud logging and experiment tracking, try installing [litlogger](https://pypi.org/project/litlogger/) to enable LitLogger, which logs metrics and artifacts automatically to the Lightning Experiments platform.
LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0,1]
โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”ณโ”โ”โ”โ”โ”โ”โ”โ”“
โ”ƒ    โ”ƒ Name             โ”ƒ Type               โ”ƒ Params โ”ƒ Mode  โ”ƒ FLOPs โ”ƒ
โ”กโ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ•‡โ”โ”โ”โ”โ”โ”โ”โ”ฉ
โ”‚ 0  โ”‚ model            โ”‚ NemoRecognizer     โ”‚ 36.2 M โ”‚ train โ”‚     0 โ”‚
โ”‚ 1  โ”‚ model.stem       โ”‚ Conv2d             โ”‚    256 โ”‚ train โ”‚     0 โ”‚
โ”‚ 2  โ”‚ model.encoder    โ”‚ NemoCNNEncoder     โ”‚  9.8 M โ”‚ train โ”‚     0 โ”‚
โ”‚ 3  โ”‚ model.encoder.0  โ”‚ Sequential         โ”‚  147 K โ”‚ train โ”‚     0 โ”‚
โ”‚ 4  โ”‚ model.encoder.1  โ”‚ Sequential         โ”‚  295 K โ”‚ train โ”‚     0 โ”‚
โ”‚ 5  โ”‚ model.encoder.2  โ”‚ MaxPool2d          โ”‚      0 โ”‚ train โ”‚     0 โ”‚
โ”‚ 6  โ”‚ model.encoder.3  โ”‚ Sequential         โ”‚  590 K โ”‚ train โ”‚     0 โ”‚
โ”‚ 7  โ”‚ model.encoder.4  โ”‚ Sequential         โ”‚  1.2 M โ”‚ train โ”‚     0 โ”‚
โ”‚ 8  โ”‚ model.encoder.5  โ”‚ MaxPool2d          โ”‚      0 โ”‚ train โ”‚     0 โ”‚
โ”‚ 9  โ”‚ model.encoder.6  โ”‚ Sequential         โ”‚  2.4 M โ”‚ train โ”‚     0 โ”‚
โ”‚ 10 โ”‚ model.encoder.7  โ”‚ Sequential         โ”‚  4.7 M โ”‚ train โ”‚     0 โ”‚
โ”‚ 11 โ”‚ model.encoder.8  โ”‚ MaxPool2d          โ”‚      0 โ”‚ train โ”‚     0 โ”‚
โ”‚ 12 โ”‚ model.encoder.9  โ”‚ Sequential         โ”‚  525 K โ”‚ train โ”‚     0 โ”‚
โ”‚ 13 โ”‚ model.tx         โ”‚ TransformerEncoder โ”‚ 18.9 M โ”‚ train โ”‚     0 โ”‚
โ”‚ 14 โ”‚ model.tx.layers  โ”‚ ModuleList         โ”‚ 18.9 M โ”‚ train โ”‚     0 โ”‚
โ”‚ 15 โ”‚ model.tx.norm    โ”‚ LayerNorm          โ”‚  1.0 K โ”‚ train โ”‚     0 โ”‚
โ”‚ 16 โ”‚ model.classifier โ”‚ Linear             โ”‚  7.4 M โ”‚ train โ”‚     0 โ”‚
โ”‚ 17 โ”‚ criterion        โ”‚ CTCLoss            โ”‚      0 โ”‚ train โ”‚     0 โ”‚
โ””โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”ดโ”€โ”€โ”€โ”€โ”€โ”€โ”€โ”˜
Trainable params: 26.3 M
Non-trainable params: 9.8 M
Total params: 36.2 M
Total estimated model params size (MB): 144
Modules in train mode: 92
Modules in eval mode: 0
Total FLOPs: 0
Epoch 0/149 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4280/4280 0:12:19 • 0:00:00 5.81it/s v_num: 0.000 tr_loss_step: 2.192 va_loss: 2.308 va_cer: 0.698 va_wer: 1.051 tr_loss_epoch: 2.698 early_stop: 0/10 0.69808
Epoch 1/149 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4280/4280 0:12:22 • 0:00:00 5.88it/s v_num: 0.000 tr_loss_step: 1.974 va_loss: 1.939 va_cer: 0.577 va_wer: 0.977 tr_loss_epoch: 2.066 early_stop: 0/10 0.57666
Epoch 2/149 ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 4280/4280 0:12:21 • 0:00:00 5.86it/s v_num: 0.000 tr_loss_step: 0.904 va_loss: 1.906 va_cer: 0.532 va_wer: 0.975 tr_loss_epoch: 1.766 early_stop: 0/10 0.53154
Epoch 3/149 ━━━━━━━━━━━╺━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1202/4280 0:03:29 • 0:08:54 5.77it/s v_num: 0.000 tr_loss_step: 1.616 va_loss: 1.906 va_cer: 0.532 va_wer: 0.975 tr_loss_epoch: 1.766 early_stop: 0/10 0.53154
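A note on the `[WARN] Missing keys` line in the log above: it is expected, not an error. Extending the charset (+111 chars here) changes the classifier's output dimension, so those pretrained weights can't be reused and the trainer re-initializes them (here the stem as well). A minimal sketch of the general idea, using shapes as stand-ins for tensors (hypothetical helper, not the actual nemo-ocr code):

```python
# Keep only the pretrained entries whose shapes still match the new model;
# everything else is reported as "missing" and freshly initialized.
def filter_compatible(pretrained: dict, model: dict) -> tuple[dict, list]:
    loadable = {k: v for k, v in pretrained.items()
                if k in model and model[k] == v}
    missing = sorted(k for k in model if k not in loadable)
    return loadable, missing

# Pretrained vocab: 200 chars; extended vocab: 200 + 111 = 311 chars.
pretrained = {"encoder.weight": (512, 256), "classifier.weight": (200, 512)}
model      = {"encoder.weight": (512, 256), "classifier.weight": (311, 512)}

loadable, missing = filter_compatible(pretrained, model)
print(missing)  # ['classifier.weight'] -- re-initialized, as in the log
```

In PyTorch terms, this corresponds to loading the filtered state dict with `strict=False`.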
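For readers unfamiliar with the `va_cer` column: character error rate is the Levenshtein edit distance between prediction and reference, normalized by reference length. A minimal sketch of the standard definition (not necessarily nemo-ocr's exact implementation):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def cer(prediction: str, reference: str) -> float:
    return levenshtein(prediction, reference) / len(reference)

print(round(cer("hallo wrld", "hello world"), 3))  # 0.182 (2 edits / 11 chars)
```

So a `va_cer` of 0.532 after epoch 2 means roughly one edit per two reference characters, which is why training continues well past that point.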
