Runtime error

Exit code: 1. Reason:

```
chinese-hubert-base/pytorch_model.bin: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 189M/189M [00:01<00:00, 144MB/s]
config.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 963/963 [00:00<00:00, 7.34MB/s]
chinese-roberta-wwm-ext-large/pytorch_mo(…): 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 651M/651M [00:01<00:00, 416MB/s]
tokenizer.json: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 269k/269k [00:00<00:00, 123MB/s]
s1v3.ckpt: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 155M/155M [00:01<00:00, 130MB/s]
sv/pretrained_eres2netv2w24s4ep4.ckpt: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 108M/108M [00:01<00:00, 93.4MB/s]
v2Pro/s2Gv2ProPlus.pth: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 200M/200M [00:01<00:00, 154MB/s]
[nltk_data] Downloading package averaged_perceptron_tagger_eng to /root/nltk_data...
[nltk_data] Unzipping taggers/averaged_perceptron_tagger_eng.zip.
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
0it [00:00, ?it/s]
❌ Import failed: No module named 'AR.modules.activation'
```

All pretrained model downloads completed successfully; the container then exited because the Python import of `AR.modules.activation` failed, i.e. that module is not present at the expected path in the deployed source tree.
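As a minimal sketch of how to confirm this failure mode before launching the app, the dotted module path can be probed with the standard library's `importlib.util.find_spec` instead of a bare `import`. This is a generic diagnostic, not part of the failing project's code; the module name is taken from the log.

```python
import importlib.util


def module_available(name: str) -> bool:
    """Return True if the dotted module path resolves to an importable module."""
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        # A missing parent package (e.g. 'AR' itself) raises rather than
        # returning None, so treat that as "not available" too.
        return False


# The import that failed in the container log; on an environment without
# the project's source tree on sys.path this reports False.
print(module_available("AR.modules.activation"))
```

Running this inside the container narrows the problem to either a missing/renamed source directory or a `sys.path` that does not include the project root, without triggering the full import-time side effects of the package.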
