runtime error
Exit code: 1. Reason:
chinese-hubert-base/pytorch_model.bin: 100%|██████████| 189M/189M [00:01<00:00, 144MB/s]
config.json: 100%|██████████| 963/963 [00:00<00:00, 7.34MB/s]
chinese-roberta-wwm-ext-large/pytorch_mo(…): 100%|██████████| 651M/651M [00:01<00:00, 416MB/s]
tokenizer.json: 100%|██████████| 269k/269k [00:00<00:00, 123MB/s]
s1v3.ckpt: 100%|██████████| 155M/155M [00:01<00:00, 130MB/s]
sv/pretrained_eres2netv2w24s4ep4.ckpt: 100%|██████████| 108M/108M [00:01<00:00, 93.4MB/s]
v2Pro/s2Gv2ProPlus.pth: 100%|██████████| 200M/200M [00:01<00:00, 154MB/s]
[nltk_data] Downloading package averaged_perceptron_tagger_eng to
[nltk_data]     /root/nltk_data...
[nltk_data]   Unzipping taggers/averaged_perceptron_tagger_eng.zip.
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.
0it [00:00, ?it/s]
Import failed: No module named 'AR.modules.activation'
Container logs:
Fetching error logs...
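The downloads all complete, so the actual failure is the last log line: Python cannot resolve `AR.modules.activation`. With a `ModuleNotFoundError` on a dotted path it helps to know *which* level fails: if the top-level `AR` package is unresolved, the package directory is simply not on `sys.path` (in GPT-SoVITS-style layouts the `AR` package typically sits inside a subdirectory of the repo rather than at its root); if only the final `activation` submodule fails, the checked-out code version likely moved or renamed that file. A minimal diagnostic sketch, not tied to any particular repo layout:

```python
import importlib.util
import sys


def diagnose_import(dotted_name: str) -> str:
    """Report the first level of a dotted module path that fails to resolve.

    Walks e.g. 'AR', 'AR.modules', 'AR.modules.activation' in turn, so the
    result distinguishes "whole package missing from sys.path" from
    "one submodule was moved or renamed".
    """
    parts = dotted_name.split(".")
    for i in range(1, len(parts) + 1):
        prefix = ".".join(parts[:i])
        try:
            spec = importlib.util.find_spec(prefix)
        except ModuleNotFoundError:
            # find_spec raises when a parent package is missing; treat as unresolved.
            spec = None
        if spec is None:
            return f"unresolved at '{prefix}' (searched {len(sys.path)} sys.path entries)"
    return "all levels resolve"
```

If the report says `unresolved at 'AR'`, a common fix is to run from the directory that contains the `AR` package (or add that directory to `sys.path` / `PYTHONPATH`); if only the last level fails, compare the installed code version against the one the entry script expects, since the submodule may no longer exist in newer releases.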