runtime error
Exit code: 1. Reason: pth
GPT2InferenceModel has generative capabilities, as `prepare_inputs_for_generation` is explicitly overwritten. However, it doesn't directly inherit from `GenerationMixin`. From v4.50 onwards, `PreTrainedModel` will NOT inherit from `GenerationMixin`, and this model will lose the ability to call `generate` and other related functions.
  - If you're using `trust_remote_code=True`, you can get rid of this warning by loading the model with an auto class. See https://huggingface.co/docs/transformers/en/model_doc/auto#auto-classes
  - If you are the owner of the model architecture code, please modify your model class such that it inherits from `GenerationMixin` (after `PreTrainedModel`, otherwise you'll get an exception).
  - If you are not the owner of the model architecture class, please contact the model code owner to update it.
>> semantic_codec weights restored from: /root/.cache/huggingface/hub/models--amphion--MaskGCT/snapshots/265c6cef07625665d0c28d2faafb1415562379dc/semantic_codec/model.safetensors
cfm loaded
length_regulator loaded
gpt_layer loaded
>> s2mel weights restored from: ./checkpoints/s2mel.pth
>> campplus_model weights restored from: /root/.cache/huggingface/hub/models--funasr--campplus/snapshots/fb71fe990cbf6031ae6987a2d76fe64f94377b7e/campplus_cn_common.bin
Loading weights from nvidia/bigvgan_v2_22khz_80band_256x
Removing weight norm...
>> bigvgan weights restored from: nvidia/bigvgan_v2_22khz_80band_256x
Traceback (most recent call last):
  File "/app/webui.py", line 46, in <module>
    tts = IndexTTS2(model_dir=cmd_args.model_dir, cfg_path=os.path.join(cmd_args.model_dir, "config.yaml"),
    ...<2 lines>...
        use_cuda_kernel=cmd_args.cuda_kernel,
    )
  File "/app/indextts/infer_v2.py", line 160, in __init__
    self.normalizer.load()
    ~~~~~~~~~~~~~~~~~~~~^^
  File "/app/indextts/utils/front.py", line 101, in load
    from tn.english.normalizer import Normalizer as NormalizerEn
ModuleNotFoundError: No module named 'tn.english'
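The crash itself is the final traceback line: the text normalizer tries `from tn.english.normalizer import Normalizer` and the `tn` package (or its `english` submodule) is not installed in the container. A minimal sketch for diagnosing which piece is missing before the app starts; `have_module` is a hypothetical helper name, and the guess that `tn` comes from a separate text-normalization dependency (rather than the app's own code) is an assumption based on the import path:

```python
import importlib.util


def have_module(name: str) -> bool:
    """Return True if `name` is importable in this environment.

    find_spec() on a dotted name ("tn.english") imports the parent
    package first, and raises ModuleNotFoundError if the parent itself
    is absent - so we treat that as "missing" too.
    """
    try:
        return importlib.util.find_spec(name) is not None
    except ModuleNotFoundError:
        return False


# Probe the modules named in the traceback; a startup script could run
# this and fail fast with a clearer message than the deep traceback above.
for mod in ("tn", "tn.english", "tn.english.normalizer"):
    status = "ok" if have_module(mod) else "MISSING"
    print(f"{mod}: {status}")
```

If `tn` itself reports MISSING, the fix is installing whichever pip package provides it in this project's requirements; if only the submodule is missing, the installed version is likely too old or a different package is shadowing the `tn` name.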
Container logs: