runtime error
Exit code: 1. Reason:
…oints, so we will NOT tie them. You should update the config with `tie_word_embeddings=False` to silence this warning.
generation_config.json: 100%|██████████| 189/189 [00:00<00:00, 713kB/s]
✓ Translation model loaded successfully
[3/3] Loading description model: PlanTL-GOB-ES/gpt2-large-bne
✗ ERROR loading models: PlanTL-GOB-ES/gpt2-large-bne does not appear to have a file named pytorch_model.bin or model.safetensors.
Traceback (most recent call last):
  File "/app/app.py", line 21, in <module>
    translator = MenuTranslatorPipeline()
  File "/app/core/pipeline.py", line 25, in __init__
    self.manager.load_all()
  File "/app/core/models.py", line 50, in load_all
    self._load_description()
  File "/app/core/models.py", line 89, in _load_description
    self.description_model = AutoModelForCausalLM.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 374, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 3987, in from_pretrained
    checkpoint_files, sharded_metadata = _get_resolved_checkpoint_files(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 687, in _get_resolved_checkpoint_files
    raise OSError(
OSError: PlanTL-GOB-ES/gpt2-large-bne does not appear to have a file named pytorch_model.bin or model.safetensors.
model.safetensors: 100%|██████████| 2.46G/2.46G [00:05<00:00, 418MB/s]
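The OSError in the traceback is raised inside `_get_resolved_checkpoint_files` when neither `model.safetensors` nor `pytorch_model.bin` can be found for the requested repository. A minimal sketch of that resolution step, assuming a local repository directory (helper name and priority list are illustrative, not the actual transformers implementation):

```python
import os

# Weight filenames to probe for, in priority order (simplified:
# the real library also handles sharded index files and remote repos).
CANDIDATE_WEIGHTS = ["model.safetensors", "pytorch_model.bin"]

def resolve_checkpoint(repo_dir: str) -> str:
    """Return the first available weight file in repo_dir, or raise
    an OSError mirroring the message seen in the traceback above."""
    for name in CANDIDATE_WEIGHTS:
        path = os.path.join(repo_dir, name)
        if os.path.isfile(path):
            return path
    raise OSError(
        f"{repo_dir} does not appear to have a file named "
        "pytorch_model.bin or model.safetensors."
    )
```

One plausible mitigation (an assumption, not confirmed for this repo) is to catch this `OSError` in `_load_description` and retry `AutoModelForCausalLM.from_pretrained(...)` with `from_tf=True`, for repositories that only publish TensorFlow weights, or to pin a repository revision that does include a `model.safetensors` file.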