Runtime error

Exit code: 1. Reason:

██████| 1.80M/1.80M [00:00<00:00, 24.0MB/s]
tokenizer.model:   0%|          | 0.00/493k [00:00<?, ?B/s]
tokenizer.model: 100%|██████████| 493k/493k [00:00<00:00, 2.36MB/s]
special_tokens_map.json:   0%|          | 0.00/72.0 [00:00<?, ?B/s]
special_tokens_map.json: 100%|██████████| 72.0/72.0 [00:00<00:00, 355kB/s]

Passing `generation_config` together with generation-related arguments=({'max_new_tokens'}) is deprecated and will be removed in future versions. Please pass either a `generation_config` object OR all generation parameters explicitly, but not both.

Traceback (most recent call last):
  File "/app/app.py", line 27, in <module>
    demo = gr.ChatInterface(
        fn=chat_with_doctor,
        ...<2 lines>...
        theme="soft"
    )
TypeError: ChatInterface.__init__() got an unexpected keyword argument 'theme'

Error during conversion: ReadTimeout('The read operation timed out')
Exception in thread Thread-auto_conversion:
Traceback (most recent call last):
  File "/usr/local/lib/python3.13/threading.py", line 1044, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.13/threading.py", line 995, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/local/lib/python3.13/site-packages/transformers/safetensors_conversion.py", line 117, in auto_conversion
    raise e
  File "/usr/local/lib/python3.13/site-packages/transformers/safetensors_conversion.py", line 96, in auto_conversion
    sha = get_conversion_pr_reference(api, pretrained_model_name_or_path, **cached_file_kwargs)
  File "/usr/local/lib/python3.13/site-packages/transformers/safetensors_conversion.py", line 77, in get_conversion_pr_reference
    raise OSError(
    ...<2 lines>...
    )
OSError: Could not create safetensors conversion PR. The repo does not appear to have a file named pytorch_model.bin or model.safetensors. If you are loading with variant, use `use_safetensors=False` to load the original model.
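The deprecation warning in the log can be silenced by folding all generation parameters into a single `GenerationConfig` instead of passing `max_new_tokens` alongside one. This is a minimal sketch; the value 256 and the shape of the `model.generate` call are illustrative assumptions, not taken from the app's actual code:

```python
from transformers import GenerationConfig

# Put every generation parameter on the config object itself, rather
# than passing max_new_tokens as a separate keyword next to
# generation_config (the combination the warning flags).
# max_new_tokens=256 is an illustrative value, not from app.py.
gen_config = GenerationConfig(max_new_tokens=256)

# The call site then passes only the config:
#   model.generate(**inputs, generation_config=gen_config)
```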
