runtime error

Exit code: 1. Reason: r was saved with this model. `use_fast=True` will be the default behavior in v4.52, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.

(Download progress bars omitted: preprocessor_config.json, tokenizer_config.json, vocab.txt, tokenizer.json, special_tokens_map.json, config.json, pytorch_model.bin, and model.safetensors all downloaded to 100%.)

Traceback (most recent call last):
  File "/home/user/app/app.py", line 43, in <module>
    iface.launch(share=True, allow_api=True)  # enable the API
TypeError: Blocks.launch() got an unexpected keyword argument 'allow_api'
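The traceback itself points at the fix: `Blocks.launch()` does not accept an `allow_api` keyword, so dropping it (i.e. calling `iface.launch(share=True)`) resolves the crash; in current Gradio releases the API is available by default, and `show_api` only toggles the "Use via API" link. More generally, this class of `TypeError` can be guarded against by filtering keyword arguments through the callee's signature. A minimal sketch of that pattern follows; the `launch` function here is a hypothetical stand-in for `Blocks.launch()`, not Gradio's actual method:

```python
import inspect

def filter_supported_kwargs(func, kwargs):
    """Keep only the keyword arguments that `func` actually accepts."""
    params = inspect.signature(func).parameters
    # If the function takes **kwargs, everything is accepted as-is.
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return dict(kwargs)
    return {k: v for k, v in kwargs.items() if k in params}

def launch(share=False, show_api=True):
    """Stand-in for Blocks.launch(): no `allow_api` parameter."""
    return {"share": share, "show_api": show_api}

# 'allow_api' is not in launch()'s signature, so it is dropped
# instead of raising TypeError at call time.
safe = filter_supported_kwargs(launch, {"share": True, "allow_api": True})
launch(**safe)
```

This silently discards unsupported options, so it trades a hard failure for a possible surprise; for a one-off script, simply removing `allow_api=True` from the `launch()` call is the cleaner fix.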
