runtime error
Exit code: 1. Reason: https://github.com/pytorch/pytorch/issues.
warnings.warn(
preprocessor_config.json: 100%|██████████| 450/450 [00:00<00:00, 3.11MB/s]
Using a slow image processor as `use_fast` is unset and a slow processor was saved with this model. `use_fast=True` will be the default behavior in v4.52, even if the model was saved with a slow processor. This will result in minor differences in outputs. You'll still be able to use a slow processor with `use_fast=False`.
tokenizer_config.json: 100%|██████████| 1.24k/1.24k [00:00<00:00, 7.35MB/s]
tokenizer.json: 100%|██████████| 113k/113k [00:00<00:00, 8.98MB/s]
special_tokens_map.json: 100%|██████████| 964/964 [00:00<00:00, 6.33MB/s]
config.json: 100%|██████████| 1.57k/1.57k [00:00<00:00, 12.4MB/s]
./encoder_model.onnx: 100%|██████████| 87.5M/87.5M [00:01<00:00, 76.8MB/s]
./decoder_model.onnx: 100%|██████████| 32.0M/32.0M [00:00<00:00, 82.6MB/s]
generation_config.json: 100%|██████████| 211/211 [00:00<00:00, 1.24MB/s]
✅ ORTModelForVision2Seq and TrOCRProcessor initialized successfully for equation conversion.
Traceback (most recent call last):
  File "/app/app.py", line 320, in <module>
    demo.launch(
TypeError: Blocks.launch() got an unexpected keyword argument 'enable_queue'