runtime error
Exit code: 1. Reason:

tokenizer_config.json: 100%|██████████| 16.7k/16.7k [00:00<00:00, 38.2MB/s]
tokenizer.json: 100%|██████████| 12.8M/12.8M [00:00<00:00, 27.6MB/s]
video_preprocessor_config.json: 100%|██████████| 385/385 [00:00<00:00, 1.73MB/s]
`torch_dtype` is deprecated! Use `dtype` instead!
model.safetensors.index.json: 100%|██████████| 50.9k/50.9k [00:00<00:00, 60.0MB/s]
model.safetensors-00001-of-00001.safeten(…): 100%|██████████| 1.75G/1.75G [00:04<00:00, 416MB/s]
The fast path is not available because one of the required library is not installed. Falling back to torch implementation. To install follow https://github.com/fla-org/flash-linear-attention#installation and https://github.com/Dao-AILab/causal-conv1d
Loading weights: 100%|██████████| 473/473 [00:02<00:00, 206.01it/s]

Traceback (most recent call last):
  File "/app/app.py", line 53, in <module>
    chatbot = gr.Chatbot(elem_id="chatbot", height=520, type="messages")
  File "/usr/local/lib/python3.13/site-packages/gradio/component_meta.py", line 194, in wrapper
    return fn(self, **kwargs)
TypeError: Chatbot.__init__() got an unexpected keyword argument 'type'
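The download and weight-loading steps succeed; the crash comes from the last line of the traceback: the installed Gradio version does not accept a `type` keyword on `gr.Chatbot`, so it predates the version that introduced the messages-format chatbot. The direct fixes are to upgrade Gradio or to delete `type="messages"` from app.py. As a more defensive pattern, a small helper can strip keyword arguments a constructor does not support; the sketch below uses a stand-in `old_chatbot_init` function (an assumption, not Gradio's real constructor), and note that introspecting `gr.Chatbot` itself may be unreliable because its `__init__` is wrapped in component_meta.py and may report `**kwargs`:

```python
import inspect

def filter_supported_kwargs(fn, kwargs):
    """Drop keyword arguments that `fn` does not accept.

    Useful when a constructor gained a new parameter (here,
    gr.Chatbot's `type=`) in a later library version than the
    one installed.
    """
    sig = inspect.signature(fn)
    # If the callable takes **kwargs, assume everything is supported.
    if any(p.kind is inspect.Parameter.VAR_KEYWORD
           for p in sig.parameters.values()):
        return dict(kwargs)
    return {k: v for k, v in kwargs.items() if k in sig.parameters}

# Stand-in for an older Chatbot.__init__ without the `type` parameter
# (hypothetical signature for illustration).
def old_chatbot_init(elem_id=None, height=None):
    return {"elem_id": elem_id, "height": height}

kwargs = {"elem_id": "chatbot", "height": 520, "type": "messages"}
safe = filter_supported_kwargs(old_chatbot_init, kwargs)
# The unsupported `type` key is dropped; the call now succeeds.
widget = old_chatbot_init(**safe)
```

This keeps the app booting on older Gradio at the cost of silently losing the messages format, so upgrading Gradio remains the better fix.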
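Since app.py relies on `type="messages"`, the cleanest fix is to pin a Gradio version that supports it in the Space's requirements.txt. A minimal sketch; the exact minimum version is an assumption (the messages format landed in the 4.x line, and 5.x also supports it):

```
gradio>=4.44
```

After updating requirements.txt, rebuild the Space so the new dependency is installed before app.py runs.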