runtime error
Exit code: 1. Reason:

100%|██████████| 190/190 [00:00<00:00, 919kB/s]
Successfully created ChromaDB with 4282 chunks!
Database saved to: chroma_db
Database created successfully!
Loading AI Model (google/flan-t5-small)...
tokenizer_config.json: 100%|██████████| 2.54k/2.54k [00:00<00:00, 2.89MB/s]
spiece.model: 100%|██████████| 792k/792k [00:00<00:00, 4.24MB/s]
tokenizer.json: 100%|██████████| 2.42M/2.42M [00:00<00:00, 112MB/s]
special_tokens_map.json: 100%|██████████| 2.20k/2.20k [00:00<00:00, 9.36MB/s]
config.json: 100%|██████████| 1.40k/1.40k [00:00<00:00, 1.33MB/s]
`torch_dtype` is deprecated! Use `dtype` instead!
model.safetensors: 100%|██████████| 308M/308M [00:01<00:00, 217MB/s]
generation_config.json: 100%|██████████| 147/147 [00:00<00:00, 628kB/s]
Device set to use cpu
AI Model loaded successfully!

Traceback (most recent call last):
  File "/app/app.py", line 97, in <module>
    demo.launch()
  File "/usr/local/lib/python3.10/site-packages/gradio/blocks.py", line 2666, in launch
    ) = http_server.start_server(
  File "/usr/local/lib/python3.10/site-packages/gradio/http_server.py", line 182, in start_server
    raise OSError(
OSError: Cannot find empty port in range: 7860-7860. You can specify a different port by setting the GRADIO_SERVER_PORT environment variable or passing the `server_port` parameter to `launch()`.
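The app itself starts fine; the crash happens at `demo.launch()` because Gradio is restricted to port 7860 and that port is already in use. As the error message says, the fix is either to set the `GRADIO_SERVER_PORT` environment variable or to pass `server_port` to `launch()`. Below is a minimal sketch of one way to do that: a stdlib-only helper (the name `find_free_port` is my own, not part of Gradio) that tries the preferred port and falls back to any free port the OS assigns.

```python
import socket

def find_free_port(preferred: int = 7860) -> int:
    """Return `preferred` if it can be bound, otherwise an OS-assigned free port."""
    for candidate in (preferred, 0):  # port 0 asks the OS for any free port
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            try:
                s.bind(("127.0.0.1", candidate))
                return s.getsockname()[1]  # actual port number that was bound
            except OSError:
                continue  # preferred port is taken; fall through to port 0
    raise RuntimeError("no free port available")

# In app.py, instead of a bare demo.launch(), something like:
#   demo.launch(server_port=find_free_port())
# or set the environment variable before launching:
#   os.environ["GRADIO_SERVER_PORT"] = str(find_free_port())
```

Note that if this runs in a managed container (e.g. a hosting platform that expects the app on 7860), picking an arbitrary free port may hide the app from the platform's proxy; in that case the real question is what other process is already holding 7860 inside the container, such as a previous instance of the same app that never shut down.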