runtime error

Exit code: 1. Reason:
🔄 Loading the AI model... This will take a moment on the first run.
Warning: You are sending unauthenticated requests to the HF Hub. Please set a HF_TOKEN to enable higher rate limits and faster downloads.
config.json: 100%|██████████| 660/660 [00:00<00:00, 3.01MB/s]
tokenizer_config.json: 7.30kB [00:00, 25.8MB/s]
vocab.json: 2.78MB [00:00, 134MB/s]
merges.txt: 1.67MB [00:00, 149MB/s]
tokenizer.json: 7.03MB [00:00, 183MB/s]
`torch_dtype` is deprecated! Use `dtype` instead!
model.safetensors: 100%|██████████| 3.09G/3.09G [00:05<00:00, 582MB/s]
Loading weights: 100%|██████████| 338/338 [00:01<00:00, 193.27it/s]
generation_config.json: 100%|██████████| 242/242 [00:00<00:00, 1.19MB/s]
✅ AI Model loaded and ready!
Traceback (most recent call last):
  File "/app/app.py", line 54, in <module>
    demo = gr.ChatInterface(
        fn=chat_with_ai,
        ...<3 lines>...
        examples=["What is the capital of France?", "Explain quantum computing in simple terms.", "Write a short poem about coding."]
    )
TypeError: ChatInterface.__init__() got an unexpected keyword argument 'theme'

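The traceback shows that the installed Gradio release's `gr.ChatInterface.__init__()` does not accept a `theme` keyword argument. The usual fixes are to remove `theme=` from the `gr.ChatInterface(...)` call in `app.py`, or to pin a newer `gradio` version in `requirements.txt` (recent releases do accept `theme` on `ChatInterface`; the exact version support should be checked against the installed release). As a defensive alternative, a helper can drop any keyword the installed class does not support before calling it. This is a sketch, not part of the original app; the helper name `filter_supported_kwargs` is hypothetical:

```python
import inspect

def filter_supported_kwargs(cls, kwargs):
    """Keep only the keyword arguments that cls.__init__ actually accepts.

    Useful when a library (here: Gradio) changed its constructor signature
    between versions and a keyword like `theme` may not exist in the
    installed release.
    """
    params = inspect.signature(cls.__init__).parameters
    # If __init__ takes **kwargs, every keyword is accepted as-is.
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return dict(kwargs)
    return {k: v for k, v in kwargs.items() if k in params}
```

With this helper, the failing call could be written as `gr.ChatInterface(**filter_supported_kwargs(gr.ChatInterface, dict(fn=chat_with_ai, theme=..., examples=[...])))`, so an unsupported `theme` is silently dropped on older Gradio versions instead of crashing the container. Pinning a compatible Gradio version is still the cleaner long-term fix.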