runtime error
Exit code: 1.

Loading the AI model... This will take a moment on the first run.
Warning: You are sending unauthenticated requests to the HF Hub. Please set a HF_TOKEN to enable higher rate limits and faster downloads.
config.json: 100% 660/660 [00:00<00:00, 3.01MB/s]
tokenizer_config.json: 7.30kB [00:00, 25.8MB/s]
vocab.json: 2.78MB [00:00, 134MB/s]
merges.txt: 1.67MB [00:00, 149MB/s]
tokenizer.json: 7.03MB [00:00, 183MB/s]
`torch_dtype` is deprecated! Use `dtype` instead!
model.safetensors: 100% 3.09G/3.09G [00:05<00:00, 582MB/s]
Loading weights: 100% 338/338 [00:01<00:00, 193.27it/s]
generation_config.json: 100% 242/242 [00:00<00:00, 1.19MB/s]
AI Model loaded and ready!
Traceback (most recent call last):
  File "/app/app.py", line 54, in <module>
    demo = gr.ChatInterface(
        fn=chat_with_ai,
        ...<3 lines>...
        examples=["What is the capital of France?", "Explain quantum computing in simple terms.", "Write a short poem about coding."]
    )
TypeError: ChatInterface.__init__() got an unexpected keyword argument 'theme'
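The model loads fine; the crash comes from the last line of the traceback: the installed Gradio version's `gr.ChatInterface` does not accept a `theme` keyword argument. The direct fix is to delete `theme=...` from the `gr.ChatInterface(...)` call in `/app/app.py`, or to pin a newer `gradio` release (one whose `ChatInterface` supports `theme`) in `requirements.txt`. As a defensive pattern when a kwarg may or may not exist in the installed version, one sketch is to forward only the keywords a callable actually accepts. The `supported_kwargs` helper and the `chat_interface` stand-in below are hypothetical names (the real `gr.ChatInterface` is not imported here):

```python
import inspect

def supported_kwargs(func, **kwargs):
    """Return only the keyword arguments that `func` actually accepts."""
    params = inspect.signature(func).parameters
    # If func takes **kwargs, everything is accepted as-is.
    if any(p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()):
        return kwargs
    return {k: v for k, v in kwargs.items() if k in params}

# Stand-in for an older ChatInterface signature that lacks `theme`:
def chat_interface(fn, title=None, examples=None):
    return {"fn": fn, "title": title, "examples": examples}

kwargs = supported_kwargs(
    chat_interface,
    title="Demo",
    theme="soft",          # unsupported by this signature, silently dropped
    examples=["What is the capital of France?"],
)
demo = chat_interface(lambda msg, history: msg, **kwargs)
```

With the real library, `supported_kwargs(gr.ChatInterface, theme=..., ...)` would drop `theme` on older Gradio versions and pass it through on newer ones. That said, silently dropping kwargs hides version mismatches, so upgrading `gradio` to a version whose `ChatInterface` documents `theme` is the cleaner fix.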