runtime error
Exit code: 1. Reason:
Loading weights: 100%|██████████| 254/254 [00:00<00:00, 727.87it/s, Materializing param=model.norm.weight]
generation_config.json: 100%|██████████| 230/230 [00:00<00:00, 1.15MB/s]
Loading LoRA adapters: SalmanSakibSrizon/dse_quant_enhanced
adapter_config.json: 100%|██████████| 1.19k/1.19k [00:00<00:00, 4.64MB/s]
adapter_model.safetensors: 100%|██████████| 97.3M/97.3M [00:01<00:00, 54.0MB/s]
/usr/local/lib/python3.13/site-packages/peft/tuners/lora/bnb.py:397: UserWarning: Merge lora module to 4-bit linear may get different generations due to rounding errors.
  warnings.warn(
Model loaded and merged successfully!
Traceback (most recent call last):
  File "/app/app.py", line 96, in <module>
    chatbot = gr.ChatInterface(
        respond,
        ...<12 lines>...
        description="Chat with your fine-tuned Llama 3.2 model for DSE market insights.",
    )
TypeError: ChatInterface.__init__() got an unexpected keyword argument 'type'
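The model and LoRA adapter load fine; the crash happens afterwards, when `app.py` constructs `gr.ChatInterface(...)` with a `type` keyword that the installed Gradio does not accept. The `type` parameter (e.g. `type="messages"`) only exists in more recent Gradio releases, so the likely fixes are either pinning a newer Gradio in `requirements.txt` or passing `type` only when the installed version supports it. Below is a minimal sketch of the defensive approach; the `supports_kwarg` helper is hypothetical (not part of Gradio), and the commented usage mirrors the call site from the traceback, with `respond` and the kwargs taken from the log:

```python
import inspect


def supports_kwarg(func, name: str) -> bool:
    """Return True if `func` accepts a keyword argument called `name`,
    either as a named parameter or via **kwargs."""
    try:
        params = inspect.signature(func).parameters.values()
    except (TypeError, ValueError):
        return False  # signature not introspectable; assume unsupported
    return any(
        p.name == name or p.kind is inspect.Parameter.VAR_KEYWORD
        for p in params
    )


# Hypothetical usage at the call site from the traceback:
# kwargs = dict(
#     description="Chat with your fine-tuned Llama 3.2 model for DSE market insights.",
# )
# if supports_kwarg(gr.ChatInterface.__init__, "type"):
#     kwargs["type"] = "messages"  # only on Gradio versions that accept it
# chatbot = gr.ChatInterface(respond, **kwargs)
```

Alternatively, upgrading Gradio in the Space's `requirements.txt` (the exact minimum version to pin is an assumption worth verifying against the Gradio changelog) makes the `type` keyword valid and removes the need for the check.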