runtime error

Exit code: 1. Reason: e(torch._C._get_default_device()), # torch.device('cpu'),
tokenizer_config.json: 100%|██████████| 200/200 [00:00<00:00, 827kB/s]
config.json: 100%|██████████| 1.35k/1.35k [00:00<00:00, 6.82MB/s]
vocab.json: 100%|██████████| 798k/798k [00:00<00:00, 23.2MB/s]
merges.txt: 100%|██████████| 456k/456k [00:00<00:00, 400MB/s]
special_tokens_map.json: 100%|██████████| 90.0/90.0 [00:00<00:00, 470kB/s]
The `load_in_4bit` and `load_in_8bit` arguments are deprecated and will be removed in the future versions. Please, pass a `BitsAndBytesConfig` object in `quantization_config` argument instead.
Traceback (most recent call last):
  File "/home/user/app/app.py", line 71, in <module>
    generator = load_model()
  File "/home/user/app/app.py", line 55, in load_model
    model = AutoModelForCausalLM.from_pretrained(REPO_ID, **model_kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 600, in from_pretrained
    return model_class.from_pretrained(
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 317, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.10/site-packages/transformers/modeling_utils.py", line 4887, in from_pretrained
    hf_quantizer.validate_environment(
  File "/usr/local/lib/python3.10/site-packages/transformers/quantizers/quantizer_bnb_4bit.py", line 88, in validate_environment
    raise ImportError(
ImportError: The installed version of bitsandbytes (<0.43.1) requires CUDA, but CUDA is not available. You may need to install PyTorch with CUDA support or upgrade bitsandbytes to >=0.43.1.
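The traceback says the app passed `load_in_4bit` to `from_pretrained` on a CPU-only container, and the installed bitsandbytes build refuses to run without CUDA. Besides upgrading bitsandbytes to >=0.43.1 or moving to GPU hardware, one workaround is to only request quantization when CUDA is actually available, and to follow the deprecation notice by passing a `BitsAndBytesConfig` via `quantization_config`. A minimal sketch of such a `load_model` helper (the function name and structure are assumptions, not the app's actual code):

```python
def build_model_kwargs(cuda_available: bool, use_4bit: bool = True) -> dict:
    """Build kwargs for AutoModelForCausalLM.from_pretrained.

    Request 4-bit quantization only when CUDA is present, since the
    installed bitsandbytes (<0.43.1) cannot run on CPU-only machines.
    """
    if cuda_available and use_4bit:
        # New-style quantization config, replacing the deprecated
        # load_in_4bit=True keyword argument.
        from transformers import BitsAndBytesConfig
        return {
            "device_map": "auto",
            "quantization_config": BitsAndBytesConfig(load_in_4bit=True),
        }
    # No CUDA: skip bitsandbytes entirely and load in full precision on CPU.
    return {"torch_dtype": "float32"}
```

Usage in the app would then look roughly like `model = AutoModelForCausalLM.from_pretrained(REPO_ID, **build_model_kwargs(torch.cuda.is_available()))`, which avoids triggering `validate_environment` on CPU-only Spaces.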
