Runtime error

Exit code: 1. Reason:

      *lora_kargs
      173 │       logging.info('Loading LLAMA')
    ❱ 174 │       llama_tokenizer = LlamaTokenizer.from_pretrained("Vision-CAIR/llama-2-7b-chat-pytorch")
      175 │       llama_tokenizer.pad_token = "$$"
      176 │
      177 │       if low_resource:

    /usr/local/lib/python3.10/site-packages/transformers/tokenization_utils_base.py:1809 in from_pretrained

      1806 │           )
      1807 │
      1808 │       if all(full_file_name is None for full_file_name in resolved_…
    ❱ 1809 │           raise EnvironmentError(
      1810 │               f"Can't load tokenizer for '{pretrained_model_name_or…
      1811 │               "'https://huggingface.co/models', make sure you don't…
      1812 │               f"Otherwise, make sure '{pretrained_model_name_or_pat…

OSError: Can't load tokenizer for 'Vision-CAIR/llama-2-7b-chat-pytorch'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'Vision-CAIR/llama-2-7b-chat-pytorch' is the correct path to a directory containing all relevant files for a LlamaTokenizer tokenizer.
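As the OSError text says, this failure usually means the repo id resolved to a location with no tokenizer files: either the Hub repo 'Vision-CAIR/llama-2-7b-chat-pytorch' does not ship them, or a local directory with the same name shadows the remote repo. A minimal, local-only sketch of that second failure mode (the helper name and the exact file list checked are assumptions for illustration, not taken from the traceback):

```python
import os
import tempfile

# Files a LlamaTokenizer typically needs to find in a local directory.
# (Assumed list for illustration; the real resolution logic lives in
# transformers' tokenization_utils_base.from_pretrained.)
REQUIRED_FILES = ("tokenizer.model", "tokenizer_config.json")

def has_tokenizer_files(path: str) -> bool:
    """Return True if `path` is a directory containing at least one
    recognizable tokenizer file."""
    return os.path.isdir(path) and any(
        os.path.exists(os.path.join(path, f)) for f in REQUIRED_FILES
    )

with tempfile.TemporaryDirectory() as tmp:
    # An empty directory whose name matches the Hub repo id: from_pretrained
    # resolves it locally, finds no tokenizer files, and raises OSError.
    shadow = os.path.join(tmp, "Vision-CAIR", "llama-2-7b-chat-pytorch")
    os.makedirs(shadow)
    print(has_tokenizer_files(shadow))  # False -> would trigger the OSError
```

If such a shadowing directory exists in the Space's working directory, renaming or removing it lets `from_pretrained` fall through to the Hub; otherwise the fix is pointing the code at a repo or local path that actually contains the tokenizer files.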
