runtime error
Exit code: 1.

model.safetensors: 100%|██████████| 90.9M/90.9M [00:00<00:00, 296MB/s]
tokenizer_config.json: 100%|██████████| 350/350 [00:00<00:00, 2.25MB/s]
vocab.txt: 100%|██████████| 232k/232k [00:00<00:00, 2.07MB/s]
tokenizer.json: 100%|██████████| 466k/466k [00:00<00:00, 67.6MB/s]
special_tokens_map.json: 100%|██████████| 112/112 [00:00<00:00, 979kB/s]
config.json: 100%|██████████| 190/190 [00:00<00:00, 1.67MB/s]

Traceback (most recent call last):
  File "/home/user/app/app.py", line 126, in <module>
    answer = query_llama(prompt)
  File "/home/user/app/app.py", line 90, in query_llama
    response = client.chat.completions.create(
  File "/usr/local/lib/python3.10/site-packages/groq/resources/chat/completions.py", line 322, in create
    return self._post(
  File "/usr/local/lib/python3.10/site-packages/groq/_base_client.py", line 1225, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/usr/local/lib/python3.10/site-packages/groq/_base_client.py", line 917, in request
    return self._request(
  File "/usr/local/lib/python3.10/site-packages/groq/_base_client.py", line 1020, in _request
    raise self._make_status_error_from_response(err.response) from None
groq.BadRequestError: Error code: 400 - {'error': {'message': 'The model `llama3-8b-8192` has been decommissioned and is no longer supported. Please refer to https://console.groq.com/docs/deprecations for a recommendation on which model to use instead.', 'type': 'invalid_request_error', 'code': 'model_decommissioned'}}
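The traceback points at the `model` argument passed to `client.chat.completions.create` in `query_llama` (app.py line 90): the ID `llama3-8b-8192` has been retired by Groq. A minimal sketch of a fix, assuming the replacement ID `llama-3.1-8b-instant` is currently supported (an assumption — the deprecations page linked in the error message lists the recommended substitute):

```python
# Map decommissioned Groq model IDs to assumed replacements.
# `llama-3.1-8b-instant` is an assumption; verify against
# https://console.groq.com/docs/deprecations before deploying.
DECOMMISSIONED = {
    "llama3-8b-8192": "llama-3.1-8b-instant",
}

def resolve_model(model_id: str) -> str:
    """Return a supported model ID, substituting any decommissioned one."""
    return DECOMMISSIONED.get(model_id, model_id)

# The call site in query_llama would then read (hypothetical messages shape):
# response = client.chat.completions.create(
#     model=resolve_model("llama3-8b-8192"),
#     messages=[{"role": "user", "content": prompt}],
# )
```

Hardcoding the new ID directly also works; the lookup table just centralizes the substitution if the app names the model in more than one place.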