ariG23498 (HF Staff) committed, verified
Commit bcd787a · 1 Parent(s): 9237a37

Upload meta-llama_Llama-3.1-8B-Instruct_1.txt with huggingface_hub

meta-llama_Llama-3.1-8B-Instruct_1.txt CHANGED
@@ -39,7 +39,7 @@ Traceback (most recent call last):
     ~~~~~~~~~~~~~~~~~~~^^^^^^^^^^
   File "/tmp/.cache/uv/environments-v2/8c467f4d75ae1bd9/lib/python3.13/site-packages/huggingface_hub/utils/_http.py", line 802, in hf_raise_for_status
     raise _format(HfHubHTTPError, message, response) from e
-huggingface_hub.errors.HfHubHTTPError: (Request ID: Root=1-69a93029-61586cef0bd9991262c36762;807e3d68-cba8-49cd-bb00-b752b86c6328)
+huggingface_hub.errors.HfHubHTTPError: (Request ID: Root=1-69aa8145-2ffd094e20ac2b62525fddf3;e92b1d86-e24f-4017-8807-a6c8d5d77fb2)
 
 403 Forbidden: Please enable access to public gated repositories in your fine-grained token settings to view this repository..
 Cannot access content at: https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct/resolve/main/config.json.
@@ -78,7 +78,7 @@ huggingface_hub.errors.LocalEntryNotFoundError: An error happened while trying t
 The above exception was the direct cause of the following exception:
 
 Traceback (most recent call last):
-  File "/tmp/meta-llama_Llama-3.1-8B-Instruct_1BGdThD.py", line 26, in <module>
+  File "/tmp/meta-llama_Llama-3.1-8B-Instruct_1Mx2ixO.py", line 26, in <module>
     pipe = pipeline("text-generation", model="meta-llama/Llama-3.1-8B-Instruct")
   File "/tmp/.cache/uv/environments-v2/8c467f4d75ae1bd9/lib/python3.13/site-packages/transformers/pipelines/__init__.py", line 699, in pipeline
     config = AutoConfig.from_pretrained(