Can't load model
I was able to use this model months ago, but now it seems to be broken:
ValueError: Unrecognized configuration class <class 'transformers_modules.leolee99.InjecGuard.ec04dac95d2a00214efceb6f1b52bbecbbd24c77.modeling_injecguard.InjecGuardConfig'> to build an AutoTokenizer.
I am using the sample code in the README, what has changed?
Hi, thanks for your interest in our work. I just reconfigured the environment and found that it still works well. The code has not changed in the past two months, so some environmental difference might be causing this error. Below are the versions of the most likely relevant dependencies I just tested:
python=3.10
pytorch=2.2.2
transformers=4.52.4
numpy=1.26
Could you please share your detailed environment information? Then maybe I can help with this issue.
I do have these:
Python: 3.13.3 (main, Apr 9 2025, 07:44:25) [GCC 14.2.1 20250207]
PyTorch: 2.7.1+cu126
Transformers: 4.52.4
NumPy: 2.3.0
Transformers is the same version as yours.
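For reference, the version details above can be collected with a small script like the following (a sketch, not from the thread; it uses `importlib.metadata` so a missing package is reported instead of crashing the script):

```python
import sys
from importlib import metadata


def report_versions(packages=("torch", "transformers", "numpy")):
    """Return one line per item: the Python version, then each package version."""
    lines = [f"Python: {sys.version.split()[0]}"]
    for pkg in packages:
        try:
            lines.append(f"{pkg}: {metadata.version(pkg)}")
        except metadata.PackageNotFoundError:
            # Package is not installed in this environment.
            lines.append(f"{pkg}: not installed")
    return lines


if __name__ == "__main__":
    print("\n".join(report_versions()))
```

Running this inside the conda environment that fails (rather than the base environment) avoids comparing versions from the wrong interpreter.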
I evaluated the same configuration you provided, and it works in my environment, so the bug seems to have another cause. Below is the minimal working environment I just tested. Maybe you can try this minimal configuration and check whether the bug still occurs.
certifi==2025.6.15
charset-normalizer==3.4.2
filelock==3.18.0
fsspec==2025.5.1
hf-xet==1.1.5
huggingface-hub==0.33.1
idna==3.10
Jinja2==3.1.6
MarkupSafe==3.0.2
mpmath==1.3.0
networkx==3.5
numpy==2.3.0
nvidia-cublas-cu12==12.6.4.1
nvidia-cuda-cupti-cu12==12.6.80
nvidia-cuda-nvrtc-cu12==12.6.77
nvidia-cuda-runtime-cu12==12.6.77
nvidia-cudnn-cu12==9.5.1.17
nvidia-cufft-cu12==11.3.0.4
nvidia-cufile-cu12==1.11.1.6
nvidia-curand-cu12==10.3.7.77
nvidia-cusolver-cu12==11.7.1.2
nvidia-cusparse-cu12==12.5.4.2
nvidia-cusparselt-cu12==0.6.3
nvidia-nccl-cu12==2.26.2
nvidia-nvjitlink-cu12==12.6.85
nvidia-nvtx-cu12==12.6.77
packaging==25.0
pillow==11.2.1
PyYAML==6.0.2
regex==2024.11.6
requests==2.32.4
safetensors==0.5.3
setuptools==78.1.1
sympy==1.14.0
tokenizers==0.21.2
torch==2.7.1
torchaudio==2.7.1
torchvision==0.22.1
tqdm==4.67.1
transformers==4.52.4
triton==3.3.1
typing_extensions==4.14.0
urllib3==2.5.0
wheel==0.45.1
The issue seems to be in loading the tokenizer. Another person hit exactly the same error.
Can you please show me how you're running this?
This is the shell output; injecguard.py is the sample code:
(base) [hao@ip-172-31-33-234 ~]$ conda activate injecguard
(injecguard) [hao@ip-172-31-33-234 ~]$ python
Python 3.13.3 | packaged by Anaconda, Inc. | (main, Jun 4 2025, 14:12:13) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>>
(injecguard) [hao@ip-172-31-33-234 ~]$ pip list
Package Version
------------------------ ---------
certifi 2025.6.15
charset-normalizer 3.4.2
filelock 3.18.0
fsspec 2025.5.1
hf-xet 1.1.5
huggingface-hub 0.33.1
idna 3.10
Jinja2 3.1.6
MarkupSafe 3.0.2
mpmath 1.3.0
networkx 3.5
numpy 2.3.0
nvidia-cublas-cu12 12.6.4.1
nvidia-cuda-cupti-cu12 12.6.80
nvidia-cuda-nvrtc-cu12 12.6.77
nvidia-cuda-runtime-cu12 12.6.77
nvidia-cudnn-cu12 9.5.1.17
nvidia-cufft-cu12 11.3.0.4
nvidia-cufile-cu12 1.11.1.6
nvidia-curand-cu12 10.3.7.77
nvidia-cusolver-cu12 11.7.1.2
nvidia-cusparse-cu12 12.5.4.2
nvidia-cusparselt-cu12 0.6.3
nvidia-nccl-cu12 2.26.2
nvidia-nvjitlink-cu12 12.6.85
nvidia-nvtx-cu12 12.6.77
packaging 25.0
pillow 11.2.1
pip 25.1
PyYAML 6.0.2
regex 2024.11.6
requests 2.32.4
safetensors 0.5.3
setuptools 78.1.1
sympy 1.14.0
tokenizers 0.21.2
torch 2.7.1
torchaudio 2.7.1
torchvision 0.22.1
tqdm 4.67.1
transformers 4.52.4
triton 3.3.1
typing_extensions 4.14.0
urllib3 2.5.0
wheel 0.45.1
(injecguard) [hao@ip-172-31-33-234 ~]$ python injecguard.py
tokenizer_config.json: 1.28kB [00:00, 8.70MB/s]
spm.model: 100%|███████████████████████████████████████████████████████| 132/132 [00:00<00:00, 1.88MB/s]
tokenizer.json: 8.66MB [00:00, 249MB/s]
added_tokens.json: 100%|██████████████████████████████████████████████| 23.0/23.0 [00:00<00:00, 266kB/s]
special_tokens_map.json: 100%|█████████████████████████████████████████| 286/286 [00:00<00:00, 3.77MB/s]
config.json: 1.08kB [00:00, 5.11MB/s]
modeling_injecguard.py: 100%|██████████████████████████████████████████| 991/991 [00:00<00:00, 15.2MB/s]
A new version of the following files was downloaded from https://huggingface.co/leolee99/InjecGuard:
- modeling_injecguard.py
. Make sure to double-check they do not contain any added malicious code. To avoid downloading new versions of the code file, you can pin a revision.
model.safetensors: 100%|██████████████████████████████████████████████| 738M/738M [00:02<00:00, 275MB/s]
Device set to use cuda:0
Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.
[{'label': 'benign', 'score': 0.7738722562789917}, {'label': 'injection', 'score': 0.9999356269836426}]
(injecguard) [hao@ip-172-31-33-234 ~]$
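One side note on the remote-code warning in the output above: it suggests pinning a revision so that future pushes to the repository cannot silently change the downloaded modeling file. A sketch of how that could look, assuming the sample code uses the `transformers` `pipeline` API (the commit hash below is the one that appears in the traceback at the top of the thread):

```python
# Keyword arguments that pin the model to a specific repository commit.
# trust_remote_code is required because InjecGuard ships a custom
# modeling_injecguard.py file.
PINNED = dict(
    model="leolee99/InjecGuard",
    trust_remote_code=True,
    revision="ec04dac95d2a00214efceb6f1b52bbecbbd24c77",
)

# The actual call downloads the model, so it is shown commented out:
# from transformers import pipeline
# classifier = pipeline("text-classification", **PINNED)
```

Pinning does not by itself fix the tokenizer error, but it rules out a changed remote code file as the cause.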