# SAE: Sustainable Adversarial Example Evaluation Framework for Class-Incremental Learning
## Overview
_**News**: This work has been accepted as a poster by [AAAI 2026](https://aaai.org/conference/aaai/aaai-26/)._
**SAE (Sustainable Adversarial Example)** is a *universal adversarial attack framework* targeting **Class-Incremental Learning (CIL)**. This repository provides a comprehensive pipeline for both CIL training and benchmarking multiple attack methods, including our proposed SAE approach.
The project integrates with [PyCIL: A Python Toolbox for Class-Incremental Learning](https://github.com/LAMDA-CL/PyCIL) for CIL model training. It also supports benchmarking several attack baselines alongside SAE, enabling fair and reproducible evaluations of adversarial robustness across CIL methods.
If you are interested in our work, please cite:
```bibtex
@inproceedings{liu2026SAE,
  title={Improving Sustainability of Adversarial Examples in Class-Incremental Learning},
  author={Taifeng Liu and Xinjing Liu and Liangqiu Dong and Yang Liu and Yilong Yang and Zhuo Ma},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2026}
}
```
---
## Environment Setup
### Project Layout
* `attacks/`: Implementations of all attack baselines, including **MIFGSM**, **Gaker**, **AIM**, **CGNC**, **CleanSheet**, **UnivIntruder**, and **SAE**.
* `convs/`: Backbone definitions for the CIL models and the CLIP model, including `resnet32`, `resnet50`, `cosine_resnet32`, and `cosine_resnet50`.
  * The **pre-trained CLIP** weights are downloaded from [HuggingFace](https://huggingface.co/laion/CLIP-ViT-B-32-laion2B-s34B-b79K).
* `datasets/`: Dataset management for CIFAR-100 (32x32) and ImageNet-100 (224x224).
  * **CIFAR-100** is downloaded automatically at runtime.
  * **ImageNet-100** must be extracted manually from ImageNet-1K using `create_imagenet100_from_imagenet.py`, which parses `train.txt` and `eval.txt` to select the relevant classes.
* `exps/`: JSON configuration files for the CIL training methods.
* `logs/`: Trained CIL model checkpoints and attack evaluation results.
* `models/`: Implementations of **9** CIL algorithms: **BiC**, **DER**, **Finetune**, **Foster**, **iCaRL**, **MEMO**, **PodNet**, **Replay**, and **WA**.
* `scripts/`: Shell scripts for CIL training and attack benchmarking.
* `utils/`: Utility functions for augmentation, logging, dataset processing, visualization, etc.
* `attack.py`: Main entry point for running adversarial attacks.
* `trainCIL.py`: Main entry point for training CIL models.
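For reference, the core of the ImageNet-100 extraction step can be sketched as below. This is a minimal illustration, not the actual `create_imagenet100_from_imagenet.py`; it assumes each line of `train.txt`/`eval.txt` is a relative image path whose first component is the WordNet class ID (e.g. `n01440764/xxx.JPEG`) — check the script itself for the real file format and logic.

```python
from pathlib import Path
import shutil

def read_class_ids(split_file: str) -> set[str]:
    """Collect the unique WordNet class IDs referenced by a split file.

    Assumed line format: "<wnid>/<image>.JPEG" (the real train.txt may differ).
    """
    ids = set()
    for line in Path(split_file).read_text().splitlines():
        line = line.strip()
        if line:
            ids.add(line.split("/")[0])
    return ids

def extract_subset(imagenet_dir: str, out_dir: str, class_ids: set[str]) -> None:
    """Copy only the class folders named in class_ids into out_dir."""
    for wnid in sorted(class_ids):
        src = Path(imagenet_dir) / wnid
        if src.is_dir():
            shutil.copytree(src, Path(out_dir) / wnid, dirs_exist_ok=True)
```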
### Dependency Requirements
* **OS**: Ubuntu 22.04
* **Python**: 3.12
* **PyTorch**: ≥ 2.1
* **GPU**: NVIDIA RTX 4090 (24 GB VRAM)
To set up the environment:
```bash
conda env create -f environment.yml
conda activate SAE
```
---
## Result Reproduction
### Step 1: CIL Training
First, train the target CIL models by following the instructions below.
Example: training a single CIL method (e.g., **iCaRL**) on **CIFAR-100**:
```bash
python trainCIL.py --config exps/icarl.json
```
Example: training a single CIL method (e.g., **iCaRL**) on **ImageNet-100**:
```bash
python trainCIL.py --config exps/icarl-imagenet100.json
```
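As an illustration of what such a config controls, a PyCIL-style JSON file typically looks like the fragment below. The key names and values here follow the PyCIL convention and are assumptions, not a copy of this repository's `exps/icarl.json`; consult the actual file for the authoritative fields.

```json
{
  "prefix": "reproduce",
  "dataset": "cifar100",
  "memory_size": 2000,
  "init_cls": 10,
  "increment": 10,
  "model_name": "icarl",
  "convnet_type": "resnet32",
  "seed": [1993],
  "device": ["0"]
}
```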
We also provide scripts to train all 9 CIL methods on **CIFAR-100**:
```bash
./scripts/trainCIL-CIFAR100.sh
```
Train all 9 CIL methods on **ImageNet-100**:
```bash
./scripts/trainCIL-ImageNet100.sh
```
All model checkpoints will be saved under the `logs/` directory, organized by method and dataset.
---
### Step 2: Adversarial Attack Benchmarking
_Note: this step assumes that the dataset, the CIL model, and the CLIP model have already been prepared._
To launch an **SAE** attack targeting class **0** on a CIL model trained on **CIFAR-100**:
```bash
python attack.py --config exps/icarl.json --attack_method SAE --target_class 0
```
To evaluate a different attack baseline, simply change the `--attack_method` argument.
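To give a sense of what a targeted attack step computes, here is a minimal single-step targeted FGSM sketch in PyTorch. This is a generic illustration of a targeted baseline, not the SAE method or this repository's implementation (see `attacks/` for those); the model, epsilon, and function name are placeholders.

```python
import torch
import torch.nn.functional as F

def targeted_fgsm(model, x, target_class, epsilon=8 / 255):
    """One targeted FGSM step: nudge x toward target_class within an L-inf ball.

    Illustrative only -- the baselines in attacks/ implement the real methods.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    target = torch.full((x.size(0),), target_class, dtype=torch.long)
    loss = F.cross_entropy(model(x_adv), target)
    loss.backward()
    # Targeted attack: step *against* the gradient sign to decrease the
    # loss toward the target class, then clamp back to valid pixel range.
    x_adv = x_adv - epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()
```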
You can reproduce the overall evaluation results (**Table 1** in our paper) by running the scripts below:
```bash
./scripts/attacks-CIFAR100.sh
./scripts/attacks-ImageNet100.sh
```
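If you prefer to drive the sweep from Python rather than the shell scripts, the `attack.py` invocation shown above can be composed programmatically. The attack names below come from the `attacks/` listing; whether the provided scripts sweep target classes in exactly this way is an assumption, so treat this as a sketch.

```python
import subprocess

# Baselines listed in attacks/ (plus SAE itself).
ATTACKS = ["MIFGSM", "Gaker", "AIM", "CGNC", "CleanSheet", "UnivIntruder", "SAE"]

def build_attack_cmd(config, method, target_class):
    """Compose the attack.py command line used throughout this README."""
    return [
        "python", "attack.py",
        "--config", config,
        "--attack_method", method,
        "--target_class", str(target_class),
    ]

def run_sweep(config, target_classes=range(10), dry_run=True):
    """Run every baseline against every target class (dry_run only prints)."""
    for method in ATTACKS:
        for t in target_classes:
            cmd = build_attack_cmd(config, method, t)
            if dry_run:
                print(" ".join(cmd))
            else:
                subprocess.run(cmd, check=True)
```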
#### Benchmark Results
All benchmark results are reported in `appendix.pdf` in the supplementary material.