Amnesia as a Catalyst for Enhancing Black Box Pixel Attacks in Image Classification and Object Detection
Paper • 2502.07821 • Published
Part of the ANIMA Perception Suite by Robot Flow Labs.
RFPAR: Remember and Forget Pixel Attack using Reinforcement Learning (arXiv 2502.07821) • Dongsu Song, Daehwa Ko, Jay Hoon Jung (Korea Aerospace University)
RFPAR uses a REINFORCE policy network (Conv2d + FC) to select pixel perturbations for black-box adversarial attacks. The Remember and Forget process keeps ("remembers") perturbed pixels that lower the model's confidence in the true class and reverts ("forgets") those that do not, repeating until the attack succeeds or the query budget runs out.
CUDA-accelerated pixel perturbation kernels (sm_89, L4) for parallel sampling and batch reward computation.
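The Remember-and-Forget loop described above can be sketched as follows. This is a minimal illustration, not the released API: the function name, loop structure, and candidate-sampling details are assumptions; only the keep-if-confidence-drops rule comes from the description above.

```python
import torch

def remember_and_forget(model, agent, image, label, n_iters=100, n_candidates=40):
    """Hypothetical sketch: sample candidate pixel actions from the policy,
    paint them, and keep ("remember") the result only when it lowers the
    model's true-class confidence; otherwise revert ("forget")."""
    adv = image.clone()
    h, w = image.shape[-2:]
    for _ in range(n_iters):
        mean, std = agent(adv)                       # each (1, 5)
        noise = torch.randn(n_candidates, 5)
        actions = torch.sigmoid(mean + std * noise)  # rows of (x, y, r, g, b)
        trial = adv.clone()
        for x, y, r, g, b in actions:
            trial[0, :, int(y * (h - 1)), int(x * (w - 1))] = torch.stack([r, g, b])
        with torch.no_grad():
            p_trial = model(trial).softmax(-1)[0, label]
            p_adv = model(adv).softmax(-1)[0, label]
        if p_trial < p_adv:  # remember: the perturbation hurt the true class
            adv = trial
        # else: forget -- discard the trial and resample next iteration
    return adv
```

Each model call in the loop counts against the query budget, which is what the "Average Queries" metric below measures.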
| Metric | Our Result | Paper |
|---|---|---|
| Attack Success Rate | 94.0% | ~93% |
| Mean L0 | 151.7 | 138 |
| Mean L2 | 6.41 | not reported |
| Average Queries | 454 | ~500 |
| Forget Iterations | 100 | 100 |
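For reference, the L0 and L2 columns can be computed from the clean and adversarial images as below. This is a minimal sketch of the standard definitions (the helper name is illustrative): L0 counts perturbed pixel locations, L2 is the Euclidean norm of the difference.

```python
import torch

def perturbation_norms(clean, adv):
    """L0: number of pixel locations changed in any channel;
    L2: Euclidean norm of the pixel-value difference."""
    diff = adv - clean                                # (1, C, H, W)
    l0 = (diff.abs().sum(dim=1) > 0).sum().item()     # changed (x, y) locations
    l2 = diff.flatten().norm(p=2).item()
    return l0, l2
```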
Campaign running; results will be updated.
Paper reference: RM=0.91, mAP=0.111, L0=2043, Queries=1254
| Format | Classification Checkpoint | Use Case |
|---|---|---|
| PyTorch (.pth) | pytorch/rfpar_cls_v1.pth | Training, fine-tuning |
| SafeTensors | pytorch/rfpar_cls_v1.safetensors | Fast loading, safe |
| ONNX | onnx/rfpar_cls_v1.onnx | Cross-platform inference |
| TensorRT FP16 | tensorrt/rfpar_cls_v1_fp16.trt | Edge deployment (Jetson/L4) |
| TensorRT FP32 | tensorrt/rfpar_cls_v1_fp32.trt | Full precision inference |
```python
import torch

from anima_rfpar.agent import REINFORCEAgent

# Build the classification-mode agent for 224x224 RGB inputs.
agent = REINFORCEAgent(224, 224, 3, detector_mode=False)

# The .pth checkpoint is a pickle, so weights_only=False is required to load it.
ckpt = torch.load("pytorch/rfpar_cls_v1.pth", weights_only=False)
agent.load_state_dict(ckpt["agent_state_dict"])
agent.eval()

image = torch.randn(1, 3, 224, 224)  # expects [0, 1]-normalized input
action_mean, action_std = agent(image)
# action_mean: (1, 5) -> sigmoid -> (x, y, r, g, b)
```
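As the final comment notes, the five raw policy outputs are squashed through a sigmoid into normalized (x, y, r, g, b) values. A small illustration of decoding one action into a single-pixel perturbation (the helper name is hypothetical, not part of the package):

```python
import torch

def apply_action(image, raw_action):
    """Illustrative helper: sigmoid the raw (1, 5) policy output into
    normalized (x, y, r, g, b), then paint that one pixel with the RGB value."""
    x, y, r, g, b = torch.sigmoid(raw_action)[0]
    h, w = image.shape[-2:]
    perturbed = image.clone()
    perturbed[0, :, int(y * (h - 1)), int(x * (w - 1))] = torch.stack([r, g, b])
    return perturbed
```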
ATLAS / ORACLE • Defense Marketplace
Apache 2.0 • Robot Flow Labs / AIFLOW LABS LIMITED
Built with ANIMA by Robot Flow Labs