A Sanity Check for AI-generated Image Detection
Paper • 2406.19435 • Published
This repository packages the 50-epoch AIDE checkpoint from:
/home/meet/Aivsre_001/AIDE/output_pico_balanced_49k_run1/checkpoint-50.pth
The model is exported as:

- `model.safetensors` for safer deployment
- `checkpoint-50.pth` as the original PyTorch training snapshot

This is the same hybrid AIDE architecture used in the original repository.
The model predicts:

- `0` -> real
- `1` -> fake

Inference follows the original AIDE pipeline:
- pixel values are scaled to `[0, 1]`
- `DCT_base_Rec_Module` builds the five-view input `[x_minmin, x_maxmax, x_minmin1, x_maxmax1, x_0]`
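`DCT_base_Rec_Module` itself works patch-wise on frequency content; as a rough, self-contained illustration of the underlying idea (DCT-based low-frequency reconstruction, not the actual module), assuming a NumPy image in `[0, 1]`:

```python
import numpy as np
from scipy.fft import dctn, idctn

def lowfreq_view(img: np.ndarray, keep: int = 8) -> np.ndarray:
    """Keep only the lowest `keep` x `keep` DCT coefficients per channel
    and reconstruct, yielding a smoothed low-frequency view of the image."""
    out = np.zeros_like(img)
    for c in range(img.shape[2]):
        coeffs = dctn(img[..., c], norm="ortho")
        mask = np.zeros_like(coeffs)
        mask[:keep, :keep] = coeffs[:keep, :keep]  # discard high frequencies
        out[..., c] = idctn(mask, norm="ortho")
    return out

rng = np.random.default_rng(0)
img = rng.random((32, 32, 3))
low = lowfreq_view(img)
print(low.shape)  # (32, 32, 3)
```

A complementary high-frequency view would keep the opposite corner of the coefficient grid; AIDE's actual module selects minimal- and maximal-frequency patches rather than whole-image bands.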
The provided `inference.py` keeps exactly the same preparation logic.
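With the `0` -> real / `1` -> fake convention above, mapping raw model logits to labels is a plain argmax; a minimal sketch with hypothetical logit values:

```python
import numpy as np

# Hypothetical logits for a batch of two images; columns follow the
# 0 -> real, 1 -> fake convention of this checkpoint.
logits = np.array([[2.3, -1.1],   # higher score in column 0 -> real
                   [-0.4, 1.9]])  # higher score in column 1 -> fake

labels = ["real", "fake"]
preds = [labels[i] for i in logits.argmax(axis=1)]
print(preds)  # ['real', 'fake']
```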
This packaged checkpoint comes from the local run:

- `data_path=/home/meet/Aivsre_001/aide_data/train_pico_balanced_49k_v1`
- `eval_data_path=/home/meet/Aivsre_001/aide_data/eval_pico_balanced_49k_v1`
- `batch_size=8`
- `blr=1e-4`
- `weight_decay=0.0`
- `epochs=52`
- checkpoint exported here: epoch 50

From `output_pico_balanced_49k_run1/log.txt`:
`99.2076`, `99.1061`, `0.0730`, `99.1264`

This repository specifically publishes checkpoint-50 because that is the requested 50-epoch model snapshot.
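For reference, the hyperparameters above correspond to a training invocation along these lines. The `main_finetune.py` script name and exact flag spellings are assumptions about the AIDE training entry point; only the values are taken from this run:

```bash
# Hypothetical reconstruction of the run command; script and flag names
# are assumptions -- only the values come from this model card.
python main_finetune.py \
  --data_path /home/meet/Aivsre_001/aide_data/train_pico_balanced_49k_v1 \
  --eval_data_path /home/meet/Aivsre_001/aide_data/eval_pico_balanced_49k_v1 \
  --batch_size 8 \
  --blr 1e-4 \
  --weight_decay 0.0 \
  --epochs 52 \
  --output_dir output_pico_balanced_49k_run1
```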
Repository contents:

- `model.safetensors`
- `checkpoint-50.pth`
- `config.json`
- `model.json`
- `preprocessor_config.json`
- `inference.py`
- `models/`
- `data/`
- `requirements.txt`
- `LICENSE`

Install requirements:
```bash
pip install -r requirements.txt
```
Run local inference:
```bash
python inference.py --repo_dir . --image /path/to/image.jpg
```
Or use it programmatically:
```python
from PIL import Image
from inference import load_model, predict_pil_images

model = load_model(".")
image = Image.open("example.jpg").convert("RGB")
result = predict_pil_images(model, [image])[0]
print(result)
```
This checkpoint is not intended to be loaded through the `transformers` `AutoModel` API; use the provided `inference.py` helpers instead.

Derived from the original AIDE implementation.