50 Epoch AIDE

This repository packages the 50-epoch AIDE checkpoint from:

/home/meet/Aivsre_001/AIDE/output_pico_balanced_49k_run1/checkpoint-50.pth

The model is exported as:

  • model.safetensors for safer deployment
  • checkpoint-50.pth as the original PyTorch training snapshot

Model

This is the same hybrid AIDE architecture used in the original repository:

  • fixed 30-filter SRM high-pass module
  • dual ResNet-50-style forensic encoders
  • frozen OpenCLIP ConvNeXt-XXL visual trunk
  • final MLP fusion head for binary classification
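As a rough illustration of the last bullet, a fusion head of this kind concatenates the forensic and semantic features and maps them to two logits through a small MLP. All dimensions and weight shapes below are hypothetical placeholders, not the real model's sizes:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical feature widths -- the real encoders define their own sizes.
SRM_DIM, CLIP_DIM, HIDDEN, NUM_CLASSES = 2048, 3072, 1024, 2

def mlp_fusion_head(srm_feat, clip_feat, w1, b1, w2, b2):
    """Concatenate forensic and semantic features, then apply a 2-layer MLP."""
    fused = np.concatenate([srm_feat, clip_feat])   # (SRM_DIM + CLIP_DIM,)
    hidden = np.maximum(fused @ w1 + b1, 0.0)       # ReLU
    return hidden @ w2 + b2                         # (NUM_CLASSES,) raw logits

# stand-in features and randomly initialized weights for the sketch
srm_feat = rng.random(SRM_DIM)
clip_feat = rng.random(CLIP_DIM)
w1 = rng.standard_normal((SRM_DIM + CLIP_DIM, HIDDEN)) * 0.01
b1 = np.zeros(HIDDEN)
w2 = rng.standard_normal((HIDDEN, NUM_CLASSES)) * 0.01
b2 = np.zeros(NUM_CLASSES)

logits = mlp_fusion_head(srm_feat, clip_feat, w1, b1, w2, b2)
print(logits.shape)  # (2,)
```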

The model predicts:

  • 0 -> real
  • 1 -> fake
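Assuming the head emits two raw logits in that index order (0 = real, 1 = fake), the mapping can be applied to a prediction like this; the helper name is illustrative, not part of `inference.py`:

```python
import numpy as np

LABELS = {0: "real", 1: "fake"}

def logits_to_label(logits):
    """Map a pair of raw logits to the documented class labels.

    Assumes index 0 = real and index 1 = fake, as stated above.
    """
    logits = np.asarray(logits, dtype=np.float64)
    # softmax for a human-readable confidence score
    exp = np.exp(logits - logits.max())
    probs = exp / exp.sum()
    idx = int(probs.argmax())
    return LABELS[idx], float(probs[idx])

# example: a head that strongly favors class 1 ("fake")
label, confidence = logits_to_label([-1.3, 2.7])
print(label)  # fake
```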

Input Preparation

Inference follows the original AIDE pipeline:

  1. convert image to RGB
  2. convert to tensor in [0, 1]
  3. generate four DCT-based reconstructed views with DCT_base_Rec_Module
  4. normalize all five views with ImageNet mean/std
  5. stack in this order: [x_minmin, x_maxmax, x_minmin1, x_maxmax1, x_0]

The provided inference.py keeps the exact same preparation logic.
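Steps 4 and 5 can be sketched with NumPy. The four DCT-reconstructed views come from the repository's DCT_base_Rec_Module in the real pipeline; random arrays stand in for them here:

```python
import numpy as np

# ImageNet normalization constants used in step 4
MEAN = np.array([0.485, 0.456, 0.406]).reshape(3, 1, 1)
STD = np.array([0.229, 0.224, 0.225]).reshape(3, 1, 1)

def normalize(view):
    """Normalize a (3, H, W) view in [0, 1] with ImageNet mean/std."""
    return (view - MEAN) / STD

def stack_views(x_minmin, x_maxmax, x_minmin1, x_maxmax1, x_0):
    """Normalize all five views and stack them in the documented order."""
    views = [x_minmin, x_maxmax, x_minmin1, x_maxmax1, x_0]
    return np.stack([normalize(v) for v in views])  # (5, 3, H, W)

# stand-ins: the first four would come from DCT_base_Rec_Module,
# the last is the plain RGB tensor in [0, 1]
rng = np.random.default_rng(0)
views = [rng.random((3, 224, 224)) for _ in range(5)]
batch = stack_views(*views)
print(batch.shape)  # (5, 3, 224, 224)
```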

Training Run

This packaged checkpoint comes from the local run:

  • data_path=/home/meet/Aivsre_001/aide_data/train_pico_balanced_49k_v1
  • eval_data_path=/home/meet/Aivsre_001/aide_data/eval_pico_balanced_49k_v1
  • batch_size=8
  • blr=1e-4
  • weight_decay=0.0
  • epochs=52
  • checkpoint exported here: epoch 50

Metrics

From output_pico_balanced_49k_run1/log.txt:

  • epoch 49 validation/top-1 accuracy: 99.2076
  • epoch 50 validation/top-1 accuracy: 99.1061
  • epoch 50 validation loss: 0.0730
  • epoch 51 validation/top-1 accuracy: 99.1264

This repository specifically publishes checkpoint-50 because that is the requested 50-epoch model snapshot.

Files In This Repo

  • model.safetensors
  • checkpoint-50.pth
  • config.json
  • model.json
  • preprocessor_config.json
  • inference.py
  • models/
  • data/
  • requirements.txt
  • LICENSE

Usage

Install requirements:

pip install -r requirements.txt

Run local inference:

python inference.py --repo_dir . --image /path/to/image.jpg

Or use it programmatically:

from PIL import Image
from inference import load_model, predict_pil_images

model = load_model(".")
image = Image.open("example.jpg").convert("RGB")
result = predict_pil_images(model, [image])[0]
print(result)

Notes

  • This repository is for weight hosting and local inference
  • The architecture is custom, not a native transformers AutoModel
  • Use the provided code files with the exported weights

Credits

Derived from the original AIDE implementation.
