QuarkGluonConvAE – Checkpoint
Convolutional autoencoder trained on 125,000+ quark/gluon jet images (3-channel ECAL/HCAL/Tracks, 125×125 px).
Checkpoint format
{
"model": OrderedDict, # load with model.load_state_dict()
"optimizer": OrderedDict,
"epoch": int,
}
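Because the optimizer state and epoch counter are stored alongside the weights, training can be resumed from the checkpoint. A minimal sketch, assuming the Adam setup described under Training below:

import torch
from src.autoencoder import ConvAutoencoder

ckpt = torch.load("ae_checkpoint.pth", map_location="cpu")
model = ConvAutoencoder()
model.load_state_dict(ckpt["model"])
# Rebuild the optimizer, then restore its state (step counts, moment buffers)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
optimizer.load_state_dict(ckpt["optimizer"])
start_epoch = ckpt["epoch"] + 1  # continue from the next epoch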
Loading example
import torch
from src.autoencoder import ConvAutoencoder
ckpt = torch.load("ae_checkpoint.pth", map_location="cpu")
model = ConvAutoencoder()
model.load_state_dict(ckpt["model"])
model.eval()
# encode a batch of preprocessed jet images (B, 3, 125, 125) -> (B, 512) latents
with torch.no_grad():
    z = model.encode(jet_images)
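Since the model was trained with an MSE reconstruction objective, per-jet reconstruction error is a natural sanity check on new inputs. A short sketch, assuming forward() returns the reconstructed image (the forward signature is not stated in this card):

with torch.no_grad():
    recon = model(jet_images)  # assumed to return reconstructions
per_jet_mse = ((recon - jet_images) ** 2).flatten(1).mean(dim=1)  # (B,) errors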
Training
Trained with MSE reconstruction loss (Adam, lr=1e-3) for 30 epochs at batch size 64. Input preprocessing: log1p followed by per-channel 99th-percentile normalization.
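The preprocessing can be reproduced roughly as below. This is an illustrative sketch, not the training code; in practice the per-channel 99th-percentile scales would likely be computed once over the training set rather than per batch:

import torch

def preprocess(jet_images: torch.Tensor) -> torch.Tensor:
    # log1p compresses the large dynamic range of calorimeter deposits
    x = torch.log1p(jet_images.clamp(min=0))
    # 99th percentile per channel, taken over batch and spatial dimensions
    flat = x.permute(1, 0, 2, 3).reshape(x.shape[1], -1)
    p99 = torch.quantile(flat, 0.99, dim=1).clamp(min=1e-8)
    return x / p99.view(1, -1, 1, 1)  # per-channel normalization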