QuarkGluonConvAE - Checkpoint

Convolutional autoencoder trained on 125,000+ quark/gluon jet images (3-channel ECAL/HCAL/Tracks, 125×125 px).

Checkpoint format

{
    "model":     OrderedDict,   # load with model.load_state_dict()
    "optimizer": OrderedDict,   # load with optimizer.load_state_dict()
    "epoch":     int,           # epoch at which the checkpoint was saved
}
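
To resume training rather than just run inference, all three entries come into play. A minimal sketch, assuming the Adam (lr=1e-3) configuration from the Training section below, and treating "epoch" as the epoch at which the checkpoint was written (an assumption):

import torch
from src.autoencoder import ConvAutoencoder

ckpt = torch.load("ae_checkpoint.pth", map_location="cpu")

model = ConvAutoencoder()
model.load_state_dict(ckpt["model"])

# Recreate the optimizer with the settings from the Training section
# (Adam, lr=1e-3) and restore its state (moment estimates, step counts).
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
optimizer.load_state_dict(ckpt["optimizer"])

# Resume from the epoch after the saved one (assumption: "epoch" is the
# epoch at which the checkpoint was written).
start_epoch = ckpt["epoch"] + 1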

Loading example

import torch
from src.autoencoder import ConvAutoencoder

ckpt  = torch.load("ae_checkpoint.pth", map_location="cpu")
model = ConvAutoencoder()
model.load_state_dict(ckpt["model"])
model.eval()

# encode a batch of jet images (B, 3, 125, 125) → (B, 512) latents
jet_images = torch.randn(8, 3, 125, 125)  # stand-in batch; use real preprocessed jets
with torch.no_grad():
    z = model.encode(jet_images)
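
Beyond the latents, the reconstruction error is a common downstream quantity for an autoencoder (e.g. for anomaly scoring). A hypothetical sketch, assuming the model's forward pass returns the reconstructed image, which this card does not confirm:

with torch.no_grad():
    recon = model(jet_images)  # assumption: forward() returns the reconstruction
    # per-jet mean squared reconstruction error, shape (B,)
    per_jet_mse = ((recon - jet_images) ** 2).mean(dim=(1, 2, 3))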

Training

Trained with MSE reconstruction loss using Adam (lr=1e-3) for 30 epochs at batch size 64. Input preprocessing: log1p followed by per-channel 99th-percentile normalization.
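
A minimal sketch of that preprocessing, assuming the 99th percentile is computed per channel over the batch being normalized; the card does not say whether it is instead a fixed dataset-level statistic:

import torch

def preprocess(jet_images: torch.Tensor, q: float = 0.99) -> torch.Tensor:
    # log1p compresses the large dynamic range of calorimeter deposits
    x = torch.log1p(jet_images)
    # per-channel scale: 99th percentile of each channel across the batch
    flat  = x.permute(1, 0, 2, 3).reshape(x.shape[1], -1)
    scale = torch.quantile(flat, q, dim=1).clamp_min(1e-8)
    return x / scale.view(1, -1, 1, 1)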
