# Rectified LP-JEPA Checkpoints

This repository contains checkpoints for Rectified LP-JEPA models trained for 1000 epochs (unless otherwise noted). These models correspond to the results presented in Table 1 of our paper.

For the full training code and implementation details, please refer to our main codebase: [`rectified-lp-jepa`](https://github.com/YilunKuang/rectified-lp-jepa).

## Performance Card

The following table summarizes the performance of the provided checkpoints on the ImageNet-1k validation set (`val_acc1`), along with projector accuracy and training-time sparsity metrics. Checkpoint names encode the `p` and `mu` hyperparameters used during training.

| Name | val_acc1 | val_proj_acc1 | train_l1_sparsity | train_l0_sparsity |
|------|----------|---------------|-------------------|-------------------|
| p1.0-mu1.0 | 85.23 | 79.52 | 0.6474 | 0.8770 |
| p1.0-mu0.75 | 85.58 | 80.38 | 0.5703 | 0.8423 |
| p1.0-mu0.5 | 84.42 | 80.70 | 0.4774 | 0.7986 |
| p1.0-mu0.25 | 85.80 | 80.40 | 0.3752 | 0.7444 |
| p1.0-mu0.0 | 84.76 | 80.14 | 0.2743 | 0.6943 |
| p1.0-mu-0.25 | 85.02 | 79.88 | 0.1982 | 0.6461 |
| p1.0-mu-0.5 | 84.87 | 79.48 | 0.1430 | 0.6012 |
| p1.0-mu-0.75 | 84.48 | 79.42 | 0.1036 | 0.5463 |
| p1.0-mu-1.0 | 84.64 | 79.66 | 0.0745 | 0.4784 |
| p1.0-mu-1.25 | 84.54 | 78.60 | 0.0538 | 0.3675 |
| p1.0-mu-1.50 | 84.98 | 78.38 | 0.0397 | 0.2329 |
| p1.0-mu-1.75 | 84.68 | 77.84 | 0.0300 | 0.1244 |
| p1.0-mu-2.0 | 84.06 | 76.00 | 0.0248 | 0.0742 |
| p1.0-mu-2.25 | 84.20 | 74.50 | 0.0207 | 0.0507 |
| p1.0-mu-2.50 | 84.42 | 73.86 | 0.0134 | 0.0269 |
| p1.0-mu-2.75 | 83.10 | 72.88 | 0.0068 | 0.0119 |
| p1.0-mu-3.0 | 82.66 | 72.32 | 0.0062 | 0.0109 |
| p2.0-mu1.0 | 85.28 | 79.64 | 0.6291 | 0.8678 |
| p2.0-mu0.75 | 85.10 | 80.96 | 0.5619 | 0.8360 |
| p2.0-mu0.5 | 85.40 | 80.02 | 0.4899 | 0.8024 |
| p2.0-mu0.25 | 84.94 | 80.10 | 0.4164 | 0.7685 |
| p2.0-mu0.0 | 84.58 | 80.30 | 0.3424 | 0.7301 |
| p2.0-mu-0.25 | 84.70 | 79.24 | 0.2716 | 0.6955 |
| p2.0-mu-0.5 | 83.98 | 79.88 | 0.2079 | 0.6681 |
| p2.0-mu-0.75 | 84.90 | 79.30 | 0.1529 | 0.6370 |
| p2.0-mu-1.0 | 84.48 | 79.54 | 0.1068 | 0.6042 |
| p2.0-mu-1.25 | 84.50 | 79.00 | 0.0717 | 0.4839 |
| p2.0-mu-1.50 | 84.58 | 77.84 | 0.0469 | 0.2807 |
| p2.0-mu-1.75 | 83.62 | 76.10 | 0.0290 | 0.0898 |
| p2.0-mu-2.0 | 83.06 | 72.38 | 0.0194 | 0.0420 |
| p2.0-mu-2.25 | 82.22 | 71.08 | 0.0158 | 0.0290 |
| p2.0-mu-2.50 | 82.48 | 68.42 | 0.0126 | 0.0206 |
| p2.0-mu-2.75 | 81.14 | 64.02 | 0.0128 | 0.0189 |
| p2.0-mu-3.0 | 78.44 | 56.40 | 0.0047 | 0.0060 |

> **Note:** The `p1.0-mu-3.0` checkpoint is currently at 934 epochs. We plan to update it to the full 1000-epoch version soon.
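The exact definitions of `train_l1_sparsity` and `train_l0_sparsity` live in the main codebase; as an illustrative sketch (not verified against the repo), metrics of this kind are typically computed as the mean absolute activation and the fraction of activations above a small threshold. The function name and the `eps` threshold below are hypothetical:

```python
import numpy as np

def sparsity_metrics(features, eps=1e-3):
    """Illustrative sparsity statistics over a batch of feature vectors.

    l1: mean absolute activation (lower means sparser, up to scale).
    l0: fraction of activations with magnitude above `eps`
        (a soft proxy for the L0 "norm"; lower means sparser).
    """
    abs_feats = np.abs(features)
    l1 = float(abs_feats.mean())
    l0 = float((abs_feats > eps).mean())
    return l1, l0

# Toy example: only 1 of 8 feature dimensions is active
feats = np.zeros((4, 8))
feats[:, 0] = 1.0
l1, l0 = sparsity_metrics(feats)
print(l1, l0)  # 0.125 0.125
```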

## Usage

To use these checkpoints, first clone the original codebase:

```bash
git clone https://github.com/YilunKuang/rectified-lp-jepa
cd rectified-lp-jepa
```

### Loading a Checkpoint for Inference

Each model directory contains a `.ckpt` file and its corresponding `args.json`. You can load them as follows:

```python
import json

import torch
from omegaconf import OmegaConf
from solo.methods import METHODS

# Paths (example for p2.0-mu-1.75)
ckpt_path = "p2.0-mu-1.75-1000epoch/p2.0-mu-1.75-1000epoch.ckpt"
args_path = "p2.0-mu-1.75-1000epoch/args.json"

# 1. Load the training config saved alongside the checkpoint
with open(args_path) as f:
    method_args = json.load(f)
cfg = OmegaConf.create(method_args)

# 2. Instantiate the method class and load the checkpoint weights
print(f"Loading model from {ckpt_path}...")
model = METHODS[method_args["method"]].load_from_checkpoint(
    ckpt_path, strict=False, cfg=cfg
)
model.cuda().eval()

# 3. Inference
# `input_tensor` should be a normalized (N, 3, 224, 224) batch on the GPU
with torch.no_grad():
    features = model.encoder(input_tensor)
    if hasattr(model, "projector"):
        projected = model.projector(features)
```

## Acknowledgements

This codebase is built upon the solo-learn framework. We thank the solo-learn authors for releasing their code under the MIT license.

## Citation

Please cite our work if you find it helpful:

```bibtex
@misc{kuang2026rectifiedlpjepajointembeddingpredictive,
      title={Rectified LpJEPA: Joint-Embedding Predictive Architectures with Sparse and Maximum-Entropy Representations},
      author={Yilun Kuang and Yash Dagade and Tim G. J. Rudner and Randall Balestriero and Yann LeCun},
      year={2026},
      eprint={2602.01456},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2602.01456},
}
```