---
license: mit
tags:
- video-compression
- implicit-neural-representations
- hypernetwork
- pytorch
---

# TeCoNeRV Model Checkpoints
|
|
TeCoNeRV uses hypernetworks to predict implicit neural representation (INR) weights for video compression. A patch-tubelet decomposition enables hypernetworks to scale to high-resolution video prediction. The temporal coherence objective reduces redundancy across consecutive clips, enabling compact residual encoding of per-clip parameters.
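The residual encoding mentioned above can be sketched as follows. This is an illustrative outline only, not the paper's codec: the `quantize` helper, the quantization step, and the function names are all assumptions.

```python
import torch


def quantize(t: torch.Tensor, step: float = 1e-3) -> torch.Tensor:
    """Uniform quantization (illustrative; the real entropy codec is a design choice)."""
    return torch.round(t / step) * step


def encode_clip_params(clip_params: list[torch.Tensor]) -> list[torch.Tensor]:
    """Store the first clip's parameters in full, then only quantized residuals
    against the previous clip. Temporally coherent parameters make these
    residuals small and therefore cheap to store."""
    encoded = [clip_params[0]]
    for prev, cur in zip(clip_params, clip_params[1:]):
        encoded.append(quantize(cur - prev))
    return encoded


def decode_clip_params(encoded: list[torch.Tensor]) -> list[torch.Tensor]:
    """Invert the residual encoding by cumulative summation."""
    decoded = [encoded[0]]
    for residual in encoded[1:]:
        decoded.append(decoded[-1] + residual)
    return decoded
```

The reconstruction error is bounded by the accumulated quantization error, so smoother parameter trajectories across clips allow coarser (cheaper) quantization at the same fidelity.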
|
|
This repository contains hypernetwork training checkpoints for the three model families described in the paper.
|
|
## Model families
|
|
- `nervenc`: baseline NeRVEnc hypernetwork. Predicts full-resolution clip reconstructions directly.
- `patch_tubelet`: proposed patch-tubelet hypernetwork. Predicts parameters for spatial tubelets; full frames are reconstructed by tiling. Supports resolution-independent inference.
- `teconerv`: proposed method. Initialized from a `patch_tubelet` checkpoint and finetuned with a temporal coherence objective.
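For the `patch_tubelet` family, full frames are assembled by tiling per-tubelet reconstructions. A minimal sketch of non-overlapping tiling is below; the grid layout and tensor shapes are assumptions for illustration, not the repository's API:

```python
import torch


def tile_tubelets(tubelets: torch.Tensor, grid_h: int, grid_w: int) -> torch.Tensor:
    """Arrange (grid_h * grid_w, T, C, ph, pw) tubelet reconstructions into
    full frames of shape (T, C, grid_h * ph, grid_w * pw)."""
    n, t, c, ph, pw = tubelets.shape
    assert n == grid_h * grid_w, "tubelet count must match the spatial grid"
    x = tubelets.reshape(grid_h, grid_w, t, c, ph, pw)
    x = x.permute(2, 3, 0, 4, 1, 5)  # -> (T, C, grid_h, ph, grid_w, pw)
    return x.reshape(t, c, grid_h * ph, grid_w * pw)
```

Because each tubelet covers a fixed spatial patch, changing the grid size at inference changes the output resolution without retraining, which is what makes the decomposition resolution-independent.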
|
|
## Getting started
|
|
See the [GitHub repository](https://github.com/namithap10/TeCoNeRV) for full documentation on setup, training, and evaluation. Checkpoint download instructions are in `docs/models.md`.
|
|
```bash
git lfs install
git clone https://huggingface.co/namithap/teconerv-models
```
|
|
## Citation
|
|
```bibtex
@article{padmanabhan2026teconerv,
  title={TeCoNeRV: Leveraging Temporal Coherence for Compressible Neural Representations for Videos},
  author={Padmanabhan, Namitha and Gwilliam, Matthew and Shrivastava, Abhinav},
  journal={arXiv preprint arXiv:2602.16711},
  year={2026}
}
```
|
|