---
license: mit
tags:
- video-compression
- implicit-neural-representations
- hypernetwork
- pytorch
---
# TeCoNeRV Model Checkpoints
TeCoNeRV uses hypernetworks to predict implicit neural representation (INR) weights for video compression. A patch-tubelet decomposition enables hypernetworks to scale to high-resolution video prediction. The temporal coherence objective reduces redundancy across consecutive clips, enabling compact residual encoding of per-clip parameters.
This repository contains hypernetwork training checkpoints for the three model families described in the paper.
## Model families
- `nervenc` — Baseline NeRVEnc hypernetwork. Predicts full-resolution clip reconstructions directly.
- `patch_tubelet` — Proposed patch-tubelet hypernetwork. Predicts parameters for spatial tubelets; full frames are reconstructed by tiling. Supports resolution-independent inference.
- `teconerv` — Proposed method. Initialized from a `patch_tubelet` checkpoint and finetuned with the temporal coherence objective.
## Getting started
See the [GitHub repository](https://github.com/namithap10/TeCoNeRV) for full documentation on setup, training, and evaluation. Checkpoint download instructions are in `docs/models.md`.
```bash
git lfs install
git clone https://huggingface.co/namithap/teconerv-models
```
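Once cloned, the checkpoints can presumably be loaded with standard PyTorch serialization. The snippet below is a minimal sketch of that round-trip, assuming the checkpoints are ordinary `torch.save` payloads containing a state dict; the filename and parameter key shown are illustrative placeholders, not the actual layout (see `docs/models.md` for that).

```python
import io
import torch

# Illustrative stand-in for a real checkpoint file such as one under
# teconerv-models/ — the key name "hypernet.fc.weight" is hypothetical.
dummy_ckpt = {"hypernet.fc.weight": torch.zeros(4, 4)}

# Serialize to an in-memory buffer to keep the sketch self-contained;
# for a downloaded checkpoint you would pass its file path instead.
buf = io.BytesIO()
torch.save(dummy_ckpt, buf)
buf.seek(0)

# map_location="cpu" lets you inspect GPU-trained checkpoints on any machine.
state = torch.load(buf, map_location="cpu")
print(sorted(state.keys()))
```

Loading on CPU first and moving tensors to the target device afterwards avoids device-mismatch errors when a checkpoint was saved from a GPU run.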
## Citation
```bibtex
@article{padmanabhan2026teconerv,
title={TeCoNeRV: Leveraging Temporal Coherence for Compressible Neural Representations for Videos},
author={Padmanabhan, Namitha and Gwilliam, Matthew and Shrivastava, Abhinav},
journal={arXiv preprint arXiv:2602.16711},
year={2026}
}
```