---
license: mit
tags:
- vision
- image-regression
- building-age
- clip
- ordinal-regression
library_name: pytorch
pipeline_tag: image-feature-extraction
---
# YearCLIP: Beyond Memorization
This is the official checkpoint for **YearCLIP**, introduced in the paper [Beyond Memorization: A Multi-Modal Ordinal Regression Benchmark to Expose Popularity Bias in Vision-Language Models](https://arxiv.org/abs/2512.21337).
## Model Details
- **Model Architecture**: YearCLIP (CLIP-based with Ordinal Regression Head)
- **Task**: Building Age Estimation (Year Prediction)
- **Dataset**: [YearGuessr](https://huggingface.co/datasets/Morris0401/Year-Guessr-Dataset)
- **Performance**: MAE 39.26 years (on YearGuessr Test Split)
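
The MAE above is measured in years between predicted and ground-truth construction dates. A minimal sketch of how such an estimate and metric could be computed from an ordinal/probabilistic year head; the bin grid and output format here are illustrative assumptions, not the actual YearCLIP API (see the GitHub repo for the real head):

```python
# Illustrative sketch (NOT the actual YearCLIP code): decode a year estimate
# from per-bin logits via a softmax-weighted mean, then score with MAE in years.
import numpy as np

def expected_year(logits: np.ndarray, year_bins: np.ndarray) -> float:
    """Softmax over year bins, then probability-weighted mean year."""
    z = logits - logits.max()          # stabilize the softmax
    p = np.exp(z) / np.exp(z).sum()
    return float((p * year_bins).sum())

def mae_years(pred: np.ndarray, true: np.ndarray) -> float:
    """Mean absolute error in years, as reported for the test split."""
    return float(np.mean(np.abs(pred - true)))

year_bins = np.arange(1800, 2025)      # assumed bin grid for illustration
logits = np.zeros_like(year_bins, dtype=float)
logits[year_bins == 1920] = 5.0        # a head that peaks near 1920
print(expected_year(logits, year_bins))
print(mae_years(np.array([1915.0, 1960.0]), np.array([1920.0, 1950.0])))
```

A sharply peaked head collapses the weighted mean onto the peak bin; a flatter distribution pulls the estimate toward the middle of the bin range.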
## Usage
Please refer to our [GitHub Repository](https://github.com/Sytwu/BeyondMemo) for installation and inference instructions.
To download the checkpoint manually in Python:
```python
from huggingface_hub import hf_hub_download
checkpoint_path = hf_hub_download(repo_id="Morris0401/YearCLIP", filename="yearclip_best.pt")
print(f"Model downloaded to: {checkpoint_path}")
```
## Citation
If you find this model helpful, please consider citing:
```bibtex
@misc{szutu2025memorizationmultimodalordinalregression,
  title={Beyond Memorization: A Multi-Modal Ordinal Regression Benchmark to Expose Popularity Bias in Vision-Language Models},
  author={Li-Zhong Szu-Tu and Ting-Lin Wu and Chia-Jui Chang and He Syu and Yu-Lun Liu},
  year={2025},
  eprint={2512.21337},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2512.21337},
}
```