---
pretty_name: CatVision
language:
- en
size_categories:
- 100K<n<1M
license: other
task_categories:
- other
tags:
- computer-vision
- image-processing
- egocentric-vision
- representation-learning
- self-supervised-learning
- robustness
- vision-transformer
- convolutional-neural-networks
- biologically-inspired-vision
- neuroscience
- image-pairs
dataset_info:
  features:
  - name: pair_id
    dtype: string
  - name: video_id
    dtype: string
  - name: frame_filename
    dtype: string
  - name: human_frame
    dtype: string
  - name: cat_frame
    dtype: string
  - name: human_width
    dtype: int32
  - name: human_height
    dtype: int32
  - name: cat_width
    dtype: int32
  - name: cat_height
    dtype: int32
  - name: human_original_size
    dtype: int64
  - name: cat_original_size
    dtype: int64
  - name: human_compressed_size
    dtype: int64
  - name: cat_compressed_size
    dtype: int64
  - name: human_compression_ratio
    dtype: float32
  - name: cat_compression_ratio
    dtype: float32
  - name: image_format
    dtype: string
  - name: human_original_path
    dtype: string
  - name: cat_original_path
    dtype: string
  splits:
  - name: train_video1
    num_bytes: 277438635
    num_examples: 2579
  - name: train_video10
    num_bytes: 731631560
    num_examples: 3000
  download_size: 2015873708
  dataset_size: 1009070195
configs:
- config_name: default
  data_files:
  - split: train_video1
    path: data/train_video1-*
  - split: train_video10
    path: data/train_video10-*
---
# Purrturbed but Stable: Human–Cat Paired Egocentric Frames
This dataset contains strictly paired image frames that support cross-species comparisons between human-style and cat-style visual inputs. The corpus is constructed from point-of-view videos of domestic cats, processed with a biologically informed cat-vision filter that approximates key properties of feline early vision.
The dataset was introduced in the paper:
> Purrturbed but Stable: Human–Cat Invariant Representations Across CNNs, ViTs and Self-Supervised ViTs, 2025.
Paper website: [Purrturbed but Stable](https://aryashah.me/Purrturbed-but-Stable)
Python package: [CatVision](https://pypi.org/project/catvision/)
Read the paper: [arXiv](https://arxiv.org/abs/2511.02404)
Please cite the paper if you use this dataset in your work.
```bibtex
@misc{shah2025purrturbedstablehumancatinvariant,
title = {Purrturbed but Stable: Human-Cat Invariant Representations Across CNNs, ViTs and Self-Supervised ViTs},
author = {Arya Shah and Vaibhav Tripathi},
year = {2025},
eprint = {2511.02404},
archivePrefix= {arXiv},
primaryClass = {cs.CV},
url = {https://arxiv.org/abs/2511.02404}
}
```
## Dataset summary
- **Modality**
  - RGB images.
- **Domains**
  - Human-like original frames.
  - Cat-vision filtered frames.
- **Source**
  - Public point-of-view videos of domestic cats recorded with a neck-mounted camera.
- **Structure**
  - 191 videos.
  - Over 300,000 human–cat frame pairs.
  - One-to-one pairing at the filename level.
  - Mirrored directory structures across the human and cat domains.
- **Use case**
  - Analysis of representation invariances under cross-species viewing conditions.
  - Comparison across CNNs, ViTs, and self-supervised ViTs.
  - Robustness and invariance benchmarks with strict pairing.

The core design principle is to hold scene content fixed while changing the visual domain. Every frame in the human domain has at most one corresponding cat-vision frame. Pairs with missing or corrupted counterparts are excluded. Identifiers are stable across the pipeline to enable reproducible joins and cross-model analyses.
## Directory structure
The dataset is organized as follows:
- **`frames/`**
  - Original video frames in human-like form.
  - Subdirectories per video: `video1/`, `video2/`, …, `video191/`.
  - Inside each subdirectory: individual JPEG frames.
- **`cat_frames/`**
  - Cat-vision filtered frames produced by the biologically motivated transformation.
  - Mirrored subdirectory structure: `video1/`, `video2/`, …, `video191/`.
  - File names match the corresponding entries in `frames/`.
Example layout:
```text
dataset_root/
  frames/
    video1/
      frame_000001.jpg
      frame_000002.jpg
      ...
  cat_frames/
    video1/
      frame_000001.jpg
      frame_000002.jpg
      ...
  cat_vision_pairs_metadata.csv
```
The metadata CSV file lists only those pairs for which both domains are present.
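The strict pairing can be reproduced directly from this layout. Below is a minimal sketch that enumerates frames present in both domains; the helper name `enumerate_pairs` and the tiny mock directory are illustrative, not part of the dataset tooling:

```python
from pathlib import Path
import tempfile

def enumerate_pairs(dataset_root: Path):
    """Yield (video_id, frame_filename) for frames present in BOTH domains."""
    human_root = dataset_root / "frames"
    cat_root = dataset_root / "cat_frames"
    for video_dir in sorted(human_root.iterdir()):
        cat_dir = cat_root / video_dir.name
        if not cat_dir.is_dir():
            continue
        cat_names = {p.name for p in cat_dir.glob("*.jpg")}
        for frame in sorted(video_dir.glob("*.jpg")):
            if frame.name in cat_names:  # keep only strictly paired frames
                yield video_dir.name, frame.name

# Build a tiny mock layout: one paired frame and one frame missing its
# cat-vision counterpart (which must therefore be excluded).
root = Path(tempfile.mkdtemp())
for sub in ("frames/video1", "cat_frames/video1"):
    (root / sub).mkdir(parents=True)
(root / "frames/video1/frame_000001.jpg").touch()
(root / "cat_frames/video1/frame_000001.jpg").touch()
(root / "frames/video1/frame_000002.jpg").touch()  # unpaired

pairs = list(enumerate_pairs(root))  # only the strict pair survives
```

This mirrors the exclusion rule above: a frame with a missing counterpart never becomes a pair.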
## Metadata CSV
We provide a CSV file that encodes stable metadata for each paired frame:
- **File**
  - `cat_vision_pairs_metadata.csv`
- **Columns**
  - `pair_id` – stable identifier for the pair, combining the video id and frame filename.
  - `video_id` – video identifier, for example `video42`.
  - `frame_filename` – frame file name, for example `frame_000123.jpg`.
  - `human_frame` – relative path to the human-like frame, for example `frames/video42/frame_000123.jpg`.
  - `cat_frame` – relative path to the cat-vision frame, for example `cat_frames/video42/frame_000123.jpg`.

Paths are relative to the dataset root directory. The CSV includes only pairs where both the human frame and the cat-vision frame are present and valid RGB images.
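A short sketch of consuming the CSV with the Python standard library; the sample row, the exact `pair_id` format, and the dataset root path are illustrative assumptions:

```python
import csv
import io
from pathlib import PurePosixPath

# One illustrative row in the documented schema (pair_id format assumed).
sample = """pair_id,video_id,frame_filename,human_frame,cat_frame
video42_frame_000123,video42,frame_000123.jpg,frames/video42/frame_000123.jpg,cat_frames/video42/frame_000123.jpg
"""

rows = list(csv.DictReader(io.StringIO(sample)))
row = rows[0]

# Paths in the CSV are relative to the dataset root; resolve against it.
dataset_root = PurePosixPath("/data/catvision")  # hypothetical root
human_path = dataset_root / row["human_frame"]
cat_path = dataset_root / row["cat_frame"]

# The one-to-one pairing guarantees matching filenames across domains.
assert human_path.name == cat_path.name == row["frame_filename"]
```

Because pairing is enforced at the filename level, joining the two domains is a pure path operation; no fuzzy matching is needed.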
## Cat vision filter
The cat-vision frames in `cat_frames/` are generated by a biologically informed transformation that approximates several aspects of feline early vision and optics. The implementation is provided as the script `cat_vision_filter.py` in this repository.
The filter models:
- **Spectral sensitivity with rod dominance**
  - Smooth spectral sensitivity curves for short-wavelength cones, long-wavelength cones, and rods.
  - Approximate peaks around 450 nm for S cones, 556 nm for L cones, and 498 nm for rods.
  - Rod-dominated weighting with a rod–cone ratio of 25:1.
  - Reduced long-wavelength (red) sensitivity and enhanced blue–green sensitivity.
- **Spatial acuity and peripheral falloff**
  - Frequency-domain low-pass filtering that reduces high spatial frequencies.
  - Effective spatial acuity set to about one sixth of typical human high-contrast acuity.
  - Center–surround acuity mapping that keeps the center relatively sharp and blurs the periphery.
- **Geometric optics and field of view**
  - Vertical slit-pupil approximation with a 3:1 vertical aspect ratio.
  - Barrel-like distortion that broadens the effective field of view.
  - Field-of-view parameters of around 200 degrees horizontal and 140 degrees vertical.
- **Temporal sensitivity and flicker fusion**
  - Temporal response that peaks near 10 Hz.
  - Reduced gain above roughly 50 to 60 Hz, consistent with the elevated flicker-fusion threshold in cats.
  - Temporal processing operates on sequences of frames and modulates motion-related changes.
- **Motion sensitivity with horizontal bias**
  - Optical-flow estimation with Lucas–Kanade style updates.
  - Motion magnitude and direction are combined with a bias toward horizontal motion.
  - Direction-dependent gain favors horizontally oriented motion vectors.
- **Tapetum lucidum low-light enhancement**
  - Luminance-dependent gain modulation that boosts responses in low-light scenes.
  - An additional blue–green tint that mimics the reflective properties of the tapetum lucidum.
The filter is an engineering approximation rather than a fully detailed model of the optics, retina, and cortex. It omits wavelength-dependent blur, detailed retinal mosaics, chromatic aberrations, and dynamic pupil control. It is intended as a biologically motivated stressor that modifies images in a way that is qualitatively consistent with feline visual characteristics while remaining computationally tractable.
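In that spirit, two of the components above (rod-dominated spectral weighting and frequency-domain low-pass filtering) can be sketched in a few lines of numpy. The channel weights and cutoff below are illustrative placeholders, not the coefficients used by `cat_vision_filter.py`:

```python
import numpy as np

def cat_spectral_weighting(rgb: np.ndarray, rod_cone_ratio: float = 25.0) -> np.ndarray:
    """Rod-dominated luminance mix with reduced red and boosted blue-green.

    `rgb` is an HxWx3 float array in [0, 1]. Channel weights are illustrative.
    """
    # Cone-like contribution: suppress long-wavelength (red) input.
    cone = 0.10 * rgb[..., 0] + 0.55 * rgb[..., 1] + 0.35 * rgb[..., 2]
    # Rod-like contribution: broadband, peaking in the blue-green range.
    rod = 0.05 * rgb[..., 0] + 0.50 * rgb[..., 1] + 0.45 * rgb[..., 2]
    w = rod_cone_ratio / (rod_cone_ratio + 1.0)  # 25:1 -> ~0.96 rod weight
    mono = w * rod + (1.0 - w) * cone
    # Blue-green tint loosely standing in for tapetum-related coloration.
    return np.clip(np.stack([0.6 * mono, 1.0 * mono, 0.9 * mono], axis=-1), 0.0, 1.0)

def lowpass_acuity(img: np.ndarray, cutoff_frac: float = 1.0 / 6.0) -> np.ndarray:
    """Frequency-domain low-pass keeping roughly the lowest `cutoff_frac`
    of spatial frequencies (~1/6 of human high-contrast acuity)."""
    h, wdt = img.shape[:2]
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(wdt)[None, :]
    mask = np.hypot(fy, fx) <= cutoff_frac * 0.5  # radial cutoff in cycles/px
    out = np.empty_like(img)
    for c in range(img.shape[-1]):
        spec = np.fft.fft2(img[..., c])
        out[..., c] = np.real(np.fft.ifft2(spec * mask))
    return np.clip(out, 0.0, 1.0)

frame = np.random.default_rng(0).random((64, 64, 3))
filtered = lowpass_acuity(cat_spectral_weighting(frame))
```

The real filter chains more stages (slit pupil, barrel distortion, temporal and motion processing), but the two steps above capture its overall shape: a spectral remix followed by an acuity-limiting blur.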
## Intended uses
- **Primary uses**
  - Studying representational alignment and invariance across models and architectures.
  - Comparing CNNs, supervised ViTs, and self-supervised ViTs under cross-species visual conditions.
  - Probing how models respond to changes in low-level statistics, spectral content, and motion cues that mimic feline vision.
- **Potential downstream tasks**
  - Analysis of invariance with strictly paired inputs.
  - Egocentric vision studies using animal-mounted cameras.
  - Robustness analysis for models under structured shifts in early vision.

The dataset does not include semantic labels. Models are evaluated using representations extracted from frozen encoders and analyzed with similarity and alignment measures.
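One common alignment measure for such frozen-encoder features is linear CKA; a minimal numpy sketch (the paper's exact metrics may differ, and the feature matrices below are synthetic):

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear CKA between two feature matrices of shape (n_samples, dim).

    Rows must be aligned: row i of X and Y must come from the same pair_id.
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # ||X^T Y||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return float(num / den)

rng = np.random.default_rng(0)
human_feats = rng.standard_normal((100, 32))  # stand-in for human-frame features
cat_feats = human_feats @ rng.standard_normal((32, 32))  # linearly related features
cka_same = linear_cka(human_feats, human_feats)  # identical inputs -> 1.0
cka_related = linear_cka(human_feats, cat_feats)
cka_random = linear_cka(human_feats, rng.standard_normal((100, 32)))
```

Because pairing is strict, the row alignment required by such indices comes for free from `pair_id`.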
## Data collection and ethics
- Frames are derived from publicly available, in-the-wild recordings of domestic cats with neck-mounted cameras.
- No personal identifiers are present in the dataset as curated for the experiments.
- Frames are used only for representational analyses and not for identity recognition.
Users are responsible for ensuring that their own use complies with local regulations and with the terms of the original video sources.