---
license: cc-by-4.0
task_categories:
- image-classification
- question-answering
- object-detection
tags:
- benchmark
- camera_parameters
- exposure
size_categories:
- 10K<n<100K
language:
- en
configs:
- config_name: default
  data_files:
  - split: test
    path: "data.csv"
---

# SNAP Benchmark

**Code and annotations**: [https://github.com/ykotseruba/SNAP](https://github.com/ykotseruba/SNAP)

**SNAP** (stands for **S**hutter speed, ISO se**N**sitivity, and **AP**erture) is a new benchmark consisting of images of objects captured under controlled lighting conditions and with densely sampled camera settings.

This benchmark allows testing the effects of capture bias, i.e., camera settings and illumination, on the performance of vision algorithms.

SNAP contains **37,558** images of 100 scenes (10 scenes for each of 10 object categories), uniformly distributed across sensor settings, along with annotations for the following tasks (a loading sketch follows the list):

- image classification;
- object detection;
- instance segmentation;
- visual question answering (VQA).
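
The annotations can be loaded with the Hugging Face `datasets` library using the default config declared in the YAML header above (a single `test` split backed by `data.csv`). Below is a minimal sketch; the Hub repo ID `ykotseruba/SNAP` is an assumption, so substitute this dataset's actual repository ID.

```python
from datasets import load_dataset

# Minimal sketch: load the default config from the YAML header,
# i.e., the single `test` split backed by data.csv.
# NOTE: the repo ID below is an assumption -- replace it with this
# dataset's actual Hugging Face Hub ID.
snap = load_dataset("ykotseruba/SNAP", split="test")

print(snap)     # number of rows and the column names parsed from data.csv
print(snap[0])  # first annotation record as a dict
```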

- **Curated by:** Iuliia Kotseruba
- **Shared by:** Iuliia Kotseruba
- **Language(s) (NLP):** English
- **License:** CC BY 4.0