---
dataset_info:
  features:
  - name: subject_idx
    dtype: string
  - name: model
    dtype: string
  - name: image
    dtype: image
  splits:
  - name: train
    num_bytes: 4842899994.08
    num_examples: 1460
  download_size: 5131790381
  dataset_size: 4842899994.08
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
|
|
|
|
|
# Dataset Card for PairCams |
|
|
|
|
|
<!-- Provide a quick summary of the dataset. --> |
|
|
|
|
|
PairCams consists of 730 image pairs; each pair shows the same subject photographed twice, once with a smartphone and once with an older digital camera.
|
|
|
|
|
## Dataset Details |
|
|
|
|
|
### Dataset Description |
|
|
|
|
|
<!-- Provide a longer summary of what this dataset is. --> |
|
|
|
|
|
|
|
|
|
|
|
- **Curated by:** Ryan Ramos, Vladan Stojnic, Giorgos Kordopatis-Zilos, Yuta Nakashima, Giorgos Tolias, Noa Garcia |
|
|
- **Funded by:** Dataset collection was supported by JSPS KAKENHI No. JP23H00497 and JP22K12091, JST CREST Grant No. JPMJCR20D3, and JST FOREST Grant No. JPMJFR216O. |
|
|
<!-- - **Shared by [optional]:** [More Information Needed] --> |
|
|
<!-- - **Language(s) (NLP):** [More Information Needed] --> |
|
|
- **License:** [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/) |
|
|
|
|
|
### Dataset Sources |
|
|
|
|
|
<!-- Provide the basic links for the dataset. --> |
|
|
|
|
|
- **Repository:** https://github.com/ryan-caesar-ramos/visual-encoder-traces |
|
|
- **Paper:** [Processing and acquisition traces in visual encoders: What does CLIP know about your camera?](https://openaccess.thecvf.com/content/ICCV2025/html/Ramos_Processing_and_acquisition_traces_in_visual_encoders_What_does_CLIP_ICCV_2025_paper.html) |
|
|
<!-- - **Demo [optional]:** [More Information Needed] --> |
|
|
- **Summary thread:** [by Vladan Stojnić, co-first author](https://bsky.app/profile/stojnicv.xyz/post/3lwo7xswiu22n) |
|
|
|
|
|
## Uses |
|
|
|
|
|
<!-- Address questions around how the dataset is intended to be used. --> |
|
|
|
|
|
### Direct Use |
|
|
|
|
|
<!-- This section describes suitable use cases for the dataset. --> |
|
|
|
|
|
This dataset is intended to be used for near-duplicate image retrieval. |
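
As an illustration only, and not the evaluation protocol from the paper, the sketch below embeds the two images of each pair with CLIP and checks whether each query's nearest neighbour shares its subject_idx. The Hub repository ID and the CLIP checkpoint are assumptions, and each pair is split into query and gallery sides arbitrarily rather than by camera family.

```python
# Hedged sketch of a pair-retrieval evaluation; not the authors' official protocol.
# Assumptions: the Hub repository ID and the CLIP checkpoint. Batching/streaming
# are kept minimal for brevity, so this holds all decoded images in memory.
from collections import defaultdict

import torch
from datasets import load_dataset
from transformers import CLIPModel, CLIPProcessor

ds = load_dataset("ryan-caesar-ramos/PairCams", split="train")  # repo ID assumed

# Group the two images of each subject together, then split each pair
# arbitrarily into a query side and a gallery side.
by_subject = defaultdict(list)
for ex in ds:
    by_subject[ex["subject_idx"]].append(ex)
queries = [pair[0] for pair in by_subject.values()]
gallery = [pair[1] for pair in by_subject.values()]

clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").eval()
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(images, batch_size=32):
    """Return L2-normalised CLIP image embeddings."""
    feats = []
    for i in range(0, len(images), batch_size):
        inputs = processor(images=images[i:i + batch_size], return_tensors="pt")
        with torch.no_grad():
            feats.append(clip.get_image_features(**inputs))
    return torch.nn.functional.normalize(torch.cat(feats), dim=-1)

q = embed([ex["image"] for ex in queries])
g = embed([ex["image"] for ex in gallery])

# A query counts as a hit when its nearest gallery image shares its subject_idx.
top1 = (q @ g.T).argmax(dim=-1)
hits = sum(queries[i]["subject_idx"] == gallery[j]["subject_idx"]
           for i, j in enumerate(top1.tolist()))
print(f"top-1 pair retrieval accuracy: {hits / len(queries):.3f}")
```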
|
|
|
|
|
### Out-of-Scope Use |
|
|
|
|
|
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> |
|
|
|
|
|
This dataset is not intended to be used for anything outside of near-duplicate image retrieval. |
|
|
|
|
|
## Dataset Structure |
|
|
|
|
|
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> |
|
|
|
|
|
- subject_idx: an ID identifying the image's subject; the two images that share a subject_idx form a pair
|
|
- model: the model of the camera used to capture the image
|
|
- image: the image |
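
A minimal loading sketch follows; the Hub repository ID is an assumption, so substitute the actual one. Rows that share a subject_idx form a pair.

```python
# Minimal sketch: load the dataset and reconstruct pairs from subject_idx.
# The repository ID below is an assumption; substitute the actual Hub repo ID.
from collections import defaultdict
from datasets import load_dataset

ds = load_dataset("ryan-caesar-ramos/PairCams", split="train")  # repo ID assumed

# Column access avoids decoding every image up front; map each subject to the
# row indices and camera models of its two images.
pairs = defaultdict(list)
for row, (subject, camera) in enumerate(zip(ds["subject_idx"], ds["model"])):
    pairs[subject].append((row, camera))

# Inspect one pair; the image is decoded to a PIL image only when accessed.
subject, members = next(iter(pairs.items()))
for row, camera in members:
    print(subject, camera, ds[row]["image"].size)
```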
|
|
|
|
|
## Dataset Creation |
|
|
|
|
|
### Curation Rationale |
|
|
|
|
|
<!-- Motivation for the creation of this dataset. --> |
|
|
|
|
|
We created this dataset because we needed not just any near-duplicate retrieval benchmark, but specifically one in which the images come from two distinct families of cameras.
|
|
|
|
|
### Source Data |
|
|
|
|
|
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> |
|
|
|
|
|
We captured these images ourselves using our own cameras. |
|
|
|
|
|
#### Data Collection and Processing |
|
|
|
|
|
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> |
|
|
|
|
|
We captured these images by photographing each subject under similar conditions with two cameras in succession, once with a smartphone and once with an older digital camera.
|
|
|
|
|
#### Who are the source data producers? |
|
|
|
|
|
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> |
|
|
|
|
|
The authors of the [paper](https://openaccess.thecvf.com/content/ICCV2025/html/Ramos_Processing_and_acquisition_traces_in_visual_encoders_What_does_CLIP_ICCV_2025_paper.html) this dataset was proposed in are the original producers of this dataset, as we captured all photos ourselves. |
|
|
|
|
|
### Annotations [optional] |
|
|
|
|
|
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> |
|
|
|
|
|
#### Annotation process |
|
|
|
|
|
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> |
|
|
|
|
|
We used [Pillow](https://pillow.readthedocs.io/en/stable/) to extract each image's EXIF metadata and identify the camera that captured it.
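
For illustration, here is a minimal sketch of reading a camera model from EXIF with Pillow. It mirrors the step described above in spirit only, not the authors' exact script, and the file path below is hypothetical.

```python
# Hedged sketch: read the EXIF 'Model' tag (the camera model) with Pillow.
from PIL import Image
from PIL.ExifTags import TAGS

def camera_model(path):
    """Return the EXIF 'Model' tag of an image file, or None if absent."""
    with Image.open(path) as img:
        exif = img.getexif()
    for tag_id, value in exif.items():
        if TAGS.get(tag_id) == "Model":
            return str(value).strip()
    return None

print(camera_model("example.jpg"))  # illustrative path
```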
|
|
|
|
|
#### Who are the annotators? |
|
|
|
|
|
<!-- This section describes the people or systems who created the annotations. --> |
|
|
|
|
|
Annotations were provided by the authors of the [paper](https://openaccess.thecvf.com/content/ICCV2025/html/Ramos_Processing_and_acquisition_traces_in_visual_encoders_What_does_CLIP_ICCV_2025_paper.html) this dataset was proposed in. |
|
|
|
|
|
#### Personal and Sensitive Information |
|
|
|
|
|
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> |
|
|
|
|
|
As far as we are aware, there is no personal or sensitive information contained within this dataset. |
|
|
|
|
|
## Bias, Risks, and Limitations |
|
|
|
|
|
<!-- This section is meant to convey both technical and sociotechnical limitations. --> |
|
|
|
|
|
Given that there are no human subjects in this dataset, we are not aware of any risks targeting humans. However, the images in this dataset are heavily biased in terms of geographic location.
|
|
|
|
|
### Recommendations |
|
|
|
|
|
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> |
|
|
|
|
|
Because of this geographic bias, we do not recommend using this dataset in any application outside its original scope if that application requires unbiased data.
|
|
|
|
|
## Citation [optional] |
|
|
|
|
|
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> |
|
|
|
|
|
**BibTeX:** |
|
|
|
|
|
``` |
|
|
@InProceedings{Ramos_2025_ICCV, |
|
|
author = {Ramos, Ryan and Stojni\'c, Vladan and Kordopatis-Zilos, Giorgos and Nakashima, Yuta and Tolias, Giorgos and Garcia, Noa}, |
|
|
title = {Processing and acquisition traces in visual encoders: What does CLIP know about your camera?}, |
|
|
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)}, |
|
|
month = {October}, |
|
|
year = {2025}, |
|
|
pages = {17056-17066} |
|
|
} |
|
|
``` |
|
|
|
|
|
**APA:** |
|
|
|
|
|
Ramos, R., Stojnić, V., Kordopatis-Zilos, G., Nakashima, Y., Tolias, G., & Garcia, N. (2025). Processing and acquisition traces in visual encoders: What does CLIP know about your camera? In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) (pp. 17056-17066).
|
|
|
|
|
<!-- ## Glossary [optional] --> |
|
|
|
|
|
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> |
|
|
|
|
|
<!-- [More Information Needed] --> |
|
|
|
|
|
<!-- ## More Information [optional] --> |
|
|
|
|
|
<!-- [More Information Needed] --> |
|
|
|
|
|
## Dataset Card Authors |
|
|
|
|
|
Ryan Ramos |
|
|
|
|
|
## Dataset Card Contact |
|
|
|
|
|
ryanramos@is.ids.osaka-u.ac.jp |