Dataset Card for PairCams

PairCams consists of 730 pairs of images of the same subject, with one photo captured with a smartphone and the other captured with an older digital camera.

Dataset Details

Dataset Description

  • Curated by: Ryan Ramos, Vladan Stojnic, Giorgos Kordopatis-Zilos, Yuta Nakashima, Giorgos Tolias, Noa Garcia
  • Funded by: Dataset collection was supported by JSPS KAKENHI No. JP23H00497 and JP22K12091, JST CREST Grant No. JPMJCR20D3, and JST FOREST Grant No. JPMJFR216O.
  • License: CC BY 4.0

Uses

Direct Use

This dataset is intended to be used for near-duplicate image retrieval.
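As a sketch of what a near-duplicate retrieval benchmark measures, the snippet below computes recall@1 between paired embeddings: each query image (e.g. the smartphone photo) should retrieve its paired counterpart (the older-camera photo) as its nearest neighbor. The random embeddings and the `recall_at_1` helper are illustrative stand-ins, not part of any official evaluation protocol.

```python
import numpy as np

def recall_at_1(query_emb: np.ndarray, gallery_emb: np.ndarray) -> float:
    """Fraction of queries whose nearest gallery image (by cosine
    similarity) is the paired image at the same index."""
    # L2-normalize so the dot product equals cosine similarity.
    q = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    g = gallery_emb / np.linalg.norm(gallery_emb, axis=1, keepdims=True)
    sims = q @ g.T                    # (num_queries, num_gallery)
    nearest = sims.argmax(axis=1)     # best-matching gallery index per query
    return float((nearest == np.arange(len(q))).mean())

# Sanity check: identical embeddings retrieve their own pair perfectly.
emb = np.random.default_rng(0).normal(size=(10, 8))
r1 = recall_at_1(emb, emb)
print(r1)  # 1.0
```

In practice the two embedding matrices would come from encoding the smartphone and older-camera images with a feature extractor such as CLIP.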

Out-of-Scope Use

This dataset is not intended to be used for anything outside of near-duplicate image retrieval.

Dataset Structure

  • subject_idx: an ID identifying the image subject; images sharing the same subject_idx form a pair
  • model: the camera used to capture the image
  • image: the image
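A minimal sketch of recovering the pairs from these fields; the toy records below stand in for real dataset rows (in the actual dataset the `image` field holds image data, and rows would typically be loaded through the `datasets` library rather than built by hand).

```python
from collections import defaultdict

def pair_by_subject(records):
    """Group records by subject_idx; each complete group is one
    (smartphone, older-camera) pair of the same subject."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec["subject_idx"]].append(rec)
    # Keep only subjects for which both photos are present.
    return {sid: recs for sid, recs in groups.items() if len(recs) == 2}

# Toy records; the model names and image strings are placeholders.
records = [
    {"subject_idx": 0, "model": "smartphone-A", "image": "img0_a"},
    {"subject_idx": 0, "model": "old-camera-B", "image": "img0_b"},
    {"subject_idx": 1, "model": "smartphone-A", "image": "img1_a"},
    {"subject_idx": 1, "model": "old-camera-B", "image": "img1_b"},
]
pairs = pair_by_subject(records)
print(len(pairs))  # 2
```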

Dataset Creation

Curation Rationale

We created this dataset because we needed not just a near-duplicate retrieval benchmark, but one in which the images come from two distinct families of cameras.

Source Data

We captured these images ourselves using our own cameras.

Data Collection and Processing

We photographed each subject twice in succession under similar conditions, once with a smartphone and once with an older digital camera.

Who are the source data producers?

The authors of the paper this dataset was proposed in are the original producers of this dataset, as we captured all photos ourselves.

Annotations

Annotation process

We used Pillow to extract the EXIF metadata of the images in order to identify the cameras used to capture the images.
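A minimal sketch of that step with Pillow, assuming the standard EXIF `Model` tag (0x0110); the exact tags and helper names the authors used are not specified here. The round-trip below writes a tiny JPEG with a placeholder model name and reads it back.

```python
import io
from PIL import Image

EXIF_MODEL = 0x0110  # standard EXIF tag ID for the camera model

def camera_model(img: Image.Image):
    """Return the camera model stored in the image's EXIF data, if any."""
    return img.getexif().get(EXIF_MODEL)

# Round-trip check: write a tiny JPEG with a Model tag, then read it back.
exif = Image.Exif()
exif[EXIF_MODEL] = "ExampleCam 3000"  # placeholder model name
buf = io.BytesIO()
Image.new("RGB", (4, 4)).save(buf, format="JPEG", exif=exif)
buf.seek(0)
model = camera_model(Image.open(buf))
print(model)
```

The same `camera_model` call on a real photo returns whatever model string the capturing device wrote into its EXIF metadata.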

Who are the annotators?

Annotations were provided by the authors of the paper this dataset was proposed in.

Personal and Sensitive Information

As far as we are aware, there is no personal or sensitive information contained within this dataset.

Bias, Risks, and Limitations

Given that there are no human subjects in this dataset, we are not aware of any risks targeting humans. However, the images in this dataset are heavily biased in terms of geographic location.

Recommendations

Because of the geographical bias, we do not recommend using this dataset in any application outside its original scope if that application requires unbiased data.

Citation

BibTeX:

@InProceedings{Ramos_2025_ICCV,
    author    = {Ramos, Ryan and Stojni\'c, Vladan and Kordopatis-Zilos, Giorgos and Nakashima, Yuta and Tolias, Giorgos and Garcia, Noa},
    title     = {Processing and acquisition traces in visual encoders: What does CLIP know about your camera?},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {17056-17066}
}

APA:

Ramos, R., Stojnić, V., Kordopatis-Zilos, G., Nakashima, Y., Tolias, G., & Garcia, N. (2025). Processing and acquisition traces in visual encoders: What does CLIP know about your camera? In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) (pp. 17056-17066).

Dataset Card Authors

Ryan Ramos

Dataset Card Contact

ryanramos@is.ids.osaka-u.ac.jp
