---
language: en
tags:
- image-retrieval
- copydays
---
# Dataset Card for Copydays
## Dataset Description
**Copydays** is a dataset designed for evaluating copy detection and near-duplicate image retrieval algorithms. It contains images with various modifications to test the robustness of retrieval systems.
- **copydays_original**: Original, unmodified images.
- **copydays_strong**: Images with strong modifications (e.g., cropping, rotation, compression).
These datasets are widely used for benchmarking image retrieval systems under challenging conditions.
## Dataset Features
Each example contains:
- `image` (`Image`): An image file (JPEG or PNG).
- `filename` (`string`): The original filename of the image (e.g., `200000.jpg`).
- `split_type` (`string`): The type of split the image belongs to (`original` or `strong`).
- `block_id` (`int32`): The first 4 digits of the filename, representing the block ID (e.g., `2000` for `200000.jpg`).
- `query_id` (`int32`): The query ID for query images (`-1` for database images), taken from digits 5 and 6 of the filename (e.g., `01` for `200001.jpg`).
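The `block_id` and `query_id` fields are derived directly from the filename. A minimal sketch of that derivation (the helper name `parse_filename` is illustrative, not part of the dataset):

```python
def parse_filename(filename: str) -> tuple[int, int]:
    """Derive block_id and query_id from a Copydays filename.

    The first 4 digits of the stem are the block ID; digits 5 and 6
    are the query ID. (In the dataset itself, database images carry
    a query_id of -1 rather than the raw digits.)
    """
    stem = filename.rsplit(".", 1)[0]   # "200001.jpg" -> "200001"
    block_id = int(stem[:4])            # first 4 digits
    query_id = int(stem[4:6])           # digits 5 and 6
    return block_id, query_id

print(parse_filename("200001.jpg"))  # (2000, 1)
```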
## Dataset Splits
- **queries**: Query images with modifications for evaluation. Also includes the original images.
- **database**: Original images used as the database for retrieval.
To tell whether an image is an original or a strongly modified copy, check its `split_type` field. An example is shown in `Example Usage` below.
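As a minimal sketch of that check (plain dictionaries stand in for loaded examples; with a loaded split, the same predicate works via `query_dataset.filter`):

```python
# Illustrative rows standing in for examples from the `queries` split.
examples = [
    {"filename": "200000.jpg", "split_type": "original"},
    {"filename": "200001.jpg", "split_type": "strong"},
]

# Separate originals from strongly modified copies by `split_type`.
strong = [ex for ex in examples if ex["split_type"] == "strong"]
originals = [ex for ex in examples if ex["split_type"] == "original"]
print(len(strong), len(originals))  # 1 1
```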
## Dataset Versions
- Version 1.0.0
## Example Usage
Use the Hugging Face `datasets` library to load the `queries` and `database` splits:
```python
import datasets

# Name of the dataset
dataset_name = "randall-lab/INRIA-CopyDays"

# Load query images
query_dataset = datasets.load_dataset(
    dataset_name,
    split="queries",
    trust_remote_code=True,
)

# Load database images
db_dataset = datasets.load_dataset(
    dataset_name,
    split="database",
    trust_remote_code=True,
)

# Print the length of the query dataset -- should be 386, since it
# includes all 229 strong AND all 157 original queries
print(f"Number of query images: {len(query_dataset)}")

# You can tell if a query is strong or original by checking the
# `split_type` field on a given image
example_query = query_dataset[0]  # Get any desired query image
print(f"Example Query - Filename: {example_query['filename']}")
print(f"Example Query - Split Type: {example_query['split_type']}")

# Print the length of the database dataset -- should be 157, since it
# includes all 157 original images
print(f"Number of database images: {len(db_dataset)}")
```
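Since every query shares its first four filename digits with the original it was derived from, `block_id` links a query to its source database image, which is the basis for scoring retrieval. A hedged sketch of that correctness check (the helper names are illustrative, and a tiny in-memory list stands in for `db_dataset`):

```python
def build_block_index(database):
    """Map each block_id to its row index in the database split."""
    return {ex["block_id"]: i for i, ex in enumerate(database)}

def is_correct_match(block_index, query_block_id, retrieved_index):
    """A retrieved database row is a correct match when it carries
    the query's block_id (i.e., it is the source original)."""
    return block_index.get(query_block_id) == retrieved_index

# Tiny illustrative database standing in for db_dataset.
sample_db = [{"block_id": 2000}, {"block_id": 2001}]
index = build_block_index(sample_db)
print(is_correct_match(index, 2000, 0))  # True
print(is_correct_match(index, 2001, 0))  # False
```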
## Dataset Structure
- The datasets consist of images downloaded and extracted from official URLs hosted by the Copydays project.
- The `copydays_original` dataset contains unmodified images.
- The `copydays_strong` dataset contains images with strong modifications.
## Dataset Citation
If you use this dataset, please cite the original paper:
```bibtex
@inproceedings{jegou2008hamming,
title={Hamming embedding and weak geometric consistency for large scale image search},
author={Jegou, Herve and Douze, Matthijs and Schmid, Cordelia},
booktitle={European conference on computer vision},
pages={304--317},
year={2008},
organization={Springer}
}
```
## Dataset Homepage
[Copydays project page](https://thoth.inrialpes.fr/~jegou/data.php.html#copydays)