mafeng and ianhajra committed (verified)
Commit 9be3e17
0 Parent(s):

Duplicate from randall-lab/revisitop

Co-authored-by: Ian Hajra <ianhajra@users.noreply.huggingface.co>
Files changed (3):

  1. .gitattributes +59 -0
  2. README.md +107 -0
  3. revisitop.py +247 -0
.gitattributes ADDED
@@ -0,0 +1,59 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mds filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+ # Video files - compressed
+ *.mp4 filter=lfs diff=lfs merge=lfs -text
+ *.webm filter=lfs diff=lfs merge=lfs -text
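The attribute lines above route matching files through Git LFS instead of plain Git storage. As a rough sanity check of which repository files these rules capture, the simple `*.ext` globs can be approximated with Python's `fnmatch` (an assumption: `fnmatch` matches gitattributes semantics only for plain extension patterns, not for path patterns like `saved_model/**/*`):

```python
from fnmatch import fnmatch

# A few of the extension globs from this .gitattributes file
lfs_patterns = ["*.tgz", "*.pkl", "*.safetensors", "*.jpg"]

def tracked_by_lfs(filename):
    # True if any simple extension glob matches the filename
    return any(fnmatch(filename, pat) for pat in lfs_patterns)

print(tracked_by_lfs("gnd_roxford5k.pkl"))  # True: matches *.pkl
print(tracked_by_lfs("revisitop.py"))       # False: scripts stay as plain text
```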
README.md ADDED
@@ -0,0 +1,107 @@
+ ---
+ language: en
+ tags:
+ - image-retrieval
+ - oxford5k
+ - paris6k
+ - revisitop1m
+ ---
+
+ # Dataset Card for RevisitOP (Oxford5k, Paris6k, RevisitOP1M)
+
+ ## Dataset Description
+
+ **RevisitOP** provides popular benchmark datasets for large-scale image retrieval research:
+
+ - **roxford5k**: Oxford 5k buildings dataset containing ~5,000 images.
+ - **rparis6k**: Paris 6k buildings dataset with ~6,000 images.
+ - **revisitop1m**: RevisitOP 1M distractor dataset with ~1 million distractor images.
+ - **oxfordparis**: Combination of the Oxford 5k and Paris 6k datasets.
+
+ These datasets are widely used for evaluating image retrieval algorithms and contain real-world building photographs and distractors.
+
+ ## Dataset Features
+
+ Each example contains:
+
+ - `image` (`Image`): An image file (JPEG or PNG).
+ - `filename` (`string`): The original filename of the image.
+ - `dataset` (`string`): The source dataset the image belongs to (`roxford5k`, `rparis6k`, or `revisitop1m`).
+ - `query_id` (`int32`): Query ID for query images (-1 for database images).
+ - `bbx` (`Sequence[float32]`): Bounding box coordinates [x1, y1, x2, y2] for query images.
+ - `easy` (`Sequence[int32]`): Indices of easy relevant database images for each query.
+ - `hard` (`Sequence[int32]`): Indices of hard relevant database images for each query.
+ - `junk` (`Sequence[int32]`): Indices of junk images to be ignored during evaluation.
+
+ ## Dataset Splits
+
+ - **qimlist**: Query images with ground-truth annotations (bounding boxes and relevance labels).
+ - **imlist**: Database images for retrieval.
+
+ ## Dataset Versions
+
+ - Version 1.0.0
+
+ ## Example Usage
+
+ Use the Hugging Face `datasets` library to load one of the configs:
+
+ ```python
+ import datasets
+ from aiohttp import ClientTimeout
+
+ dataset_name = "randall-lab/revisitop"
+ timeout_period = 500000  # generous timeout for large downloads
+ storage_options = {"client_kwargs": {"timeout": ClientTimeout(total=timeout_period)}}
+
+ # These are the config names defined in the script
+ dataset_configs = ["roxford5k", "rparis6k", "oxfordparis"]  # "revisitop1m" is large and may take a long time to load
+
+ # Load both splits of each config for evaluation
+ for config_name in dataset_configs:
+     # Load query images
+     query_dataset = datasets.load_dataset(
+         path=dataset_name,
+         name=config_name,
+         split="qimlist",
+         trust_remote_code=True,
+         storage_options=storage_options,
+     )
+
+     # Load database images
+     db_dataset = datasets.load_dataset(
+         path=dataset_name,
+         name=config_name,
+         split="imlist",
+         trust_remote_code=True,
+         storage_options=storage_options,
+     )
+
+     # Example query image
+     query_example = query_dataset[0]
+ ```
+
+ ## Dataset Structure
+
+ - The datasets consist of images downloaded and extracted from official URLs hosted by the Oxford Visual Geometry Group and the RevisitOP project.
+ - The `roxford5k` and `rparis6k` datasets come from `.tgz` archives with corresponding `.pkl` ground-truth files.
+ - The `revisitop1m` dataset consists of 100 `.tar.gz` archives of JPEG distractor images.
+ - The combined `oxfordparis` dataset merges the Oxford and Paris sets.
+ - Ground-truth files contain query lists, database lists, and annotations (bounding boxes, easy/hard/junk labels).
+
+ ## Dataset Citation
+
+ If you use this dataset, please cite the original paper:
+
+ ```bibtex
+ @inproceedings{Radenovic2018RevisitingOP,
+   title={Revisiting Oxford and Paris: Large-Scale Image Retrieval Benchmarking},
+   author={Filip Radenovic and Ahmet Iscen and Giorgos Tolias and Yannis Avrithis and Ondrej Chum},
+   booktitle={CVPR},
+   year={2018}
+ }
+ ```
+
+ ## Dataset Homepage
+
+ [RevisitOP project page](http://cmp.felk.cvut.cz/revisitop/)
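In the revisited evaluation protocol (Radenović et al., 2018), the `junk` entries are removed from a ranking before scoring: the "Medium" setting treats easy+hard as positives, while "Hard" additionally demotes easy images to junk. A minimal average-precision sketch of that convention (not part of this commit; the indices below are made up for illustration):

```python
def average_precision(ranked, positives, junk):
    """AP over a ranked list of database indices, skipping junk entries."""
    positives, junk = set(positives), set(junk)
    hits, sum_prec, rank = 0, 0.0, 0
    for idx in ranked:
        if idx in junk:
            continue  # junk images are ignored: neither hits nor misses
        rank += 1
        if idx in positives:
            hits += 1
            sum_prec += hits / rank
    return sum_prec / max(len(positives), 1)

# Toy ground truth for one query and a toy retrieval ranking
easy, hard, junk = [0, 3], [5], [7]
ranked = [3, 2, 0, 7, 5, 9]

ap_medium = average_precision(ranked, easy + hard, junk)   # easy+hard positive
ap_hard = average_precision(ranked, hard, junk + easy)     # easy counted as junk
```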
revisitop.py ADDED
@@ -0,0 +1,247 @@
+ import os
+ import tarfile
+ import urllib.request
+ import pickle
+ import datasets
+
+ _VERSION = datasets.Version("1.0.0")
+
+ _URLS = {
+     "roxford5k": {
+         "images": [
+             "https://www.robots.ox.ac.uk/~vgg/data/oxbuildings/oxbuild_images-v1.tgz"
+         ],
+         "ground_truth": [
+             "http://cmp.felk.cvut.cz/revisitop/data/datasets/roxford5k/gnd_roxford5k.pkl"
+         ],
+     },
+     "rparis6k": {
+         "images": [
+             "https://www.robots.ox.ac.uk/~vgg/data/parisbuildings/paris_1-v1.tgz",
+             "https://www.robots.ox.ac.uk/~vgg/data/parisbuildings/paris_2-v1.tgz",
+         ],
+         "ground_truth": [
+             "http://cmp.felk.cvut.cz/revisitop/data/datasets/rparis6k/gnd_rparis6k.pkl"
+         ],
+     },
+     "revisitop1m": {
+         "images": [
+             f"http://ptak.felk.cvut.cz/revisitop/revisitop1m/jpg/revisitop1m.{i+1}.tar.gz"
+             for i in range(100)
+         ]
+     },
+ }
+
+ _DESCRIPTION = (
+     "Oxford5k, Paris6k, and RevisitOP1M benchmark datasets for image retrieval."
+ )
+
+ _CITATION = """\
+ @inproceedings{Radenovic2018RevisitingOP,
+ title={Revisiting Oxford and Paris: Large-Scale Image Retrieval Benchmarking},
+ author={Filip Radenovic and Ahmet Iscen and Giorgos Tolias and Yannis Avrithis and Ondrej Chum},
+ year={2018}
+ }
+ """
+
+ BUILDER_CONFIGS = [
+     datasets.BuilderConfig(
+         name="roxford5k",
+         version=_VERSION,
+         description="Oxford 5k image retrieval dataset.",
+     ),
+     datasets.BuilderConfig(
+         name="rparis6k",
+         version=_VERSION,
+         description="Paris 6k image retrieval dataset.",
+     ),
+     datasets.BuilderConfig(
+         name="revisitop1m",
+         version=_VERSION,
+         description="RevisitOP 1M distractor images.",
+     ),
+     datasets.BuilderConfig(
+         name="oxfordparis",
+         version=_VERSION,
+         description="Oxford + Paris combined dataset.",
+     ),
+ ]
+
+
+ class RevisitOP(datasets.GeneratorBasedBuilder):
+     BUILDER_CONFIGS = BUILDER_CONFIGS
+     DEFAULT_CONFIG_NAME = "roxford5k"
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "image": datasets.Image(),
+                     "filename": datasets.Value("string"),
+                     "dataset": datasets.Value("string"),
+                     "query_id": datasets.Value("int32"),
+                     "bbx": datasets.Sequence(
+                         datasets.Value("float32")
+                     ),  # bounding box [x1, y1, x2, y2]
+                     "easy": datasets.Sequence(
+                         datasets.Value("int32")
+                     ),  # easy relevant images
+                     "hard": datasets.Sequence(
+                         datasets.Value("int32")
+                     ),  # hard relevant images
+                     "junk": datasets.Sequence(datasets.Value("int32")),  # junk images
+                 }
+             ),
+             supervised_keys=None,
+             homepage="http://cmp.felk.cvut.cz/revisitop/",
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         cfg_name = self.config.name
+
+         if cfg_name == "revisitop1m":
+             urls = _URLS[cfg_name]["images"]
+             archive_paths = dl_manager.download(urls)
+             extracted_paths = dl_manager.extract(archive_paths)
+
+             return [
+                 datasets.SplitGenerator(
+                     name="imlist",
+                     gen_kwargs={
+                         "image_dirs": (
+                             extracted_paths
+                             if isinstance(extracted_paths, list)
+                             else [extracted_paths]
+                         ),
+                         "ground_truth_files": None,
+                         "split_type": "imlist",
+                         "dataset_name": cfg_name,
+                     },
+                 )
+             ]
+
+         if cfg_name == "oxfordparis":
+             # Handle combined dataset
+             image_urls = _URLS["roxford5k"]["images"] + _URLS["rparis6k"]["images"]
+             gt_urls = (
+                 _URLS["roxford5k"]["ground_truth"] + _URLS["rparis6k"]["ground_truth"]
+             )
+         else:
+             image_urls = _URLS[cfg_name]["images"]
+             gt_urls = _URLS[cfg_name]["ground_truth"]
+
+         # Download and extract image archives
+         archive_paths = dl_manager.download(image_urls)
+         extracted_paths = dl_manager.extract(archive_paths)
+
+         # Download ground truth files
+         gt_paths = dl_manager.download(gt_urls)
+
+         # Normalize lists if single items
+         if not isinstance(extracted_paths, list):
+             extracted_paths = [extracted_paths]
+         if not isinstance(gt_paths, list):
+             gt_paths = [gt_paths]
+
+         return [
+             datasets.SplitGenerator(
+                 name="qimlist",
+                 gen_kwargs={
+                     "image_dirs": extracted_paths,
+                     "ground_truth_files": gt_paths,
+                     "split_type": "qimlist",
+                     "dataset_name": cfg_name,
+                 },
+             ),
+             datasets.SplitGenerator(
+                 name="imlist",
+                 gen_kwargs={
+                     "image_dirs": extracted_paths,
+                     "ground_truth_files": gt_paths,
+                     "split_type": "imlist",
+                     "dataset_name": cfg_name,
+                 },
+             ),
+         ]
+
+     def _generate_examples(
+         self, image_dirs, ground_truth_files, split_type, dataset_name
+     ):
+         # Build image path mapping
+         image_path_mapping = {}
+         for image_dir in image_dirs:
+             for root, _, files in os.walk(image_dir):
+                 for fname in files:
+                     if fname.lower().endswith((".jpg", ".jpeg", ".png")):
+                         fpath = os.path.join(root, fname)
+                         # Remove extension for mapping
+                         fname_no_ext = os.path.splitext(fname)[0]
+                         image_path_mapping[fname_no_ext] = fpath
+
+         # Handle revisitop1m case (no ground truth)
+         if ground_truth_files is None:
+             key = 0
+             for fname_no_ext, fpath in image_path_mapping.items():
+                 yield key, {
+                     "image": fpath,
+                     "filename": fname_no_ext + ".jpg",
+                     "dataset": dataset_name,
+                     "query_id": -1,
+                     "bbx": [],
+                     "easy": [],
+                     "hard": [],
+                     "junk": [],
+                 }
+                 key += 1
+             return
+
+         # Load ground truth files
+         ground_truth_data = []
+         for gt_file in ground_truth_files:
+             with open(gt_file, "rb") as f:
+                 gt_data = pickle.load(f)
+             ground_truth_data.append(gt_data)
+
+         key = 0
+
+         for gt_data in ground_truth_data:
+             imlist = gt_data["imlist"]
+             qimlist = gt_data["qimlist"]
+             gnd = gt_data["gnd"]
+
+             if split_type == "qimlist":
+                 # Generate query examples
+                 for i, query_name in enumerate(qimlist):
+                     query_name_no_ext = os.path.splitext(query_name)[0]
+                     if query_name_no_ext in image_path_mapping:
+                         query_gnd = gnd[i]
+                         yield key, {
+                             "image": image_path_mapping[query_name_no_ext],
+                             "filename": query_name,
+                             "dataset": dataset_name,
+                             "query_id": i,
+                             "bbx": query_gnd.get("bbx", []),
+                             "easy": query_gnd.get("easy", []),
+                             "hard": query_gnd.get("hard", []),
+                             "junk": query_gnd.get("junk", []),
+                         }
+                         key += 1
+
+             elif split_type == "imlist":
+                 # Generate image pool examples
+                 for i, image_name in enumerate(imlist):
+                     image_name_no_ext = os.path.splitext(image_name)[0]
+                     if image_name_no_ext in image_path_mapping:
+                         yield key, {
+                             "image": image_path_mapping[image_name_no_ext],
+                             "filename": image_name,
+                             "dataset": dataset_name,
+                             "query_id": -1,  # Not a query image
+                             "bbx": [],
+                             "easy": [],
+                             "hard": [],
+                             "junk": [],
+                         }
+                         key += 1
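For reference, each `gnd_*.pkl` file consumed by `_generate_examples` unpickles to a dict with `imlist`, `qimlist`, and `gnd` keys. A hand-written toy payload sketching that shape (assuming the standard revisitop layout; the filenames and indices below are invented):

```python
# Toy stand-in for the dict stored in a gnd_*.pkl ground-truth file
gt_data = {
    "imlist": ["all_souls_000000", "all_souls_000001", "radcliffe_000042"],
    "qimlist": ["all_souls_000013"],
    "gnd": [
        # gnd[i] annotates query qimlist[i]; easy/hard/junk integers index imlist
        {"bbx": [10.0, 20.0, 300.0, 400.0], "easy": [0], "hard": [2], "junk": [1]},
    ],
}

# Resolve a query's easy-relevant labels back to database image names
query_gnd = gt_data["gnd"][0]
easy_names = [gt_data["imlist"][j] for j in query_gnd["easy"]]
print(easy_names)  # ['all_souls_000000']
```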