Upload folder using huggingface_hub
This view is limited to 50 files because it contains too many changes.
- .DS_Store +0 -0
- README.md +234 -3
- download_images.py +316 -0
- images.zip +3 -0
- metadata/10287726@N02.jsonl +0 -0
- metadata/10297518@N00.jsonl +0 -0
- metadata/10299779@N03.jsonl +0 -0
- metadata/12276997@N06.jsonl +0 -0
- metadata/12734746@N00.jsonl +0 -0
- metadata/13101981@N03.jsonl +0 -0
- metadata/14580956@N08.jsonl +0 -0
- metadata/14737255@N00.jsonl +0 -0
- metadata/14798958@N08.jsonl +0 -0
- metadata/15352839@N00.jsonl +0 -0
- metadata/15803691@N00.jsonl +0 -0
- metadata/18383978@N00.jsonl +0 -0
- metadata/21435131@N06.jsonl +0 -0
- metadata/21895046@N08.jsonl +0 -0
- metadata/22017657@N05.jsonl +0 -0
- metadata/22526649@N03.jsonl +0 -0
- metadata/22736462@N07.jsonl +0 -0
- metadata/23090753@N06.jsonl +0 -0
- metadata/23518714@N00.jsonl +0 -0
- metadata/23736466@N00.jsonl +0 -0
- metadata/24232779@N00.jsonl +0 -0
- metadata/24413182@N00.jsonl +0 -0
- metadata/24468935@N03.jsonl +0 -0
- metadata/24736216@N07.jsonl +0 -0
- metadata/24819841@N06.jsonl +0 -0
- metadata/25367139@N00.jsonl +0 -0
- metadata/25652622@N00.jsonl +0 -0
- metadata/25899413@N04.jsonl +0 -0
- metadata/27550543@N02.jsonl +0 -0
- metadata/27634886@N00.jsonl +0 -0
- metadata/27637456@N06.jsonl +0 -0
- metadata/28157992@N03.jsonl +0 -0
- metadata/28495173@N00.jsonl +0 -0
- metadata/30872191@N00.jsonl +0 -0
- metadata/31058815@N00.jsonl +0 -0
- metadata/35032604@N00.jsonl +0 -0
- metadata/39979407@N05.jsonl +0 -0
- metadata/40817698@N07.jsonl +0 -0
- metadata/41610421@N05.jsonl +0 -0
- metadata/41838028@N00.jsonl +0 -0
- metadata/43145783@N00.jsonl +0 -0
- metadata/47554402@N00.jsonl +0 -0
- metadata/47642109@N04.jsonl +0 -0
- metadata/49475364@N00.jsonl +0 -0
- metadata/49645113@N07.jsonl +0 -0
- metadata/54368512@N00.jsonl +0 -0
.DS_Store
ADDED
Binary file (6.15 kB)
README.md
CHANGED
# DISBench: DeepImageSearch Benchmark

DISBench is the first benchmark for context-aware image retrieval over visual histories. It contains 122 queries across 57 users and 109,467 photos, requiring multi-step reasoning over corpus-level context.

## Download

```bash
# Option 1: Hugging Face
huggingface-cli download xxx/DISBench --local-dir .

# Option 2: Download images from YFCC100M
python download_images.py --photo-ids-path photo_ids --images-path images
```

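The same download can be scripted with the `huggingface_hub` Python API; a minimal sketch, assuming `xxx/DISBench` (a placeholder, as above) is hosted as a dataset repo:

```python
from huggingface_hub import snapshot_download

# Fetch the full repo into the current directory.
# "xxx/DISBench" is the placeholder repo id from the CLI example above;
# adjust repo_type if the repo is not hosted as a dataset.
snapshot_download(repo_id="xxx/DISBench", repo_type="dataset", local_dir=".")
```
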
## File Structure

```
DISBench/
├── queries.jsonl            # 122 annotated queries
├── metadata/
│   └── {user_id}.jsonl      # Photo metadata per user
├── images/
│   └── {user_id}/
│       └── {photo_id}.jpg   # Photo files
├── photo_ids/
│   └── {user_id}.txt        # Photo IDs and hashes per user
├── evaluate.py              # Evaluation script
└── download_images.py       # Image download script
```

## Data Format

### queries.jsonl

Each line is a JSON object representing one query:

```json
{
  "query_id": "q001",
  "user_id": "12345678",
  "query": "Find photos from the musical performance identified by the blue and white event logo on site, where only the lead singer appears on stage.",
  "answer": ["98765432", "98765433", "98765434"],
  "event_type": "intra-event"
}
```

| Field | Type | Description |
|:------|:-----|:------------|
| `query_id` | string | Unique query identifier |
| `user_id` | string | User whose photo collection to search |
| `query` | string | Natural language query (text-only) |
| `answer` | list[string] | Ground-truth target photo IDs |
| `event_type` | string | `"intra-event"` or `"inter-event"` |

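A minimal sketch of loading and slicing the queries file, using only the standard-library `json` module:

```python
import json

# Each line of queries.jsonl is one JSON object
with open("queries.jsonl", encoding="utf-8") as f:
    queries = [json.loads(line) for line in f if line.strip()]

intra = [q for q in queries if q["event_type"] == "intra-event"]
print(len(queries), "queries,", len(intra), "intra-event")
```
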
### metadata/{user_id}.jsonl

Each line is a JSON object representing one photo's metadata:

```json
{
  "photo_id": "98765432",
  "metadata": {
    "taken_time": "2012-08-03 14:32:10",
    "longitude": -1.8808,
    "latitude": 50.7192,
    "accuracy": 16.0,
    "address": "Bournemouth, Dorset, England",
    "capturedevice": "Canon EOS 550D"
  }
}
```

| Field | Type | Description |
|:------|:-----|:------------|
| `photo_id` | string | Unique photo identifier |
| `metadata.taken_time` | string | Capture time in `YYYY-MM-DD HH:MM:SS` format |
| `metadata.longitude` | float | GPS longitude. **Missing if unavailable.** |
| `metadata.latitude` | float | GPS latitude. **Missing if unavailable.** |
| `metadata.accuracy` | float | GPS accuracy level. **Missing if unavailable.** |
| `metadata.address` | string | Reverse-geocoded address. **Missing if unavailable.** |
| `metadata.capturedevice` | string | Camera/device name. **Missing if unavailable.** |

> **Note:** Optional fields (`longitude`, `latitude`, `accuracy`, `address`, `capturedevice`) are omitted entirely when unavailable — they will not appear as keys in the JSON object.

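Since optional keys are omitted rather than set to `null`, reads should be defensive. A minimal sketch using `dict.get` (the file name is one of the real per-user files in `metadata/`):

```python
import json

# Read one user's metadata; optional keys may be missing entirely
with open("metadata/10287726@N02.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        meta = record["metadata"]
        # .get() returns None for an absent key instead of raising KeyError
        lat, lon = meta.get("latitude"), meta.get("longitude")
        if lat is None or lon is None:
            continue  # skip photos without GPS data
        print(record["photo_id"], meta["taken_time"], meta.get("address"), (lat, lon))
```
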
### images/{user_id}/{photo_id}.jpg

Photo files organized by user. Each user's collection contains approximately 2,000 photos accumulated chronologically from their photosets.

### photo_ids/{user_id}.txt

Each line records one photo ID and its hash on AWS S3 storage, in the format `{photo_id}\t{hash}`:

```
1205732595	c45044fd7b5c9450b2a11adc6b42d
```

| Field | Type | Description |
|:------|:-----|:------------|
| `photo_id` | string | Unique photo identifier |
| `hash` | string | Hash of the photo on AWS S3 storage |

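The hash determines where each photo lives on the Multimedia Commons S3 bucket; the sketch below mirrors the URL scheme used by `construct_s3_url()` in `download_images.py`, where the first three and next three characters of the hash form directory prefixes:

```python
def s3_url(hash_value: str) -> str:
    # Mirrors construct_s3_url() in download_images.py
    return (
        "https://multimedia-commons.s3-us-west-2.amazonaws.com/"
        f"data/images/{hash_value[:3]}/{hash_value[3:6]}/{hash_value}.jpg"
    )

with open("photo_ids/10287726@N02.txt", encoding="utf-8") as f:
    for line in f:
        parts = line.strip().split("\t")
        if len(parts) < 2:
            continue
        photo_id, hash_value = parts[0], parts[1]  # an optional status column may follow
        print(photo_id, s3_url(hash_value))
```
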
## Evaluation

### Agent Evaluation

For agent systems that predict a set of photo IDs per query:

```bash
python evaluate.py \
    --mode agent \
    --dataset_path . \
    --prediction_path /path/to/predictions.jsonl \
    --output_dir /path/to/run_id_dir/
```

**Prediction format** — a JSONL file where each line is:

```json
{
  "query_id": "q001",
  "prediction": ["98765432", "98765433"]
}
```

**Output files:**

`eval_samples.jsonl` — per-query results:

```json
{
  "query_id": "q001",
  "user_id": "12345678",
  "query": "Find photos from the musical performance...",
  "answer": ["98765432", "98765433", "98765434"],
  "prediction": ["98765432", "98765433"],
  "tp": 2, "fp": 0, "fn": 1,
  "em": 0,
  "iou": 0.667,
  "precision": 1.0,
  "recall": 0.667,
  "f1": 0.8
}
```

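These per-query values are ordinary set metrics over `answer` and `prediction`. A minimal sketch (not the actual `evaluate.py` code) that reproduces the example above:

```python
def set_metrics(answer, prediction):
    a, p = set(answer), set(prediction)
    tp = len(a & p)  # correctly retrieved photos
    fp = len(p - a)  # spurious predictions
    fn = len(a - p)  # missed targets
    precision = tp / len(p) if p else 0.0
    recall = tp / len(a) if a else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {
        "tp": tp, "fp": fp, "fn": fn,
        "em": int(a == p),                    # exact set match
        "iou": tp / len(a | p) if a | p else 0.0,
        "precision": precision, "recall": recall, "f1": f1,
    }

# Reproduces the example: iou≈0.667, precision=1.0, recall≈0.667, f1=0.8
print(set_metrics(["98765432", "98765433", "98765434"], ["98765432", "98765433"]))
```
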
`eval_summary.json` — aggregated results:

```json
{
  "n_samples": {
    "total": 122,
    "inter_event": 65,
    "intra_event": 57
  },
  "aggregation": {
    "total": {
      "macro": { "em": 0.271, "iou": 0.45, "precision": 0.55, "recall": 0.52, "f1": 0.513 },
      "micro": { "tp": 230, "fp": 45, "fn": 60, "iou": 0.42, "precision": 0.836, "recall": 0.793, "f1": 0.814 }
    },
    "inter_event": { "macro": { "..." : "..." }, "micro": { "..." : "..." } },
    "intra_event": { "macro": { "..." : "..." }, "micro": { "..." : "..." } }
  },
  "set_size": {
    "avg_answer_size": { "total": 3.76, "inter_event": 3.52, "intra_event": 4.04 },
    "avg_pred_size": { "total": 3.10, "inter_event": 2.80, "intra_event": 3.44 }
  },
  "notes": {
    "empty_pred_count": { "total": 5, "inter_event": 3, "intra_event": 2 }
  }
}
```

Here `macro` averages the per-query scores, while `micro` pools `tp`/`fp`/`fn` across all queries before computing precision and recall (e.g. micro precision = 230 / (230 + 45) ≈ 0.836).

### Retriever Baseline Evaluation

For embedding-based retrieval that returns a ranked list:

```bash
python evaluate.py \
    --mode retriever \
    --dataset_path . \
    --prediction_path /path/to/predictions.jsonl \
    --output_dir /path/to/retriever_dir/
```

**Prediction format** — a JSONL file where each line is:

```json
{
  "query_id": "q001",
  "prediction": ["photo_rank1", "photo_rank2", "photo_rank3", "..."]
}
```

> Predictions should be ordered by descending relevance score.

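For reference, a minimal sketch of Recall@k and NDCG@k with binary relevance; the exact conventions in `evaluate.py` (e.g. how the ideal DCG is truncated) are assumptions here:

```python
import math

def recall_at_k(ranked, answer, k):
    # Fraction of ground-truth photos found in the top k
    return len(set(ranked[:k]) & set(answer)) / len(answer)

def ndcg_at_k(ranked, answer, k):
    answer = set(answer)
    dcg = sum(1.0 / math.log2(i + 2) for i, p in enumerate(ranked[:k]) if p in answer)
    # Ideal DCG: all relevant photos ranked first, truncated at k
    idcg = sum(1.0 / math.log2(i + 2) for i in range(min(k, len(answer))))
    return dcg / idcg if idcg else 0.0

ranked = ["98765432", "00000000", "98765433"]
answer = ["98765432", "98765433", "98765434"]
print(recall_at_k(ranked, answer, 3))  # 2/3
print(ndcg_at_k(ranked, answer, 3))
```
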
**Output files:**

`eval_samples.jsonl` — per-query results with MAP@k, Recall@k, and NDCG@k for k ∈ {1, 3, 5, 10}.

`eval_summary.json` — aggregated metrics:

```json
{
  "n_samples": 122,
  "aggregation": {
    "MAP@1": 0.123, "MAP@3": 0.105, "MAP@5": 0.120, "MAP@10": 0.133,
    "Recall@1": 0.054, "Recall@3": 0.116, "Recall@5": 0.170, "Recall@10": 0.250,
    "NDCG@1": 0.123, "NDCG@3": 0.133, "NDCG@5": 0.157, "NDCG@10": 0.188
  },
  "notes": {
    "incomplete_pred_count": 0
  }
}
```

## Dataset Statistics

| Statistic | Value |
|:----------|:------|
| Total Queries | 122 |
| Intra-Event Queries | 57 (46.7%) |
| Inter-Event Queries | 65 (53.3%) |
| Total Users | 57 |
| Total Photos | 109,467 |
| Avg. Targets per Query | 3.76 |
| Avg. History Span | 3.4 years |
| Query Retention Rate | 6.1% (122 / 2,000 candidates) |
| Inter-Annotator IoU | 0.91 |

## Data Source

DISBench is constructed from [YFCC100M](https://multimediacommons.wordpress.com/yfcc100m-core-dataset/), which preserves a hierarchical structure of users → photosets → photos. All images are publicly shared under Creative Commons licenses. Photoset boundaries are used during construction but are **not** provided to models during evaluation.

## License

The DISBench dataset follows the Creative Commons licensing terms of the underlying YFCC100M data. Please refer to individual image licenses for specific usage terms.
download_images.py
ADDED
"""
Fetch photos of selected users from YFCC100M.

arguments:
    --photo-ids-path: path to the photo_ids directory of the DISBench dataset
    --images-path: path to the output images directory
    --max-workers: maximum number of worker threads for downloading
    --clear: if set, clear the downloaded statuses so that downloading starts from scratch

input:
    - DISBench/photo_ids/<uid>.txt
        - each line: {photo_id}\t{hash} or {photo_id}\t{hash}\t{status}

output:
    - DISBench/images/<uid>/{photo_id}.jpg
    - DISBench/photo_ids/<uid>.txt
        - each line: {photo_id}\t{hash}\t{status}

description:
    - Read each photo ids file to get the photo ids, hashes, and statuses.
    - If --clear is set, clear the statuses, save the files, and exit.
    - For each photo, depending on its status:
        - "valid": skip it.
        - "success": try to load the file; if it is a valid image, set the status
          to "valid", else set it to "error" and re-download.
        - "error" or None: try to download the photo (again).
    - Download photos from S3. The URL is
      https://multimedia-commons.s3-us-west-2.amazonaws.com/data/images/<first 3 chars of hash>/<next 3 chars of hash>/<hash>.jpg
      For example, hash 00024a73d1a4c32fb29732d56a2 maps to
      https://multimedia-commons.s3-us-west-2.amazonaws.com/data/images/000/24a/00024a73d1a4c32fb29732d56a2.jpg
    - Save each photo to the images folder.
    - Download with multithreading; --max-workers defaults to 16.
    - If a download succeeds, set the status to "success", else to "error".
    - Save the statuses back to each user's photo ids file.
"""
import argparse
import time
import requests
from pathlib import Path
from tqdm import tqdm
from concurrent.futures import ThreadPoolExecutor, as_completed
from PIL import Image


def construct_s3_url(hash_value):
    """Construct the Multimedia Commons S3 URL from a photo hash."""
    # The first 3 and next 3 characters of the hash form the directory prefixes
    first_3 = hash_value[:3]
    next_3 = hash_value[3:6]
    return f"https://multimedia-commons.s3-us-west-2.amazonaws.com/data/images/{first_3}/{next_3}/{hash_value}.jpg"


def validate_image_file(image_path):
    """Check that an image file exists and can be fully decoded."""
    if not image_path.exists():
        return False
    try:
        with Image.open(image_path) as img:
            img.load()  # force a full decode to catch truncated files
            return True
    except Exception:
        return False


def download_photo(photo_id, hash_value, output_path, uid):
    """Download a single photo from S3 with retry logic."""
    max_retries = 4
    last_error = None

    for attempt in range(max_retries):
        try:
            url = construct_s3_url(hash_value)
            response = requests.get(url, timeout=30, stream=True)
            response.raise_for_status()

            # Save the photo
            output_path.parent.mkdir(parents=True, exist_ok=True)
            with open(output_path, 'wb') as f:
                for chunk in response.iter_content(chunk_size=8192):
                    f.write(chunk)

            return photo_id, "success", None
        except Exception as e:
            last_error = e
            # Brief delay before retrying
            if attempt < max_retries - 1:
                time.sleep(1)
                continue
            return photo_id, "error", str(e)

    # Safety net: only reached if the loop exits without returning
    return photo_id, "error", str(last_error) if last_error else "Unknown error after retries"


def process_photo_ids_file(photo_ids_file, photos_base_dir, uid):
    """Process a single photo_ids file and return download tasks and photo records."""
    photos_dir = photos_base_dir / uid
    photos_dir.mkdir(parents=True, exist_ok=True)

    # Read all photo records
    photo_records = []   # list of {'photo_id', 'hash', 'status', 'output_path'}
    tasks = []           # list of (photo_id, hash, output_path) for photos to download
    has_updates = False  # track whether any status was updated

    with open(photo_ids_file, 'r', encoding='utf-8') as f:
        for line in f:
            line = line.strip()
            if not line:
                continue

            parts = line.split('\t')
            if len(parts) < 2:
                continue

            photo_id = parts[0].strip()
            hash_value = parts[1].strip()
            status = parts[2].strip() if len(parts) >= 3 else None

            if not photo_id or not hash_value:
                continue

            output_path = photos_dir / f"{photo_id}.jpg"

            # Store the record
            photo_records.append({
                'photo_id': photo_id,
                'hash': hash_value,
                'status': status,
                'output_path': output_path
            })

            # Process based on status:
            # - "valid": nothing to do, skip
            if status == "valid":
                continue

            # - "success": try to load the file; if valid set to "valid", else set to "error"
            if status == "success":
                if validate_image_file(output_path):
                    photo_records[-1]['status'] = "valid"
                    has_updates = True
                else:
                    photo_records[-1]['status'] = "error"
                    has_updates = True
                    # Need to download again
                    tasks.append((photo_id, hash_value, output_path))
                continue

            # - "error" or None: try to download this photo again
            if status == "error" or status is None:
                tasks.append((photo_id, hash_value, output_path))

    return tasks, photo_records, has_updates


def save_photo_ids_file(photo_ids_file, photo_records):
    """Save updated photo records back to the photo_ids file."""
    with open(photo_ids_file, 'w', encoding='utf-8') as f:
        for record in photo_records:
            photo_id = record['photo_id']
            hash_value = record['hash']
            status = record['status']

            # Write the status column only when a status exists
            if status:
                f.write(f"{photo_id}\t{hash_value}\t{status}\n")
            else:
                f.write(f"{photo_id}\t{hash_value}\n")


def download_photos_for_uid(uid, photo_ids_file, photos_base_dir, max_workers):
    """Download all photos for a single user ID."""
    tasks, photo_records, has_updates = process_photo_ids_file(photo_ids_file, photos_base_dir, uid)

    # Map photo_id -> record for easy status updates
    photo_id_to_record = {record['photo_id']: record for record in photo_records}

    # Count initial statistics
    valid_count = sum(1 for r in photo_records if r['status'] == "valid")
    skipped_count = sum(1 for r in photo_records if r['status'] and r['status'] != "error")

    if not tasks:
        print(f"{uid}: No photos to download ({skipped_count} already processed, {valid_count} valid)")
        # Save the file if statuses were updated
        if has_updates:
            save_photo_ids_file(photo_ids_file, photo_records)
        return

    print(f"Processing {uid}: {len(tasks)} photos to download ({skipped_count} skipped, {valid_count} valid)")

    success_count = 0
    error_count = 0

    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = {
            executor.submit(download_photo, photo_id, hash_value, output_path, uid): (photo_id, hash_value)
            for photo_id, hash_value, output_path in tasks
        }

        for future in tqdm(as_completed(futures), total=len(futures), desc=f"Downloading {uid}"):
            photo_id, hash_value = futures[future]
            try:
                result_photo_id, status, error = future.result()

                # Update the record with the download status
                if result_photo_id in photo_id_to_record:
                    photo_id_to_record[result_photo_id]['status'] = status
                    has_updates = True

                if status == "success":
                    success_count += 1
                elif status == "error":
                    error_count += 1
                    if error:
                        print(f"\nError downloading {result_photo_id}: {error}")

                # Throttle: sleep for 2 seconds every 50 completed requests
                if (success_count + error_count) % 50 == 0:
                    time.sleep(2)

            except Exception as e:
                error_count += 1
                print(f"\nException for {photo_id}: {e}")

    # Save the updated photo_ids file only if statuses changed
    if has_updates:
        save_photo_ids_file(photo_ids_file, photo_records)

    print(f"Completed {uid}: {success_count} downloaded, {error_count} errors, {skipped_count} skipped, {valid_count} valid")


def clear_statuses(photo_ids_file):
    """Clear all statuses from a photo_ids file and save it."""
    photo_records = []

    # Read all photo records, skipping blank or malformed lines
    with open(photo_ids_file, 'r', encoding='utf-8') as f:
        for line in f:
            line = line.strip()
            if not line:
                continue

            parts = line.split('\t')
            if len(parts) < 2:
                continue

            photo_id = parts[0].strip()
            hash_value = parts[1].strip()

            # Clear status (set to None)
            photo_records.append({
                'photo_id': photo_id,
                'hash': hash_value,
                'status': None
            })

    # Save with cleared statuses
    save_photo_ids_file(photo_ids_file, photo_records)


def main():
    parser = argparse.ArgumentParser(description="Fetch photos of selected users from YFCC100M")
    parser.add_argument(
        "--photo-ids-path",
        type=str,
        default="photo_ids",
        help="Path to the photo_ids directory"
    )
    parser.add_argument(
        "--images-path",
        type=str,
        default="images",
        help="Path to the output images directory"
    )
    parser.add_argument(
        "--max-workers",
        type=int,
        default=16,
        help="Maximum number of worker threads for downloading"
    )
    parser.add_argument(
        "--clear",
        action="store_true",
        help="Clear the downloaded statuses and exit"
    )

    args = parser.parse_args()

    photo_ids_path = Path(args.photo_ids_path)
    images_path = Path(args.images_path)
    max_workers = args.max_workers

    # Get all photo_ids files
    photo_ids_files = sorted(photo_ids_path.glob("*.txt"))

    if not photo_ids_files:
        print(f"No photo_ids files found in {photo_ids_path}")
        return

    # If --clear is set, clear statuses and exit
    if args.clear:
        print(f"Clearing statuses for {len(photo_ids_files)} photo_ids files...")
        for photo_ids_file in photo_ids_files:
            uid = photo_ids_file.stem
            print(f"Clearing statuses for {uid}...")
            clear_statuses(photo_ids_file)
        print("All statuses cleared!")
        return

    print(f"Found {len(photo_ids_files)} photo_ids files to process")

    # Process each user's photo_ids file
    for photo_ids_file in photo_ids_files:
        uid = photo_ids_file.stem  # filename without extension
        download_photos_for_uid(uid, photo_ids_file, images_path, max_workers)

    print("All downloads completed!")


if __name__ == '__main__':
    main()
images.zip
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:648d788d398bb15393c1262cb4971d13cd0fbf489738ae2c43a825a05e9765d0
size 14434611540
metadata/{user_id}.jsonl (46 files, listed at the top of this diff)
ADDED
The diffs for these files are too large to render.