{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "362e9f36-2300-4851-8f49-b952e62a2c78",
   "metadata": {},
   "source": [
    "<img src=\"./figs/IOAI-Logo.png\" alt=\"IOAI Logo\" width=\"200\" height=\"auto\">\n",
    "\n",
    "[IOAI 2025 (Beijing, China), Individual Contest](https://ioai-official.org/china-2025)\n",
    "\n",
    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/IOAI-official/IOAI-2025/blob/main/Individual-Contest/Pixel/Pixel.ipynb)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4509a190",
   "metadata": {},
   "source": [
    "# Pixel Efficiency\n",
    "\n",
    "## 1. Problem Description\n",
    "\n",
    "You are a student in wildlife biology, working on a groundbreaking research project at the Starr Park Research Center. Your team has deployed thousands of camera traps across remote wilderness areas to monitor endangered species populations. However, the satellite internet connections in these remote locations have extremely limited bandwidth. Your job is to write code that identifies the most critical pixels in each wildlife photograph so that only the essential visual information needs to be transmitted back to headquarters.\n",
    "\n",
    "<img src=\"./figs/Pixel Fig 1.png\" width=\"400\">\n",
    "\n",
    "## 2. Dataset\n",
    "\n",
    "The dataset consists of a training set and a test set. Datasets are loaded using `load_from_disk`, and are in the format of `datasets`. Test set is not visible to the contestants.\n",
    "\n",
    "In the dataset there are the following fields:\n",
    "\n",
    "- `image`: the image are RGB full color images in PIL format, the size of each image is (224, 224)\n",
    "- `name`: the animal species label\n",
    "- `idx`: unique identifiers used to track the records.\n",
    "\n",
    "1. **Training Set (`train_dataset` folder)**:\n",
    "    - The training set is used for training your models/ doing experimentations on and can be accessed and downloaded directly during the competition.\n",
    "    - There are 700 images in the training set.\n",
    "\n",
    "2. **Test Set (`test_dataset` folder)**: \n",
    "    - These follow the same format as the training set but do not contain the `name` field.\n",
    "    - There are 698 images in test set, which had been separated into 2 testing sets within the ratio of 3:7, i.e. 30% of the data would be used to calculate the Leaderboard A score, another 70% data would be used to calculate the Leaderboard B score.\n",
    "    - The testing set is used to calculate the Leaderboard A score and the Leaderboard B score and is not directly accessible during the competition. Contestants can access the result on Leaderboard A , but cannot access the result on Leaderboard B. The final score would be counted using Leaderboard B only. The subsets for Leaderboard A and Leaderboard B are completely distinct.\n",
    "    \n",
    "\n",
    "## 3. Task\n",
    "You are given a dataset of animal photographs and a CLIP model that can do a zero-shot classification of animal species. To conserve bandwidth, you need to retain at most **6.25%** of the pixels of each image, while keeping classification accuracy as high as possible.\n",
    "\n",
    "More specifically, your task is to return **one rectangle mask** for each image, which contain a single rectangular area indicating the area to keep. Each mask is defined by two coordinate tuples: one for the top-left corner and one for the bottom-right corner of the rectangle. Below is a visualization of what the image would look like after applying a rectangular mask using the process from the baseline:\n",
    "\n",
    "<img src=\"./figs/Pixel Fig 2.png\" width=\"400\">\n",
    "\n",
    "**Coordinate Convention:**\n",
    "- Top-left corner coordinates are **inclusive** (the pixel at this position is included in the mask) \n",
    "- Bottom-right corner coordinates are **exclusive** (the pixel at this position is NOT included in the mask)\n",
    "\n",
    "For example, if you specify coordinates `((10, 20), (15, 25))`, the mask will cover pixels from row 10 to 14 (inclusive) and column 20 to 24 (inclusive), for a total area of 25 pixels.\n",
    "\n",
    "As an illustration, if an image size is 3x3 and we wanted to keep only the top-right pixel using coordinates `((0, 2), (1, 3))`, the resulting binary mask would be:\n",
    "\n",
    "```\n",
    "[[0, 0, 1],\n",
    " [0, 0, 0],\n",
    " [0, 0, 0]]\n",
    "```\n",
    "\n",
    "Below is a summary of the requirements for your masks:\n",
    "\n",
    "- Return one rectangle mask defined by coordinate tuples: `((top, left), (bottom, right))`\n",
    "- Top-left corner coordinates are inclusive, bottom-right corner coordinates are exclusive\n",
    "- The rectangle mask should cover at most *6.25%* of the original pixels (minimum 93.75% reduction of the original pixels)\n",
    "- All images are of size (224, 224), so coordinate values should be within the range [0, 224]\n",
    "\n",
    "\n",
    "Images would be masked using the mask you created, outside the masked rectangle, all pixels outside the masked rectangle will be replaced with RGB(0, 0, 0) (black) values. The masked image will be then passed through the CLIP model during evaluation, and your task is to keep the classification accuracy of the CLIP model on these masked images as high as possible. **An additional `other` class would be added into the classes for classification** to ensure that your masked image retains actual useful information for the researchers back at Starr Park Headquarters. So for example, if your image doesn't contain any animal information, the model will predict the `others` class instead of predicting a random animal and having a chance of getting it correct.\n",
    "\n",
    "You need to work only with the provided CLIP model and dataset. As a reminder, CLIP generates representations for both text and image, and it can compute a similarity score between them. So if you have ten animal classes, CLIP can look at the provided image and decide which text (class) is closest to the image. \n",
    "\n",
    "To ensure that your solution would handle the traffic of images for the research center, your code should run in **UNDER 8 MINUTES for the 698 images in the test dataset**. It is recommended that you test your solution on the training set first, which contain 700 images, to understand how much time your solution takes (testing set would take slightly longer due to dataset loading).\n",
    "\n",
    "## 4. Submission\n",
    "\n",
    "Contestants need to submit a notebook file named `submission.ipynb`. The file should output a `.jsonl` file titled `submission.jsonl`, which contains all the generated masks for the dataset split. Each mask in the `submission.jsonl` file should be stored as a tuple of two coordinate tuples: `((top, left), (bottom, right))`, where the top-left corner is inclusive and the bottom-right corner is exclusive.\n",
    "\n",
    "Contestants don't need to separate test sets into Leaderboard A and Leaderboard B, the evaluation machine will read `submission.jsonl` and automatically calculate the scores for Leaderboard A and Leaderboard B based on the prediction results and true labels. \n",
    "\n",
    "The submission files must strictly follow the above format and naming; otherwise, the system will not be able to read them correctly. \n",
    "\n",
    "## 5. Score\n",
    "\n",
    "The evaluation metric will be **classification accuracy**, defined as the proportion of correctly predicted samples over the total number of evaluated samples.\n",
    "\n",
    "Your score is the zero-shot classification accuracy of CLIP on the masked test images. **If a submitted mask for an image is invalid (wrong shape, more than 6.25% pixels retained, etc.), that image is counted as incorrect. A sample script is provided to compute the training split score.**\n",
    "\n",
    "\n",
    "## 6. Baseline and Training Set\n",
    "\n",
    "- Below you can find the baseline solution.\n",
    "- The dataset is in `training_set` folder.\n",
    "- The highest score by the Scientific Committee for this task is 0.83 in Leader Board B, this score is used for score unification.\n",
    "- The baseline score by the Scientific Committee for this task is 0.19 in Leader Board B, this score is used for score unification."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9d1ad03b-ba1e-4c24-b866-fe6a138b58c9",
   "metadata": {},
   "outputs": [],
   "source": [
    "import random\n",
    "import numpy as np\n",
    "import torch\n",
    "\n",
    "seed = 42\n",
    "\n",
    "random.seed(seed)                  # Python built-in random\n",
    "np.random.seed(seed)               # NumPy\n",
    "torch.manual_seed(seed)            # PyTorch (CPU)\n",
    "torch.cuda.manual_seed(seed)       # PyTorch (single GPU)\n",
    "torch.cuda.manual_seed_all(seed)   # PyTorch (all GPUs)\n",
    "\n",
    "# Ensures deterministic behavior\n",
    "torch.backends.cudnn.deterministic = True\n",
    "torch.backends.cudnn.benchmark = False"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "af37a8ed",
   "metadata": {},
   "source": [
    "### Dependencies and Config Variables"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "23b68a41",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import matplotlib.pyplot as plt\n",
    "import numpy as np\n",
    "from collections import Counter\n",
    "from PIL import Image\n",
    "from tqdm import tqdm\n",
    "import glob\n",
    "import json\n",
    "import math\n",
    "import torch\n",
    "import matplotlib.pyplot as plt\n",
    "from datasets import load_dataset, load_from_disk\n",
    "from transformers import CLIPProcessor, CLIPModel\n",
    "from PIL import Image\n",
    "from tqdm.auto import tqdm \n",
    "\n",
    "TRAIN_PATH = \"./training_set/\"\n",
    "# The training set is deployed automatically in the testing machine. \n",
    "# You notebook can access the TRAIN_PATH even if you do not mount it along with notebook.\n",
    "\n",
    "MODEL_PATH = \"./clip-vit-large-patch14\"\n",
    "# The clip model is deployed automatically in the testing machine. \n",
    "# You notebook can access the MODEL_PATH even if you do not mount it along with notebook.\n",
    "\n",
    "DATASET_PATH = TRAIN_PATH + \"train_dataset\"\n",
    "SPLIT = \"train\"\n",
    "DEVICE = \"cuda\" if torch.cuda.is_available() else \"cpu\"\n",
    "BACKGROUND_CLASS = \"other\" # Class used to catch masked images that have no useful information, preventing completely off masks from \"guessing\" the answer from the 10 classes\n",
    "\n",
    "# Image and Masking Configuration\n",
    "HEIGHT = 224\n",
    "WIDTH = 224\n",
    "RETAIN_RATIO = 0.0625  # Retain 6.25% of pixels\n",
    "MEAN_COLOR = (0, 0, 0)  # RGB mean values for masked out areas\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b888c040",
   "metadata": {},
   "source": [
    "### Dataset loading\n",
    "\n",
    "Let's first load the dataset in and see what's in it:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "857ab7ff",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load the dataset\n",
    "print(\"Loading dataset...\")\n",
    "dataset_whole = load_from_disk(DATASET_PATH)\n",
    "dataset = dataset_whole[SPLIT]\n",
    "\n",
    "# Print first item to check available fields\n",
    "print(\"\\nFirst item keys:\")\n",
    "print(dataset_whole[SPLIT][0].keys())\n",
    "\n",
    "# Show basic dataset statistics without converting fields yet\n",
    "print(f\"\\nDataset loaded successfully!\")\n",
    "print(f\"Total samples: {len(dataset)}\")\n",
    "\n",
    "print(f\"\\nSample item structure:\")\n",
    "sample_item = dataset[0]\n",
    "print(f\"  Keys: {list(sample_item.keys())}\")\n",
    "print(f\"  Image type: {type(sample_item['image'])}\")\n",
    "print(f\"  Image size: {sample_item['image'].size}\")\n",
    "print(f\"  Index: {sample_item['idx']}\")\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "37c2dbfa",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Visualize first 10 samples\n",
    "fig, axes = plt.subplots(2, 5, figsize=(15, 8))\n",
    "axes = axes.flatten()\n",
    "\n",
    "print(\"Visualizing first 10 samples...\")\n",
    "\n",
    "for i in range(10):\n",
    "    sample = dataset[i]\n",
    "    image = sample['image']\n",
    "    label = sample['name']\n",
    "    \n",
    "    axes[i].imshow(image)\n",
    "    axes[i].set_title(f\"{label}\\n\", fontsize=12)\n",
    "    axes[i].axis('off')\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.suptitle('First 10 Samples from Dataset', fontsize=16, y=1.02)\n",
    "plt.show()\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "24cee96a",
   "metadata": {},
   "source": [
    "### Model\n",
    "\n",
    "Now let's load the model and see some predictions:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fd92b376",
   "metadata": {},
   "outputs": [],
   "source": [
    "print(f\"Loading CLIP model and processor: {MODEL_PATH}...\")\n",
    "model = CLIPModel.from_pretrained(MODEL_PATH).to(DEVICE)\n",
    "processor = CLIPProcessor.from_pretrained(MODEL_PATH)\n",
    "print(\"Model and processor loaded successfully.\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a2e25924",
   "metadata": {},
   "outputs": [],
   "source": [
    "image = dataset[0]['image']\n",
    "# Visualize the image with its true label\n",
    "plt.figure(figsize=(8, 6))\n",
    "plt.imshow(image)\n",
    "plt.title(f\"Sample Image\\nTrue Label: {dataset[0]['name']}\", fontsize=14)\n",
    "plt.axis('off')\n",
    "plt.show()\n",
    "\n",
    "\n",
    "labels = sorted(list(set(dataset['name']))) + [BACKGROUND_CLASS]\n",
    "text_inputs = processor(text=labels, return_tensors=\"pt\", padding=True).to(DEVICE)\n",
    "image_processed = processor(images=image, return_tensors=\"pt\").to(DEVICE)\n",
    "pixel_values = image_processed['pixel_values']\n",
    "outputs_full = model(pixel_values=pixel_values, **text_inputs)\n",
    "logits_full = outputs_full.logits_per_image  # Shape: (1, num_styles)\n",
    "predicted_index_full = logits_full.argmax(dim=-1).item()\n",
    "\n",
    "print(f\"Predicted label: {labels[predicted_index_full]}\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "54467f0c",
   "metadata": {},
   "source": [
    "### Baseline: A trivial masking method\n",
    "\n",
    "We will now be implementing a trivial masking solution, one that randomly masks out 90% of the pixels."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "0ee24b90",
   "metadata": {},
   "outputs": [],
   "source": [
    "def generate_center_crop_coordinates(image):\n",
    "    \"\"\"\n",
    "    Generate coordinates for a center crop mask.\n",
    "    \n",
    "    Returns:\n",
    "        tuple: ((top, left), (bottom, right)) coordinates for the crop\n",
    "    \"\"\"\n",
    "    H, W = image.size\n",
    "    total_px = H * W\n",
    "    k = int(total_px * RETAIN_RATIO)\n",
    "    \n",
    "    # Calculate side length of the square crop\n",
    "    side_length = int(np.sqrt(k))\n",
    "    \n",
    "    # Calculate center coordinates\n",
    "    center_h, center_w = H // 2, W // 2\n",
    "    \n",
    "    # Calculate crop boundaries\n",
    "    half_side = side_length // 2\n",
    "    top = max(0, center_h - half_side)\n",
    "    left = max(0, center_w - half_side)\n",
    "    bottom = min(H, top + side_length)\n",
    "    right = min(W, left + side_length)\n",
    "    \n",
    "    return ((top, left), (bottom, right))\n",
    "\n",
    "def generate_mask_from_coordinates(image, coordinates):\n",
    "    \"\"\"\n",
    "    Generate a binary mask from crop coordinates.\n",
    "    \n",
    "    Parameters:\n",
    "        image: PIL Image\n",
    "        coordinates: tuple of ((top, left), (bottom, right))\n",
    "    \n",
    "    Returns:\n",
    "        numpy array: Binary mask with 1s in the crop area\n",
    "    \"\"\"\n",
    "    H, W = image.size\n",
    "    mask = np.zeros((H, W), dtype=np.int8)\n",
    "    \n",
    "    (top, left), (bottom, right) = coordinates\n",
    "    mask[top:bottom, left:right] = 1\n",
    "    \n",
    "    return mask"
   ]
  },
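  {
   "cell_type": "markdown",
   "id": "a0b1c2d3-usage-note",
   "metadata": {},
   "source": [
    "On a 224×224 image, `generate_center_crop_coordinates` returns `((84, 84), (140, 140))`: a centered 56×56 square that uses exactly the 3136-pixel budget."
   ]
  },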
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e87f8f85",
   "metadata": {},
   "outputs": [],
   "source": [
    "def apply_mask_with_mean(image, mask, mean_rgb=MEAN_COLOR):\n",
    "    \"\"\"\n",
    "    Apply arbitrary binary mask to image, replacing masked areas with mean values\n",
    "\n",
    "    Parameters:\n",
    "    - image: PIL Image (224x224)\n",
    "    - mask: Binary numpy array or PIL Image (224x224) where 0 is the area to drop and 1 is the area to keep\n",
    "    - mean_rgb: RGB mean values to use (default: from config)\n",
    "\n",
    "    Returns: Modified PIL Image\n",
    "    \"\"\"\n",
    "    # Convert images to numpy arrays\n",
    "    img_array = np.array(image).copy()\n",
    "\n",
    "    # Ensure mask is numpy array\n",
    "    if isinstance(mask, Image.Image):\n",
    "        mask_array = np.array(mask.convert('L')) > 127  # Convert to binary\n",
    "    else:\n",
    "        mask_array = mask > 0\n",
    "\n",
    "    # Reshape mask for broadcasting with RGB\n",
    "    mask_3d = np.stack([mask_array] * 3, axis=2)\n",
    "\n",
    "    # Convert mean values to 0-255 range\n",
    "    mean_values = np.array([int(m * 255) for m in mean_rgb])\n",
    "    # Apply mask - replace areas where mask is 0 (drop) with mean values, keep areas where mask is 1\n",
    "    img_array = np.where(mask_3d, img_array, mean_values.reshape(1, 1, 3))\n",
    "\n",
    "    return Image.fromarray(img_array.astype(np.uint8))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a84ed28b",
   "metadata": {},
   "outputs": [],
   "source": [
    "image = dataset[0]['image']\n",
    "# Visualize the image with its true label\n",
    "plt.figure(figsize=(8, 6))\n",
    "plt.imshow(image)\n",
    "plt.title(f\"Sample Image\\nTrue Label: {dataset[0]['name']}\", fontsize=14)\n",
    "plt.axis('off')\n",
    "plt.show()\n",
    "\n",
    "\n",
    "labels = sorted(list(set(dataset['name']))) + [BACKGROUND_CLASS]\n",
    "text_inputs = processor(text=labels, return_tensors=\"pt\", padding=True).to(DEVICE)\n",
    "\n",
    "mask = generate_mask_from_coordinates(image, generate_center_crop_coordinates(image))\n",
    "image_masked = apply_mask_with_mean(image, mask)\n",
    "\n",
    "plt.figure(figsize=(8, 6))\n",
    "plt.imshow(image_masked)\n",
    "plt.title(f\"Masked Image\\nTrue Label: {dataset[0]['name']}\", fontsize=14)\n",
    "plt.axis('off')\n",
    "plt.show()\n",
    "\n",
    "image_processed = processor(images=image_masked, return_tensors=\"pt\").to(DEVICE)\n",
    "pixel_values = image_processed['pixel_values']\n",
    "outputs_full = model(pixel_values=pixel_values, **text_inputs)\n",
    "logits_full = outputs_full.logits_per_image  # Shape: (1, num_styles)\n",
    "predicted_index_full = logits_full.argmax(dim=-1).item()\n",
    "\n",
    "print(f\"Predicted label: {labels[predicted_index_full]}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d6987536",
   "metadata": {},
   "source": [
    "### Exporting the masks\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8aa65a80",
   "metadata": {},
   "outputs": [],
   "source": [
    "#DATA_PATH is the secret environment variable to point the address of the validation set and test set on the testing machine. \n",
    "#Contestants cannot access this address locally.\n",
    "import os\n",
    "if os.environ.get('DATA_PATH'):\n",
    "    TEST_PATH = os.environ.get(\"DATA_PATH\") + \"/\" \n",
    "else:\n",
    "    TEST_PATH = \"\"  # Fallback for local testing\n",
    "\n",
    "dataset = load_from_disk(TEST_PATH + \"test_dataset\")\n",
    "split = \"test\"\n",
    "dataset = dataset[split]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5cae01f0",
   "metadata": {},
   "outputs": [],
   "source": [
    "## Exporting results and validating on full dataset\n",
    "RETAIN_RATIO = 0.0625\n",
    "\n",
    "masks = {}\n",
    "for item in tqdm(dataset):\n",
    "    image = item['image']\n",
    "\n",
    "    ## you should replace mask generation with your function\n",
    "    coordinates = generate_center_crop_coordinates(image)\n",
    "    \n",
    "    # don't need to change below, it's just saving to file\n",
    "    idx = item['idx']\n",
    "    # For validation, we still need to generate the full mask\n",
    "    mask = generate_mask_from_coordinates(image, coordinates)\n",
    "    assert mask.shape == (224, 224), \"Mask should be 224x224\"\n",
    "    assert mask.sum() <= RETAIN_RATIO * 224 * 224, \"You should leave only 6.25% of pixels\"\n",
    "    \n",
    "    # Save only the coordinates (topleft, bottomright) instead of the full mask\n",
    "    masks[idx] = coordinates\n",
    "\n",
    "# Save as JSONL (one JSON object per line) - much safer than pickle\n",
    "with open('submission.jsonl', 'w') as f:\n",
    "    for idx, coordinates in masks.items():\n",
    "        json.dump({\"idx\": idx, \"coordinates\": coordinates}, f)\n",
    "        f.write('\\n')\n",
    "\n",
    "print(\"Masks saved to masks.jsonl\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ed60e044",
   "metadata": {},
   "source": [
    "### Validation"
   ]
  },
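  {
   "cell_type": "markdown",
   "id": "e4f5a6b7-validation-note",
   "metadata": {},
   "source": [
    "Note: the commented-out sample script below needs the `name` field, which only the training split provides, so point `dataset` back at the `train_dataset` split before running it."
   ]
  },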
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3d1ef680",
   "metadata": {},
   "outputs": [],
   "source": [
    "# # Validation code for generated masks\n",
    "\n",
    "# def check_validity(coordinates):\n",
    "#     \"\"\"\n",
    "#     Check if coordinates are valid according to the requirements.\n",
    "#     Returns True if valid, False otherwise.\n",
    "#     \"\"\"\n",
    "#     try:\n",
    "#         # Check if coordinates is a tuple of two tuples\n",
    "#         if not isinstance(coordinates, tuple) or len(coordinates) != 2:\n",
    "#             print(f\"Coordinates is not a tuple of two tuples\")\n",
    "#             return False\n",
    "        \n",
    "#         (top, left), (bottom, right) = coordinates\n",
    "        \n",
    "#         # Check if all coordinates are integers\n",
    "#         if not all(isinstance(coord, (int, np.integer)) for coord in [top, left, bottom, right]):\n",
    "#             print(f\"Coordinates are not integers\")\n",
    "#             return False\n",
    "        \n",
    "#         # Check if coordinates are within image bounds\n",
    "#         # For slicing mask[top:bottom, left:right], valid ranges are:\n",
    "#         # top, left: [0, 223] (inclusive)\n",
    "#         # bottom, right: [1, 224] (inclusive) since we need top < bottom and left < right\n",
    "#         if not (0 <= top < 224 and 0 <= left < 224 and 1 <= bottom <= 224 and 1 <= right <= 224):\n",
    "#             print(f\"Coordinates are not within image bounds\")\n",
    "#             return False\n",
    "        \n",
    "#         # Check if top-left is actually top-left of bottom-right (proper ordering)\n",
    "#         if not (top < bottom and left < right):\n",
    "#             print(f\"Top-left is not actually top-left of bottom-right\")\n",
    "#             return False\n",
    "        \n",
    "#         # Check that the crop area doesn't exceed RETAIN_RATIO\n",
    "#         crop_area = (bottom - top) * (right - left)\n",
    "#         max_area = RETAIN_RATIO * 224 * 224\n",
    "#         if crop_area > max_area:\n",
    "#             print(f\"Crop area {crop_area} exceeds max area {max_area}\")\n",
    "#             return False\n",
    "        \n",
    "#         return True\n",
    "#     except Exception:\n",
    "#         return False\n",
    "\n",
    "\n",
    "\n",
    "\n",
    "# def validate_masks(masks):\n",
    "#     \"\"\"Simple validation of generated masks on the dataset\"\"\"\n",
    "#     correct = 0\n",
    "#     total = 0\n",
    "\n",
    "#     labels = sorted(list(set(dataset['name']))) + ['other']\n",
    "#     text_inputs = processor(text=labels, return_tensors=\"pt\", padding=True).to(DEVICE)\n",
    "\n",
    "#     with torch.no_grad():\n",
    "#         for item in tqdm(dataset, desc=\"Validating masks\"):\n",
    "#             idx = item['idx']\n",
    "#             if idx not in masks:\n",
    "#                 continue\n",
    "\n",
    "#             if not check_validity(masks[idx]):\n",
    "#                 continue\n",
    "                \n",
    "#             mask_coordinates = masks[idx]\n",
    "#             image = item['image']\n",
    "#             true_label = item['name']\n",
    "            \n",
    "#             # Apply mask to image\n",
    "#             if image.mode != \"RGB\":\n",
    "#                 image = image.convert(\"RGB\")\n",
    "            \n",
    "#             mask = generate_mask_from_coordinates(image, mask_coordinates)\n",
    "\n",
    "#             # Apply mask with mean color replacement\n",
    "#             img_array = np.array(image).copy()\n",
    "#             mask_array = mask > 0\n",
    "#             mask_3d = np.stack([mask_array] * 3, axis=2)\n",
    "#             mean_values = np.array([0, 0, 0])  # Black mean color\n",
    "#             img_array = np.where(mask_3d, img_array, mean_values.reshape(1, 1, 3))\n",
    "#             masked_image = Image.fromarray(img_array.astype(np.uint8))\n",
    "            \n",
    "#             # Get prediction on masked image\n",
    "#             image_processed = processor(images=masked_image, return_tensors=\"pt\").to(DEVICE)\n",
    "#             pixel_values = image_processed['pixel_values']\n",
    "#             outputs = model(pixel_values=pixel_values, **text_inputs)\n",
    "#             logits = outputs.logits_per_image\n",
    "#             predicted_idx = logits.argmax(dim=-1).item()\n",
    "#             predicted_label = labels[predicted_idx]\n",
    "            \n",
    "#             # Check if prediction is correct\n",
    "#             if predicted_label == true_label:\n",
    "#                 correct += 1\n",
    "#             total += 1\n",
    "    \n",
    "#     accuracy = correct / total if total > 0 else 0\n",
    "#     print(f\"Validation Results:\")\n",
    "#     print(f\"Total samples: {total}\")\n",
    "#     print(f\"Correct predictions: {correct}\")\n",
    "#     print(f\"Accuracy: {accuracy:.4f} ({accuracy*100:.2f}%)\")\n",
    "    \n",
    "#     return accuracy\n",
    "\n",
    "# # Run validation\n",
    "# accuracy = validate_masks(masks)\n"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}