---
license: mit
language:
- en
pretty_name: Hidden-Objects
size_categories:
- 10K<n<100K
task_categories:
- object-detection
- visual-question-answering
tags:
- computer-vision
- diffusion-priors
- spatial-reasoning
configs:
- config_name: default
data_files:
- split: train
path: "ho_irany_train_rel_full.jsonl"
- split: test
path: "ho_irany_test_rel_full.jsonl"
---
# Dataset Card: Hidden-Objects
## 📌 Overview
Hidden-Objects provides image-object pairs with localized bounding boxes, designed to help models learn realistic object placement and spatial relationships within background scenes.
* **Project Page:** [https://hidden-objects.github.io/](https://hidden-objects.github.io/)
* **Background Source:** [Places365 Dataset](http://places2.csail.mit.edu/download.html)
## 📊 Data Schema
Each entry consists of a foreground object (`fg_class`) to be inserted within a background image (`bg_path`).
| Field | Type | Description |
|:---|:---|:---|
| **entry_id** | `int64` | Unique identifier for the data row. |
| **bg_path** | `string` | Relative file path to the background image in Places365. |
| **fg_class** | `string` | Category name of the foreground object (e.g., "bottle"). |
| **bbox** | `list` | Bounding box coordinates `[x, y, w, h]` (normalized 0–1). |
| **label** | `int64` | 1 for positive annotation, 0 for negative. |
| **image_reward_score** | `float64` | Ranker score from ImageReward. |
| **confidence** | `float64` | Detection confidence score (GroundedDINO). |
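Many use cases only need the positive, high-confidence annotations. A minimal filtering sketch over rows with this schema (the `keep_entry` helper and the 0.35 threshold are illustrative, not part of the dataset):

```python
# Illustrative filter: keep positive annotations above a confidence threshold.
# The 0.35 cutoff is an arbitrary example value, not a recommendation.
def keep_entry(entry, min_confidence=0.35):
    return entry["label"] == 1 and entry["confidence"] >= min_confidence

rows = [
    {"entry_id": 1, "label": 1, "confidence": 0.39},
    {"entry_id": 2, "label": 0, "confidence": 0.80},
    {"entry_id": 3, "label": 1, "confidence": 0.10},
]
kept = [r["entry_id"] for r in rows if keep_entry(r)]  # [1]
```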
---
## 📐 Preprocessing & Bounding Boxes
The bounding boxes are defined relative to a **512x512 center-cropped** version of the background image.
1. Resize the shortest side of the original image to **512px**.
2. Perform a **center crop** to reach 512x512.
3. The upper-left corner of the crop is `(0, 0)`.
**Coordinate Conversion:**
```python
# Convert normalized [x, y, w, h] to 512x512 pixel coordinates
px_x, px_y = bbox[0] * 512, bbox[1] * 512
px_w, px_h = bbox[2] * 512, bbox[3] * 512
```
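For drawing or overlap computations, corner coordinates are often more convenient than `[x, y, w, h]`. A small sketch, assuming `x, y` is the top-left corner as in the conversion above (`bbox_norm_to_xyxy` is a hypothetical helper, not part of the dataset code):

```python
def bbox_norm_to_xyxy(bbox, size=512):
    """Convert normalized [x, y, w, h] (top-left corner convention)
    to pixel corner coordinates [x0, y0, x1, y1] in the size x size crop."""
    x, y, w, h = bbox
    return [x * size, y * size, (x + w) * size, (y + h) * size]

print(bbox_norm_to_xyxy([0.5, 0.25, 0.0625, 0.125]))  # [256.0, 128.0, 288.0, 192.0]
```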
## Example Setup
Authenticate with the Hugging Face Hub first:
```shell
huggingface-cli login
```
### Download Background Images from Places
```python
import torchvision.datasets as datasets
root = "INSERT_YOUR_PATH"
# small=False downloads the full-resolution images (a very large download)
dataset = datasets.Places365(root=root, split='train-standard', small=False, download=True)
print(f"Downloaded {len(dataset)} images to {root}")
```
### Load as JSONL
```python
from datasets import load_dataset
dataset = load_dataset("marco-schouten/hidden-objects", streaming=True)
first_row = next(iter(dataset["train"]))
print(first_row)
```
Sample:
```json
{
"entry_id": 1,
"bg_path": "data_large_standard/k/kitchen/00002986.jpg",
"fg_class": "bottle",
"bbox": [0.542969, 0.591797, 0.0625, 0.152344],
"label": 1,
"image_reward_score": -1.542461,
"confidence": 0.388181,
"source": "h"
}
```
### Load for Training / Evaluation
```python
import os
import torch
from PIL import Image
from torch.utils.data import Dataset, DataLoader
from datasets import load_dataset
import torchvision.transforms as T
class HiddenObjectsDataset(Dataset):
def __init__(self, places_root, split="train"):
self.hf_data = load_dataset("marco-schouten/hidden-objects", split=split)
self.places_root = places_root
        # Match the preprocessing that defines the bounding boxes:
        # shortest side resized to 512, then a 512x512 center crop
        self.transform = T.Compose([
            T.Resize(512),
            T.CenterCrop(512),
            T.ToTensor()
        ])
def __len__(self):
return len(self.hf_data)
def __getitem__(self, idx):
item = self.hf_data[idx]
img_path = os.path.join(self.places_root, item['bg_path'])
image = self.transform(Image.open(img_path).convert("RGB"))
        # Denormalize bbox from [0, 1] to pixel coordinates in the 512x512 crop
        bbox = torch.tensor(item['bbox']) * 512
        return {"image": image, "bbox": bbox, "label": item['label'],
                "class": item['fg_class'],
                "image_reward_score": item['image_reward_score'],
                "confidence": item['confidence']}
# Usage
# dataset = HiddenObjectsDataset(places_root="./data/places365")
```
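For evaluating placements against the returned pixel-space `bbox` values, intersection-over-union is the standard overlap measure. A self-contained sketch in the same `[x, y, w, h]` convention (`iou_xywh` is an illustrative helper, not part of the dataset code):

```python
def iou_xywh(a, b):
    """Intersection-over-union of two boxes given as [x, y, w, h]."""
    ax0, ay0, ax1, ay1 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx0, by0, bx1, by1 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax1, bx1) - max(ax0, bx0))  # overlap width
    ih = max(0.0, min(ay1, by1) - max(ay0, by0))  # overlap height
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

print(iou_xywh([0, 0, 100, 100], [50, 0, 100, 100]))  # half-shifted boxes, ~0.333
```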
### Load in Streaming Mode
For a quick start without loading the full annotation file into memory, stream the split and batch it with a `collate_fn` that resolves background paths on the fly:
```python
from datasets import load_dataset
from torch.utils.data import DataLoader
import torchvision.transforms as T
import os
from PIL import Image
import torch
def get_streaming_loader(places_root, batch_size=32):
dataset = load_dataset("marco-schouten/hidden-objects", split="train", streaming=True)
preprocess = T.Compose([T.Resize(512), T.CenterCrop(512), T.ToTensor()])
def collate_fn(batch):
images, bboxes, ids = [], [], []
for item in batch:
path = os.path.join(places_root, item['bg_path'])
try:
img = Image.open(path).convert("RGB")
images.append(preprocess(img))
bboxes.append(torch.tensor(item['bbox']) * 512)
ids.append(item['entry_id'])
            except FileNotFoundError:
                # Skip entries whose background image is missing locally
                continue
return {
"entry_id": ids,
"pixel_values": torch.stack(images),
"bboxes": torch.stack(bboxes)
}
return DataLoader(dataset, batch_size=batch_size, collate_fn=collate_fn)
``` |