---
license: mit
language:
  - en
pretty_name: Hidden-Objects
size_categories:
  - 10K<n<100K
task_categories:
  - object-detection
  - visual-question-answering
tags:
  - computer-vision
  - diffusion-priors
  - spatial-reasoning
configs:
  - config_name: default
    data_files:
      - split: train
        path: ho_irany_train_rel_full.jsonl
      - split: test
        path: ho_irany_test_rel_full.jsonl
---

# Dataset Card: Hidden-Objects

## 📌 Overview

Hidden-Objects provides image-object pairs with localized bounding boxes, designed to help models learn realistic object placement and spatial relationships within background scenes.

## 📊 Data Schema

Each entry consists of a foreground object (fg_class) to be inserted within a background image (bg_path).

| Field | Type | Description |
| --- | --- | --- |
| `entry_id` | int64 | Unique identifier for the data row. |
| `bg_path` | string | Relative file path to the background image in Places365. |
| `fg_class` | string | Category name of the foreground object (e.g., `"bottle"`). |
| `bbox` | list | Bounding box coordinates `[x, y, w, h]`, normalized to 0–1. |
| `label` | int64 | 1 for a positive annotation, 0 for a negative one. |
| `image_reward_score` | float64 | Ranker score from ImageReward. |
| `confidence` | float64 | Detection confidence score from GroundedDINO. |
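When consuming raw JSONL rows directly, the schema above can be enforced with a small sanity check. The sketch below is our own helper (`validate_entry` is not part of the dataset tooling) and only covers the geometric constraints:

```python
def validate_entry(row: dict) -> bool:
    """Lightweight sanity check for one row against the schema above."""
    if row["label"] not in (0, 1):
        return False
    bbox = row["bbox"]
    if len(bbox) != 4 or not all(0.0 <= v <= 1.0 for v in bbox):
        return False
    x, y, w, h = bbox
    # A normalized [x, y, w, h] box must stay inside the unit square.
    return x + w <= 1.0 and y + h <= 1.0
```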

πŸ“ Preprocessing & Bounding Boxes

The bounding boxes are defined relative to a 512x512 center-cropped version of the background image.

  1. Resize the shortest side of the original image to 512px.
  2. Perform a center crop to reach 512x512.
  3. The upper-left corner of the crop is (0, 0).
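The steps above can be reproduced with PIL. This is a minimal sketch; bicubic resampling is an assumption, as the card does not specify a filter:

```python
from PIL import Image

def resize_center_crop(img: Image.Image, size: int = 512) -> Image.Image:
    # Step 1: resize the shortest side to `size`, keeping aspect ratio.
    w, h = img.size
    scale = size / min(w, h)
    img = img.resize((round(w * scale), round(h * scale)), Image.BICUBIC)
    # Step 2: center-crop to size x size; the crop's upper-left corner is (0, 0).
    w, h = img.size
    left, top = (w - size) // 2, (h - size) // 2
    return img.crop((left, top, left + size, top + size))
```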

Coordinate Conversion:

```python
# Convert normalized [x, y, w, h] to 512x512 pixel coordinates
px_x, px_y = bbox[0] * 512, bbox[1] * 512
px_w, px_h = bbox[2] * 512, bbox[3] * 512
```
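Detection toolchains (e.g., torchvision box ops) usually expect corner-format boxes instead. A small converter, where the helper name `bbox_to_xyxy` is our own:

```python
def bbox_to_xyxy(bbox, size=512):
    """Convert normalized [x, y, w, h] to pixel [x1, y1, x2, y2] on the size x size crop."""
    x, y, w, h = bbox
    return [x * size, y * size, (x + w) * size, (y + h) * size]
```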

## Example Setup

```bash
huggingface-cli login
```

### Download Background Images from Places365


```python
import torchvision.datasets as datasets

root = "INSERT_YOUR_PATH"
dataset = datasets.Places365(root=root, split='train-standard', small=False, download=True)
print(f"Downloaded {len(dataset)} images to {root}")
```

### Load as JSONL

```python
from datasets import load_dataset

dataset = load_dataset("marco-schouten/hidden-objects", streaming=True)
first_row = next(iter(dataset["train"]))
print(first_row)
```

Sample:

```json
{
  "entry_id": 1,
  "bg_path": "data_large_standard/k/kitchen/00002986.jpg",
  "fg_class": "bottle",
  "bbox": [0.542969, 0.591797, 0.0625, 0.152344],
  "label": 1,
  "image_reward_score": -1.542461,
  "confidence": 0.388181,
  "source": "h"
}
```

### Load for Training / Evaluating

```python
import os
import torch
from PIL import Image
from torch.utils.data import Dataset, DataLoader
from datasets import load_dataset
import torchvision.transforms as T

class HiddenObjectsDataset(Dataset):
    def __init__(self, places_root, split="train"):
        self.hf_data = load_dataset("marco-schouten/hidden-objects", split=split)
        self.places_root = places_root
        self.transform = T.Compose([
            T.Resize(512),
            T.CenterCrop(512),
            T.ToTensor()
        ])

    def __len__(self):
        return len(self.hf_data)

    def __getitem__(self, idx):
        item = self.hf_data[idx]
        img_path = os.path.join(self.places_root, item['bg_path'])
        image = self.transform(Image.open(img_path).convert("RGB"))
        bbox = torch.tensor(item['bbox']) * 512
        return {
            "image": image,
            "bbox": bbox,
            "label": item['label'],
            "class": item['fg_class'],
            "image_reward_score": item['image_reward_score'],
            "confidence": item['confidence']
        }

# Usage
# dataset = HiddenObjectsDataset(places_root="./data/places365")
```
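For the evaluation side, comparing a predicted placement to the annotated box usually comes down to IoU. A plain-Python helper over `[x, y, w, h]` pixel boxes (our own utility, not part of the dataset):

```python
def iou_xywh(a, b):
    """Intersection-over-union of two [x, y, w, h] boxes."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    # Overlap extents, clamped at zero for disjoint boxes.
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0
```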

## Load in Streaming Mode


### Streaming Loader (Best for Quick Start)

```python
from datasets import load_dataset
from torch.utils.data import DataLoader
import torchvision.transforms as T
import os
from PIL import Image
import torch

def get_streaming_loader(places_root, batch_size=32):
    dataset = load_dataset("marco-schouten/hidden-objects", split="train", streaming=True)
    preprocess = T.Compose([T.Resize(512), T.CenterCrop(512), T.ToTensor()])

    def collate_fn(batch):
        images, bboxes, ids = [], [], []
        for item in batch:
            path = os.path.join(places_root, item['bg_path'])
            try:
                img = Image.open(path).convert("RGB")
                images.append(preprocess(img))
                bboxes.append(torch.tensor(item['bbox']) * 512)
                ids.append(item['entry_id'])
            except FileNotFoundError:
                # Skip rows whose background image is missing locally.
                continue
        return {
            "entry_id": ids,
            "pixel_values": torch.stack(images),
            "bboxes": torch.stack(bboxes)
        }
    return DataLoader(dataset, batch_size=batch_size, collate_fn=collate_fn)

# Usage
# loader = get_streaming_loader(places_root="./data/places365")
```