
# LookBench HuggingFace Release Guide

This guide explains how to prepare and upload the LookBench dataset to Hugging Face.

## Directory Structure

```
huggingface_release/
├── README.md                    # Dataset card (displayed on HF)
├── LookBench.py                 # Custom dataset loading script
├── prepare_release.py           # Convert parquet to HF format
├── upload_to_hf.py              # Upload to HuggingFace Hub
├── create_sample_images.py      # Generate preview images
├── assets/                      # Images for dataset card
│   ├── lookbench_banner.png
│   ├── evaluation_dimensions.png
│   └── samples_*.png
└── v20251201/                   # Versioned data
    ├── aigen_streetlook/
    │   ├── query.parquet
    │   └── gallery.parquet
    ├── aigen_studio/
    │   ├── query.parquet
    │   └── gallery.parquet
    ├── real_streetlook/
    │   ├── query.parquet
    │   └── gallery.parquet
    ├── real_studio_flat/
    │   ├── query.parquet
    │   └── gallery.parquet
    ├── noise/
    │   └── noise_*.parquet
    └── version_info.json
```
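
The contents of `version_info.json` are not specified in this guide; as a sketch, it might record something like the following (all field names here are assumptions, not the actual schema):

```json
{
  "version": "v20251201",
  "release_date": "2025-12-01",
  "tasks": ["aigen_streetlook", "aigen_studio", "real_streetlook", "real_studio_flat"],
  "notes": "Initial release"
}
```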

## Release Steps

### Step 1: Prepare the Data

```bash
cd huggingface_release

# Convert existing parquet files to HF format
python prepare_release.py \
    --input_dir ../scripts/data/parquet_files_v5 \
    --output_dir v20251201 \
    --version v20251201
```

### Step 2: Create Assets

```bash
# Generate banner and sample images
python create_sample_images.py \
    --input_dir ../scripts/data/parquet_files_v5 \
    --output_dir assets
```

### Step 3: Upload to HuggingFace

```bash
# Set your HuggingFace token
export HF_TOKEN="your_token_here"

# Upload using the folder method (preserves directory structure)
python upload_to_hf.py \
    --repo_id your-org/LookBench \
    --data_dir v20251201 \
    --method api

# Or upload using the datasets library (creates proper configs)
python upload_to_hf.py \
    --repo_id your-org/LookBench \
    --data_dir v20251201 \
    --method datasets
```
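
`upload_to_hf.py` itself is not shown in this guide. For the `--method api` path, here is a minimal sketch of what it plausibly does with the `huggingface_hub` client (helper names are mine, not the script's; the import is deferred so the repo-id check works even without the library installed):

```python
import os
import re


def check_repo_id(repo_id: str) -> bool:
    # HF repo ids look like "namespace/name"
    return re.fullmatch(r"[\w.-]+/[\w.-]+", repo_id) is not None


def upload_data_dir(repo_id: str, data_dir: str) -> None:
    # Deferred import: only needed when actually uploading
    from huggingface_hub import HfApi

    if not check_repo_id(repo_id):
        raise ValueError(f"invalid repo id: {repo_id!r}")
    api = HfApi(token=os.environ["HF_TOKEN"])
    api.create_repo(repo_id, repo_type="dataset", exist_ok=True)
    # Mirror the local versioned folder into the dataset repo,
    # keeping the on-disk directory structure
    api.upload_folder(
        repo_id=repo_id,
        repo_type="dataset",
        folder_path=data_dir,
        path_in_repo=data_dir,  # e.g. "v20251201/"
    )


if __name__ == "__main__":
    upload_data_dir("your-org/LookBench", "v20251201")
```

Uploading the folder under `path_in_repo=data_dir` is what keeps each release in its own `vYYYYMMDD/` prefix on the Hub.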

## Versioning Strategy

LookBench uses date-based versioning: `vYYYYMMDD`.

| Version   | Date              | Changes         |
|-----------|-------------------|-----------------|
| v20251201 | Dec 2025          | Initial release |
| v20260301 | Mar 2026 (future) | Updated gallery |

To add a new version:

  1. Create a new versioned directory (e.g., `v20260301/`)
  2. Update `README.md` with the new version info
  3. Add new config entries in the YAML frontmatter
  4. Keep old versions for reproducibility
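
The `vYYYYMMDD` scheme is easy to generate and validate with the standard library; a small sketch (helper names are mine):

```python
from datetime import date, datetime


def make_version(d: date) -> str:
    # "v" + YYYYMMDD, e.g. date(2025, 12, 1) -> "v20251201"
    return "v" + d.strftime("%Y%m%d")


def parse_version(version: str) -> date:
    # Raises ValueError for anything that is not a valid vYYYYMMDD tag
    if not version.startswith("v"):
        raise ValueError(f"expected vYYYYMMDD, got {version!r}")
    return datetime.strptime(version[1:], "%Y%m%d").date()
```

`parse_version` doubles as a sanity check before creating a new versioned directory.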

## Dataset Card (README.md)

The `README.md` serves as the dataset card displayed on HuggingFace. Key sections:

  1. **YAML Frontmatter**: Metadata, configs, features
  2. **Overview**: Description and key features
  3. **Dataset Structure**: File organization
  4. **Data Schema**: Column descriptions
  5. **Statistics**: Sample counts and distributions
  6. **Evaluation Metrics**: How to evaluate
  7. **Leaderboard**: Model rankings
  8. **Citation**: BibTeX entry
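
For reference, the frontmatter for one task config might look roughly like this (the license and exact paths are illustrative, not taken from the actual card):

```yaml
---
license: apache-2.0
configs:
  - config_name: aigen_streetlook
    data_files:
      - split: query
        path: v20251201/aigen_streetlook/query.parquet
      - split: gallery
        path: v20251201/aigen_streetlook/gallery.parquet
---
```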

## Image Display on HuggingFace

For images to display in the HuggingFace viewer:

  1. **Parquet format**: Store images as `{'bytes': image_bytes, 'path': None}`
  2. **Features definition**: Use the `Image()` type in the schema
  3. **Dataset card images**: Reference with `![alt](assets/image.png)`

### Parquet Image Format

```python
# Correct format for HF image display
record = {
    'image': {'bytes': image_bytes, 'path': None},
    'category': 'bag',
    ...
}
```
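
A small helper can build such records; a sketch (the function name is mine, and the matching `datasets` schema would declare `'image': Image()`):

```python
def to_hf_record(image_bytes: bytes, category: str) -> dict:
    # 'path': None signals that the image bytes are embedded inline,
    # which is what the HF viewer needs to decode and display them
    return {
        "image": {"bytes": image_bytes, "path": None},
        "category": category,
    }
```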

### Dataset Card Images

```markdown
<!-- In README.md -->
![Banner](assets/lookbench_banner.png)
![Samples](assets/samples_aigen_streetlook.png)
```

## Loading the Dataset

After upload, users can load the dataset:

```python
from datasets import load_dataset

# Load a specific task
ds = load_dataset("your-org/LookBench", "aigen_streetlook")

# Access splits
query = ds["query"]
gallery = ds["gallery"]

# View a sample
print(query[0])
query[0]["image"]  # Returns a PIL Image
```

## Updating the Leaderboard

To update the leaderboard:

  1. Run evaluation on your model
  2. Submit results via GitHub issue or PR
  3. Update the leaderboard table in `README.md`

## Troubleshooting

### Images not displaying

- Ensure images are stored in the `{'bytes': ..., 'path': ...}` format
- Check that `Features` specifies the `Image()` type
- Verify the parquet was saved with the correct schema

### Upload fails

- Check that `HF_TOKEN` is valid and has write access
- Ensure the repo exists and you have permissions
- Try uploading in smaller batches

### Dataset won't load

- Verify parquet files are valid: `pd.read_parquet(file)`
- Check that config names match the directory structure
- Ensure all required columns are present
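
Besides `pd.read_parquet`, a dependency-free first check is possible: a structurally valid parquet file starts and ends with the 4-byte magic `PAR1`. A sketch (function names are mine):

```python
from pathlib import Path

MAGIC = b"PAR1"


def looks_like_parquet(path) -> bool:
    # Valid parquet files begin and end with the b"PAR1" magic bytes
    data = Path(path).read_bytes()
    return len(data) > 8 and data[:4] == MAGIC and data[-4:] == MAGIC


def find_bad_parquet(release_dir) -> list:
    # Flag any *.parquet file under the versioned dir that fails the check
    return [str(p) for p in sorted(Path(release_dir).rglob("*.parquet"))
            if not looks_like_parquet(p)]
```

This catches truncated or corrupted files quickly; files that pass should still be opened with `pd.read_parquet` to verify the schema.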

## Contact

For issues or questions, open a GitHub issue on the LookBench repository.