---
tags:
  - apex-studio
  - testing
  - generative-ai
  - text-to-image
  - image-to-video
  - upscaling
---

Apex Studio Test Suite (dataset)

This repository is a data-only test suite for the Apex Studio API:

  • Test definitions: JSON input payloads under image/, video/, upscalers/
  • Test assets: small input media under assets/ (images/videos/audio used by some tests)
  • Reference artifacts: example outputs under sample_outputs/ (useful for smoke-testing / sanity checks)
  • Runners: run_suite.py, run_one.py, check_missing_outputs.py (these require the Apex Studio API repo to actually execute inference)

What you need to actually run tests

This dataset does not contain model code/engines. To execute tests you need:

  • An Apex Studio checkout that contains apps/api/src/ and apps/api/manifest/
  • A working Python environment for Apex Studio (GPU recommended for most tests)

Dataset layout

apex-test-suite/
  assets/               # input images/videos/audio referenced by tests (use paths like "assets/foo.png")
  image/                # text-to-image / edit tests (JSON)
  video/                # text-to-video / image-to-video tests (JSON)
  upscalers/            # video upscaler tests (JSON)
  sample_outputs/       # example outputs (png/mp4) for quick sanity checks
  run_suite.py          # runs many tests (spawns a fresh process per test)
  run_one.py            # runs one JSON test (used by run_suite.py)
  check_missing_outputs.py  # checks which tests are missing artifacts in outputs/

Quickstart (recommended): use this dataset inside Apex Studio

1) Download the dataset from Hugging Face

Hugging Face auth (recommended for private/gated repos)

If you see 401/403 Unauthorized from Hugging Face (dataset downloads or model/component downloads), export a token in your shell before running the suite/API.

  • Linux / macOS (bash/zsh)
export HF_TOKEN="hf_xxx"
# or (also supported)
export HUGGING_FACE_HUB_TOKEN="hf_xxx"
  • Windows (PowerShell)
$env:HF_TOKEN="hf_xxx"
# or
$env:HUGGING_FACE_HUB_TOKEN="hf_xxx"
  • Windows (cmd.exe)
set HF_TOKEN=hf_xxx
REM or
set HUGGING_FACE_HUB_TOKEN=hf_xxx

Notes:

  • hf auth login stores a token on disk, but exporting HF_TOKEN is the most reliable option for test runs (especially when subprocesses/Ray workers are involved).
  • For gated models, you must also request/accept access on the model page in Hugging Face; a valid token alone can still return 401/403.
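The environment-variable fallback described above can be sketched as a small helper. This approximates huggingface_hub's lookup order (an assumption, not its exact implementation): HF_TOKEN first, then the legacy HUGGING_FACE_HUB_TOKEN.

```python
import os
from typing import Optional

def resolve_hf_token() -> Optional[str]:
    """Return the Hugging Face token from the environment, if set.

    Checks HF_TOKEN first, then the legacy HUGGING_FACE_HUB_TOKEN
    (assumed order; huggingface_hub may also read a token stored on disk).
    """
    return os.environ.get("HF_TOKEN") or os.environ.get("HUGGING_FACE_HUB_TOKEN")
```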

Pick one:

  • Option A: huggingface_hub snapshot download
python -m pip install -U huggingface_hub
python - <<'PY'
from huggingface_hub import snapshot_download

local_dir = "./apex-test-suite"
snapshot_download(
    repo_id="totoku/apex-test-suite",
    repo_type="dataset",
    local_dir=local_dir,
    local_dir_use_symlinks=False,
)
print("Downloaded to", local_dir)
PY
  • Option B: git clone (requires git-lfs)
git lfs install
git clone https://huggingface.co/datasets/totoku/apex-test-suite apex-test-suite

2) Mount the dataset into your Apex Studio repo

In your Apex Studio repo, the runners and engine code expect tests under:

  • apps/api/test_suite/

So, symlink the dataset folder to that location (recommended):

cd /path/to/apex-studio
ln -sfn /path/to/apex-test-suite apps/api/test_suite

Alternative: copy instead of symlinking:

cd /path/to/apex-studio
rm -rf apps/api/test_suite
cp -a /path/to/apex-test-suite apps/api/test_suite
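Either way, you can verify the mount before running anything. A minimal sketch (the check for run_suite.py is just a convenient marker file from this dataset, not something the API requires):

```python
from pathlib import Path

def suite_mount_ok(apex_repo: str) -> bool:
    """True if apps/api/test_suite exists (as a real folder or via symlink)
    and contains run_suite.py, i.e. the mount from step 2 worked."""
    suite = Path(apex_repo) / "apps" / "api" / "test_suite"
    return suite.is_dir() and (suite / "run_suite.py").is_file()
```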

3) Install Apex Studio dependencies

Use the Apex Studio API requirements that match your machine. See:

  • apps/api/requirements/README.md

Example (pick the correct one for your hardware):

cd /path/to/apex-studio/apps/api
python -m venv .venv
source .venv/bin/activate

# Example: install a CUDA stack (choose the right file for your GPU arch)
pip install -r requirements/cuda/ampere.txt

4) Run the suite

Run all tests:

cd /path/to/apex-studio
.venv/bin/python apps/api/test_suite/run_suite.py

Run only a category:

.venv/bin/python apps/api/test_suite/run_suite.py --kind image
.venv/bin/python apps/api/test_suite/run_suite.py --kind video
.venv/bin/python apps/api/test_suite/run_suite.py --kind upscalers

Run a subset by filename substring:

.venv/bin/python apps/api/test_suite/run_suite.py --filter seedvr2

Resume / rerun only failures (skip successes):

This treats a test as “successful” if its expected artifact already exists under outputs/ (e.g. outputs/<json_stem>.png for image/ tests, outputs/<json_stem>.mp4 for video/ and upscalers/ tests).

.venv/bin/python apps/api/test_suite/run_suite.py --skip-successes
# (alias)
.venv/bin/python apps/api/test_suite/run_suite.py --only-failed

Run a single test JSON by absolute path:

.venv/bin/python apps/api/test_suite/run_suite.py \
  --json /path/to/apex-studio/apps/api/test_suite/image/srpo-text-to-image-1.0.0.v1.json

Outputs

  • Artifacts are written to: apps/api/test_suite/outputs/
  • A suite summary is written to: apps/api/test_suite/outputs/summary.json
  • Output naming is canonical:
    • outputs/<json_stem>.<ext> (e.g. outputs/srpo-text-to-image-1.0.0.v1.png)
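The canonical naming can be sketched as a small helper. The kind-to-extension mapping is inferred from the conventions stated above, not read from the runner:

```python
from pathlib import Path

# Artifact extension per test kind, per the naming convention above.
KIND_EXT = {"image": ".png", "video": ".mp4", "upscalers": ".mp4"}

def expected_artifact(test_json: str, outputs_dir: str = "outputs") -> Path:
    """Map a test JSON path to its canonical artifact path under outputs/."""
    p = Path(test_json)
    kind = p.parent.name  # the test lives under image/, video/, or upscalers/
    return Path(outputs_dir) / (p.stem + KIND_EXT[kind])
```

Note that Path.stem only strips the final .json suffix, so dotted names like srpo-text-to-image-1.0.0.v1.json keep their full stem.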

Running a single test directly (run_one.py)

run_suite.py calls run_one.py in a fresh process per test. You can also run it directly:

.venv/bin/python apps/api/test_suite/run_one.py \
  --json /abs/path/to/apps/api/test_suite/video/wan-2.1-14b-text-to-video.json \
  --outputs-dir /abs/path/to/apps/api/test_suite/outputs

Validating coverage: which tests are missing outputs?

This is useful in CI or after adding new tests.

.venv/bin/python apps/api/test_suite/check_missing_outputs.py \
  --suite-dir /path/to/apex-studio/apps/api/test_suite \
  --outputs-dir /path/to/apex-studio/apps/api/test_suite/outputs \
  --verify-manifests \
  --manifest-dir /path/to/apex-studio/apps/api/manifest

Machine-readable JSON report:

.venv/bin/python apps/api/test_suite/check_missing_outputs.py --json

Notes / gotchas

Asset paths must be portable

In test JSONs, always reference local assets as:

  • assets/<filename>

The runner resolves assets/... paths relative to the local test_suite/ folder, so assets/foo.png resolves to test_suite/assets/foo.png.
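A sketch of that resolution rule, including a guard that rejects non-portable references (the guard is an illustration, not the runner's actual validation):

```python
from pathlib import Path

def resolve_asset(ref: str, suite_dir: str) -> Path:
    """Resolve a portable 'assets/<filename>' reference against the local
    test_suite folder; reject references that are not written portably."""
    if not ref.startswith("assets/"):
        raise ValueError(f"non-portable asset reference: {ref!r}")
    return Path(suite_dir) / ref
```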

Manifest mapping

Each test JSON maps to a manifest in apps/api/manifest/<kind>/.

The mapping is:

  • test_suite/<kind>/<name>.json → manifest/<kind>/<name>.yml

And it includes a fallback that strips trailing semver-like suffixes, so:

  • seedvr2-7b-1.0.0.v1.json can map to manifest/upscalers/seedvr2-7b.yml
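The mapping plus fallback can be sketched as follows. The suffix pattern is an assumption inferred from the example above, not copied from the runner:

```python
import re
from pathlib import Path

# Trailing "-<semver>.v<k>" suffix, e.g. "-1.0.0.v1" (assumed shape).
_VERSION_SUFFIX = re.compile(r"-\d+\.\d+\.\d+\.v\d+$")

def manifest_candidates(test_json: str) -> list[str]:
    """Candidate manifest file names for a test JSON, exact match first,
    then the semver-stripped fallback if it differs."""
    stem = Path(test_json).stem
    names = [stem + ".yml"]
    stripped = _VERSION_SUFFIX.sub("", stem)
    if stripped != stem:
        names.append(stripped + ".yml")
    return names
```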

Nunchaku tests may be skipped automatically

If the selected Python environment cannot import nunchaku, run_suite.py will skip tests whose filename contains nunchaku and report them as skipped in the summary.
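The skip condition amounts to an importability probe. A minimal sketch (the module parameter is generalized here for illustration; the runner hard-codes nunchaku):

```python
import importlib.util

def should_skip(test_name: str, module: str = "nunchaku") -> bool:
    """Skip a test whose filename mentions a module that the current
    Python environment cannot import."""
    return module in test_name and importlib.util.find_spec(module) is None
```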

sample_outputs/ are reference examples, not strict “goldens”

Many generative workloads are not bitwise deterministic across GPUs/drivers/library versions. The suite currently focuses on:

  • “does it run end-to-end?”
  • “does it produce an artifact with the expected extension/type?”

Use sample_outputs/ for quick qualitative checks, and treat strict pixel-perfect diffs as optional.
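That loose pass criterion can be sketched as a check that an artifact exists, has the right extension, and is non-empty, with no pixel-level comparison (min_bytes is a hypothetical knob, not a runner option):

```python
from pathlib import Path

def artifact_ok(path: str, expected_ext: str, min_bytes: int = 1) -> bool:
    """Loose pass criterion: the artifact exists, has the expected
    extension, and is non-empty."""
    p = Path(path)
    return p.suffix == expected_ext and p.is_file() and p.stat().st_size >= min_bytes
```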

Adding a new test

  1. Pick a kind: add a JSON under image/, video/, or upscalers/.
  2. Use portable assets: reference inputs as assets/... and add any new media into assets/.
  3. Name it to match a manifest:
    • Prefer <manifest-id>-<semver>.v<k>.json (the runner can also resolve to a non-versioned manifest).
  4. Run it:
.venv/bin/python apps/api/test_suite/run_suite.py --json /abs/path/to/your-new-test.json
  5. Confirm the output appears in outputs/ and check outputs/summary.json.