# Apex Studio Test Suite (dataset)

This repository is a data-only test suite for the Apex Studio API:

- Test definitions: JSON input payloads under `image/`, `video/`, and `upscalers/`
- Test assets: small input media under `assets/` (images/videos/audio used by some tests)
- Reference artifacts: example outputs under `sample_outputs/` (useful for smoke-testing / sanity checks)
- Runners: `run_suite.py`, `run_one.py`, `check_missing_outputs.py` (these require the Apex Studio API repo to actually execute inference)
## What you need to actually run tests

This dataset does not contain model code or engines. To execute tests you need:

- An Apex Studio checkout that contains `apps/api/src/` and `apps/api/manifest/`
- A working Python environment for Apex Studio (GPU recommended for most tests)
## Dataset layout

```
apex-test-suite/
  assets/                    # input images/videos/audio referenced by tests (use paths like "assets/foo.png")
  image/                     # text-to-image / edit tests (JSON)
  video/                     # text-to-video / image-to-video tests (JSON)
  upscalers/                 # video upscaler tests (JSON)
  sample_outputs/            # example outputs (png/mp4) for quick sanity checks
  run_suite.py               # runs many tests (spawns a fresh process per test)
  run_one.py                 # runs one JSON test (used by run_suite.py)
  check_missing_outputs.py   # checks which tests are missing artifacts in outputs/
```
## Quickstart (recommended): use this dataset inside Apex Studio

### 1) Download the dataset from Hugging Face

#### Hugging Face auth (recommended for private/gated repos)

If you see a 401/403 Unauthorized error from Hugging Face (for dataset downloads or model/component downloads), export a token in your shell before running the suite/API.
- Linux / macOS (bash/zsh):

  ```bash
  export HF_TOKEN="hf_xxx"
  # or (also supported)
  export HUGGING_FACE_HUB_TOKEN="hf_xxx"
  ```

- Windows (PowerShell):

  ```powershell
  $env:HF_TOKEN="hf_xxx"
  # or
  $env:HUGGING_FACE_HUB_TOKEN="hf_xxx"
  ```

- Windows (cmd.exe):

  ```bat
  set HF_TOKEN=hf_xxx
  REM or
  set HUGGING_FACE_HUB_TOKEN=hf_xxx
  ```
Notes:

- `hf auth login` stores a token on disk, but exporting `HF_TOKEN` is the most reliable option for test runs (especially when subprocesses/Ray workers are involved).
- For gated models, you must also request/accept access on the model page on Hugging Face; a valid token alone can still return 401/403.
Pick one:

- Option A: `huggingface_hub` snapshot download

  ```bash
  python -m pip install -U huggingface_hub
  python - <<'PY'
  from huggingface_hub import snapshot_download
  local_dir = "./apex-test-suite"
  snapshot_download(
      repo_id="totoku/apex-test-suite",
      repo_type="dataset",
      local_dir=local_dir,
      local_dir_use_symlinks=False,
  )
  print("Downloaded to", local_dir)
  PY
  ```
- Option B: git clone (requires git-lfs)

  ```bash
  git lfs install
  git clone https://huggingface.co/datasets/totoku/apex-test-suite apex-test-suite
  ```
### 2) Mount the dataset into your Apex Studio repo

In your Apex Studio repo, the runners and engine code expect tests under:

```
apps/api/test_suite/
```

So, symlink the dataset folder to that location (recommended):

```bash
cd /path/to/apex-studio
ln -sfn /path/to/apex-test-suite apps/api/test_suite
```

Alternative: copy instead of symlinking:

```bash
cd /path/to/apex-studio
rm -rf apps/api/test_suite
cp -a /path/to/apex-test-suite apps/api/test_suite
```
### 3) Install Apex Studio dependencies

Use the Apex Studio API requirements that match your machine. See:

```
apps/api/requirements/README.md
```

Example (pick the correct file for your hardware):

```bash
cd /path/to/apex-studio/apps/api
python -m venv .venv
source .venv/bin/activate
# Example: install a CUDA stack (choose the right file for your GPU arch)
pip install -r requirements/cuda/ampere.txt
```
### 4) Run the suite

Run all tests:

```bash
cd /path/to/apex-studio
.venv/bin/python apps/api/test_suite/run_suite.py
```

Run only a category:

```bash
.venv/bin/python apps/api/test_suite/run_suite.py --kind image
.venv/bin/python apps/api/test_suite/run_suite.py --kind video
.venv/bin/python apps/api/test_suite/run_suite.py --kind upscalers
```

Run a subset by filename substring:

```bash
.venv/bin/python apps/api/test_suite/run_suite.py --filter seedvr2
```

Resume / rerun only failures (skip successes). A test is treated as successful if its expected artifact already exists under `outputs/` (e.g. `outputs/<json_stem>.png` for `image/` tests, `outputs/<json_stem>.mp4` for `video/` and `upscalers/` tests):

```bash
.venv/bin/python apps/api/test_suite/run_suite.py --skip-successes
# (alias)
.venv/bin/python apps/api/test_suite/run_suite.py --only-failed
```

Run a single test JSON by absolute path:

```bash
.venv/bin/python apps/api/test_suite/run_suite.py \
  --json /path/to/apex-studio/apps/api/test_suite/image/srpo-text-to-image-1.0.0.v1.json
```
## Outputs

- Artifacts are written to: `apps/api/test_suite/outputs/`
- A suite summary is written to: `apps/api/test_suite/outputs/summary.json`
- Output naming is canonical: `outputs/<json_stem>.<ext>` (e.g. `outputs/srpo-text-to-image-1.0.0.v1.png`)
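As an illustration of the canonical naming, a helper like the following (hypothetical, not part of the suite) computes the expected artifact path for a test JSON, assuming `.png` for `image/` tests and `.mp4` for `video/` and `upscalers/` tests:

```python
from pathlib import Path

def expected_artifact(test_json: Path, outputs_dir: Path) -> Path:
    """Map a test JSON to its canonical output path: outputs/<json_stem>.<ext>."""
    # image/ tests produce .png; video/ and upscalers/ tests produce .mp4
    ext = ".png" if test_json.parent.name == "image" else ".mp4"
    return outputs_dir / (test_json.stem + ext)
```

Note that `Path.stem` strips only the final `.json` suffix, so a versioned name like `srpo-text-to-image-1.0.0.v1.json` keeps its full stem in the artifact name.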
## Running a single test directly (run_one.py)

`run_suite.py` calls `run_one.py` in a fresh process per test. You can also run it directly:

```bash
.venv/bin/python apps/api/test_suite/run_one.py \
  --json /abs/path/to/apps/api/test_suite/video/wan-2.1-14b-text-to-video.json \
  --outputs-dir /abs/path/to/apps/api/test_suite/outputs
```
## Validating coverage: which tests are missing outputs?

This is useful in CI or after adding new tests.

```bash
.venv/bin/python apps/api/test_suite/check_missing_outputs.py \
  --suite-dir /path/to/apex-studio/apps/api/test_suite \
  --outputs-dir /path/to/apex-studio/apps/api/test_suite/outputs \
  --verify-manifests \
  --manifest-dir /path/to/apex-studio/apps/api/manifest
```

Machine-readable JSON report:

```bash
.venv/bin/python apps/api/test_suite/check_missing_outputs.py --json
```
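The core of such a coverage check can be sketched in a few lines. This is a simplified stand-in for `check_missing_outputs.py` (its real behavior may differ), assuming the canonical naming described above:

```python
from pathlib import Path

def missing_outputs(suite_dir: Path, outputs_dir: Path) -> list[str]:
    """List test JSONs whose canonical artifact is absent from outputs/."""
    kinds = {"image": ".png", "video": ".mp4", "upscalers": ".mp4"}
    missing = []
    for kind, ext in kinds.items():
        for test_json in sorted((suite_dir / kind).glob("*.json")):
            if not (outputs_dir / (test_json.stem + ext)).exists():
                missing.append(f"{kind}/{test_json.name}")
    return missing
```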
## Notes / gotchas

### Asset paths must be portable

In test JSONs, always reference local assets as:

```
assets/<filename>
```

The runner resolves `assets/...` relative to the local `test_suite/assets/` folder.
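In other words, resolution works roughly like this (an illustrative sketch, not the runner's actual code):

```python
from pathlib import Path

def resolve_asset(suite_dir: Path, ref: str) -> Path:
    """Resolve an "assets/..." reference relative to the test_suite directory."""
    if ref.startswith("assets/"):
        return suite_dir / ref
    # other paths (e.g. absolute) pass through unchanged
    return Path(ref)
```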
### Manifest mapping

Each test JSON maps to a manifest in `apps/api/manifest/<kind>/`. The mapping is:

```
test_suite/<kind>/<name>.json  ->  manifest/<kind>/<name>.yml
```

It also includes a fallback that strips trailing semver-like suffixes, so `seedvr2-7b-1.0.0.v1.json` can map to `manifest/upscalers/seedvr2-7b.yml`.
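The suffix-stripping fallback might look something like this regex-based sketch (a hypothetical helper; the runner's real matching logic may differ):

```python
import re

def manifest_candidates(stem: str) -> list[str]:
    """Return manifest name candidates for a test JSON stem: the stem itself,
    then the stem with a trailing semver-like suffix (e.g. "-1.0.0.v1") stripped."""
    candidates = [stem]
    stripped = re.sub(r"-\d+\.\d+\.\d+(?:\.v\d+)?$", "", stem)
    if stripped != stem:
        candidates.append(stripped)
    return candidates
```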
### Nunchaku tests may be skipped automatically

If the selected Python environment cannot import `nunchaku`, `run_suite.py` will skip tests whose filename contains `nunchaku` and report them as skipped in the summary.
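This kind of conditional skip is typically an import probe. The snippet below is a generic sketch (the `module` parameter is illustrative, not a real `run_suite.py` option):

```python
import importlib.util

def should_skip(test_filename: str, module: str = "nunchaku") -> bool:
    """Skip a test whose filename mentions a module that is not importable."""
    if module not in test_filename:
        return False
    # find_spec returns None when the top-level module cannot be located
    return importlib.util.find_spec(module) is None
```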
### sample_outputs/ are reference examples, not strict "goldens"

Many generative workloads are not bitwise deterministic across GPUs/drivers/library versions. The suite currently focuses on:

- "does it run end-to-end?"
- "does it produce an artifact with the expected extension/type?"

Use `sample_outputs/` for quick qualitative checks, and treat strict pixel-perfect diffs as optional.
## Adding a new test

- Pick a kind: add a JSON under `image/`, `video/`, or `upscalers/`.
- Use portable assets: reference inputs as `assets/...` and add any new media into `assets/`.
- Name it to match a manifest: prefer `<manifest-id>-<semver>.v<k>.json` (the runner can also resolve to a non-versioned manifest).
- Run it:

  ```bash
  .venv/bin/python apps/api/test_suite/run_suite.py --json /abs/path/to/your-new-test.json
  ```

- Confirm the output appears in `outputs/` and update/inspect `outputs/summary.json`.