
FARBench Docker Images

Per-task Docker images, saved with docker save | gzip and uploaded as plain LFS files. Each tarball is a fully self-contained image (CUDA + Python + task deps + baked task data); load it with docker load and the resulting image carries the canonical tag farbench/farbench:<task>-<cuda>.

Files are at the repo root with the flat naming convention <task>-<cuda>.docker.tar.gz (e.g. mnist_classification-cu118.docker.tar.gz).

Quick start

# 1. Download the tarball you need (single task, cu118 example).
huggingface-cli download \
    FARBenchAnonymous/FARBench \
    mnist_classification-cu118.docker.tar.gz \
    --repo-type dataset \
    --local-dir ./farbench-images

# 2. Load into your Docker daemon.
docker load -i ./farbench-images/mnist_classification-cu118.docker.tar.gz

# 3. The image is now available locally as e.g.
#    farbench/farbench:mnist_classification-cu118
docker images | grep farbench
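After loading, you can check for the canonical tag programmatically rather than eyeballing the grep output. A minimal sketch, assuming only the tag convention farbench/farbench:&lt;task&gt;-&lt;cuda&gt; described above:

```shell
# Confirm the canonical tag is present after `docker load`.
# Tag convention (from above): farbench/farbench:<task>-<cuda>
task=mnist_classification
cuda=cu118
expected="farbench/farbench:${task}-${cuda}"

# `docker images --format` prints one repo:tag per line;
# `grep -Fqx` matches the whole line exactly.
if docker images --format '{{.Repository}}:{{.Tag}}' | grep -Fqx "$expected"; then
    echo "loaded: $expected"
else
    echo "missing: $expected (did docker load succeed?)" >&2
fi
```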

If a tarball is split into parts (*.part00, *.part01, ...), merge them before loading:

cat <task>-<cuda>.docker.tar.gz.part* > <task>-<cuda>.docker.tar.gz
docker load -i <task>-<cuda>.docker.tar.gz
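The cat merge above is order-safe because shell glob expansion returns filenames in sorted (lexicographic) order, so .part00 always precedes .part01. A self-contained illustration using stand-in part files:

```shell
# Stand-in part files demonstrating that `cat *.part*` reassembles
# pieces in order: glob expansion is lexicographically sorted,
# so .part00 is concatenated before .part01.
workdir=$(mktemp -d)
printf 'first,' > "$workdir/demo.docker.tar.gz.part00"
printf 'second' > "$workdir/demo.docker.tar.gz.part01"

cat "$workdir"/demo.docker.tar.gz.part* > "$workdir/demo.docker.tar.gz"
cat "$workdir/demo.docker.tar.gz"   # prints: first,second

rm -r "$workdir"
```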

Available tasks

Task Domain Metric cu118 cu128
ade20k computer vision mIoU ade20k-cu118 ade20k-cu128
aime_math_rl natural language processing exact_match aime_math_rl-cu118 aime_math_rl-cu128
assist2009_kt natural language processing auc_roc assist2009_kt-cu118 assist2009_kt-cu128
asvspoof2021_la audio/speech understanding eer asvspoof2021_la-cu118 asvspoof2021_la-cu128
bigcodebench_codegen natural language processing pass_at_1 bigcodebench_codegen-cu118 bigcodebench_codegen-cu128
cifar100lt computer vision balanced_accuracy cifar100lt-cu118 cifar100lt-cu128
cifar100n computer vision accuracy cifar100n-cu118 cifar100n-cu128
climsim_lowres AI for science mean_r2 climsim_lowres-cu118 climsim_lowres-cu128
clotho_caption audio/speech understanding spider clotho_caption-cu118 clotho_caption-cu128
cogniplan robotics exploration_score cogniplan-cu118 cogniplan-cu128
crohme_hmer computer vision exprate crohme_hmer-cu118 crohme_hmer-cu128
div2k_sr_x4 computer vision psnr_y div2k_sr_x4-cu118 div2k_sr_x4-cu128
domainnet_quickdraw computer vision accuracy domainnet_quickdraw-cu118 domainnet_quickdraw-cu128
etth1_forecasting AI for science mse etth1_forecasting-cu118 etth1_forecasting-cu128
flip_aav AI for science spearman_rho flip_aav-cu118 flip_aav-cu128
habitat3 robotics nav_seek_success habitat3-cu118 habitat3-cu128
humanoidbench robotics success_rate humanoidbench-cu118 humanoidbench-cu128
iwildcam_wilds computer vision macro_f1 iwildcam_wilds-cu118 iwildcam_wilds-cu128
ljspeech_tts audio/speech understanding utmos ljspeech_tts-cu118 ljspeech_tts-cu128
metrla_traffic AI for science mae_60min metrla_traffic-cu118 metrla_traffic-cu128
minigrid robotics success_rate minigrid-cu118 minigrid-cu128
mnist_classification computer vision accuracy mnist_classification-cu118 mnist_classification-cu128
objaverse_3dgen computer vision lpips objaverse_3dgen-cu118 objaverse_3dgen-cu128
ogbg_molpcba AI for science avg_precision ogbg_molpcba-cu118 ogbg_molpcba-cu128
qlib_stock natural language processing ic_mean qlib_stock-cu118 qlib_stock-cu128
qm9 AI for science mae qm9-cu118 qm9-cu128
scanobjectnn computer vision overall_accuracy scanobjectnn-cu118 scanobjectnn-cu128
screenspot_pro computer vision grounding_score screenspot_pro-cu118 screenspot_pro-cu128
split_cifar100 computer vision average_accuracy split_cifar100-cu118 split_cifar100-cu128
terra_incognita computer vision balanced_accuracy terra_incognita-cu118 terra_incognita-cu128
vlabench_manipulation robotics success_rate vlabench_manipulation-cu118 vlabench_manipulation-cu128
voicebank_demand audio/speech understanding pesq voicebank_demand-cu118 voicebank_demand-cu128
weatherbench_z500t850 AI for science rmse_z500 weatherbench_z500t850-cu118 weatherbench_z500t850-cu128
wilds_fmow computer vision worst_region_accuracy wilds_fmow-cu118 wilds_fmow-cu128
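For bulk downloads, the complete file list can be generated from the table above (34 tasks, two CUDA variants each). A sketch that enumerates every tarball name; feed each name into the huggingface-cli download command from the quick start:

```shell
# All task identifiers from the table above.
tasks="ade20k aime_math_rl assist2009_kt asvspoof2021_la bigcodebench_codegen
cifar100lt cifar100n climsim_lowres clotho_caption cogniplan crohme_hmer
div2k_sr_x4 domainnet_quickdraw etth1_forecasting flip_aav habitat3
humanoidbench iwildcam_wilds ljspeech_tts metrla_traffic minigrid
mnist_classification objaverse_3dgen ogbg_molpcba qlib_stock qm9
scanobjectnn screenspot_pro split_cifar100 terra_incognita
vlabench_manipulation voicebank_demand weatherbench_z500t850 wilds_fmow"

# Emits 34 x 2 = 68 filenames following the <task>-<cuda>.docker.tar.gz
# convention; pipe each into `huggingface-cli download` as in the
# quick start to fetch everything.
for task in $tasks; do
  for cuda in cu118 cu128; do
    echo "${task}-${cuda}.docker.tar.gz"
  done
done
```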

CUDA variants

  • *-cu118.docker.tar.gz — built on nvidia/cuda:11.8.0-runtime-ubuntu22.04.
  • *-cu128.docker.tar.gz — built on nvidia/cuda:12.8.1-runtime-ubuntu22.04 (e.g. RTX 5090).

Use cu118 unless your GPU requires CUDA 12.x kernels. Both variants produce identical task behaviour and are interchangeable from the agent's point of view.

License

The FARBench framework is released under Apache-2.0. The bundled datasets and pre-cached model weights are redistributed from their original sources and retain their original licenses; see each task's README in the data repository.
