
HFLB (Heterogeneous Federated Learning Benchmark)

An FL benchmark originally proposed in FedDAT and modified by us: each constituent dataset is split into subtasks for the task-incremental learning setup of FedMosaic (ICLR 2026). Please see the paper for the full HFLB configuration.

Constituent Datasets

| Dataset | Task Type | Reference |
| --- | --- | --- |
| GQA | Compositional visual reasoning | Hudson & Manning, CVPR 2019 |
| Abstract VQA | Abstract-scene visual question answering | Antol et al., ICCV 2015 |
| SNLI-VE | Visual entailment | Xie et al., arXiv 2019 |
| COCO-QA | Image question answering | Ren et al., NeurIPS 2015 |
| NLVR2 | Natural-language visual reasoning over image pairs | Suhr et al., ACL 2019 |
| VizWiz | Accessibility-focused VQA | Gurari et al., CVPR 2018 |
| AQUA | Art-domain visual question answering | Garcia et al., ECCV Workshops 2020 |

How to Download

We strongly recommend downloading each dataset archive (.tar) separately:

# Example: Download GQA
huggingface-cli download SNUMPR/HFLB GQA.tar --local-dir ./ --repo-type dataset

# Example: Download AQUA
huggingface-cli download SNUMPR/HFLB AQUA.tar --local-dir ./ --repo-type dataset

After downloading, extract each archive:

tar -xvf AQUA.tar
# Repeat for other archives

Place the extracted data under the dataset/ folder of the code repository, following the structure described in the README.
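The extraction step above can be batched with a short script once all archives are downloaded. Below is a minimal sketch, assuming the archives sit in one local directory; the `extract_archives` helper and the `dataset/` output path are our own naming choices, not part of any official tooling:

```python
import tarfile
from pathlib import Path


def extract_archives(archive_dir: str, out_dir: str = "dataset") -> list[str]:
    """Extract every .tar archive found in archive_dir into out_dir.

    Returns the names of the archives that were extracted, in sorted order.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    extracted = []
    for archive in sorted(Path(archive_dir).glob("*.tar")):
        with tarfile.open(archive) as tar:
            # Archives come from a source you chose to trust; see the tarfile
            # docs on extraction filters if you need stricter handling.
            tar.extractall(out)
        extracted.append(archive.name)
    return extracted
```

Run it from the directory containing the downloaded archives, e.g. `extract_archives(".", "dataset")`.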


Dataset Credits & References

HFLB builds on the following publicly available datasets:

@inproceedings{hudson2019gqa,
  title     = {GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering},
  author    = {Hudson, Drew A. and Manning, Christopher D.},
  booktitle = {CVPR},
  year      = {2019}
}

@inproceedings{antol2015vqa,
  title     = {VQA: Visual Question Answering},
  author    = {Antol, Stanislaw and Agrawal, Aishwarya and Lu, Jiasen and Mitchell, Margaret and Batra, Dhruv and Zitnick, C. Lawrence and Parikh, Devi},
  booktitle = {ICCV},
  year      = {2015}
}

@article{xie2019snlive,
  title   = {Visual Entailment: A Novel Task for Fine-Grained Image Understanding},
  author  = {Xie, Ning and Lai, Farley and Doran, Derek and Kadav, Asim},
  journal = {arXiv preprint arXiv:1901.06706},
  year    = {2019}
}

@inproceedings{ren2015cocoqa,
  title     = {Exploring Models and Data for Image Question Answering},
  author    = {Ren, Mengye and Kiros, Ryan and Zemel, Richard S.},
  booktitle = {NeurIPS},
  year      = {2015}
}

@inproceedings{suhr2019nlvr2,
  title     = {A Corpus for Reasoning about Natural Language Grounded in Photographs},
  author    = {Suhr, Alane and Zhou, Stephanie and Zhang, Ally and Zhang, Iris and Bai, Huajun and Artzi, Yoav},
  booktitle = {ACL},
  year      = {2019}
}

@inproceedings{gurari2018vizwiz,
  title     = {VizWiz Grand Challenge: Answering Visual Questions from Blind People},
  author    = {Gurari, Danna and Li, Qing and Stangl, Abigale J. and Guo, Anhong and Lin, Chi and Grauman, Kristen and Luo, Jiebo and Bigham, Jeffrey P.},
  booktitle = {CVPR},
  year      = {2018}
}

@inproceedings{garcia2020aqua,
  title     = {A Dataset and Baselines for Visual Question Answering on Art},
  author    = {Garcia, Noa and Ye, Chentao and Liu, Zihua and Hu, Qingtao and Otani, Mayu and Chu, Chenhui and Nakashima, Yuta and Mitamura, Teruko},
  booktitle = {ECCV Workshops},
  year      = {2020}
}
---

Citation

If you use HFLB in your research, please cite the FedDAT paper and our paper:

@inproceedings{chen2023feddat,
  title={FedDAT: An Approach for Foundation Model Finetuning in Multi-Modal Heterogeneous Federated Learning},
  author={Chen, Haokun and Zhang, Yao and Krompass, Denis and Gu, Jindong and Tresp, Volker},
  booktitle={AAAI},
  year={2024}
}

@inproceedings{seo2026colora,
  title     = {Co-LoRA: Collaborative Model Personalization on Heterogeneous Multi-Modal Clients},
  author    = {Seo, Minhyuk and Kim, Taeheon and Lee, Hankook and Choi, Jonghyun and Tuytelaars, Tinne},
  booktitle = {The Fourteenth International Conference on Learning Representations (ICLR)},
  year      = {2026},
  url       = {https://openreview.net/forum?id=0g5Dk4Qfh0}
}