---
task_categories:
  - question-answering
language:
  - en
tags:
  - agent
size_categories:
  - 100K<n<1M
---

# HFLB (Heterogeneous Federated Learning Benchmark)

This FL benchmark was originally proposed in FedDAT. We modified it by splitting each dataset into subtasks for the task-incremental learning setup in FedMosaic (ICLR 2026). Please check out the HFLB configuration details in the paper.

## Constituent Datasets

| Dataset | Task Type | Reference |
|---|---|---|
| GQA | Compositional visual reasoning | Hudson & Manning, CVPR 2019 |
| Abstract VQA | Abstract-scene visual question answering | Antol et al., ICCV 2015 |
| SNLI-VE | Visual entailment | Xie et al., arXiv 2019 |
| COCO-QA | Image question answering | Ren et al., NeurIPS 2015 |
| NLVR2 | Natural-language visual reasoning over image pairs | Suhr et al., ACL 2019 |
| VizWiz | Accessibility-focused VQA | Gurari et al., CVPR 2018 |
| AQUA | Art-domain visual question answering | Garcia et al., ECCV Workshops 2020 |

## How to Download

We highly recommend downloading each dataset's `.tar` archive separately:

```shell
# Example: download GQA
huggingface-cli download SNUMPR/HFLB GQA.tar --local-dir ./ --repo-type dataset

# Example: download AQUA
huggingface-cli download SNUMPR/HFLB AQUA.tar --local-dir ./ --repo-type dataset
```

After downloading, extract each archive:

```shell
tar -xvf AQUA.tar
# Repeat for the other archives
```
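If you need every archive, a small loop over the dataset names can drive the same `huggingface-cli` command. This is only a sketch: `GQA.tar` and `AQUA.tar` are the archive names shown above, while the remaining names in the list below are assumptions and should be checked against the repository's file listing before use.

```shell
#!/usr/bin/env bash
# Sketch: download and extract every HFLB archive in one pass.
# NOTE: only GQA.tar and AQUA.tar are confirmed names; the other archive
# names below are assumptions -- verify them against the repo's file list.
set -u

datasets=(GQA AbstractVQA SNLI-VE COCO-QA NLVR2 VizWiz AQUA)

fetch_and_extract() {
  local name="$1"
  local cmd="huggingface-cli download SNUMPR/HFLB ${name}.tar --local-dir ./ --repo-type dataset"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    # Print the command instead of running it (handy as a sanity check).
    echo "$cmd"
    return 0
  fi
  $cmd
  tar -xvf "${name}.tar"
}

# Uncomment to fetch everything (requires huggingface-cli and network access):
# for d in "${datasets[@]}"; do fetch_and_extract "$d"; done
```

Setting `DRY_RUN=1` prints the commands without downloading anything, which is useful for confirming the archive names first.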

Place the extracted data under the `dataset/` folder of the code repository, following the structure described in its README.


## Dataset Credits & References

HFLB builds on the following publicly available datasets:

```bibtex
@inproceedings{hudson2019gqa,
  title     = {GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering},
  author    = {Hudson, Drew A. and Manning, Christopher D.},
  booktitle = {CVPR},
  year      = {2019}
}

@inproceedings{antol2015vqa,
  title     = {VQA: Visual Question Answering},
  author    = {Antol, Stanislaw and Agrawal, Aishwarya and Lu, Jiasen and Mitchell, Margaret and Batra, Dhruv and Zitnick, C. Lawrence and Parikh, Devi},
  booktitle = {ICCV},
  year      = {2015}
}

@article{xie2019snlive,
  title   = {Visual Entailment: A Novel Task for Fine-Grained Image Understanding},
  author  = {Xie, Ning and Lai, Farley and Doran, Derek and Kadav, Asim},
  journal = {arXiv preprint arXiv:1901.06706},
  year    = {2019}
}

@inproceedings{ren2015cocoqa,
  title     = {Exploring Models and Data for Image Question Answering},
  author    = {Ren, Mengye and Kiros, Ryan and Zemel, Richard S.},
  booktitle = {NeurIPS},
  year      = {2015}
}

@inproceedings{suhr2019nlvr2,
  title     = {A Corpus for Reasoning about Natural Language Grounded in Photographs},
  author    = {Suhr, Alane and Zhou, Stephanie and Zhang, Ally and Zhang, Iris and Bai, Huajun and Artzi, Yoav},
  booktitle = {ACL},
  year      = {2019}
}

@inproceedings{gurari2018vizwiz,
  title     = {VizWiz Grand Challenge: Answering Visual Questions from Blind People},
  author    = {Gurari, Danna and Li, Qing and Stangl, Abigale J. and Guo, Anhong and Lin, Chi and Grauman, Kristen and Luo, Jiebo and Bigham, Jeffrey P.},
  booktitle = {CVPR},
  year      = {2018}
}

@inproceedings{garcia2020aqua,
  title     = {A Dataset and Baselines for Visual Question Answering on Art},
  author    = {Garcia, Noa and Ye, Chentao and Liu, Zihua and Hu, Qingtao and Otani, Mayu and Chu, Chenhui and Nakashima, Yuta and Mitamura, Teruko},
  booktitle = {ECCV Workshops},
  year      = {2020}
}
```
---

## Citation

If you use HFLB in your research, please cite the FedDAT paper and our paper:

```bibtex
@inproceedings{chen2023feddat,
  title     = {FedDAT: An Approach for Foundation Model Finetuning in Multi-Modal Heterogeneous Federated Learning},
  author    = {Chen, Haokun and Zhang, Yao and Krompass, Denis and Gu, Jindong and Tresp, Volker},
  booktitle = {AAAI},
  year      = {2024}
}

@inproceedings{seo2026colora,
  title     = {Co-LoRA: Collaborative Model Personalization on Heterogeneous Multi-Modal Clients},
  author    = {Seo, Minhyuk and Kim, Taeheon and Lee, Hankook and Choi, Jonghyun and Tuytelaars, Tinne},
  booktitle = {The Fourteenth International Conference on Learning Representations (ICLR)},
  year      = {2026},
  url       = {https://openreview.net/forum?id=0g5Dk4Qfh0}
}
```