---
task_categories:
- question-answering
language:
- en
tags:
- agent
size_categories:
- 100K<n<1M
---

# HFLB (Heterogeneous Federated Learning Benchmark)

A federated learning benchmark originally proposed in [FedDAT](https://arxiv.org/abs/2308.12305), which we modified by splitting each dataset into subtasks for a task-incremental learning setup in [FedMosaic (ICLR 2026)](https://openreview.net/forum?id=0g5Dk4Qfh0).
Please check out the HFLB configuration details in the [paper](https://openreview.net/forum?id=0g5Dk4Qfh0).

### Constituent Datasets
| Dataset | Task Type | Reference |
|---|---|---|
| GQA | Compositional visual reasoning | Hudson & Manning, CVPR 2019 |
| Abstract VQA | Abstract-scene visual question answering | Antol et al., ICCV 2015 |
| SNLI-VE | Visual entailment | Xie et al., arXiv 2019 |
| COCO-QA | Image question answering | Ren et al., NeurIPS 2015 |
| NLVR2 | Natural-language visual reasoning over image pairs | Suhr et al., ACL 2019 |
| VizWiz | Accessibility-focused VQA | Gurari et al., CVPR 2018 |
| AQUA | Art-domain visual question answering | Garcia et al., ECCV Workshops 2020 |

---

## How to Download

We highly recommend downloading each dataset's `.tar` file separately:

```bash
# Example: Download GQA
huggingface-cli download SNUMPR/HFLB GQA.tar --local-dir ./ --repo-type dataset

# Example: Download AQUA
huggingface-cli download SNUMPR/HFLB AQUA.tar --local-dir ./ --repo-type dataset
```

After downloading, extract each archive:
```bash
tar -xvf AQUA.tar
# Repeat for other archives
```

Place extracted data under the `dataset/` folder in the [code repository](https://github.com/snumprlab/fedmosaic), following the structure described in the [README](https://github.com/snumprlab/fedmosaic/blob/main/README.md).
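For scripted setups, the extraction step can also be done from Python with the standard-library `tarfile` module. The `extract_archive` helper below is our own illustration (not part of the FedMosaic codebase), assuming the archives were already downloaded with `huggingface-cli` as shown above:

```python
import tarfile
from pathlib import Path

def extract_archive(tar_path, out_dir="dataset"):
    """Unpack one downloaded HFLB archive (e.g. AQUA.tar) into out_dir."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)  # create dataset/ if missing
    with tarfile.open(tar_path) as tar:
        tar.extractall(out)  # mirrors `tar -xvf <archive>`
    return out

# Example: extract_archive("AQUA.tar")
```

Repeat for each archive, then verify the resulting folder layout against the repository README.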

---

<details>
<summary>Dataset Credits & References</summary>

HFLB builds on the following publicly available datasets.

```bibtex
@inproceedings{hudson2019gqa,
  title     = {GQA: A New Dataset for Real-World Visual Reasoning and Compositional Question Answering},
  author    = {Hudson, Drew A. and Manning, Christopher D.},
  booktitle = {CVPR},
  year      = {2019}
}

@inproceedings{antol2015vqa,
  title     = {VQA: Visual Question Answering},
  author    = {Antol, Stanislaw and Agrawal, Aishwarya and Lu, Jiasen and Mitchell, Margaret and Batra, Dhruv and Zitnick, C. Lawrence and Parikh, Devi},
  booktitle = {ICCV},
  year      = {2015}
}

@article{xie2019snlive,
  title   = {Visual Entailment: A Novel Task for Fine-Grained Image Understanding},
  author  = {Xie, Ning and Lai, Farley and Doran, Derek and Kadav, Asim},
  journal = {arXiv preprint arXiv:1901.06706},
  year    = {2019}
}

@inproceedings{ren2015cocoqa,
  title     = {Exploring Models and Data for Image Question Answering},
  author    = {Ren, Mengye and Kiros, Ryan and Zemel, Richard S.},
  booktitle = {NeurIPS},
  year      = {2015}
}

@inproceedings{suhr2019nlvr2,
  title     = {A Corpus for Reasoning about Natural Language Grounded in Photographs},
  author    = {Suhr, Alane and Zhou, Stephanie and Zhang, Ally and Zhang, Iris and Bai, Huajun and Artzi, Yoav},
  booktitle = {ACL},
  year      = {2019}
}

@inproceedings{gurari2018vizwiz,
  title     = {VizWiz Grand Challenge: Answering Visual Questions from Blind People},
  author    = {Gurari, Danna and Li, Qing and Stangl, Abigale J. and Guo, Anhong and Lin, Chi and Grauman, Kristen and Luo, Jiebo and Bigham, Jeffrey P.},
  booktitle = {CVPR},
  year      = {2018}
}

@inproceedings{garcia2020aqua,
  title     = {A Dataset and Baselines for Visual Question Answering on Art},
  author    = {Garcia, Noa and Ye, Chentao and Liu, Zihua and Hu, Qingtao and Otani, Mayu and Chu, Chenhui and Nakashima, Yuta and Mitamura, Teruko},
  booktitle = {ECCV Workshops},
  year      = {2020}
}
```
</details>

---

## Citation

If you use HFLB in your research, please cite the FedDAT paper and our paper:

```bibtex
@inproceedings{chen2023feddat,
  title={FedDAT: An Approach for Foundation Model Finetuning in Multi-Modal Heterogeneous Federated Learning},
  author={Chen, Haokun and Zhang, Yao and Krompass, Denis and Gu, Jindong and Tresp, Volker},
  booktitle={AAAI},
  year={2024}
}

@inproceedings{seo2026colora,
  title     = {Co-LoRA: Collaborative Model Personalization on Heterogeneous Multi-Modal Clients},
  author    = {Seo, Minhyuk and Kim, Taeheon and Lee, Hankook and Choi, Jonghyun and Tuytelaars, Tinne},
  booktitle = {The Fourteenth International Conference on Learning Representations (ICLR)},
  year      = {2026},
  url       = {https://openreview.net/forum?id=0g5Dk4Qfh0}
}
```