# HPO-B
HPO-B is a benchmark for assessing the performance of black-box HPO algorithms. This repo contains code that eases the consumption of the meta-dataset and speeds up testing.
## Meta-Dataset
The meta-dataset contains accuracy evaluations of hyperparameter configurations from different search spaces on different datasets. For more details on the meta-dataset, refer to our [paper](https://arxiv.org/pdf/2106.06257.pdf). It is presented in three versions:
- **HPO-B-v1**: The raw benchmark of all 176 meta-datasets.
- **HPO-B-v2**: Subset of 16 meta-datasets with the most frequent search spaces.
- **HPO-B-v3**: Split of HPO-B-v2 into training, validation, and testing sets (a loading sketch follows below).
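If you work with HPO-B-v3, each split can be loaded as a plain JSON file. Below is a minimal sketch; the `hpob-data/` directory and the `meta-*-dataset.json` file names are assumptions for illustration and may differ from the files shipped with this repo.

```python
import json

# Load the three HPO-B-v3 splits. The directory and file names are
# illustrative assumptions; adjust them to the benchmark data you downloaded.
splits = {}
for split in ("train", "validation", "test"):
    with open(f"hpob-data/meta-{split}-dataset.json") as f:
        splits[split] = json.load(f)

print(splits["train"].keys())  # search space IDs available for meta-training
```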
## Meta-dataset format
As described in the paper, the meta-dataset is stored in JavaScript Object Notation (JSON). In Python, this corresponds to nested dictionaries, where the first-level key is the **search space ID**, the second-level key is the **dataset ID**, and the last level contains the list of hyperparameter configurations (**X**) and their responses (**y**).
```python
meta_dataset = {
    "search_space_id_1": {
        "dataset_id_1": {
            "X": [[1, 1], [0, 2]],
            "y": [[0.9], [0.1]],
        },
        "dataset_id_2": ...,
    },
    "search_space_id_2": ...,
}
```
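With this layout, the evaluations for a single task are retrieved by indexing with the two IDs. Below is a minimal sketch, assuming `meta_dataset` is a dictionary with the structure above; the helper name `get_task_data` is hypothetical, not part of the repo's API.

```python
import numpy as np

def get_task_data(meta_dataset, search_space_id, dataset_id):
    """Return configurations and responses for one (search space, dataset) pair."""
    task = meta_dataset[search_space_id][dataset_id]
    X = np.array(task["X"])  # shape: (n_evaluations, n_hyperparameters)
    y = np.array(task["y"])  # shape: (n_evaluations, 1)
    return X, y

X, y = get_task_data(meta_dataset, "search_space_id_1", "dataset_id_1")
print(X.shape, y.max())  # (2, 2) and 0.9 for the toy example above
```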