---
license: cc-by-4.0
language:
- en
tags:
- hardware
- infrastructure
- system
- subsystem
- CPU
- GPU
- memory
- network
- storage
- telemetry
- anomaly-detection
- performance
pretty_name: Reveal
---

# 🛰️ Dataset Card for **Reveal: Hardware Telemetry Dataset for Machine Learning Infrastructure Profiling and Anomaly Detection**

## Dataset Details

### Dataset Description

**Reveal** is a large-scale, curated dataset of **hardware telemetry** collected from high-performance computing (HPC) systems while running diverse machine learning (ML) workloads.
It enables reproducible research on **system-level profiling**, **unsupervised anomaly detection**, and **ML infrastructure optimization**.

The dataset accompanies the paper
📄 *“Detecting Anomalies in Systems for AI Using Hardware Telemetry”* (Chen *et al.*, University of Oxford, 2025).
Reveal captures low-level hardware and operating system metrics, all fully accessible to operators, enabling anomaly detection **without requiring workload knowledge or instrumentation**.

- **Curated by:** Ziji Chen, Steven W. D. Chien, Peng Qian, Noa Zilberman (University of Oxford, Department of Engineering Science)
- **Shared by:** Ziji Chen (contact: ziji.chen@eng.ox.ac.uk)
- **Language(s):** English (metadata and documentation)
- **License:** [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/)

---

### Dataset Sources

- **Paper:** [Detecting Anomalies in Systems for AI Using Hardware Telemetry](https://arxiv.org/abs/2510.26008)
- **DOI:** [10.5281/zenodo.17470313](https://doi.org/10.5281/zenodo.17470313)

---

## Uses

### Direct Use

Reveal can be used for:
- Research on **unsupervised anomaly detection** in system telemetry
- Modeling **multivariate time-series** from hardware metrics
- Studying **cross-subsystem interactions** (CPU, GPU, memory, network, storage)
- Developing **performance-aware ML infrastructure tools**
- Training or benchmarking anomaly detection models for **AIOps** and **ML system health monitoring**
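
As a minimal sketch of the unsupervised setting, the snippet below flags a statistically shifted segment in synthetic multivariate channels using a robust z-score. It is an illustration only: the channel values are random stand-ins rather than Reveal telemetry, and a real detector would be fit on the dataset's channels instead.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-ins for 8 telemetry channels over 1,000 time steps,
# followed by a short segment with anomalous statistics.
normal = rng.normal(0.0, 1.0, size=(1000, 8))
anomalous = rng.normal(6.0, 1.0, size=(20, 8))
X = np.vstack([normal, anomalous])

# Robust per-channel z-score (median / MAD), then flag samples whose
# average absolute z-score across channels is unusually large.
med = np.median(X, axis=0)
mad = np.median(np.abs(X - med), axis=0) + 1e-9
z = np.abs(X - med) / mad
flags = z.mean(axis=1) > 5.0

print(flags[-20:].all())  # the injected anomalous segment is flagged
```

Averaging across channels mirrors the cross-subsystem view the dataset is designed for: an anomaly that perturbs several counters at once stands out even when no single channel is extreme.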

### Out-of-Scope Use

The dataset **should not** be used for:
- Inferring or reconstructing user workloads or model behavior
- Benchmarking end-user application performance
- Any use involving personal, confidential, or proprietary data reconstruction

---

## Dataset Structure

Reveal consists of time-series telemetry, derived features, and automatically labeled anomaly segments.

**Core fields include:**
- `timestamp`: UTC time of the sample
- `host_id`: host or node identifier
- `metric_name`: name of the measured counter
- `value`: recorded numeric value
- `subsystem`: one of {CPU, GPU (where supported by the underlying infrastructure), Memory, Network, Storage}
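
Records in this long format (one row per sample and metric) can be pivoted into per-metric time-series channels. The sketch below uses hypothetical metric names (`cpu_util`, `mem_used`) and inline sample rows, not the shipped files:

```python
import io

import pandas as pd

# A few rows in the long format described above (synthetic values;
# the metric names are illustrative, not the dataset's actual counters).
raw = """timestamp,host_id,metric_name,value,subsystem
2025-01-01T00:00:00.000Z,node0,cpu_util,0.42,CPU
2025-01-01T00:00:00.000Z,node0,mem_used,112.0,Memory
2025-01-01T00:00:00.100Z,node0,cpu_util,0.47,CPU
2025-01-01T00:00:00.100Z,node0,mem_used,113.5,Memory
"""
df = pd.read_csv(io.StringIO(raw), parse_dates=["timestamp"])

# One column per metric name, one row per 100 ms sample.
wide = df.pivot_table(index="timestamp", columns="metric_name", values="value")
print(wide.shape)  # (2, 2)
```

The resulting wide table is the shape most multivariate anomaly-detection models expect: rows are timestamps, columns are channels.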

**Additional Notes**

A complete list of metrics and their descriptions can be found in `MetricDescriptionCPU.md` and `MetricDescriptionGPU.md`.

After downloading and extracting the dataset ZIP archive, place the `meta.csv` file and the example Jupyter notebooks inside the `Reveal/` directory before running them.

---

## Dataset Creation

### Curation Rationale

Modern ML workloads are complex and opaque to operators due to virtualization and containerization. Reveal was created to **enable infrastructure-level observability** and anomaly detection purely from hardware telemetry, without access to user workloads.

### Source Data

#### Data Collection and Processing

- Collected using: `perf`, `procfs`, `nvidia-smi`, and standard Linux utilities
- Sampling interval: 100 ms
- ~150 raw metric types per host, expanded to ~700 time-series channels, including GPU-related metrics
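
As an illustration of counter-based sampling (a simplified sketch, not the authors' collector), CPU utilization can be derived from two `/proc/stat` readings taken one sampling interval apart. The jiffy values below are captured example lines rather than live reads:

```python
def parse_proc_stat(line: str) -> dict:
    """Parse the aggregate 'cpu' line of /proc/stat into named jiffy counters."""
    fields = ["user", "nice", "system", "idle", "iowait", "irq", "softirq"]
    parts = line.split()
    return dict(zip(fields, map(int, parts[1:1 + len(fields)])))

def cpu_util(prev: dict, curr: dict) -> float:
    """Fraction of jiffies spent non-idle between two samples."""
    busy = sum(curr[k] - prev[k] for k in prev if k != "idle")
    total = sum(curr[k] - prev[k] for k in prev)
    return busy / total if total else 0.0

# Two consecutive samples, nominally 100 ms apart (example lines).
prev = parse_proc_stat("cpu 1000 5 300 8000 50 10 20")
curr = parse_proc_stat("cpu 1010 5 305 8085 50 10 20")
print(round(cpu_util(prev, curr), 2))  # 0.15
```

A live sampler would read `/proc/stat` in a loop, sleeping 100 ms between iterations; differencing the cumulative counters converts them into per-interval rates.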

#### Workloads and Systems

- **Workloads:** >30 ML applications (BERT, BART, ResNet, ViT, VGG, DeepSeek, LLaMA, Mistral)
- **Datasets:** GLUE/SST2, WikiSQL, PASCAL VOC, CIFAR, MNIST
- **Systems:**
  - Dual-node GPU HPC cluster: two nodes, each with two NVIDIA V100 GPUs (32 GB), an Intel Xeon Platinum 8628 CPU (48 cores), and 384 GB of memory, connected through InfiniBand HDR100. Packaged as `Reveal.zip`.
  - Nine-node CPU cluster: nine servers, each running 11 Apptainer containers (four threads and 20 GB of memory per container), powered by AMD EPYC 7443P CPUs. Packaged as `RevealCPURun<n>.zip`.

#### Who are the data producers?

All data was generated by the authors in controlled environments using synthetic workloads.
No user or private information is included.

### Annotations

#### Personal and Sensitive Information

The dataset contains no personal, identifiable, or proprietary data.
All records are machine telemetry and are anonymized.

---

## Bias, Risks, and Limitations

- Collected on specific hardware (Intel/AMD CPUs, NVIDIA GPUs); behavior may differ on other architectures.
- Reflects **controlled test conditions**, not production cloud variability.

---

## Citation

**BibTeX:**
```bibtex
@misc{chen2025detectinganomaliesmachinelearning,
      title={Detecting Anomalies in Machine Learning Infrastructure via Hardware Telemetry},
      author={Ziji Chen and Steven W. D. Chien and Peng Qian and Noa Zilberman},
      year={2025},
      eprint={2510.26008},
      archivePrefix={arXiv},
      primaryClass={cs.PF},
      url={https://arxiv.org/abs/2510.26008},
}
```