---
language:
  - en
homepage: https://github.com/ylaboratory/methylation-classification
license: cc-by-4.0
task_categories:
  - tabular-classification
tags:
  - biology
  - bioinformatics
  - biomedical
  - DNA-methylation
  - multi-label-classification
pretty_name: 450k DNA methylation tissue classification
size_categories:
  - 10K<n<100K
---

DNA Methylation Tissue Classification Dataset

Dataset Summary

This data resource is a vast, curated reference atlas of DNA methylation (DNAm) profiles spanning 16,959 healthy primary human tissue and cell samples profiled on Illumina 450K arrays. Samples cover 86 unique tissue and cell types and are manually mapped to a common set of terms in the UBERON anatomical ontology. This dataset is intended as a baseline resource for multi-label classification in the biomedical domain, particularly for tissue/cell-type classification, deconvolution, and epigenetic biomarker discovery.

Key stats:

  • 16,959 total DNAm samples from 210 studies in the Gene Expression Omnibus (GEO)
  • 86 tissue/cell types (55 in training set, 31 in holdout)
  • 297,598 quality controlled CpG sites (M-values) per sample
  • 10,351 samples used for training (>= 2 studies per label)
  • 6,608 samples reserved in a holdout set to evaluate generalization/label transfer

Data and usage

The dataset is divided into two sets: one used for training and cross-validation, and a separate holdout set used for evaluation on unseen labels. For faster loading, the files are stored as Parquet files.

For each partition there are two main file types:

  • _mvalues: preprocessed and quality-controlled DNAm M-values, background corrected using preprocessNoob and normalized using BMIQ.
  • _meta: metadata files containing the sample ID, dataset, and UBERON tissue/cell identifiers and labels. There are two columns of UBERON identifiers: one contains the most descriptive tissue or cell term, and the second contains a more general term used to create a larger training compendium (e.g., pericardial fat vs. visceral fat).
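As background for the _mvalues files: M-values are related to the more familiar beta values (the fraction of methylated signal at a CpG site) by a logit transform, M = log2(beta / (1 - beta)). A minimal sketch of the conversion in both directions (the function names here are illustrative, not part of this dataset's code):

```python
import numpy as np

def beta_to_m(beta):
    # M-value is the log2 ratio of methylated to unmethylated intensity,
    # i.e., the logit2 of the beta value; beta = 0.5 maps to M = 0.
    return np.log2(beta / (1.0 - beta))

def m_to_beta(m):
    # Inverse transform back to the [0, 1] beta scale.
    return 2.0 ** m / (1.0 + 2.0 ** m)

betas = np.array([0.1, 0.5, 0.9])
ms = beta_to_m(betas)
```

M-values are generally preferred for statistical modeling because they are approximately homoscedastic, while beta values are easier to interpret biologically.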

The full list of files includes:

  • full_ontology.edgelist: a networkx file containing the tissue/cell ontology connecting all 86 tissue and cell terms
  • training_ontology.edgelist: a networkx file containing the tissue/cell ontology connecting the 55 tissue and cell terms in the training set
  • training_meta.parquet: metadata for the samples in the training partition
  • label_transfer_meta.parquet: metadata for the samples in the holdout partition
  • training_mvalues.parquet: M-values measuring DNAm at 297,598 CpG sites across the genome for samples in the training partition
  • label_transfer_mvalues.parquet: M-values measuring DNAm at 297,598 CpG sites across the genome for samples in the holdout partition
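The .edgelist files can be loaded with networkx's read_edgelist. A small sketch, using a hand-written toy edgelist in place of the real ontology files (the UBERON IDs and edge direction below are illustrative only; the actual files encode the dataset's 86- and 55-term ontologies):

```python
import networkx as nx

# Write a tiny toy edgelist in the same whitespace-delimited format,
# then read it back; the real full_ontology.edgelist /
# training_ontology.edgelist can be loaded the same way.
with open("toy_ontology.edgelist", "w") as fh:
    fh.write("UBERON:0002107 UBERON:0000062\n")  # illustrative edge
    fh.write("UBERON:0000948 UBERON:0000062\n")  # illustrative edge

g = nx.read_edgelist("toy_ontology.edgelist", create_using=nx.DiGraph)
```

Loading as a DiGraph preserves edge direction, which matters if the ontology edges encode parent/child relationships.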

The columns in the metadata files:

  • Sample ID: sample identifier in GEO (GSM ID)
  • training.ID: standardized UBERON ID used for training
  • training.Name: corresponding tissue/cell name for the training ID
  • Dataset: dataset identifier in GEO (GSE ID)
  • Original.ID: manually annotated most descriptive UBERON ID
  • Original.Name: corresponding tissue/cell name for the original ID
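To make the column layout concrete, here is a toy metadata frame with the documented columns. All values below (sample IDs, dataset IDs, and especially the UBERON identifiers) are placeholders, not real mappings from the dataset:

```python
import pandas as pd

# Toy frame mimicking the documented metadata columns; IDs are fake.
meta = pd.DataFrame({
    "Sample ID": ["GSM0000001", "GSM0000002"],
    "training.ID": ["UBERON:0000001", "UBERON:0000001"],
    "training.Name": ["visceral fat", "visceral fat"],
    "Dataset": ["GSE00001", "GSE00002"],
    "Original.ID": ["UBERON:0000002", "UBERON:0000001"],
    "Original.Name": ["pericardial fat", "visceral fat"],
})

# Samples per general training label (here, two distinct descriptive
# terms roll up to the single training term "visceral fat").
label_counts = meta.groupby("training.Name")["Sample ID"].count()
```

This illustrates how the two UBERON columns relate: distinct descriptive terms (Original.Name) can share one general training term (training.Name), which is how the larger training compendium is assembled.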

M-value files are structured as probes (rows) by samples (columns). Columns are labeled with GSM identifiers, with one additional column, probe, containing Illumina CpG IDs (e.g., cg03128332).
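Because most modeling libraries expect samples as rows and features as columns, the probes-by-samples layout usually needs one transpose. A minimal sketch on a toy frame in the documented layout (the probe IDs and M-values below are made up):

```python
import pandas as pd

# Toy M-value frame in the documented layout: a "probe" column of CpG
# IDs plus one column per sample (GSM IDs). Values are illustrative.
mv = pd.DataFrame({
    "probe": ["cg03128332", "cg00000029"],
    "GSM0000001": [1.2, -0.5],
    "GSM0000002": [0.8, -1.1],
})

# Samples as rows, probes as feature columns: index on probe, transpose.
X = mv.set_index("probe").T
```

After this step, X can be row-aligned with the metadata's sample IDs for training.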

Quick start

Using Python with the datasets and pandas packages installed, plus seaborn and matplotlib for plotting, we can quickly get started with this dataset.

from datasets import load_dataset
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

BASE = "https://huggingface.co/datasets/ylab/methyl-classification/resolve/main"

# load_dataset("parquet", ...) returns a DatasetDict with a single
# "train" split; convert that split to pandas.
training_mv = load_dataset(
    "parquet", data_files=f"{BASE}/training_mvalues.parquet"
)["train"].to_pandas().set_index("probe")
training_meta = load_dataset(
    "parquet", data_files=f"{BASE}/training_meta.parquet"
)["train"].to_pandas()

labtransfer_mv = load_dataset(
    "parquet", data_files=f"{BASE}/label_transfer_mvalues.parquet"
)["train"].to_pandas().set_index("probe")
labtransfer_meta = load_dataset(
    "parquet", data_files=f"{BASE}/label_transfer_meta.parquet"
)["train"].to_pandas()

# View the training set metadata
print(training_meta.describe())

# Plot M-value densities for the first five samples
# (samples are columns in the M-value files)
sns.kdeplot(data=training_mv.iloc[:, :5], common_norm=False)
plt.xlabel("Methylation Value")
plt.ylabel("Density")
plt.title("Methylation Density for 5 Samples")
plt.show()
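From here, a typical next step is to align the M-value matrix with the metadata labels and fit a baseline classifier. A sketch of that workflow on small synthetic stand-ins (the shapes, GSM/CpG IDs, and labels below are fabricated for illustration; with the real data, X would come from transposing training_mvalues and y from the training.Name column of training_meta):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the samples-by-probes feature matrix.
n_samples, n_probes = 60, 20
X = pd.DataFrame(
    rng.normal(size=(n_samples, n_probes)),
    index=[f"GSM{i:07d}" for i in range(n_samples)],
    columns=[f"cg{i:08d}" for i in range(n_probes)],
)
# Synthetic tissue labels keyed by the same sample IDs.
y = pd.Series(rng.choice(["liver", "heart"], size=n_samples), index=X.index)
X.iloc[:, 0] += (y == "liver") * 2.0  # inject signal into one probe

# Align features to the label index, then cross-validate a baseline.
scores = cross_val_score(LogisticRegression(max_iter=1000), X.loc[y.index], y, cv=3)
```

Cross-validation here is within one synthetic "study"; on the real data, grouping folds by GSE ID better estimates cross-study generalization, which is what the holdout/label-transfer partition is designed to test.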

Code for data processing, analysis, and tissue classification

This dataset, while designed to be standalone, was generated as part of a larger paper predicting tissue and cell types. The code for processing the raw data files and conducting the analyses in that paper can be found in the project GitHub repository.

Citation

If you use this dataset in your work, please cite:

Kim et al., “Ontology‑aware DNA methylation classification with a curated atlas of human tissues and cell types”, [bioRxiv preprint] (2025).

@article{kim2025methylation_atlas,
  title   = {Ontology-aware DNA methylation classification with a curated atlas of human tissues and cell types},
  author  = {Kim, Mirae and Dannenfelser, Ruth and Cui, Yufei and Allen, Genevera and Yao, Vicky},
  journal = {bioRxiv preprint},
  year    = {2025},
  doi     = {10.1101/2024.XX.XXXXXX}
}

License

This dataset is released under CC BY 4.0, permitting both academic and commercial use with attribution.