SiatBioInf committed
Commit 4e1f168 · verified · 1 Parent(s): d380d7d

Update README.md

Files changed (1): README.md (+160 -19)

README.md CHANGED
@@ -19,25 +19,67 @@ size_categories:
 # SingleCell-Unseen-Benchmark

 ## Overview
- A large-scale unseen single-cell transcriptomic benchmark covering tumor, stem, neural, and normal cell populations for evaluating single-cell foundation models.

 ## Dataset Collection
- - **Tumor cells**: 21 cancer types from GEO (2,225 samples, 1,645,662 cells), including primary tumors, metastases, and circulating tumor cells (CTCs).
- - **Stem cells**: 5 datasets from CELLxGENE (325,092 cells, 4 stem cell types).
- - **Neural cells**: 1 dataset from CELLxGENE (423,707 cells, 6 neural cell types).
- - **Normal cells**: 7 datasets from CELLxGENE (1,838,991 cells, 10 normal cell types).

- All datasets were mapped to HGNC symbols, and cells with <200 detected genes were removed.

- ## Tumor Cell Identification
- - GEO-derived tumor cells were re-identified using a consensus workflow: lineage-level screening based on **CancerSCEM 2.0 markers**, followed by malignancy confirmation using **inferCNV**.
- - CELLxGENE-derived datasets used original annotations.

- ## Downstream Tasks
- Designed to evaluate foundation models across multiple categories:

- | Category | Task | Prediction type |
- |-----------|------|----------------|
 | Tumor | Tumor cell identification | Binary |
 | Tumor | Primary site tracing | Multi-class |
 | Stem | Stem cell identification | Binary |
@@ -45,14 +87,113 @@ Designed to evaluate foundation models across multiple categories:
 | Neural | Neural cell identification | Binary |
 | Neural | Neural cell subtype classification | Multi-class |

- Models take **high-dimensional embeddings** as input and predict cell types using lightweight classifiers.

 ## Benchmark Models
- Evaluated models: **Geneformer, scFoundation, scGPT, UCE, scLONG**

 ## Evaluation Metrics
- - Binary tasks: Accuracy, Precision, Recall, F1
- - Multi-class tasks: Accuracy, Macro-Precision, Macro-Recall, Macro-F1

- ## Usage
- Datasets can be loaded via Scanpy or other single-cell analysis frameworks.
 # SingleCell-Unseen-Benchmark

 ## Overview
+
+ **SingleCell-Unseen-Benchmark** is a large-scale unseen single-cell transcriptomic benchmark designed to systematically evaluate foundation models on cell identification and cell type tracing tasks.
+ The benchmark covers **tumor, stem, neural, and normal cell populations**, with a particular emphasis on **unseen data distributions**, including rare cell types, cross-dataset generalization, and heterogeneous tumor states.
+
+ In addition to curated datasets, this repository provides **standardized benchmark results** for multiple single-cell foundation models, enabling transparent and reproducible comparison.
+
+ ---

 ## Dataset Collection

+ ### Tumor Cells
+ - **Source**: GEO
+ - **Cancer types**: 21
+ - **Samples**: 2,225
+ - **Cells**: 1,645,662
+ - **Cell states**: Primary tumors, metastases, circulating tumor cells (CTCs)
+
+ ### Stem Cells
+ - **Source**: CELLxGENE
+ - **Datasets**: 5
+ - **Cells**: 325,092
+ - **Stem cell types**: 4
+
+ ### Neural Cells
+ - **Source**: CELLxGENE
+ - **Datasets**: 1
+ - **Cells**: 423,707
+ - **Neural cell types**: 6
+
+ ### Normal Cells
+ - **Source**: CELLxGENE
+ - **Datasets**: 7
+ - **Cells**: 1,838,991
+ - **Normal cell types**: 10
+
+ ### Preprocessing
+ - All genes were mapped to **HGNC symbols**
+ - Cells with fewer than **200 detected genes** were removed
+ - Expression matrices are stored in **AnnData (`.h5ad`) format**
+
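The 200-gene filter maps directly onto Scanpy's standard QC call. A minimal sketch, assuming a locally downloaded `.h5ad` file (the path is a placeholder, and the HGNC symbol mapping is assumed to have been applied already):

```python
# Minimal sketch of the stated preprocessing step using Scanpy's QC API.
# The file path is hypothetical; HGNC mapping is assumed done upstream.
import scanpy as sc

adata = sc.read_h5ad("normal/example_dataset.h5ad")  # placeholder path

# Remove cells with fewer than 200 detected genes, as in the benchmark.
sc.pp.filter_cells(adata, min_genes=200)

print(adata)  # filtered AnnData object
```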
+ ---

+ ## Tumor Cell Identification Strategy
+
+ Tumor cells derived from GEO were re-identified using a **consensus workflow**:
+
+ 1. **Lineage-level screening** based on **CancerSCEM 2.0** marker genes
+ 2. **Malignancy confirmation** using **inferCNV**
+
+ CELLxGENE-derived datasets retain their **original annotations**.
+
+ This strategy ensures consistent tumor labeling while minimizing dataset-specific bias.
+
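To make step 1 concrete, here is an illustrative sketch of lineage-level marker screening via Scanpy's `score_genes`. The marker list and threshold are placeholders, not the actual CancerSCEM 2.0 panels, and step 2 (inferCNV, an R tool) is not reproduced here:

```python
# Illustrative sketch of step 1 (lineage-level marker screening) only.
# Markers below are generic epithelial genes used as placeholders,
# NOT the CancerSCEM 2.0 lists; inferCNV confirmation is a separate step.
import scanpy as sc

adata = sc.read_h5ad("tumor/example_dataset.h5ad")  # placeholder path

epithelial_markers = ["EPCAM", "KRT18", "KRT19"]  # placeholder marker set
sc.tl.score_genes(adata, gene_list=epithelial_markers, score_name="epi_score")

# Flag candidate tumor-lineage cells for downstream CNV-based confirmation.
adata.obs["lineage_candidate"] = adata.obs["epi_score"] > 0.1  # ad hoc threshold
```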
+ ---

+ ## Downstream Benchmark Tasks
+
+ The benchmark evaluates foundation models across multiple biologically meaningful tasks:
+
+ | Category | Task | Prediction Type |
+ |----------|------|-----------------|
 | Tumor | Tumor cell identification | Binary |
 | Tumor | Primary site tracing | Multi-class |
 | Stem | Stem cell identification | Binary |
 | Neural | Neural cell identification | Binary |
 | Neural | Neural cell subtype classification | Multi-class |

+ Models take **high-dimensional cell embeddings** as input and perform prediction using **lightweight downstream classifiers**, isolating representation quality from classifier complexity.
+
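A minimal sketch of this protocol: a logistic-regression probe trained on frozen embeddings. The embedding matrix and labels are stand-ins; nothing here is tied to a particular foundation model:

```python
# Lightweight-probe sketch: frozen embeddings in, class labels out.
# `X` (n_cells x embedding_dim) and `y` stand in for a foundation model's
# embeddings and the dataset's cell annotations, respectively.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 512))    # stand-in for model embeddings
y = rng.integers(0, 2, size=1000)   # stand-in binary labels (e.g., tumor vs. normal)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"probe accuracy: {clf.score(X_test, y_test):.3f}")
```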
+ ---

 ## Benchmark Models
+
+ The following single-cell foundation models are evaluated:
+
+ - **Geneformer**
+ - **scFoundation**
+ - **scGPT**
+ - **UCE**
+ - **scLONG**
+
+ ---

 ## Evaluation Metrics

+ - **Binary classification tasks**
+   - Accuracy
+   - Precision
+   - Recall
+   - F1-score
+
+ - **Multi-class classification tasks**
+   - Accuracy
+   - Macro-Precision
+   - Macro-Recall
+   - Macro-F1
+
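These metrics correspond one-to-one to standard scikit-learn calls. A small sketch, with `y_true`/`y_pred` standing in for a classifier's outputs on one task:

```python
# Computing the listed metrics with scikit-learn.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

y_true = [0, 1, 2, 2, 1, 0]  # stand-in labels
y_pred = [0, 1, 2, 1, 1, 0]  # stand-in predictions

acc = accuracy_score(y_true, y_pred)
# average="macro" yields Macro-Precision / Macro-Recall / Macro-F1;
# for binary tasks, average="binary" gives plain Precision / Recall / F1.
prec, rec, f1, _ = precision_recall_fscore_support(y_true, y_pred, average="macro")
print(f"acc={acc:.3f} macro-P={prec:.3f} macro-R={rec:.3f} macro-F1={f1:.3f}")
```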
+ ---
+
+ ## Data Format and Access
+
+ ### Data Files
+
+ All datasets are provided in **AnnData (`.h5ad`) format**.
+
+ > **Note**
+ > `.h5ad` files are not natively supported by the Hugging Face Dataset Viewer.
+ > Users are expected to download the files and load them locally using standard single-cell analysis tools such as **Scanpy** or **Seurat**.
+
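One plausible way to fetch and load a single file, assuming the dataset repo id matches this page (`SiatBioInf/SingleCell-Unseen-Benchmark`) and using a hypothetical file name:

```python
# Fetch one file from the Hub, then load it locally with Scanpy.
# The repo id is inferred from this page; the file name is a placeholder.
from huggingface_hub import hf_hub_download
import scanpy as sc

path = hf_hub_download(
    repo_id="SiatBioInf/SingleCell-Unseen-Benchmark",  # assumed dataset repo id
    filename="tumor/example_dataset.h5ad",             # hypothetical file name
    repo_type="dataset",
)
adata = sc.read_h5ad(path)
print(adata.shape)  # (n_cells, n_genes)
```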
+ ### Directory Structure
+
+ SingleCell-Unseen-Benchmark/
+ ├── tumor/      # Tumor cell datasets (.h5ad)
+ ├── stem/       # Stem cell datasets (.h5ad)
+ ├── neural/     # Neural cell datasets (.h5ad)
+ ├── normal/     # Normal cell datasets (.h5ad)
+ ├── results/    # Benchmark results
+ └── README.md
+
+ ## Benchmark Results
+
+ In addition to raw datasets, we provide **complete benchmark evaluation results** under the `results/` directory.
+
+ ### Results Organization
+
+ results/
+ ├── by_model/
+ │   ├── geneformer/
+ │   │   ├── tumor_identification.csv
+ │   │   ├── primary_site_tracing.csv
+ │   │   └── ...
+ │   ├── scfoundation/
+ │   ├── scgpt/
+ │   ├── uce/
+ │   └── sclong/
+ └── by_task/
+     ├── tumor_identification.csv
+     ├── primary_site_tracing.csv
+     ├── stem_identification.csv
+     └── ...
+
+ ### Design Rationale
+
+ - **`by_model/`**
+   Provides a **model-centric view**, facilitating analysis of how a single model performs across different tasks.
+
+ - **`by_task/`**
+   Provides a **task-centric view**, enabling direct comparison of multiple models on the same task.
+
+ Both views contain **identical information** and are provided to improve usability, clarity, and reproducibility.
+
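For instance, a task-centric comparison can be loaded directly with pandas; the CSV schema (one row per model, with the metric columns listed above) is an assumption, not documented here:

```python
# Hypothetical consumption of a by_task results file; the column layout
# (one row per model, metric columns) is assumed rather than documented.
import pandas as pd

df = pd.read_csv("results/by_task/tumor_identification.csv")
print(df.head())
```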
+ ---
+
+ ## Intended Use
+
+ This benchmark is intended for:
+
+ - Evaluating **generalization and robustness** of single-cell foundation models
+ - Studying **tumor cell identification and origin tracing** under unseen conditions
+ - Benchmarking representation quality across diverse biological contexts
+
+ The dataset is **not intended for clinical decision-making**.
+
+ ---
+
+ ## Citation
+
+ If you use this dataset or benchmark in your work, please cite:
+
+ ## Contact
+
+ For questions, issues, or suggestions, please open an issue on the Hugging Face repository.