JFLa committed · verified
Commit 9ea683c · 1 Parent(s): fd3e64f

Update README.md

Files changed (1): README.md +3 -3
README.md CHANGED
@@ -19,8 +19,8 @@ tags:
 # Geneformer-CAB: Benchmarking Scale and Architecture in Foundation Models for Single-Cell Transcriptomics
 - Model Overview:
 
-Geneformer-CAB (Cumulative-Assignment-Blocking) is a benchmarked variant of the Geneformer architecture for modeling single-cell transcriptomic data.
-Rather than introducing an entirely new model, Geneformer-CAB systematically evaluates how data scale and architectural refinements interact to influence model generalization, predictive diversity, and robustness to batch effects.
+Geneformer-CAB (Cumulative-Assignment-Blocking, GF-CAB) is a benchmarked variant of the Geneformer architecture for modeling single-cell transcriptomic data.
+Rather than introducing an entirely new model, GF-CAB systematically evaluates how data scale and architectural refinements interact to influence model generalization, predictive diversity, and robustness to batch effects.
 
 - This model integrates two architectural enhancements:
 
@@ -28,4 +28,4 @@ Rather than introducing an entirely new model, Geneformer-CAB systematically eva
 
 2. Similarity-based regularization, which penalizes redundant token predictions to promote diversity and alignment with rank-ordered gene expression profiles.
 
-Together, these mechanisms provide insight into the limits of scale in single-cell foundation models revealing that scaling up pretraining data does not always yield superior downstream performance.
+Together, these mechanisms provide insight into the limits of scale in single-cell foundation models, revealing that scaling up pretraining data does not always yield superior downstream performance.
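The similarity-based regularization described in the README can be sketched as a penalty on the mean pairwise cosine similarity among a cell's predicted token embeddings: the penalty is high when predictions are redundant (near-identical embeddings) and low when they are diverse. This is a minimal illustrative sketch under that assumption — the function name and use of NumPy are hypothetical, not the repository's actual implementation.

```python
import numpy as np

def similarity_penalty(pred_embeddings: np.ndarray) -> float:
    """Hypothetical sketch: mean off-diagonal cosine similarity among
    predicted token embeddings, used as a redundancy penalty.
    pred_embeddings: (n_tokens, dim) array of predicted embeddings."""
    # Normalize each predicted token embedding to unit length.
    norms = np.linalg.norm(pred_embeddings, axis=1, keepdims=True)
    unit = pred_embeddings / np.clip(norms, 1e-12, None)
    # Cosine similarity matrix between all pairs of predictions.
    sim = unit @ unit.T
    n = sim.shape[0]
    # Average off-diagonal similarity: 1.0 for identical predictions,
    # 0.0 for mutually orthogonal (maximally diverse) predictions.
    return float((sim.sum() - np.trace(sim)) / (n * (n - 1)))
```

Adding a term like this to the masked-prediction loss discourages the model from assigning near-identical representations to many tokens, which is one plausible way to promote the predictive diversity the README highlights.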