---
license: mit
task_categories:
- token-classification
task_ids:
- named-entity-recognition
dataset_info:
  features:
  - name: sentence
    dtype: string
  - name: entities
    list:
    - name: end
      dtype: int64
    - name: label
      dtype: string
    - name: start
      dtype: int64
    - name: text
      dtype: string
  - name: data_source
    dtype: string
  splits:
  - name: train
    num_bytes: 859293
    num_examples: 3361
  - name: validation
    num_bytes: 447206
    num_examples: 1819
  download_size: 564719
  dataset_size: 1306499
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
---
# Proposed Active Learning Data Split

## Overview
This dataset release represents a proposed and experimental data split designed specifically to support and validate planned active learning (AL) cycles for biomedical Named Entity Recognition (NER).
The current version is not a final benchmark split. Instead, it serves as an initial, controlled setup for testing active learning strategies, model uncertainty sampling, and iterative annotation workflows prior to large-scale development.
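As a rough illustration of the uncertainty-sampling step mentioned above, the sketch below ranks unlabeled sentences by least-confidence scoring (one common AL strategy, not necessarily the one used here). The `token_probs` structure is a hypothetical stand-in for per-token class probabilities from any NER model:

```python
def least_confidence(token_probs):
    # Sentence-level uncertainty: 1 minus the smallest top-class
    # probability across the sentence's tokens (least-confidence scoring).
    return 1.0 - min(max(dist) for dist in token_probs)

def select_for_annotation(pool, k):
    # pool: list of (sentence_id, token_probs); return the k most
    # uncertain sentence ids for the next annotation round.
    ranked = sorted(pool, key=lambda item: least_confidence(item[1]), reverse=True)
    return [sid for sid, _ in ranked[:k]]

# Toy pool: per-token probability distributions over three labels.
pool = [
    ("s1", [[0.9, 0.05, 0.05], [0.8, 0.1, 0.1]]),   # confident
    ("s2", [[0.4, 0.35, 0.25], [0.5, 0.3, 0.2]]),   # uncertain
    ("s3", [[0.6, 0.3, 0.1], [0.55, 0.25, 0.2]]),   # middling
]
print(select_for_annotation(pool, 2))  # → ['s2', 's3']
```

In a real AL cycle the selected sentences would be sent for annotation, added to the train split, and the model retrained before the next round.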
Both splits (train and validation) have been carefully curated to ensure coverage of all three target entity types:
- CellLine
- CellType
- Tissue
This balanced representation is critical for meaningful evaluation of active learning behavior across heterogeneous biomedical entities.
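A quick way to sanity-check this coverage is to collect the set of entity labels present in each split, as sketched below on a toy split that mirrors the dataset's per-example schema:

```python
TARGET_LABELS = {"CellLine", "CellType", "Tissue"}

def labels_in_split(examples):
    # Collect every entity label that occurs anywhere in the split.
    return {ent["label"] for ex in examples for ent in ex["entities"]}

def covers_all_targets(examples):
    # True if the split contains at least one entity of each target type.
    return TARGET_LABELS <= labels_in_split(examples)

# Toy split shaped like the dataset's examples.
toy_split = [
    {"sentence": "HeLa cells were cultured.",
     "entities": [{"start": 0, "end": 4, "label": "CellLine", "text": "HeLa"}],
     "data_source": "ChEMBL"},
    {"sentence": "T cells infiltrate liver tissue.",
     "entities": [{"start": 0, "end": 7, "label": "CellType", "text": "T cells"},
                  {"start": 19, "end": 24, "label": "Tissue", "text": "liver"}],
     "data_source": "Single-Cell"},
]
print(covers_all_targets(toy_split))  # → True
```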
## Dataset Description

Each split contains the following features:
- sentence: the sentence text (string)
- entities: a list of entity dicts, each with `start`, `end`, `label`, and `text` fields
- data_source: the source of the article from which the sentence originates
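Assuming the character offsets in `start`/`end` index into `sentence`, each entity's span should slice out exactly its `text`. The following sketch checks that invariant on a made-up example:

```python
def verify_spans(example):
    # Each entity's (start, end) offsets should slice exactly its surface text.
    return all(example["sentence"][e["start"]:e["end"]] == e["text"]
               for e in example["entities"])

example = {
    "sentence": "HeLa cells were grown in liver tissue extract.",
    "entities": [
        {"start": 0, "end": 4, "label": "CellLine", "text": "HeLa"},
        {"start": 25, "end": 30, "label": "Tissue", "text": "liver"},
    ],
    "data_source": "example",
}
print(verify_spans(example))  # → True
```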
The dataset has been curated from three complementary biomedical domains, each contributing distinct entity distributions:
- Single-cell transcriptomics: rich in CellType and Tissue entities
- ChEMBL assay descriptions: rich in CellLine entities
- Stem-cell research: contains all three entity types
The stem cell–related articles were collected from the CellFinder data repository. The creation and annotation methodology of the original CellFinder dataset are described in the following reference:
Mariana Neves, Alexander Damaschun, Andreas Kurtz, Ulf Leser (2012) Annotating and evaluating text for stem cell research. In Proceedings Third Workshop on Building and Evaluation Resources for Biomedical Text Mining (BioTxtM 2012), Language Resources and Evaluation (LREC) 2012.
## Article PMCIDs and Sources

### Train Set
| PMCID | In-Split | Source |
|---|---|---|
| PMC12435838 | train | Single-Cell |
| PMC11578878 | train | Single-Cell |
| PMC12396968 | train | Single-Cell |
| PMC11116453 | train | Single-Cell |
| PMC12408821 | train | Single-Cell |
| PMC10968586 | train | CheMBL-V1 |
| PMC10761218 | train | CheMBL-V1 |
| PMC7642379 | train | CheMBL-V1 |
| PMC10674574 | train | CheMBL-V1 |
| PMC1315352 | train | CellFinder |
| PMC2041973 | train | CellFinder |
| PMC2238795 | train | CellFinder |
### Validation Set
| PMCID | In-Split | Source |
|---|---|---|
| PMC12256823 | val | Single-Cell |
| PMC12116388 | val | Single-Cell |
| PMC10287567 | val | Single-Cell |
| PMC12133578 | val | Single-Cell |
| PMC8658661 | val | CheMBL-V1 |
| PMC11350568 | val | CheMBL-V1 |
| PMC12072392 | val | CheMBL-V2/CeLLaTe-V2 |
| PMC12115102 | val | CheMBL-V2/CeLLaTe-V2 |
| PMC2063610 | val | CellFinder |
| PMC1462997 | val | CellFinder |
## Intended Use

### Primary Use
- Supervised NER training for biomedical NLP tasks
### Not Intended For
- Clinical or patient-level decision making
## Notes and Limitations

- This is an experimental split, subject to change.
- Entity distributions may not reflect real-world prevalence.
- Annotation density varies across domains by design.