---
license: apache-2.0
tags:
- ner
- gliner
- zero-shot
- bootstrap
- uv-script
size_categories:
- n<10K
---

# davanstrien/eval-mentions-bootstrap
Bootstrap NER dataset produced by `urchade/gliner_multi-v2.1` over `/input/cleaned-cards.parquet`. Generated using `uv-scripts/gliner/extract-entities.py`.
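For reference, the per-sample extraction step looks roughly like the sketch below. This is a minimal illustration using the standard `gliner` Python API with the settings reported in the Provenance table, not a copy of `extract-entities.py`.

```python
from gliner import GLiNER

# Settings reported in the Provenance table below.
MODEL_ID = "urchade/gliner_multi-v2.1"
ENTITY_TYPES = ["benchmark name", "evaluation dataset", "evaluation metric"]
THRESHOLD = 0.6
MAX_CHARS = 8000

model = GLiNER.from_pretrained(MODEL_ID)

def extract_entities(card_text: str) -> list[dict]:
    """Run zero-shot NER over one card, returning GLiNER span dicts."""
    truncated = card_text[:MAX_CHARS]  # long cards are truncated before inference
    return model.predict_entities(truncated, ENTITY_TYPES, threshold=THRESHOLD)
```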
## Provenance
| Field | Value |
|---|---|
| Source dataset | `/input/cleaned-cards.parquet` (split `train`) |
| Text column | `card` |
| Bootstrap model | `urchade/gliner_multi-v2.1` |
| Entity types | `benchmark name`, `evaluation dataset`, `evaluation metric` |
| Confidence threshold | 0.6 |
| Samples processed | 10000 |
| Total entities extracted | 15811 |
| Inference device | `cuda` |
| Wall clock | 951.7 s (10.51 samples/s) |
## Schema
Original `/input/cleaned-cards.parquet` columns plus an `entities` column:
```python
entities: list of {
    "start": int,    # character offset, inclusive
    "end": int,      # character offset, exclusive
    "text": str,     # the matched span
    "label": str,    # one of ['benchmark name', 'evaluation dataset', 'evaluation metric']
    "score": float,  # GLiNER confidence in [0, 1]
}
```
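A quick way to sanity-check the offsets, assuming the dataset loads with 🤗 `datasets` and that offsets index into the `card` column (since truncation keeps the prefix, offsets into the truncated text and the full text coincide):

```python
from datasets import load_dataset

ds = load_dataset("davanstrien/eval-mentions-bootstrap", split="train")

row = ds[0]
for ent in row["entities"]:
    # start is inclusive, end is exclusive, so slicing recovers the span text
    assert row["card"][ent["start"]:ent["end"]] == ent["text"]
    print(f'{ent["label"]:>20}  {ent["score"]:.2f}  {ent["text"]}')
```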
## Caveats
- These are bootstrap labels, not human-reviewed. Treat low-confidence (< 0.7) entities as candidates for review; see the filtering sketch after this list.
- GLiNER is zero-shot: changing `--entity-types` changes what it extracts, but quality varies by entity type.
- Long texts were truncated at 8000 characters before inference.
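One way to pull out the low-confidence spans for review (a sketch; the 0.7 cut-off is the suggested review threshold above, not something baked into the data):

```python
from datasets import load_dataset

ds = load_dataset("davanstrien/eval-mentions-bootstrap", split="train")

REVIEW_THRESHOLD = 0.7

def split_by_confidence(example):
    """Split entities into accepted spans and spans queued for human review."""
    example["needs_review"] = [e for e in example["entities"] if e["score"] < REVIEW_THRESHOLD]
    example["accepted"] = [e for e in example["entities"] if e["score"] >= REVIEW_THRESHOLD]
    return example

reviewable = ds.map(split_by_confidence)
print(sum(len(r) for r in reviewable["needs_review"]), "entities flagged for review")
```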