---
license: mit
language:
- it
- en
tags:
- sentence-boundary-detection
- token-classification
- legal-nlp
- multilingual
task_categories:
- token-classification
pretty_name: SentenceSplitter Dataset
size_categories:
- 1K<n<10K
---
# SentenceSplitter Dataset

## Dataset Description
This dataset is designed for Sentence Boundary Disambiguation (SBD) as a token classification task.
Each sample uses the schema:
- `tokens`: list of token strings
- `ner_tags`: list of integer labels aligned with `tokens`
  - `0` = not end of sentence
  - `1` = end of sentence
The dataset is intended for multilingual SBD, with focus on Italian and English, and includes both domain-specific and adversarial patterns.
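As a concrete illustration of the schema, the sketch below builds one invented sample (the sentence is made up, not drawn from the dataset) and shows how sentences can be reassembled from the binary labels:

```python
# A minimal illustration of the schema: each sample pairs whitespace tokens
# with binary labels (1 = token ends a sentence, 0 = it does not).
# This sample is invented for illustration, not taken from the dataset.
sample = {
    "tokens": ["Il", "sig.", "Rossi", "arriva", ".", "Parte", "subito", "."],
    "ner_tags": [0, 0, 0, 0, 1, 0, 0, 1],
}

def sentences_from_labels(tokens, ner_tags):
    """Reassemble sentences by cutting after each token labeled 1."""
    sents, current = [], []
    for tok, tag in zip(tokens, ner_tags):
        current.append(tok)
        if tag == 1:
            sents.append(" ".join(current))
            current = []
    if current:  # trailing tokens without a closing boundary
        sents.append(" ".join(current))
    return sents

print(sentences_from_labels(sample["tokens"], sample["ner_tags"]))
# → ['Il sig. Rossi arriva .', 'Parte subito .']
```

Note that the abbreviation `sig.` is correctly labeled `0`: its period does not end a sentence, which is exactly the disambiguation the task targets.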
## Data Sources
The training corpus is created by merging:
- Professor corpus from `sent_split_data.tar.gz`
- MultiLegalSBD legal JSONL corpora
- Wikipedia (`20231101.it`, `20231101.en`)
Current filtering rules used in data preparation:
- Only professor files ending with `-train.sent_split`
- Only legal files ending with `*train.jsonl`
These filters are used to avoid dev/test leakage from source corpora.
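The filtering rules above can be sketched as a simple filename check. This helper is hypothetical, written to mirror the rules as stated, and is not the project's actual preprocessing code:

```python
import os

def keep_training_file(path):
    """Keep only training-split source files, mirroring the rules above:
    professor files must end with '-train.sent_split' and legal JSONL
    files must end with 'train.jsonl'; everything else is skipped to
    avoid dev/test leakage from the source corpora."""
    name = os.path.basename(path)
    if name.endswith(".sent_split"):
        return name.endswith("-train.sent_split")
    if name.endswith(".jsonl"):
        return name.endswith("train.jsonl")
    return False

print(keep_training_file("corpus/it-train.sent_split"))  # True
print(keep_training_file("corpus/it-dev.sent_split"))    # False
print(keep_training_file("legal/ch_train.jsonl"))        # True
print(keep_training_file("legal/ch_test.jsonl"))         # False
```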
## Dataset Splits
Published splits in this dataset repo:
- `train`: 1591 rows
- `validation`: 177 rows
- `test_adversarial`: 59 rows
All splits use the same features: `tokens` and `ner_tags`.
## How Splits Are Built
`train` and `validation` are derived from `unified_training_dataset` with `train_test_split(test_size=0.1, seed=42)`. `test_adversarial` is loaded from `comprehensive_test_dataset`, generated by the project's test-set pipeline.
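The split arithmetic can be checked with a pure-Python sketch. The project itself calls the `datasets` library's `train_test_split`, whose shuffle order will differ from the one below, but the published row counts (1591 + 177 = 1768 source rows) are consistent with a 10% holdout:

```python
import random

def split_rows(rows, test_size=0.1, seed=42):
    """Sketch of a deterministic holdout split. This is NOT the datasets
    library implementation -- only the size arithmetic matches it."""
    rng = random.Random(seed)
    idx = list(range(len(rows)))
    rng.shuffle(idx)
    n_test = round(len(rows) * test_size)
    test = [rows[i] for i in idx[:n_test]]
    train = [rows[i] for i in idx[n_test:]]
    return train, test

# 1591 train + 177 validation = 1768 rows before splitting.
train, validation = split_rows(list(range(1768)))
print(len(train), len(validation))  # 1591 177
```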
## Intended Uses
- Training and evaluating SBD models for legal/academic/general text.
- Robustness checks on punctuation-heavy and abbreviation-heavy inputs.
- Benchmarking token-classification approaches for sentence segmentation.
## Limitations
- The adversarial split is intentionally difficult and may not represent natural document frequency.
- Source corpora come from different domains and annotation strategies.
- Performance can vary on domains not represented by legal, academic, or encyclopedic text.
## Reproducibility Notes
Core preprocessing choices:
- Sliding window size: 128 tokens
- Stride: 100 tokens
- Whitespace tokenization at the dataset construction stage
- Labels aligned to token-level end-of-sentence boundaries
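The windowing choices above can be sketched as follows. This is a hypothetical helper, assumed from the stated parameters rather than copied from the project: 128-token windows advanced with a stride of 100, so consecutive windows overlap by 28 tokens of context:

```python
def sliding_windows(tokens, labels, window=128, stride=100):
    """Cut aligned token/label sequences into overlapping chunks
    (window=128, stride=100, as described in the preprocessing notes)."""
    chunks = []
    for start in range(0, len(tokens), stride):
        chunks.append({
            "tokens": tokens[start:start + window],
            "ner_tags": labels[start:start + window],
        })
        if start + window >= len(tokens):
            break  # last window already covers the end of the sequence
    return chunks

toks = [f"t{i}" for i in range(300)]
tags = [0] * 300
chunks = sliding_windows(toks, tags)
print([len(c["tokens"]) for c in chunks])  # [128, 128, 100]
```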
Recommended practice:
- Use `validation` for tuning
- Keep `test_adversarial` for final robustness evaluation