---
license: mit
language:
- it
- en
tags:
- sentence-boundary-detection
- token-classification
- legal-nlp
- multilingual
task_categories:
- token-classification
pretty_name: SentenceSplitter Dataset
size_categories:
- 1K<n<10K
---

# SentenceSplitter Dataset

## Dataset Description
This dataset is designed for Sentence Boundary Disambiguation (SBD) as a token classification task.

Each sample uses the schema:
- `tokens`: list of token strings
- `ner_tags`: list of integer labels aligned with `tokens`
  - `0` = not end of sentence
  - `1` = end of sentence

The dataset is intended for multilingual SBD, with focus on Italian and English, and includes both domain-specific and adversarial patterns.

## Data Sources
The training corpus is created by merging:

1. Professor corpus from `sent_split_data.tar.gz`
2. MultiLegalSBD legal JSONL corpora
3. Wikipedia (`20231101.it`, `20231101.en`)

Current filtering rules used in data preparation:
- Professor corpus: only files ending with `-train.sent_split`
- Legal corpora: only files matching `*train.jsonl`

These filters are used to avoid dev/test leakage from source corpora.
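A sketch of the filename filter implied by these rules (the example filenames are hypothetical; the actual pipeline code may differ):

```python
from fnmatch import fnmatch

def keep_for_training(filename: str) -> bool:
    """Keep only training files from the source corpora, per the rules above."""
    if filename.endswith("-train.sent_split"):  # professor corpus
        return True
    if fnmatch(filename, "*train.jsonl"):       # MultiLegalSBD legal corpora
        return True
    return False

# Hypothetical filenames: dev/test files are excluded to avoid leakage.
files = ["it-train.sent_split", "it-dev.sent_split",
         "swiss_train.jsonl", "swiss_test.jsonl"]
print([f for f in files if keep_for_training(f)])
# → ['it-train.sent_split', 'swiss_train.jsonl']
```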

## Dataset Splits
Published splits in this dataset repo:

- `train`: 1591 rows
- `validation`: 177 rows
- `test_adversarial`: 59 rows

All splits use the same features:
- `tokens`
- `ner_tags`

## How Splits Are Built
- `train` and `validation` are derived from `unified_training_dataset` with `train_test_split(test_size=0.1, seed=42)`.
- `test_adversarial` is loaded from `comprehensive_test_dataset` generated by the project testset pipeline.
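The train/validation derivation can be sketched in plain Python. This is an illustration of a seeded 90/10 shuffle-and-split, not the pipeline's actual code (which uses the `datasets` library's `train_test_split`); it reproduces the published split sizes but not necessarily the exact row assignment:

```python
import random

def split_dataset(rows, test_size=0.1, seed=42):
    """Seeded shuffle-and-split, mirroring train_test_split(test_size=0.1, seed=42)."""
    idx = list(range(len(rows)))
    random.Random(seed).shuffle(idx)
    n_val = int(round(len(rows) * test_size))
    val_idx = set(idx[:n_val])
    train = [rows[i] for i in idx if i not in val_idx]
    val = [rows[i] for i in idx[:n_val]]
    return train, val

rows = list(range(1768))  # 1591 + 177 rows before splitting
train, val = split_dataset(rows)
print(len(train), len(val))  # → 1591 177
```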

## Intended Uses
- Training and evaluating SBD models on legal, academic, and general text.
- Robustness checks on punctuation-heavy and abbreviation-heavy inputs.
- Benchmarking token-classification approaches for sentence segmentation.

## Limitations
- The adversarial split is intentionally difficult and may not represent natural document frequency.
- Source corpora come from different domains and annotation strategies.
- Performance may degrade on domains outside the legal, academic, and encyclopedic text represented here.

## Reproducibility Notes
Core preprocessing choices:
- Sliding window size: 128
- Stride: 100
- Whitespace tokenization at the dataset construction stage
- Label alignment to token-level EOS boundaries
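The sliding-window step can be sketched as follows. Window and stride values match the notes above; the function itself is an assumption about the preprocessing, not the pipeline's actual code:

```python
def sliding_windows(tokens, labels, window=128, stride=100):
    """Split a long token sequence into overlapping fixed-size windows.

    With window=128 and stride=100, consecutive windows overlap by 28 tokens,
    so boundaries near a window edge are seen again with more context.
    """
    windows = []
    start = 0
    while start < len(tokens):
        windows.append((tokens[start:start + window],
                        labels[start:start + window]))
        if start + window >= len(tokens):
            break  # last window already covers the tail
        start += stride
    return windows

# A 300-token document yields windows of 128, 128, and 100 tokens.
toks = [f"t{i}" for i in range(300)]
tags = [0] * 300
chunks = sliding_windows(toks, tags)
print([len(t) for t, _ in chunks])  # → [128, 128, 100]
```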

Recommended practice:
- Use `validation` for tuning
- Keep `test_adversarial` for final robustness evaluation