LorenzoVentrone committed on
Commit 5d62319 · verified · 1 Parent(s): 8d19dec

Update README.md

Files changed (1):
  1. README.md +76 -27

README.md CHANGED
@@ -1,29 +1,78 @@
  ---
- dataset_info:
-   features:
-   - name: tokens
-     list: string
-   - name: ner_tags
-     list: int64
-   splits:
-   - name: train
-     num_bytes: 2965530
-     num_examples: 1591
-   - name: validation
-     num_bytes: 329917
-     num_examples: 177
-   - name: test_adversarial
-     num_bytes: 24511
-     num_examples: 59
-   download_size: 3327548
-   dataset_size: 3319958
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
-   - split: validation
-     path: data/validation-*
-   - split: test_adversarial
-     path: data/test_adversarial-*
  ---
  ---
+ license: mit
+ language:
+ - it
+ - en
+ tags:
+ - sentence-boundary-detection
+ - token-classification
+ - legal-nlp
+ - multilingual
+ task_categories:
+ - token-classification
+ pretty_name: SentenceSplitter Dataset
+ size_categories:
+ - 1K<n<10K
  ---
+
+ # SentenceSplitter Dataset
+
+ ## Dataset Description
+ This dataset is designed for Sentence Boundary Disambiguation (SBD), framed as a token classification task.
+
+ Each sample uses the schema:
+ - `tokens`: list of token strings
+ - `ner_tags`: list of integer labels aligned with `tokens`
+   - `0` = not end of sentence
+   - `1` = end of sentence
+
+ The dataset targets multilingual SBD, with a focus on Italian and English, and includes both domain-specific and adversarial patterns.
+
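The schema above can be sketched with a toy sample (the token values are invented for illustration, not drawn from the corpus); grouping tokens back into sentences from the EOS labels is then a simple scan:

```python
# Illustrative sample following the dataset schema.
# Values are made up, not taken from the actual corpus.
sample = {
    "tokens": ["Art.", "5", "applies", "here", ".", "See", "below", "."],
    "ner_tags": [0, 0, 0, 0, 1, 0, 0, 1],  # 1 marks the last token of a sentence
}

def split_sentences(tokens, ner_tags):
    """Group tokens into sentences using the token-level EOS labels."""
    sentences, current = [], []
    for token, tag in zip(tokens, ner_tags):
        current.append(token)
        if tag == 1:      # end of sentence: flush the buffer
            sentences.append(current)
            current = []
    if current:           # trailing tokens with no EOS label
        sentences.append(current)
    return sentences

print(split_sentences(sample["tokens"], sample["ner_tags"]))
# [['Art.', '5', 'applies', 'here', '.'], ['See', 'below', '.']]
```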
+ ## Data Sources
+ The training corpus is built by merging:
+
+ 1. The professor corpus from `sent_split_data.tar.gz`
+ 2. The MultiLegalSBD legal JSONL corpora
+ 3. Wikipedia (`20231101.it`, `20231101.en`)
+
+ Filtering rules applied during data preparation:
+ - From the professor corpus, only files ending with `-train.sent_split`
+ - From the legal corpora, only files matching `*train.jsonl`
+
+ These filters avoid dev/test leakage from the source corpora.
+
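The two filtering rules amount to simple filename checks. A minimal sketch (`keep_source_file` is a hypothetical helper, not the project's actual preprocessing code, and the filenames are invented):

```python
from fnmatch import fnmatch

def keep_source_file(name):
    """Keep only training files from each source corpus, per the
    filtering rules above (hypothetical helper for illustration)."""
    is_professor_train = name.endswith("-train.sent_split")
    is_legal_train = fnmatch(name, "*train.jsonl")
    return is_professor_train or is_legal_train

files = [
    "it-train.sent_split",     # kept: professor training file
    "it-dev.sent_split",       # dropped: dev split, would leak
    "legal_it_train.jsonl",    # kept: legal training file
    "legal_it_test.jsonl",     # dropped: test split, would leak
]
print([f for f in files if keep_source_file(f)])
# ['it-train.sent_split', 'legal_it_train.jsonl']
```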
+ ## Dataset Splits
+ Splits published in this dataset repo:
+
+ - `train`: 1591 rows
+ - `validation`: 177 rows
+ - `test_adversarial`: 59 rows
+
+ All splits share the same features:
+ - `tokens`
+ - `ner_tags`
+
+ ## How Splits Are Built
+ - `train` and `validation` are derived from `unified_training_dataset` with `train_test_split(test_size=0.1, seed=42)`.
+ - `test_adversarial` is loaded from `comprehensive_test_dataset`, generated by the project's testset pipeline.
+
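A seeded 90/10 hold-out can be sketched in plain Python (this is an approximation for illustration; the actual split uses the `datasets` library's `train_test_split`, whose shuffling differs, so individual row assignments will not match). The total of 1768 rows below is simply 1591 + 177 from the published splits:

```python
import random

def train_validation_split(examples, test_size=0.1, seed=42):
    """Deterministic hold-out split: shuffle indices with a fixed seed,
    then reserve a `test_size` fraction as the validation set.
    (Sketch only; not the datasets library's exact algorithm.)"""
    indices = list(range(len(examples)))
    random.Random(seed).shuffle(indices)
    n_val = int(round(len(examples) * test_size))
    val_idx = set(indices[:n_val])
    train = [ex for i, ex in enumerate(examples) if i not in val_idx]
    validation = [ex for i, ex in enumerate(examples) if i in val_idx]
    return train, validation

train, validation = train_validation_split(list(range(1768)))
print(len(train), len(validation))
# 1591 177
```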
+ ## Intended Uses
+ - Training and evaluating SBD models on legal, academic, and general text.
+ - Robustness checks on punctuation-heavy and abbreviation-heavy inputs.
+ - Benchmarking token-classification approaches to sentence segmentation.
+
+ ## Limitations
+ - The adversarial split is intentionally difficult and may not reflect natural document frequencies.
+ - The source corpora come from different domains and use different annotation strategies.
+ - Performance can vary on domains not represented by legal, academic, or encyclopedic text.
+
+ ## Reproducibility Notes
+ Core preprocessing choices:
+ - Sliding window size: 128 tokens
+ - Stride: 100 tokens
+ - Whitespace tokenization at the dataset construction stage
+ - Labels aligned to token-level EOS boundaries
+
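The window/stride choice above can be sketched as overlapping chunks with aligned labels (`sliding_windows` is a hypothetical illustration; the project's actual chunking code may handle edges differently):

```python
def sliding_windows(tokens, labels, window=128, stride=100):
    """Chunk a token sequence into overlapping windows of `window` tokens,
    advancing by `stride`, keeping labels aligned with tokens.
    Consecutive windows overlap by window - stride = 28 tokens."""
    windows = []
    start = 0
    while True:
        windows.append((tokens[start:start + window],
                        labels[start:start + window]))
        if start + window >= len(tokens):
            break          # this window already reaches the end
        start += stride
    return windows

# A 300-token document yields windows starting at 0, 100, and 200.
tokens = [f"tok{i}" for i in range(300)]
labels = [0] * 300
chunks = sliding_windows(tokens, labels)
print([len(t) for t, _ in chunks])
# [128, 128, 100]
```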
+ Recommended practice:
+ - Use `validation` for tuning.
+ - Reserve `test_adversarial` for final robustness evaluation.