---

annotations_creators:
- machine-generated
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- original
task_categories:
- text-classification
- sentence-similarity
task_ids:
- semantic-similarity-classification
pretty_name: WikiSection (en_city, en_disease)
tags:
- text segmentation
- document segmentation
- topic segmentation
- topic shift detection
- semantic chunking
- chunking
- nlp
- wikipedia
dataset_info:
- config_name: en_city
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: ids
    sequence: string
  - name: sentences
    sequence: string
  - name: titles_mask
    sequence: uint8
  - name: labels
    sequence:
      class_label:
        names:
          '0': semantic-continuity
          '1': semantic-shift
  splits:
  - name: train
    num_bytes: 105236889
    num_examples: 13679
  - name: validation
    num_bytes: 15693016
    num_examples: 1953
  - name: test
    num_bytes: 31140798
    num_examples: 3907
  download_size: 94042594
  dataset_size: 152070703
- config_name: en_disease
  features:
  - name: id
    dtype: string
  - name: title
    dtype: string
  - name: ids
    sequence: string
  - name: sentences
    sequence: string
  - name: titles_mask
    sequence: uint8
  - name: labels
    sequence:
      class_label:
        names:
          '0': semantic-continuity
          '1': semantic-shift
  splits:
  - name: train
    num_bytes: 22409988
    num_examples: 2513
  - name: validation
    num_bytes: 3190201
    num_examples: 359
  - name: test
    num_bytes: 6088470
    num_examples: 718
  download_size: 94042594
  dataset_size: 31688659
---

# Dataset Card for WikiSection (en_city, en_disease)

The WikiSection dataset is a collection of segmented Wikipedia articles related to cities and diseases, structured in this repository for a sentence-level document segmentation task.

## Dataset Overview

WikiSection contains two English subsets:
- **en_city**: 19.5k Wikipedia articles about cities and city-related topics.
- **en_disease**: 3.6k articles on diseases and health-related scientific information.

Each subset provides segmented articles, and the task is to label every sentence as either "semantic-continuity" or "semantic-shift."

## Features

The dataset provides the following features:

- **id**: `string` - A unique identifier for each document.
- **title**: `string` - The title of the document.
- **ids**: `list[string]` - The sentence ids within the document.
- **sentences**: `list[string]` - The sentences within the document.
- **titles_mask**: `list[uint8]` - A binary mask to indicate which sentences are titles.
- **labels**: `list[int]` - Binary labels for each sentence, where `0` represents "semantic-continuity" and `1` represents "semantic-shift."
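
As an illustration, a single record has the shape sketched below. The field names and types follow the feature list above, but the values are hypothetical and not taken from the dataset; it assumes a `1` in `labels` marks the first sentence of a new segment, with `labels` aligned one-to-one with `sentences`:

```python
# Hypothetical record, for illustration only -- the values are invented.
example = {
    "id": "doc-0001",                 # unique document identifier
    "title": "Example City",          # document title
    "ids": ["s0", "s1", "s2", "s3"],  # one id per sentence
    "sentences": [
        "History",                        # a title sentence (see titles_mask)
        "The city was founded in 1200.",
        "It grew rapidly in the 1800s.",
        "Geography",                      # next title sentence
    ],
    "titles_mask": [1, 0, 0, 1],      # 1 = this sentence is a title
    "labels": [1, 0, 0, 1],           # 1 = "semantic-shift", 0 = "semantic-continuity"
}
```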



## Usage

The dataset can be easily loaded using the HuggingFace `datasets` library:

```python
from datasets import load_dataset

# en_city
titled_en_city = load_dataset('saeedabc/wikisection', 'en_city', trust_remote_code=True)
untitled_en_city = load_dataset('saeedabc/wikisection', 'en_city', drop_titles=True, trust_remote_code=True)

# en_disease
titled_en_disease = load_dataset('saeedabc/wikisection', 'en_disease', trust_remote_code=True)
untitled_en_disease = load_dataset('saeedabc/wikisection', 'en_disease', drop_titles=True, trust_remote_code=True)
```
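
Each configuration exposes `train`, `validation`, and `test` splits. As a quick follow-up, the sketch below (assuming a label of `1`, "semantic-shift", marks the first sentence of a new segment) groups the sentences of one document back into segments:

```python
dataset = titled_en_city  # loaded in the snippet above

doc = dataset["train"][0]
print(doc["title"], len(doc["sentences"]), "sentences")

# Group sentences into segments: start a new segment whenever the label is 1.
segments, current = [], []
for sentence, label in zip(doc["sentences"], doc["labels"]):
    if label == 1 and current:
        segments.append(current)
        current = []
    current.append(sentence)
if current:
    segments.append(current)

print(len(segments), "segments in the first training document")
```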



## Dataset Details

- **Homepage**: [WikiSection on GitHub](https://github.com/sebastianarnold/WikiSection)