---
license: odc-by
task_categories:
- text-generation
- text2text-generation
language:
- en
---
# C4 English Tokenized Samples

This dataset contains tokenized English samples from the C4 (Colossal Clean Crawled Corpus) dataset for natural language processing (NLP) tasks.

The first 125,000 entries from the `en` split of [allenai/c4](https://huggingface.co/datasets/allenai/c4)
were tokenized using [spaCy](https://spacy.io/)'s `en_core_web_sm` model; the resulting tokens were joined with single spaces.

## Features

- `text`: Original text from C4
- `tokenized`: The tokenized and space-joined text
- `num_tokens`: Number of tokens after tokenization
- `num_punct_tokens`: Number of punctuation tokens after tokenization

## Example

```json
{
  "text": "ALDUS MANUTIUS AND HIS THESAURUS CORNUCOPIAE OF 1496.\nSyracuse (1958) . 7.5 x 4.25, cloth, 32 pp, a v.g. copy [...]",
  "tokenized": "ALDUS MANUTIUS AND HIS THESAURUS CORNUCOPIAE OF 1496 . \n Syracuse ( 1958 ) . 7.5 x 4.25 , cloth , 32 pp , a v.g . copy [...]",
  "num_tokens": 84,
  "num_punct_tokens": 19
}
```

## Usage

This dataset can be useful for:
- Text classification tasks
- Language modeling
- Sentiment analysis
- Other NLP applications requiring tokenized English text

Researchers and developers can use this dataset to start their projects without running the tokenization step themselves.
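For example, the precomputed `num_punct_tokens` field lets you filter out punctuation-heavy records without re-tokenizing. The sketch below uses two inline records standing in for rows loaded from the dataset:

```python
# Inline sample records with the dataset's schema (stand-ins for real rows).
records = [
    {"tokenized": "Hello , world !", "num_tokens": 4, "num_punct_tokens": 2},
    {"tokenized": "Plain text here", "num_tokens": 3, "num_punct_tokens": 0},
]

def punct_ratio(rec: dict) -> float:
    """Fraction of tokens that are punctuation, from the precomputed counts."""
    return rec["num_punct_tokens"] / rec["num_tokens"]

# Keep records where fewer than a quarter of the tokens are punctuation.
clean = [r for r in records if punct_ratio(r) < 0.25]
```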

## License

This dataset is licensed under the [Open Data Commons Attribution License (ODC-BY)](https://opendatacommons.org/licenses/by/1-0/).