ylelauta committed on
Commit 5fe24c6 · verified · 1 Parent(s): 45dfbc8

Update README with parquet dataset documentation

Files changed (1):
  1. README.md +136 -46

README.md CHANGED
@@ -1,80 +1,170 @@
  ---
  license: cc-by-4.0
  task_categories:
  - text-classification
- - token-classification
  language:
  - en
  tags:
  - 4chan
- - political-discourse
  - toxicity
- - named-entity-recognition
  - perspective-api
- - social-media
- - imageboard
  size_categories:
- - 10M<n<100M
- source_datasets:
- - original
  ---
 
- # Raiders of the Lost Kek: 3.5 Years of Augmented 4chan /pol/ Posts
 
- ## Dataset Description
-
- Academic research dataset containing 3.5 years of 4chan /pol/ (Politically Incorrect) board content with enriched metadata. Published alongside the paper "Raiders of the Lost Kek: 3.5 Years of Augmented 4chan Posts from the Politically Incorrect Board" (ICWSM 2020).
-
- **Authors:** Antonis Papasavva, Savvas Zannettou, Emiliano De Cristofaro, Gianluca Stringhini, Jeremy Blackburn
-
- **Institutions:** University College London, Max Planck Institute, Boston University, Binghamton University
 
- ## Content
 
- - **Format:** Newline-delimited JSON (NDJSON), one line per thread
- - **Date range:** June 2016 — November 2019
- - **Source:** 4chan /pol/ board via the 4chan API
- - **Size:** ~24 GB compressed (tar.zst)
 
- ## Augmented Annotations
 
- Three supplementary data layers beyond the standard 4chan API fields:
 
- 1. **Named Entities** — Extracted using the spaCy NLP pipeline
- 2. **Perspective API Scores** — 7 toxicity dimension scores in [0, 1] from Google's Perspective API
- 3. **Extracted Poster ID** — Additional identifier field
 
- ## Usage
 
- ```bash
- # Decompress
- tar --zstd -xf pol_0616-1119_labeled.tar.zst
- ```
-
- ```python
- # Stream-process with Python
- import json
- with open('pol_0616-1119_labeled.ndjson') as f:
-     for line in f:
-         thread = json.loads(line)
-         # thread contains posts with entities, toxicity scores, etc.
  ```
 
- ## Citation
 
  ```bibtex
  @inproceedings{papasavva2020raiders,
    title={Raiders of the Lost Kek: 3.5 Years of Augmented 4chan Posts from the Politically Incorrect Board},
    author={Papasavva, Antonis and Zannettou, Savvas and De Cristofaro, Emiliano and Stringhini, Gianluca and Blackburn, Jeremy},
-   booktitle={Proceedings of the International AAAI Conference on Web and Social Media (ICWSM)},
    year={2020}
  }
  ```
 
- ## License
-
- Creative Commons Attribution 4.0 International (CC-BY-4.0)
-
- **DOI:** [10.5281/zenodo.3606810](https://doi.org/10.5281/zenodo.3606810)
-
- ## Intended Use
 
- This dataset is intended for academic research on online discourse, toxicity detection, and social media analysis. It is mirrored here from Zenodo for easier programmatic access.
  ---
+ dataset_info:
+   features:
+   - name: thread_no
+     dtype: int64
+   - name: archived_on
+     dtype: int64
+   - name: semantic_url
+     dtype: string
+   - name: "no"
+     dtype: int64
+   - name: resto
+     dtype: int64
+   - name: time
+     dtype: int64
+   - name: now
+     dtype: string
+   - name: name
+     dtype: string
+   - name: trip
+     dtype: string
+   - name: sub
+     dtype: string
+   - name: com
+     dtype: string
+   - name: country
+     dtype: string
+   - name: country_name
+     dtype: string
+   - name: filename
+     dtype: string
+   - name: ext
+     dtype: string
+   - name: fsize
+     dtype: int64
+   - name: md5
+     dtype: string
+   - name: w
+     dtype: int32
+   - name: h
+     dtype: int32
+   - name: tn_w
+     dtype: int32
+   - name: tn_h
+     dtype: int32
+   - name: tim
+     dtype: int64
+   - name: replies
+     dtype: int32
+   - name: images
+     dtype: int32
+   - name: bumplimit
+     dtype: int32
+   - name: imagelimit
+     dtype: int32
+   - name: archived
+     dtype: int32
+   - name: closed
+     dtype: int32
+   - name: toxicity
+     dtype: float32
+   - name: severe_toxicity
+     dtype: float32
+   - name: inflammatory
+     dtype: float32
+   - name: profanity
+     dtype: float32
+   - name: insult
+     dtype: float32
+   - name: obscene
+     dtype: float32
+   - name: spam
+     dtype: float32
+   - name: entities
+     dtype: string
+ splits:
77
+ - name: train
78
+ num_bytes: 19440000000
79
+ num_examples: 134529233
80
+ download_size: 19440000000
81
+ dataset_size: 134529233
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: data/train-*-of-00270.parquet
  license: cc-by-4.0
  task_categories:
  - text-classification
  language:
  - en
  tags:
  - 4chan
  - toxicity
  - perspective-api
+ - named-entities
+ - political
+ pretty_name: "/pol/ 4chan Augmented (Jun 2016 - Nov 2019)"
  size_categories:
+ - 100M<n<1B
  ---
 
+ # /pol/ 4chan Augmented Dataset
 
+ **134.5M posts** from 3.4M threads on 4chan's /pol/ board (June 2016 - November 2019), augmented with Perspective API toxicity scores and named-entity annotations.
 
+ ## Dataset Description
 
+ This dataset contains posts from 4chan's Politically Incorrect (/pol/) board, collected between June 2016 and November 2019. Each post has been augmented with:
 
+ - **Perspective API toxicity scores** (7 dimensions): toxicity, severe_toxicity, inflammatory, profanity, insult, obscene, spam
+ - **Named entity recognition**: extracted entities stored as JSON arrays
 
+ ### Source
 
+ Original data from [Papasavva et al. (2020)](https://zenodo.org/records/3606810), "Raiders of the Lost Kek: 3.5 Years of Augmented 4chan Posts from the Politically Incorrect Board".
 
+ ### Format
 
+ 270 Parquet shards with zstd compression (~70 MB each). The dataset loads directly with HuggingFace `datasets`:
 
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("ylelauta/pol-4chan-augmented")
  ```
 
+ ### Schema
+
+ | Field | Type | Description |
+ |-------|------|-------------|
+ | `thread_no` | int64 | Thread number (from OP) |
+ | `no` | int64 | Post number |
+ | `resto` | int64 | 0 = OP, >0 = reply-to thread number |
+ | `time` | int64 | Unix timestamp |
+ | `com` | string | Comment HTML |
+ | `country` / `country_name` | string | Poster's country flag |
+ | `sub` | string | Subject (OP only) |
+ | `name` / `trip` | string | Poster identity |
+ | `filename` / `ext` / `fsize` / `md5` / `w` / `h` / `tim` | mixed | Image metadata |
+ | `replies` / `images` | int32 | Thread stats (OP only) |
+ | `toxicity` | float32 | Perspective API toxicity score (0-1) |
+ | `severe_toxicity` | float32 | Severe toxicity score |
+ | `inflammatory` | float32 | Inflammatory score |
+ | `profanity` | float32 | Profanity score |
+ | `insult` | float32 | Insult score |
+ | `obscene` | float32 | Obscene score |
+ | `spam` | float32 | Spam score |
+ | `entities` | string | JSON array of named entities |
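
Since `entities` is a JSON-encoded string rather than a nested type, it needs a `json.loads` pass before use. A minimal sketch on hypothetical rows (the `[text, label]` pair layout inside the array is an assumption for illustration, not taken from the dataset):

```python
import json

# Hypothetical rows mirroring the schema above; values are invented.
posts = [
    {"no": 1, "toxicity": 0.91,
     "entities": json.dumps([["Washington", "GPE"], ["NATO", "ORG"]])},
    {"no": 2, "toxicity": 0.05, "entities": json.dumps([])},
]

# Decode the entities string once per post, then reuse the parsed list.
for p in posts:
    p["entity_list"] = json.loads(p["entities"])

# Example query: post numbers above a toxicity threshold.
highly_toxic = [p["no"] for p in posts if p["toxicity"] >= 0.8]
```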
+
+ ### Statistics
+
+ - **134,529,233** posts
+ - **3,397,911** threads
+ - **270** Parquet shards
+ - Date range: June 2016 - November 2019
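
Posts are stored flat rather than nested per thread, so threads can be rebuilt from `resto` (0 marks an OP; any other value is the thread number being replied to). A sketch on hypothetical rows:

```python
from collections import defaultdict

# Hypothetical posts; field names follow the schema, values are invented.
posts = [
    {"no": 100, "resto": 0,   "com": "OP"},
    {"no": 101, "resto": 100, "com": "first reply"},
    {"no": 102, "resto": 100, "com": "second reply"},
]

# Group every post under its OP's post number.
threads = defaultdict(list)
for p in posts:
    key = p["no"] if p["resto"] == 0 else p["resto"]
    threads[key].append(p)
```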
+
+ ### Citation
 
  ```bibtex
  @inproceedings{papasavva2020raiders,
    title={Raiders of the Lost Kek: 3.5 Years of Augmented 4chan Posts from the Politically Incorrect Board},
    author={Papasavva, Antonis and Zannettou, Savvas and De Cristofaro, Emiliano and Stringhini, Gianluca and Blackburn, Jeremy},
+   booktitle={Proceedings of the International AAAI Conference on Web and Social Media},
    year={2020}
  }
  ```
 
+ ### License
 
+ CC-BY-4.0 (following the original dataset license)