---
dataset_info:
- config_name: chunk_1
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: metadata
    struct:
    - name: probability
      dtype: float64
    - name: relevant
      dtype: bool
    - name: dump
      dtype: string
    - name: url
      dtype: string
    - name: date
      dtype: timestamp[ms]
    - name: file_path
      dtype: string
    - name: language
      dtype: string
    - name: language_score
      dtype: float64
    - name: token_count
      dtype: int64
    - name: idx
      dtype: int64
    - name: score
      dtype: float64
    - name: int_score
      dtype: int64
  splits:
  - name: train
    num_bytes: 12928572805
    num_examples: 2644168
  download_size: 7225481499
  dataset_size: 12928572805
- config_name: chunk_2
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: metadata
    struct:
    - name: probability
      dtype: float64
    - name: relevant
      dtype: bool
    - name: dump
      dtype: string
    - name: url
      dtype: string
    - name: date
      dtype: timestamp[ms]
    - name: file_path
      dtype: string
    - name: language
      dtype: string
    - name: language_score
      dtype: float64
    - name: token_count
      dtype: int64
    - name: idx
      dtype: int64
    - name: score
      dtype: float64
    - name: int_score
      dtype: int64
  splits:
  - name: train
    num_bytes: 12442322111
    num_examples: 2644168
  download_size: 6946939349
  dataset_size: 12442322111
- config_name: chunk_3
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: metadata
    struct:
    - name: probability
      dtype: float64
    - name: relevant
      dtype: bool
    - name: dump
      dtype: string
    - name: url
      dtype: string
    - name: date
      dtype: timestamp[ms]
    - name: file_path
      dtype: string
    - name: language
      dtype: string
    - name: language_score
      dtype: float64
    - name: token_count
      dtype: int64
    - name: idx
      dtype: int64
    - name: score
      dtype: float64
    - name: int_score
      dtype: int64
  splits:
  - name: train
    num_bytes: 12298903026
    num_examples: 2644168
  download_size: 6838364994
  dataset_size: 12298903026
- config_name: chunk_4
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: metadata
    struct:
    - name: probability
      dtype: float64
    - name: relevant
      dtype: bool
    - name: dump
      dtype: string
    - name: url
      dtype: string
    - name: date
      dtype: timestamp[ms]
    - name: file_path
      dtype: string
    - name: language
      dtype: string
    - name: language_score
      dtype: float64
    - name: token_count
      dtype: int64
    - name: idx
      dtype: int64
    - name: score
      dtype: float64
    - name: int_score
      dtype: int64
  splits:
  - name: train
    num_bytes: 12366516316
    num_examples: 2644168
  download_size: 6878740053
  dataset_size: 12366516316
- config_name: chunk_5
  features:
  - name: text
    dtype: string
  - name: id
    dtype: string
  - name: metadata
    struct:
    - name: probability
      dtype: float64
    - name: relevant
      dtype: bool
    - name: dump
      dtype: string
    - name: url
      dtype: string
    - name: date
      dtype: timestamp[ms]
    - name: file_path
      dtype: string
    - name: language
      dtype: string
    - name: language_score
      dtype: float64
    - name: token_count
      dtype: int64
    - name: idx
      dtype: int64
    - name: score
      dtype: float64
    - name: int_score
      dtype: int64
  splits:
  - name: train
    num_bytes: 11968043495
    num_examples: 2644168
  download_size: 6734425417
  dataset_size: 11968043495
configs:
- config_name: chunk_1
  data_files:
  - split: train
    path: chunk_1/train-*
- config_name: chunk_2
  data_files:
  - split: train
    path: chunk_2/train-*
- config_name: chunk_3
  data_files:
  - split: train
    path: chunk_3/train-*
- config_name: chunk_4
  data_files:
  - split: train
    path: chunk_4/train-*
- config_name: chunk_5
  data_files:
  - split: train
    path: chunk_5/train-*
license: odc-by
task_categories:
- text-generation
language:
- en
tags:
- cybersecurity
- pretraining
pretty_name: RedSage-CFW
size_categories:
- 10M<n<100M
---

# Dataset Card for RedSage-CFW

<p align="center">
  <b>"RedSage: A Cybersecurity Generalist LLM" (ICLR 2026)</b>
  <br>
  <b>Authors:</b> Naufal Suryanto<sup>1</sup>, Muzammal Naseer<sup>1†</sup>, Pengfei Li<sup>1</sup>, Syed Talal Wasim<sup>2</sup>, Jinhui Yi<sup>2</sup>, Juergen Gall<sup>2</sup>, Paolo Ceravolo<sup>3</sup>, Ernesto Damiani<sup>3</sup>
  <br>
  <sup>1</sup>Khalifa University, <sup>2</sup>University of Bonn, <sup>3</sup>University of Milan  
  <br>
  <sup>†</sup>Project Lead
  <br>
  <br>
  <a href="https://openreview.net/forum?id=W4FAenIrQ2"><img src="https://img.shields.io/badge/Paper-OpenReview-B31B1B.svg"></a>
  <a href="https://huggingface.co/RISys-Lab"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-RISys--Lab-orange"></a>
  <br>
  🌐 <a href="https://risys-lab.github.io/RedSage/">Project Page</a>&nbsp;&nbsp;|&nbsp;&nbsp;
  🤖 <a href="https://huggingface.co/collections/RISys-Lab/redsage-models">Model Collection</a>&nbsp;&nbsp;|&nbsp;&nbsp;
  📊 <a href="https://huggingface.co/collections/RISys-Lab/redsage-benchmarks">Benchmark Collection</a>&nbsp;&nbsp;|&nbsp;&nbsp;
  📘 <a href="https://huggingface.co/collections/RISys-Lab/redsage-datasets">Data Collection </a>
  
  
</p>

## Dataset Description

* **Developed by:** RISys-Lab
* **Repository:** [GitHub](https://github.com/RISys-Lab/RedSage)
* **Paper:** [RedSage: A Cybersecurity Generalist LLM](https://openreview.net/forum?id=W4FAenIrQ2)
* **arXiv:** https://arxiv.org/abs/2601.22159

### Dataset Summary

**RedSage-CFW** (CyberFineWeb) is a large-scale cybersecurity dataset designed for the continual pretraining of Large Language Models (LLMs). It consists of approximately **11.7 billion tokens** spanning **13 million documents**.

The dataset was constructed by filtering the **FineWeb** corpus (Common Crawl 2013–2024) using a custom ModernBERT-based classifier to identify cybersecurity-relevant content. To prevent catastrophic forgetting of general capabilities during pretraining, the cybersecurity data is mixed with general educational content from **FineWeb-Edu**.

### Supported Tasks

* **Continual Pretraining:** Designed to adapt general-purpose LLMs (e.g., Qwen, Llama) to the cybersecurity domain.
* **Domain Adaptation:** Enhances model performance on cybersecurity knowledge, skills, and tool usage.

### Languages

The dataset primarily consists of English text, derived from Common Crawl sources.

## Dataset Structure

### Data Instances

The dataset is partitioned into 5 chunks (config names: `chunk_1` through `chunk_5`). Each instance represents a single document (e.g., a web page, article, or forum post).
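Each chunk can be loaded individually with the Hugging Face `datasets` library. A minimal sketch, assuming the repository id is the card's org and name (`RISys-Lab/RedSage-CFW`); adjust it if the actual repo path differs:

```python
def load_redsage_chunk(config="chunk_1", streaming=True):
    """Stream one chunk of RedSage-CFW.

    NOTE: the repo id below is inferred from this card's org/name and is an
    assumption, not confirmed by the source.
    """
    from datasets import load_dataset  # pip install datasets
    return load_dataset("RISys-Lab/RedSage-CFW", config, split="train", streaming=streaming)

# Requires network access:
# ds = load_redsage_chunk("chunk_1")
# doc = next(iter(ds))
# print(doc["id"], doc["metadata"]["url"])
```

Streaming avoids downloading a full ~7 GB chunk up front; drop `streaming=True` to materialize the split locally.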

### Data Fields

Each record contains the following fields:

* **`text`** (string): The full text content of the document.
* **`id`** (string): A unique identifier for the document.
* **`metadata`** (struct): Contains detailed attributes about the source and filtering:
  * `probability` (float64): The confidence score from the cybersecurity classifier.
  * `relevant` (bool): A flag indicating whether the document passed the relevance filter.
  * `url` (string): The source URL of the document.
  * `date` (timestamp): The crawl or publication date.
  * `dump` (string): The Common Crawl dump identifier (e.g., `CC-MAIN-2024-51`).
  * `file_path` (string): Path information for the original file.
  * `language` (string): The detected language of the text.
  * `language_score` (float64): The confidence score of the language detection.
  * `token_count` (int64): The number of tokens in the document.
  * `score` (float64), `int_score` (int64): Additional quality or relevance metrics.
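For illustration, the classifier fields can drive a simple relevance filter. The threshold and the sample record below are hypothetical, not values from the dataset:

```python
def is_high_confidence(example, threshold=0.9):
    """Keep documents flagged relevant whose classifier probability clears a
    (hypothetical) threshold."""
    meta = example["metadata"]
    return bool(meta["relevant"]) and meta["probability"] >= threshold

# A made-up record following the schema above:
sample = {
    "text": "An overview of SQL injection and common mitigations...",
    "id": "doc-000001",
    "metadata": {"probability": 0.97, "relevant": True},
}
print(is_high_confidence(sample))  # True for this sample
```

The same predicate can be passed to `datasets`' `filter()` to subset a chunk by classifier confidence.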

### Data Splits

The dataset is segmented into 5 chunks. The paper notes that the final corpus consists of the "latest 5 chunks" from the filtered pipeline to fit training budgets.

* **Total Size:** ~11.7B tokens.
* **Total Documents:** ~13M documents.

## Dataset Creation

### Curation Rationale

Existing cybersecurity solutions often rely on proprietary APIs or lack domain adaptation. RedSage-CFW bridges this gap by providing a transparent, open-source corpus for training local, privacy-preserving cybersecurity assistants.

### Source Data

* **FineWeb:** The base corpus is FineWeb, aggregated from 104 Common Crawl subsets between Summer 2013 and December 2024 (~17.2T tokens).
* **FineWeb-Edu:** Used for mixing general knowledge to maintain reasoning capabilities.

### Data Processing & Filtering

1. **Classifier Training:** A binary classifier based on **ModernBERT-base** was trained on the "Cybersecurity Topic Classification" dataset (sourced from Reddit, StackExchange, and arXiv). It achieved 97.3% accuracy on validation.
2. **Filtering:** This classifier was applied to FineWeb, identifying ~125M cybersecurity-relevant documents (~89.8B tokens).
3. **General Knowledge Replay:** To avoid catastrophic forgetting, the cybersecurity data was mixed with FineWeb-Edu samples at a **30% replay ratio**.
4. **Deduplication:** Global deduplication was performed using MinHash-LSH (via DataTrove), reducing the corpus size by ~47.9% in tokens.
5. **Chunking:** The final dataset comprises the latest 5 chronological chunks from the processed data to manage computational costs.
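The deduplication step (4) can be illustrated with a toy MinHash. This is a minimal sketch of the idea behind MinHash-LSH, not the DataTrove pipeline actually used; all parameters below are illustrative:

```python
import hashlib

def minhash_signature(text, num_perm=64, shingle_size=5):
    """Toy MinHash signature over character shingles."""
    n = max(1, len(text) - shingle_size + 1)
    shingles = {text[i:i + shingle_size] for i in range(n)}
    signature = []
    for seed in range(num_perm):
        salt = seed.to_bytes(2, "big")  # a distinct salt stands in for a distinct hash function
        signature.append(min(
            int.from_bytes(hashlib.blake2b(s.encode(), digest_size=8, salt=salt).digest(), "big")
            for s in shingles
        ))
    return signature

def jaccard_estimate(sig_a, sig_b):
    """The fraction of matching signature slots estimates Jaccard similarity
    of the underlying shingle sets."""
    return sum(a == b for a, b in zip(sig_a, sig_b)) / len(sig_a)
```

Documents whose estimated similarity exceeds some threshold would be treated as near-duplicates, and all but one dropped; LSH banding makes the pairwise comparison tractable at corpus scale.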


## Considerations for Using the Data

### Social Impact

The dataset enables the development of open-source cybersecurity assistants, potentially helping to bridge the global skills shortage in the field.

### Discussion of Biases and Limitations

* **Source Bias:** As a web-crawled dataset, it may inherit biases present in Common Crawl and online cybersecurity discussions.
* **Dual Use:** The dataset may contain offensive security knowledge (e.g., penetration testing techniques). While intended for defense, there is an inherent risk of misuse.

---

## Citation

```bibtex
@inproceedings{suryanto2026redsage,
  title={RedSage: A Cybersecurity Generalist {LLM}},
  author={Naufal Suryanto and Muzammal Naseer and Pengfei Li and Syed Talal Wasim and Jinhui Yi and Juergen Gall and Paolo Ceravolo and Ernesto Damiani},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://openreview.net/forum?id=W4FAenIrQ2}
}
```