---
license: apache-2.0
task_categories:
- text-generation
language:
- en
size_categories:
- 100K<n<1M
---

# SafeC4Sample: C4 Dataset with Harmfulness Predictions

## Overview

SafeC4Sample is a processed subset of the [C4 dataset (a colossal, cleaned version of Common Crawl's web crawl corpus)](https://huggingface.co/datasets/allenai/c4) that includes harmfulness predictions from HarmFormer, the classifier used in [our paper](https://arxiv.org/pdf/2505.02009). This dataset can be used for content moderation, safer language model training, or research into harmfulness detection in web text.

The original C4 dataset, created by Google, provides a cleaned version of Common Crawl's web data and serves as training data for many large language models. This project enhances the dataset by adding harmfulness predictions for various harm categories, making it suitable for safety-focused applications.

We will be releasing HarmFormer's predictions on the entire C4 corpus shortly. For more details on HarmFormer, please see [our paper](https://arxiv.org/pdf/2505.02009).

## Model and Inference

Inference is performed using HarmFormer (a fine-tuned `allenai/longformer-base-4096`), which was trained to detect potentially harmful content across five harm categories, each rated on three risk levels (Safe, Topical, Toxic):

- **H**: Hate and Violence
- **IH**: Ideological Harm
- **SE**: Sexual Harm
- **IL**: Illegal Activities
- **SI**: Self-Inflicted

Each text in the dataset is processed and assigned probabilities for each harm category using the MultiTaskModel architecture, which features the following (a rough sketch follows this list):
- Base Longformer model for handling long context
- Multiple classification heads (one per harm category)
- Predictions with 3 risk levels per category
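
For concreteness, here is a minimal, hypothetical PyTorch sketch of that layout. The class name `MultiTaskModel` comes from the description above, but the pooling strategy, head design, and variable names are our assumptions; the actual HarmFormer implementation may differ.

```python
import torch
import torch.nn as nn
from transformers import LongformerModel

class MultiTaskModel(nn.Module):
    """Illustrative multi-head classifier; names and pooling are assumptions."""

    def __init__(self, num_categories=5, num_levels=3):
        super().__init__()
        # Longformer base handles long web documents (up to 4096 tokens)
        self.encoder = LongformerModel.from_pretrained("allenai/longformer-base-4096")
        hidden = self.encoder.config.hidden_size
        # One classification head per harm category (H, IH, SE, IL, SI)
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, num_levels) for _ in range(num_categories)]
        )

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]  # first-token pooling (assumed)
        # One (Safe, Topical, Toxic) distribution per category
        return [torch.softmax(head(pooled), dim=-1) for head in self.heads]
```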

## Dataset Structure

Each entry in the dataset includes:
- All original C4 fields: `text`, `url`, and `timestamp`
- An additional `prediction` field containing an array of five probability distributions, one per harm category, over the three risk levels (Safe, Topical, Toxic)

Example entry:
```json
{
  "url": "https://dosalmas.us/2024/01/30/free-spins-ports-a-comprehensive-guide/",
  "text": "Vending machine have constantly been the go-to games for casino lovers. With their vibrant graphics, interesting sounds...",
  "timestamp": "2019-04-25T12:57:54Z",
  "prediction": [
    [0.982, 0.015, 0.003], // H category probabilities for 3 risk levels
    [0.991, 0.007, 0.002], // IH category probabilities
    [0.963, 0.035, 0.002], // SE category probabilities
    [0.976, 0.018, 0.006], // IL category probabilities
    [0.988, 0.010, 0.002]  // SI category probabilities
  ]
}
```
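
Since `prediction` is a bare 5×3 array, it can help to attach the category and risk-level names when inspecting entries. The helper below is purely illustrative and not part of the dataset:

```python
CATEGORIES = ["H", "IH", "SE", "IL", "SI"]
LEVELS = ["Safe", "Topical", "Toxic"]

def label_prediction(prediction):
    """Attach category and risk-level names to the raw 5x3 prediction array."""
    return {cat: dict(zip(LEVELS, probs)) for cat, probs in zip(CATEGORIES, prediction)}

# label_prediction(example["prediction"])["SE"] -> {"Safe": 0.963, "Topical": 0.035, "Toxic": 0.002}
```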

## Usage

You can load this dataset using the Hugging Face Datasets library:

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("themendu/SafeC4Sample")

# Access an example
example = next(iter(dataset["train"]))
print(example["text"])
print(example["prediction"])

# Filter examples based on harm probability
def is_harmful(example, category_idx=2, threshold=0.7):  # SE category (index 2)
    # Sum of the non-safe (Topical + Toxic) probabilities
    return sum(example["prediction"][category_idx][1:]) > threshold

harmful_examples = dataset.filter(is_harmful)
```
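
Conversely, to build a conservatively filtered subset for safer training (continuing from the snippet above), you can keep only texts with a high Safe probability in every category. The 0.9 threshold here is an arbitrary illustration, not a recommendation from the paper:

```python
def is_safe(example, threshold=0.9):
    # Require a high Safe probability (index 0) in all five categories
    return all(probs[0] >= threshold for probs in example["prediction"])

safe_subset = dataset["train"].filter(is_safe)
print(f"Kept {len(safe_subset)} of {len(dataset['train'])} examples")
```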

## Ethical Considerations

This dataset sample is provided for research purposes to help build safer AI systems. The harm predictions should be treated as probabilistic estimates, not definitive classifications. When using this dataset:

- Validate model predictions before making content moderation decisions
- Consider context and nuance in text that might be incorrectly flagged
- Be aware of potential biases in the training data of the harm detection model