---
dataset_info:
  features:
  - name: website
    dtype: string
  - name: title
    dtype: string
  - name: url
    dtype: string
  - name: domain
    dtype: string
  - name: slop
    dtype: string
  - name: content
    dtype: string
  splits:
  - name: train
    num_bytes: 9210022
    num_examples: 963
  download_size: 3897976
  dataset_size: 9210022
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc-by-4.0
task_categories:
- text-classification
language:
- en
size_categories:
- n<1K
---

# πŸ—‚οΈ Stop Slop Dataset

This dataset was scraped from multiple news and entertainment websites.  
Each entry is labeled `Slop` or `Non-Slop` based on its content quality.

## πŸ“„ Dataset Details

- **Website**: Source site (e.g., NY Times, BBC)
- **Title**: Title of the page
- **URL**: Direct link to the page
- **Domain**: Content category (News, Lifestyle, etc.)
- **Slop**: Label (`Slop` / `Non-Slop`)
- **Content**: Cleaned text extracted from the HTML (for the version with the raw HTML, see [stop-slop-data-html](https://huggingface.co/datasets/elalber2000/stop-slop-data-html))

## πŸ“Š Dataset Overview

- **963 examples** labeled as `Slop` or `Non-Slop`.

## πŸ› οΈ Scraping and Preprocessing

This dataset is part of the [stop-slop project](https://github.com/elalber2000/stop_slop).
The code used for scraping and cleaning it is available [here](https://github.com/elalber2000/stop_slop/tree/main/src/scrapping).

## πŸ“œ License

Distributed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).

## πŸš€ Usage Example

```python
from datasets import load_dataset

dataset = load_dataset("elalber2000/stop-slop-data")
```