---
dataset_info:
  features:
  - name: website
    dtype: string
  - name: title
    dtype: string
  - name: url
    dtype: string
  - name: domain
    dtype: string
  - name: slop
    dtype: string
  - name: content
    dtype: string
  splits:
  - name: train
    num_bytes: 9210022
    num_examples: 963
  download_size: 3897976
  dataset_size: 9210022
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc-by-4.0
task_categories:
- text-classification
language:
- en
size_categories:
- n<1K
---

# 🗞️ Stop Slop Dataset

This is a dataset scraped from multiple news and entertainment websites.
Each entry is labeled `Slop` or `Non-Slop` based on content quality.

## 📋 Dataset Details

- **Website**: Source domain (e.g., NY Times, BBC)
- **Title**: Title of the page
- **URL**: Direct link
- **Domain**: News, Lifestyle, etc.
- **Slop**: Label (`Slop` / `Non-Slop`)
- **Content**: Cleaned text from the HTML (for the version with the raw HTML, see [stop-slop-data-html](https://huggingface.co/datasets/elalber2000/stop-slop-data-html))

## 📊 Dataset Overview

- **963 examples** labeled as `Slop` or `Non-Slop`.

## 🛠️ Scraping and Preprocessing

This is part of the [stop-slop project](https://github.com/elalber2000/stop_slop).
The code used for scraping and cleaning this dataset is available [here](https://github.com/elalber2000/stop_slop/tree/main/src/scrapping).

## 📜 License

Distributed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).

## 🚀 Usage Example

```python
from datasets import load_dataset

dataset = load_dataset("elalber2000/stop-slop-data")
```