---
dataset_info:
  features:
    - name: website
      dtype: string
    - name: title
      dtype: string
    - name: url
      dtype: string
    - name: domain
      dtype: string
    - name: slop
      dtype: string
    - name: content
      dtype: string
  splits:
    - name: train
      num_bytes: 9210022
      num_examples: 963
  download_size: 3897976
  dataset_size: 9210022
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
license: cc-by-4.0
task_categories:
  - text-classification
language:
  - en
size_categories:
  - n<1K
---
πŸ—‚οΈ Stop Slop Dataset

This dataset was scraped from multiple news and entertainment websites.
Each entry is labeled **Slop** or **Non-Slop** according to content quality.

## 📄 Dataset Details

- **Website**: Source site (e.g., NY Times, BBC)
- **Title**: Title of the page
- **URL**: Direct link to the page
- **Domain**: Content category (News, Lifestyle, etc.)
- **Slop**: Label (`Slop` / `Non-Slop`)
- **Content**: Cleaned text extracted from the HTML (for the raw HTML, see the companion dataset `stop-slop-data-html`)
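A single record therefore carries six string fields. The sketch below shows one hypothetical example (all values are invented for illustration, not taken from the dataset):

```python
# Hypothetical record mirroring the dataset's schema; values are invented
example = {
    "website": "BBC",
    "title": "Sample headline",
    "url": "https://www.bbc.com/news/sample",
    "domain": "News",
    "slop": "Non-Slop",
    "content": "Cleaned article text extracted from the page HTML.",
}

# Every field is a plain string, matching the dataset_info schema above
assert all(isinstance(v, str) for v in example.values())
```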

## 📊 Dataset Overview

- 963 examples, each labeled **Slop** or **Non-Slop**.

πŸ› οΈ Scraping and Preprocessing

This dataset is part of the stop-slop project. The code used for scraping and cleaning it is available in the project repository.

## 📜 License

Distributed under CC BY 4.0.

## 🚀 Usage Example

```python
from datasets import load_dataset

# Loads the single "train" split (963 examples)
dataset = load_dataset("elalber2000/stop-slop-data")
```