---
language:
- en
size_categories:
- n<1K
---
# Stop Slop Dataset
This is a dataset scraped from multiple news and entertainment websites.
Each entry is labeled as `Slop` or `Non-Slop` depending on content quality.
## Dataset Details
- **Website**: Source domain (e.g., NY Times, BBC)
- **Title**: Title of the page
- **URL**: Direct link
- **Domain**: News, Lifestyle, etc.
- **Slop**: Label (`Slop` / `Non-Slop`)
- **Content**: Cleaned text extracted from the HTML (for the version with the raw HTML, see [stop-slop-data-html](https://huggingface.co/datasets/elalber2000/stop-slop-data-html))
## Dataset Overview
- **963 examples** labeled as `Slop` or `Non-Slop`.
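With under 1K examples, the label balance is easy to inspect with a quick count; a sketch using Python's `Counter` over a toy list standing in for the `Slop` column:

```python
from collections import Counter

# Toy stand-in for the dataset's "Slop" column values
labels = ["Slop", "Non-Slop", "Non-Slop", "Slop", "Slop"]

counts = Counter(labels)
print(counts["Slop"], counts["Non-Slop"])  # 3 2
```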
## Scraping and Preprocessing
This dataset is part of the [stop-slop project](https://github.com/elalber2000/stop_slop).
The code used for scraping and cleaning this dataset is available [here](https://github.com/elalber2000/stop_slop/tree/main/src/scrapping).
## License
Distributed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/).
## Usage Example
```python
from datasets import load_dataset
dataset = load_dataset("elalber2000/stop-slop-data")
```