---
tags:
- deduplicated
- semhash
- semantic-deduplication
- hfjobs
---
# Deduplicated imdb
This dataset is a deduplicated version of [imdb](https://huggingface.co/datasets/imdb)
using semantic deduplication with [SemHash](https://github.com/MinishLab/semhash).
## Deduplication Details
- **Method**: deduplicate
- **Column**: `text`
- **Original size**: 25,000 samples
- **Deduplicated size**: 24,830 samples
- **Duplicates removed**: 170 samples (0.68% of the original)
- **Date processed**: 2025-06-27
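
The figures above come from SemHash's self-deduplication over the `text` column. Below is a minimal sketch of the underlying call; the `from_records`/`self_deduplicate` names and the `selected`/`duplicate_ratio` result attributes are taken from the SemHash README and should be checked against the installed version:

```python
from datasets import load_dataset
from semhash import SemHash

# Pull the text column of the imdb train split (25,000 samples)
texts = load_dataset("imdb", split="train")["text"]

# Build a SemHash index and deduplicate the texts against themselves
result = SemHash.from_records(records=texts).self_deduplicate()

print(len(result.selected))    # samples kept after deduplication
print(result.duplicate_ratio)  # fraction flagged as semantic duplicates
```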
## How to use
```python
from datasets import load_dataset
dataset = load_dataset("imdb-deduplicated")  # use the full repo id, e.g. <username>/imdb-deduplicated
```
## Processing script
This dataset was created using the following script:
```bash
uv run dedupe-dataset.py imdb text <repo_id> --method deduplicate
```
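
The script itself is not included here. The following is a hypothetical sketch of what `dedupe-dataset.py` might look like, assuming SemHash's `from_records`/`self_deduplicate` API, `datasets`' `push_to_hub`, and uv's inline script metadata for dependencies; the positional arguments mirror the invocation above:

```python
# /// script
# dependencies = ["datasets", "semhash"]
# ///
# dedupe-dataset.py -- hypothetical reconstruction, not the original script
import argparse

from datasets import load_dataset
from semhash import SemHash


def main() -> None:
    parser = argparse.ArgumentParser(description="Semantically deduplicate a dataset column.")
    parser.add_argument("dataset", help="source dataset id, e.g. imdb")
    parser.add_argument("column", help="text column to deduplicate")
    parser.add_argument("repo_id", help="destination repo id on the Hub")
    parser.add_argument("--method", default="deduplicate", choices=["deduplicate"])
    args = parser.parse_args()

    # Load the train split and deduplicate the chosen column
    dataset = load_dataset(args.dataset, split="train")
    result = SemHash.from_records(records=dataset[args.column]).self_deduplicate()

    # Keep only rows whose text survived deduplication
    selected = set(result.selected)
    deduplicated = dataset.filter(lambda row: row[args.column] in selected)

    # Upload the deduplicated dataset to the Hub
    deduplicated.push_to_hub(args.repo_id)


if __name__ == "__main__":
    main()
```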
## About semantic deduplication
Unlike exact deduplication, semantic deduplication identifies and removes samples that are
semantically similar even when they are worded differently. This produces cleaner training
datasets and helps prevent data leakage between train and test splits.
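
To make the distinction concrete, the sketch below (illustrative only, not part of this dataset's pipeline) embeds two differently worded reviews with Model2Vec, the static-embedding library SemHash builds on, and compares them by cosine similarity; `minishlab/potion-base-8M` is one of the publicly available Model2Vec models:

```python
import numpy as np
from model2vec import StaticModel

# Two reviews that say roughly the same thing in different words
a = "This movie was fantastic, I loved every minute of it."
b = "An amazing film, I enjoyed it from start to finish."

print(a == b)  # False: exact deduplication would keep both

# Embed both texts with a small static embedding model
model = StaticModel.from_pretrained("minishlab/potion-base-8M")
emb = model.encode([a, b])

# Cosine similarity between the two embeddings
cos = np.dot(emb[0], emb[1]) / (np.linalg.norm(emb[0]) * np.linalg.norm(emb[1]))
print(f"cosine similarity: {cos:.2f}")  # high score -> flagged as semantic duplicates
```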