---
tags:
- deduplicated
- semhash
- semantic-deduplication
- hfjobs
---
# Deduplicated imdb
This dataset is a deduplicated version of [imdb](https://huggingface.co/datasets/imdb)
using semantic deduplication with [SemHash](https://github.com/MinishLab/semhash).
## Deduplication Details
- **Method**: deduplicate
- **Column**: `text`
- **Original size**: 25,000 samples
- **Deduplicated size**: 24,830 samples
- **Duplicate ratio**: 0.68%
- **Reduction**: 0.68%
- **Date processed**: 2025-06-27
## How to use
```python
from datasets import load_dataset

# If the dataset is hosted under a user or org namespace on the Hub,
# use the full repo id (e.g. "<namespace>/imdb-deduplicated") instead.
dataset = load_dataset("imdb-deduplicated")
```
## Processing script
This dataset was created using the following script:
```bash
uv run dedupe-dataset.py imdb text <repo_id> --method deduplicate
```
## About semantic deduplication
Unlike exact deduplication, semantic deduplication identifies and removes samples that are
semantically similar even when their wording differs. This produces cleaner training
datasets and reduces data leakage between train/test splits.
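The core idea can be sketched with a toy example. This is not SemHash's API: bag-of-words cosine similarity stands in for the learned embeddings SemHash actually uses, and the `deduplicate` function and `0.8` threshold are illustrative assumptions.

```python
# Toy sketch of semantic deduplication (illustrative only, not SemHash):
# represent each text as a bag-of-words vector and drop any sample whose
# cosine similarity to an already-kept sample exceeds a threshold.
# Real tools like SemHash use learned semantic embeddings instead.
import math
from collections import Counter


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def deduplicate(texts, threshold=0.8):
    """Keep each text only if it is not too similar to any kept text."""
    kept, vectors = [], []
    for text in texts:
        vec = Counter(text.lower().split())
        if all(cosine(vec, v) < threshold for v in vectors):
            kept.append(text)
            vectors.append(vec)
    return kept


texts = [
    "A wonderful film with great acting",
    "a wonderful film with great acting!",  # near-duplicate, gets dropped
    "Terrible plot and wooden dialogue",
]
print(deduplicate(texts))  # the near-duplicate second review is removed
```

A learned embedding model goes further than this sketch: it would also catch paraphrases that share almost no vocabulary, which word counts cannot.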