---
license: cc-by-4.0
tags:
- biology
---
# UniRef50 (Processed, ESM-valid as Validation)
## Dataset Summary
This dataset is a **preprocessed UniRef50** snapshot tailored for **unsupervised protein representation learning**. It:
* Normalizes sequences (uppercase, `*` removed), filters by length and ambiguity, and deduplicates by MD5.
* Splits by **UniRef50 cluster ID** to prevent leakage.
* Uses the **official ESM validation headers** as the entire `valid` split (no sampling).
* Provides **JSONL.zst shards** for efficient streaming with 🤗 `datasets`.
> If you need the exact preprocessing script: see **Reproducibility** below.
---
## Source
* **Upstream data:** UniProt / UniRef50 (2018_03 snapshot).
* **Evaluation headers:** `uniref201803_ur50_valid_headers.txt` from the ESM paper.
Please respect UniProt terms when using or redistributing this derivative dataset.
---
## Splits
| Split | Definition | Notes |
| ------- | ---------------------------------------------------------------- | ----------------------------------------- |
| `train` | All clusters **not** in ESM valid and not hashed into test | Majority of UniRef50 |
| `valid` | **Only** clusters in ESM’s validation header list | Field `is_esm_valid=true` for all records |
| `test`  | Hash-based holdout by cluster: `xxhash64(cluster_id) % 100 == 2` | Small random holdout                      |
> Splitting by **cluster_id** avoids train/val/test contamination across cluster members.
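The cluster-level assignment can be sketched as follows. This is an illustrative sketch, not the release pipeline: the published shards hash with `xxhash64` (a third-party package), so the stdlib MD5 stand-in below selects a *different* ~1% of clusters, and the valid-before-test precedence is our assumption.

```python
import hashlib


def assign_split(cluster_id: str, esm_valid_ids: set, hash_fn=None) -> str:
    """Assign a split from the cluster ID alone, so cluster members never straddle splits.

    The published shards use xxhash64(cluster_id) % 100 == 2 for `test`;
    the MD5-based default here is only a dependency-free stand-in.
    """
    if hash_fn is None:
        hash_fn = lambda s: int(hashlib.md5(s.encode()).hexdigest(), 16)
    if cluster_id in esm_valid_ids:      # ESM header list takes precedence (assumed)
        return "valid"
    if hash_fn(cluster_id) % 100 == 2:   # ~1% deterministic holdout
        return "test"
    return "train"
```

Because the assignment depends only on `cluster_id`, re-running the split on any snapshot reproduces the same partition.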
---
## Features (Schema)
| Field          | Type    | Description                                                      |
| -------------- | ------- | ---------------------------------------------------------------- |
| `id`           | string  | Stable ID = `cluster_id \| md5[:8]`                              |
| `sequence`     | string  | Normalized AA sequence (uppercase; `*` removed)                  |
| `length`       | int32   | Sequence length after normalization                              |
| `cluster_id`   | string  | UniRef50 cluster ID (e.g., `UniRef50_Q8WZ42-5`)                  |
| `description`  | string? | Optional description parsed from FASTA header (after `Cluster:`) |
| `seq_md5`      | string  | MD5 of normalized sequence                                       |
| `is_esm_valid` | bool    | `true` iff the record belongs to the ESM validation header set   |
> Ambiguous residues: records with ambiguity fraction > 5% (non-canonical AAs) are filtered out by default.
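To illustrate how the schema fields fit together, the sketch below builds one record from a toy sequence. The sequence itself and the `|` separator in `id` are assumptions for demonstration, not guaranteed properties of the released shards.

```python
import hashlib

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # toy sequence, not a real UniRef entry
cluster_id = "UniRef50_Q8WZ42-5"
md5 = hashlib.md5(seq.encode()).hexdigest()

record = {
    "id": f"{cluster_id}|{md5[:8]}",   # "|" separator is assumed
    "sequence": seq,
    "length": len(seq),
    "cluster_id": cluster_id,
    "description": "Isoform 5 of Titin",
    "seq_md5": md5,
    "is_esm_valid": False,
}
```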
---
## Preprocessing & Filters
* **Normalization:** uppercase, remove terminal/internal `*`.
* **Length filter:** keep `30 ≤ L ≤ 1024`.
* **Ambiguity filter:** keep sequences with ≤ **5%** non-canonical residues (`ACDEFGHIKLMNPQRSTVWY` are canonical).
* **Deduplication:** exact dedup by MD5 of normalized sequence (global).
* **Splitting:** by `cluster_id` as described above.
* **Headers:** FASTA lines like
  `>UniRef50_Q8WZ42-5 Cluster: Isoform 5 of Titin` are parsed into `cluster_id="UniRef50_Q8WZ42-5"` and `description="Isoform 5 of Titin"`.
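The normalization, filtering, and header-parsing rules above can be sketched as below. Function names are ours for illustration; they are not the pipeline's actual API.

```python
CANONICAL = set("ACDEFGHIKLMNPQRSTVWY")


def normalize(seq: str) -> str:
    # Uppercase and remove terminal/internal "*"
    return seq.upper().replace("*", "")


def passes_filters(seq: str, min_len: int = 30, max_len: int = 1024,
                   max_ambig_frac: float = 0.05) -> bool:
    # Apply the length and ambiguity filters to a normalized sequence.
    seq = normalize(seq)
    if not (min_len <= len(seq) <= max_len):
        return False
    n_ambig = sum(aa not in CANONICAL for aa in seq)
    return n_ambig / len(seq) <= max_ambig_frac


def parse_header(line: str):
    # ">UniRef50_X Cluster: Some description" -> ("UniRef50_X", "Some description")
    head = line.lstrip(">").strip()
    cluster_id, _, rest = head.partition(" ")
    description = rest.split("Cluster:", 1)[1].strip() if "Cluster:" in rest else None
    return cluster_id, description
```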
---
## Intended Use
* **Self-supervised training** of protein LMs/encoders that must be robust to substitutions and indels (e.g., OT/UOT objectives).
* **Evaluation** aligned with the ESM paper by using the official validation header set for `valid`.
Not intended for clinical use. No personal data.
---
## How to Load (Streaming & Local)
### Streaming (recommended for large shards)
```python
from datasets import load_dataset
repo = "DeepFoldProtein/uniref50_processed" # replace with your namespace
ds_train = load_dataset(repo, split="train", streaming=True)
row = next(iter(ds_train))
print(row["cluster_id"], row["length"])
```
### Extract ESM-valid subset (within `valid`)
```python
from datasets import load_dataset
repo = "DeepFoldProtein/uniref50_processed"  # replace with your namespace
ds_valid = load_dataset(repo, split="valid", streaming=True)
esm_valid = ds_valid.filter(lambda x: x["is_esm_valid"])
print(next(iter(esm_valid)))
```
### Non-streaming load (small splits only)
```python
from datasets import load_dataset
repo = "DeepFoldProtein/uniref50_processed"  # replace with your namespace
ds_test = load_dataset(repo, split="test")  # materializes locally
print(len(ds_test))
```
---
## Quick Stats Helper
Use this helper to print length statistics per split:
```python
from datasets import load_dataset
import math
def stats(split):
    ds = load_dataset("DeepFoldProtein/uniref50_processed", split=split, streaming=True)
    n = s = s2 = 0
    mn, mx = 10**9, 0
    for r in ds:
        L = int(r.get("length", len(r["sequence"])))
        n += 1
        s += L
        s2 += L * L
        mn = min(mn, L)
        mx = max(mx, L)
    mean = s / n if n else float("nan")
    std = math.sqrt(max(0.0, s2 / n - mean * mean)) if n else float("nan")
    return {"count": n, "min": mn, "max": mx, "mean": mean, "std": std}

print(stats("train"))
print(stats("valid"))
print(stats("test"))
```
---
## Licensing
* **Data source:** UniProt / UniRef50. Follow the UniProt license and attribution requirements: [https://www.uniprot.org/help/license](https://www.uniprot.org/help/license)
* **Derivative dataset:** You must attribute UniProt and include a link to their license when redistributing.
* **Code (preprocessing):** Provide your own license for the script if you distribute it.
---
## Citation
If you use this dataset, please cite UniProt and (optionally) ESM:
**UniProt:**
> The UniProt Consortium. *UniProt: the universal protein knowledgebase.* Nucleic Acids Res. (2018)
**ESM:**
> Rives et al. *Biological structure and function emerge from scaling unsupervised learning to 250 million protein sequences.* PNAS (2021).
---
## Known Limitations
* **Snapshot drift:** This mirrors UniRef50 (2018_03) conventions; later UniRef releases may differ.
* **Non-random validation:** `valid` is defined by ESM’s curated header list (by design).
* **Ambiguity handling:** Sequences with >5% ambiguous residues are dropped; adjust if you need broader coverage.
* **Dedup scope:** Deduplication is by normalized sequence only (not by cluster consensus).
---
## Changelog / Versioning
* **v1.0:** Initial release — ESM-valid set defines `valid`; hash-based `test`; JSONL.zst shards; manifest schema above.
* Future updates will be tagged with semantic versions and described here.
---
## Contact
* **Issues:** Please open a GitHub issue or HF discussion on this dataset repo.
---