---
license: cc-by-4.0
pretty_name: UniRef50
size_categories:
- 10M<n<100M
---

## Statistics

| Metric | Value |
|---|---|
| Source files | 1 |
| Shards | 61 |
| Compressed shard bytes | 10.74 GiB (11,527,890,402) |
| Records (per-source manifest sum) | 60,315,044 |
| Residues (per-source manifest sum) | 17,282,055,793 |
| Aggregate manifest `total_records` | 60,315,044 |

## Layout

```
.
├── _MANIFEST.json                    # aggregate manifest written by the pipeline
├── manifests/<source>.json           # per-source manifest (records, residues, shards)
├── metadata/<source>.records.jsonl   # per-record provenance
└── sequences/<source>/shard-NNNNNN.fasta.zst
```

`<source>` corresponds 1:1 with an upstream source archive, e.g. `sequence_uniprotkb_uniprot_sprot.fasta.gz`.

## Loading

Stream every shard of one source (replace `<source>` with the directory of interest under `sequences/`):

```bash
hf download LiteFold/UniRef50 --repo-type dataset \
  --include 'sequences/<source>/shard-*.fasta.zst' \
  --local-dir ./uniref50
zstd -dc ./uniref50/sequences/<source>/shard-*.fasta.zst | head
```

Programmatic streaming with [`zstandard`](https://pypi.org/project/zstandard/):

```python
from pathlib import Path

import zstandard as zstd
from huggingface_hub import snapshot_download

local = snapshot_download(
    repo_id="LiteFold/UniRef50",
    repo_type="dataset",
    allow_patterns=["sequences/*/shard-*.fasta.zst"],
)

dctx = zstd.ZstdDecompressor()
for shard in sorted(Path(local).rglob("shard-*.fasta.zst")):
    with shard.open("rb") as f, dctx.stream_reader(f) as reader:
        buf = b""
        while chunk := reader.read(1 << 20):
            buf += chunk
            *lines, buf = buf.split(b"\n")
            for line in lines:
                ...  # naive splitter; swap in your FASTA parser
        if buf:
            ...  # last line has no trailing newline; don't drop it
```

## License

CC BY 4.0 (UniProt Consortium).

## Citation

> Suzek BE, Wang Y, Huang H, McGarvey PB, Wu CH; UniProt Consortium. UniRef clusters: a comprehensive and scalable alternative for improving sequence similarity searches. Bioinformatics, 31(6):926-32, 2015.

## Provenance

Built from the local manifest entry `uniref50` of `manifests/atlas_download_plan.json`.
Pipeline source: `megadata-post normalize --dataset uniref50`.
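
The naive byte-line splitter in the loading example yields raw lines, not records. A minimal sketch of grouping those lines into `(header, sequence)` pairs (the `parse_fasta_lines` helper is ours for illustration, not part of the pipeline):

```python
from typing import Iterable, Iterator, Tuple


def parse_fasta_lines(lines: Iterable[bytes]) -> Iterator[Tuple[str, str]]:
    """Group newline-free FASTA byte lines into (header, sequence) records."""
    header, chunks = None, []
    for line in lines:
        line = line.strip()
        if not line:
            continue  # skip blank lines
        if line.startswith(b">"):
            # A new header line: emit the previous record, if any.
            if header is not None:
                yield header, "".join(chunks)
            header, chunks = line[1:].decode(), []
        else:
            chunks.append(line.decode())
    # Emit the final record once the input is exhausted.
    if header is not None:
        yield header, "".join(chunks)


records = list(parse_fasta_lines(
    [b">UniRef50_A0A000 demo", b"MKV", b"LLT", b">UniRef50_B1B111", b"GGG"]
))
# records == [("UniRef50_A0A000 demo", "MKVLLT"), ("UniRef50_B1B111", "GGG")]
```

Feed it the `lines` produced inside the shard loop (plus the final `buf`) to get complete records across chunk boundaries.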