---
license: cc-by-4.0
pretty_name: MGnify Protein Catalogues
size_categories:
- unknown
task_categories:
- other
language:
- en
tags:
- biology
- proteins
- sequences
- fasta
- mgnify
- metagenomics
---

# MGnify Protein Catalogues
Normalized FASTA shards of MGnify protein cluster catalogues (`mgy_clusters` and `mgy_proteins` partitions). Processed and uploaded by the MegaData post-download pipeline (internal repo). Original source: https://www.ebi.ac.uk/metagenomics/.
**Note:** `shard-000000.fasta.zst` per source is a 100-record sample left over from validation passes and is not part of this dataset. Only `shard-000001.fasta.zst` and higher contain the full, clean data.
## Statistics

| Metric | Value |
| --- | --- |
| Source files | 26 |
| Shards | 3,148 |
| Compressed shard bytes | 319.27 GiB (342,815,994,580) |
| Normalized table files | 28 |
| Compressed table bytes | 2.17 TiB (2,380,980,172,257) |
## Layout

```
.
├── tables/<source_slug>.jsonl                      # normalized table rows (one JSON object per line)
└── sequences/<source_slug>/shard-NNNNNN.fasta.zst
```
`metadata/`, `manifests/`, and `_MANIFEST.json` are intentionally not shipped here; the on-disk versions were sample-only. Recompute per-shard record/residue counts downstream by streaming the FASTA records out of the shipped shards.
`<source_slug>` corresponds 1:1 with an upstream source archive, e.g. `sequence_uniprotkb_uniprot_sprot.fasta.gz`.
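The normalized table files under `tables/` hold one JSON object per line. A minimal sketch for iterating the rows (the helper name `iter_table_rows` is illustrative; the row fields themselves are unspecified here, so none are assumed):

```python
import json
from pathlib import Path


def iter_table_rows(path):
    """Yield one dict per non-blank line of a JSONL table file."""
    with Path(path).open("r", encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:  # skip blank lines defensively
                yield json.loads(line)


# Usage (path placeholder as in the Layout section):
# for row in iter_table_rows("tables/<source_slug>.jsonl"):
#     ...
```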
## Loading

Stream every shard of one source (replace `<source_slug>` with the directory of interest under `sequences/`):

```sh
hf download LiteFold/Mgnify --repo-type dataset \
  --include 'sequences/<source_slug>/shard-*.fasta.zst' \
  --local-dir ./mgnify_proteins

zstd -dc ./mgnify_proteins/sequences/<source_slug>/shard-*.fasta.zst | head
```
Programmatic streaming with `zstandard`:

```python
from pathlib import Path

import zstandard as zstd
from huggingface_hub import snapshot_download

local = snapshot_download(
    repo_id="LiteFold/Mgnify",
    repo_type="dataset",
    allow_patterns=["sequences/*/shard-*.fasta.zst"],
)

dctx = zstd.ZstdDecompressor()
for shard in sorted(Path(local).rglob("shard-*.fasta.zst")):
    with shard.open("rb") as f, dctx.stream_reader(f) as reader:
        buf = b""
        while chunk := reader.read(1 << 20):
            buf += chunk
            *lines, buf = buf.split(b"\n")
            for line in lines:
                ...  # naive splitter; swap in your FASTA parser
        if buf:
            ...  # handle a final line without a trailing newline
```
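A minimal parser to swap in for the naive line splitter above. It assumes only standard FASTA layout (a `>` header line followed by one or more sequence lines), nothing specific to this dataset:

```python
def parse_fasta(lines):
    """Yield (header, sequence) pairs from an iterable of text lines."""
    header, seq = None, []
    for line in lines:
        line = line.rstrip("\n")
        if line.startswith(">"):
            if header is not None:
                yield header, "".join(seq)
            header, seq = line[1:], []
        elif line:
            seq.append(line)
    if header is not None:  # flush the last record
        yield header, "".join(seq)


records = list(parse_fasta([">rec1 desc", "ACGT", "TT", ">rec2", "GG"]))
# records == [("rec1 desc", "ACGTTT"), ("rec2", "GG")]
```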
## License

CC BY 4.0 (EMBL-EBI MGnify).
## Citation

Mitchell AL, et al. MGnify: the microbiome analysis resource in 2020. *Nucleic Acids Research*, 48(D1):D570-D578, 2020.
## Caveats

- `shard-000000.fasta.zst` per source is a 100-record sample left over from validation passes and is excluded from this upload. Only `shard-000001.fasta.zst` and onward are shipped.
- Per-source `manifests/*.json` and `metadata/*.records.jsonl` are not shipped because their on-disk contents are sample-only (100 records per source). They can be regenerated downstream by streaming the shipped shards (each shard is a concatenation of FASTA records; counting `^>` headers gives per-shard record counts).
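The header-counting pass above can be sketched as a chunk-boundary-safe byte scan. This is an illustration, not the pipeline's own code; it assumes each record header starts at the beginning of a line with `>`, and in practice the chunks would come from `zstandard`'s `stream_reader` over a shard file:

```python
def count_records(chunks):
    """Count lines beginning with b'>' across a stream of byte chunks."""
    count = 0
    at_line_start = True  # the start of the stream counts as a line start
    for chunk in chunks:
        for byte in chunk:
            if at_line_start and byte == ord(">"):
                count += 1
            at_line_start = byte == ord("\n")
    return count


# Works even when a header straddles a chunk boundary:
sample = [b">rec1\nACGT\n>re", b"c2\nTTGG\n"]
print(count_records(sample))  # 2
```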
## Provenance

Built from the local manifest entry `mgnify_proteins` of `manifests/atlas_download_plan.json`.
Pipeline source: `megadata-post normalize --dataset mgnify_proteins`.