---
license: cc-by-4.0
pretty_name: MGnify Protein Catalogues
size_categories:
  - unknown
task_categories:
  - other
language:
  - en
tags:
  - biology
  - proteins
  - sequences
  - fasta
  - mgnify
  - metagenomics
---

# MGnify Protein Catalogues

Normalized FASTA shards of MGnify protein cluster catalogues (the `mgy_clusters` and `mgy_proteins` partitions).

Processed and uploaded by the MegaData post-download pipeline (internal repo). Original source: https://www.ebi.ac.uk/metagenomics/.

Note: each source's `shard-000000.fasta.zst` is a 100-record sample left over from validation passes and is not part of this dataset. Only `shard-000001.fasta.zst` and higher contain the full cleaned data.

## Statistics

| Metric | Value |
| --- | --- |
| Source files | 26 |
| Shards | 3,148 |
| Compressed shard bytes | 319.27 GiB (342,815,994,580) |
| Normalized table files | 28 |
| Compressed table bytes | 2.17 TiB (2,380,980,172,257) |

## Layout

```
.
├── tables/<source_slug>.jsonl         # normalized table rows (one JSON object per line)
└── sequences/<source_slug>/shard-NNNNNN.fasta.zst
```

`metadata/`, `manifests/`, and `_MANIFEST.json` are intentionally not shipped here; the on-disk versions were sample-only. Recompute per-shard record/residue counts downstream by streaming the FASTA records out of the shipped shards.
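The recompute step can be sketched with a small pure helper; `shard_stats` is a hypothetical name, not part of the pipeline, and it takes any iterable of decoded FASTA lines (e.g. the decompressed lines of one shard):

```python
def shard_stats(fasta_lines):
    """Count records and residues in an iterable of FASTA text lines.

    A record starts at each '>' header; every other non-blank line is
    treated as sequence, and its characters are counted as residues.
    """
    records = residues = 0
    for line in fasta_lines:
        line = line.strip()
        if not line:
            continue
        if line.startswith(">"):
            records += 1
        else:
            residues += len(line)
    return records, residues
```

Feeding it the decompressed lines of each shard reproduces per-shard record/residue counts without the dropped manifests.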

`<source_slug>` corresponds 1:1 with an upstream source archive; e.g. `sequence_uniprotkb_uniprot_sprot.fasta.gz`.

## Loading

Stream every shard of one source (replace `<source_slug>` with the directory of interest under `sequences/`):

```shell
hf download LiteFold/Mgnify --repo-type dataset \
  --include 'sequences/<source_slug>/shard-*.fasta.zst' \
  --local-dir ./mgnify_proteins
zstd -dc ./mgnify_proteins/sequences/<source_slug>/shard-*.fasta.zst | head
```

Programmatic streaming with `zstandard`:

```python
from pathlib import Path

import zstandard as zstd
from huggingface_hub import snapshot_download

local = snapshot_download(
    repo_id="LiteFold/Mgnify",
    repo_type="dataset",
    allow_patterns=["sequences/*/shard-*.fasta.zst"],
)

dctx = zstd.ZstdDecompressor()
for shard in sorted(Path(local).rglob("shard-*.fasta.zst")):
    with shard.open("rb") as f, dctx.stream_reader(f) as reader:
        buf = b""
        while chunk := reader.read(1 << 20):
            buf += chunk
            *lines, buf = buf.split(b"\n")
            for line in lines:
                ...  # naive line splitter; swap in your FASTA parser
        if buf:
            ...  # handle the final line if the shard lacks a trailing newline
```
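For the placeholder in the loop, a minimal record iterator could look like the sketch below; `iter_fasta` is a hypothetical helper operating on FASTA byte lines, not part of this dataset's tooling:

```python
def iter_fasta(lines):
    """Yield (header, sequence) pairs from an iterable of FASTA byte lines.

    Headers are decoded to str (without the leading '>'); sequence lines
    for one record are concatenated into a single bytes object.
    """
    header, seq = None, []
    for line in lines:
        if line.startswith(b">"):
            if header is not None:
                yield header, b"".join(seq)
            header, seq = line[1:].decode(), []
        elif line.strip():
            seq.append(line.strip())
    if header is not None:
        yield header, b"".join(seq)
```

Buffering whole records this way is cheap for protein sequences, which are short compared to the 1 MiB read chunks.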

## License

CC BY 4.0 (EMBL-EBI MGnify).

## Citation

Mitchell AL, et al. MGnify: the microbiome analysis resource in 2020. Nucleic Acids Research, 48(D1):D570-D578, 2020.

## Caveats

- Each source's `shard-000000.fasta.zst` is a 100-record sample left over from validation passes and is excluded from this upload. Only `shard-000001.fasta.zst` and onward are shipped.
- Per-source `manifests/*.json` and `metadata/*.records.jsonl` are not shipped because their on-disk contents are sample-only (100 records per source). They can be regenerated downstream by streaming the shipped shards (each shard is a concatenation of FASTA records; counting `^>` headers gives per-shard record counts).
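Regenerating per-shard manifest rows as the second caveat describes could be sketched as follows; `manifest_rows` is a hypothetical helper and takes pre-decompressed byte lines per shard rather than reading `.zst` files itself:

```python
import json


def manifest_rows(shards):
    """Yield one manifest JSON line per (shard_name, byte_lines) pair,
    counting records via the '>' header convention."""
    for name, lines in shards:
        records = sum(1 for line in lines if line.startswith(b">"))
        yield json.dumps({"shard": name, "records": records})
```

Writing the yielded lines to a `.jsonl` file gives a per-source manifest in the same one-object-per-line shape as `tables/`.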

## Provenance

Built from the local manifest entry `mgnify_proteins` of `manifests/atlas_download_plan.json`. Pipeline source: `megadata-post normalize --dataset mgnify_proteins`.