
MTG Card SIFT Features Dataset (v5.1)

This dataset contains the output of the latest incremental MTG card SIFT + RootSIFT feature-extraction pipeline. It is designed for server-side production inference, enabling additive updates to the FAISS index and id_map.json without retraining or reindexing from scratch.

Note: This version aligns with a daily resources-nightly.zip Hugging Face upload workflow, enabling reliable continuous deployment to my production server.


What’s New in v5.1?

| Feature              | v5.0                          | v5.1 (Current)                     |
|----------------------|-------------------------------|------------------------------------|
| Index updates        | Additive-safe                 | Same                               |
| Upload workflow      | Manual or ad-hoc              | Integrated with HF nightly pipeline |
| Logging              | Basic                         | Detailed zipping and upload logs   |
| Service impact       | Potentially blocking uploads  | Runs in background, non-blocking   |
| HF dataset structure | Single zip upload             | Same, consistent naming            |
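The "runs in background, non-blocking" behavior can be sketched with a daemon thread that zips the resources directory and then hands the archive to an upload callback. The function names here (zip_resources, start_nightly_upload, upload_fn) are hypothetical, not the repo's actual code:

```python
import threading
import zipfile
from pathlib import Path

def zip_resources(src_dir: str, out_zip: str) -> None:
    """Zip the resources directory (stand-in for the nightly packaging step)."""
    src = Path(src_dir)
    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in sorted(src.rglob("*")):
            if path.is_file():
                # Store paths relative to the parent so the zip root is "resources/"
                zf.write(path, path.relative_to(src.parent))

def start_nightly_upload(src_dir: str, out_zip: str, upload_fn) -> threading.Thread:
    """Run zip + upload in a daemon thread so the serving process never blocks."""
    def job():
        zip_resources(src_dir, out_zip)
        upload_fn(out_zip)  # e.g. a huggingface_hub upload call in production
    t = threading.Thread(target=job, daemon=True)
    t.start()
    return t
```

In production the upload_fn hook would wrap the actual Hugging Face upload; keeping it injectable makes the zip-and-upload job trivial to test.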

File Layout (resources-nightly.zip)

resources/
└── run/
    ├── candidate_features.h5     # Keypoints + descriptors per card (gzip HDF5)
    ├── faiss_ivf.index           # FAISS IVF-PQ index
    └── id_map.json               # Descriptor index-to-scryfall_id mapping
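Server-side code reading candidate_features.h5 might look like the sketch below. The internal layout (one group per scryfall_id holding "keypoints" and "descriptors" datasets) is an assumption for illustration; the actual file's structure may differ:

```python
import h5py
import numpy as np

def write_card(h5_path, scryfall_id, keypoints, descriptors):
    """Append one card's features (gzip-compressed, matching the dataset's HDF5 style)."""
    with h5py.File(h5_path, "a") as h5:
        grp = h5.require_group(scryfall_id)
        grp.create_dataset("keypoints", data=keypoints, compression="gzip")
        grp.create_dataset("descriptors", data=descriptors, compression="gzip")

def read_card(h5_path, scryfall_id):
    """Load one card's keypoints and descriptors back as NumPy arrays."""
    with h5py.File(h5_path, "r") as h5:
        grp = h5[scryfall_id]
        return grp["keypoints"][...], grp["descriptors"][...]
```

Opening the file in "a" (append) mode is what makes per-card additive updates possible without rewriting existing groups.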

What Is This?

A high-precision visual descriptor dataset for Magic: The Gathering cards, used for:

• Visual Search
• Mobile/Server Card Scanning
• FAISS-based Similarity Search
• Incremental Model Growth


Pipeline Summary

  1. Image Acquisition
     • Fetched from Scryfall using the official API.
  2. Preprocessing
     • Resize preserving aspect ratio, CLAHE on the L-channel, grayscale conversion.
  3. Feature Extraction
     • OpenCV SIFT + RootSIFT normalization.
  4. Storage
     • candidate_features.h5 using gzip compression.
  5. Indexing
     • FAISS IVF-PQ (100 clusters, 8x8 PQ); additive updates supported.
  6. Mapping
     • id_map.json aligns descriptors to scryfall_id.
  7. HF Upload
     • Zipped nightly and uploaded to the HF dataset repo in the background.
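Step 3's RootSIFT normalization is small enough to show in full. This is a minimal NumPy version of the standard formulation (L1-normalize each SIFT descriptor, then take the element-wise square root, so that Euclidean distance on the result approximates the Hellinger kernel on raw SIFT):

```python
import numpy as np

def rootsift(descriptors: np.ndarray, eps: float = 1e-7) -> np.ndarray:
    """Convert raw SIFT descriptors to RootSIFT.

    L1-normalize each row, then take the element-wise square root;
    the output rows end up (approximately) L2-normalized.
    """
    desc = descriptors.astype(np.float32)
    desc /= (np.abs(desc).sum(axis=1, keepdims=True) + eps)
    return np.sqrt(desc)
```

Because SIFT descriptors are non-negative, each output row has unit L2 norm, which is why plain L2 distance works well downstream in FAISS.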

Use Cases

✅ Real-time card scanning for Magic: The Gathering
✅ Card image search pipelines
✅ Local inference on low-resource servers
✅ Model growth without reindexing


Workflow (Simplified)

[Input Image]
    ↓
[CLAHE + SIFT + RootSIFT]
    ↓
[FAISS IVF-PQ Search]
    ↓
[Retrieve scryfall_id]
    ↓
[Result]
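The workflow above reduces to: extract descriptors from the query image, find each one's nearest indexed descriptor, map the hits through id_map.json, and vote. Here is a brute-force NumPy stand-in for the FAISS search stage; identify_card is a hypothetical helper, not the repo's API:

```python
from collections import Counter
import numpy as np

def identify_card(query_desc, index_desc, id_map):
    """Match each query descriptor to its nearest indexed descriptor
    (brute-force L2 here, standing in for the FAISS IVF-PQ search),
    map hits through id_map, and majority-vote the scryfall_id."""
    # (n_query, n_index) pairwise squared L2 distances
    d2 = ((query_desc[:, None, :] - index_desc[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)
    votes = Counter(id_map[str(i)] for i in nearest)
    return votes.most_common(1)[0][0]
```

Voting across many per-keypoint matches is what makes the lookup robust to the occasional wrong nearest neighbor.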

Why v5.1 Matters

• No SQLite required
• Works with h5py + NumPy
• Parallel-safe and resumable
• Tiny memory footprint
• MIT Licensed for free, unlimited use


Acknowledgments

Created by JakeTurner616.

For deep implementation details, see the mtgscan.cards monorepo.
