---
library_name: ising-decoding
tags:
- quantum
- qec
- error_correction
- decoders
- surface_code
- predecoder
license: apache-2.0
---

# Quantispect Overview

![Quantispect Neural Pre-Decoder Architecture](framework.png)

## Model Summary

| Item | Value |
|---|---:|
| Model name | Quantispect |
| Checkpoint file | `Quantispect_RF13_v1.0.10.pt` |
| Total parameters | ~0.663M |
| Checkpoint size | ~2.63 MB |
| Architecture | FastHyper-style 3D CNN neural pre-decoder |
| Receptive field | R=13 |
| Input tensor | `(B, 4, T, D, D)` |
| Output tensor | `(B, 4, T, D, D)` |
| Release date | April 26, 2026 |

## Description

Quantispect is a compact neural pre-decoder for rotated surface-code quantum error correction. It consumes five-dimensional syndrome volumes across batch, channel, time, and two spatial dimensions, and predicts local correction maps that are consumed by a downstream global decoder such as MWPM / PyMatching or an Ising-decoding post-processing pipeline.

Quantispect is designed to run inside an NVIDIA Ising-Decoding-compatible workflow after applying the Quantispect code patch included with this model release.

## Model Architecture

- Architecture type: 3D Convolutional Neural Network (3D CNN)
- Network architecture: custom multi-branch spatio-temporal 3D CNN with residual FastHyper blocks
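As an illustration only, the `FastHyperBlock` structure detailed in the subsections below can be sketched in PyTorch. The reference implementation is `predecoder_fasthyper_rf13_v1.py` from the patch package; the GroupNorm group counts, the `ChannelGate` reduction factor, and the dropout rate here are illustrative assumptions, not values read from the checkpoint.

```python
import torch
import torch.nn as nn


class ChannelGate(nn.Module):
    """SE-style channel attention; the reduction factor is an assumption."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Sequential(
            nn.Conv3d(channels, channels // reduction, 1),
            nn.GELU(),
            nn.Conv3d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x):
        return x * self.fc(self.pool(x))


class FastHyperBlock(nn.Module):
    """One residual block: 96 -> 144 expansion, three parallel branches,
    element-wise sum fusion, 144 -> 96 projection, channel gating,
    dropout, residual add."""

    def __init__(self, width: int = 96, hidden: int = 144, drop: float = 0.1):
        super().__init__()
        # Pre-projection: GroupNorm -> 1x1x1 Conv3D, 96 -> 144 -> GELU
        self.pre = nn.Sequential(
            nn.GroupNorm(6, width), nn.Conv3d(width, hidden, 1), nn.GELU()
        )
        # Branch A: depthwise Conv3D, kernel 1x3x3 (spatial)
        self.branch_a = nn.Conv3d(hidden, hidden, (1, 3, 3),
                                  padding=(0, 1, 1), groups=hidden)
        # Branch B: depthwise Conv3D, kernel 3x1x1 (temporal)
        self.branch_b = nn.Conv3d(hidden, hidden, (3, 1, 1),
                                  padding=(1, 0, 0), groups=hidden)
        # Branch C: GroupNorm -> grouped Conv3D 3x3x3, groups=6
        # (joint local spatio-temporal)
        self.branch_c = nn.Sequential(
            nn.GroupNorm(6, hidden),
            nn.Conv3d(hidden, hidden, 3, padding=1, groups=6),
        )
        # Fusion by element-wise sum happens in forward(); then project back.
        self.post = nn.Sequential(nn.Conv3d(hidden, width, 1), nn.GELU())
        self.gate = ChannelGate(width)
        self.drop = nn.Dropout3d(drop)

    def forward(self, x):
        h = self.pre(x)
        h = self.branch_a(h) + self.branch_b(h) + self.branch_c(h)
        h = self.drop(self.gate(self.post(h)))
        return x + h  # residual connection
```

Stacking five such blocks between the stem and head reproduces the main body; every block preserves the `(B, 96, T, D, D)` shape.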
### Input

Input shape:

```text
(B, 4, T, D, D)
```

### Stem

```text
Conv3D 4 -> 96, kernel 3x3x3
GroupNorm
GELU
```

Stem output shape:

```text
(B, 96, T, D, D)
```

### Main Body

The main body contains five repeated `FastHyperBlock` modules:

```text
FastHyperBlock x5
```

Each `FastHyperBlock` first expands the feature width from 96 to 144 channels with a 1x1x1 convolution, then applies three parallel feature-extraction branches:

```text
Pre-projection: GroupNorm -> 1x1x1 Conv3D, 96 -> 144 -> GELU
Branch A: Depthwise Conv3D, kernel 1x3x3, spatial branch
Branch B: Depthwise Conv3D, kernel 3x1x1, temporal branch
Branch C: GroupNorm -> Grouped Conv3D, kernel 3x3x3, groups=6, joint local spatio-temporal branch
```

The three branch outputs are aligned and fused by element-wise summation rather than channel concatenation. The fused feature is then projected and recalibrated:

```text
Element-wise sum fusion
1x1x1 Conv3D projection, 144 -> 96
GELU
ChannelGate / SE-style channel attention
Dropout3D
Residual connection
```

Main body output shape:

```text
(B, 96, T, D, D)
```

### Head

```text
GroupNorm
1x1x1 Conv3D, 96 -> 96
GELU
1x1x1 Conv3D, 96 -> 4
```

Output shape:

```text
(B, 4, T, D, D)
```

The output maps are used by the residual-syndrome construction module and then passed to MWPM / Ising-decoder post-processing.

## Usage

Quantispect is intended to be used with the NVIDIA Ising-Decoding environment:

```text
https://github.com/NVIDIA/Ising-Decoding
```

A clean NVIDIA Ising-Decoding checkout does not include the Quantispect / FastHyper architecture. To run `Quantispect_RF13_v1.0.10.pt`, first apply the Quantispect code patch included in this model repository.
### Required code patch files

The patch package should preserve the following relative paths:

```text
quantispect_code_patch/
├── conf/
│   └── config_public.yaml
└── code/
    ├── model/
    │   ├── predecoder_fasthyper_rf13_v1.py
    │   ├── factory.py
    │   └── registry.py
    ├── workflows/
    │   ├── config_validator.py
    │   └── run.py
    └── scripts/
        └── local_run.sh
```

These files should be copied into the NVIDIA Ising-Decoding repository with the same relative paths:

```text
conf/config_public.yaml                     -> Ising-Decoding/conf/config_public.yaml
code/model/predecoder_fasthyper_rf13_v1.py  -> Ising-Decoding/code/model/predecoder_fasthyper_rf13_v1.py
code/model/factory.py                       -> Ising-Decoding/code/model/factory.py
code/model/registry.py                      -> Ising-Decoding/code/model/registry.py
code/workflows/config_validator.py          -> Ising-Decoding/code/workflows/config_validator.py
code/workflows/run.py                       -> Ising-Decoding/code/workflows/run.py
code/scripts/local_run.sh                   -> Ising-Decoding/code/scripts/local_run.sh
```

The patch mainly adds the `predecoder_fasthyper_rf13_v1` model implementation, registers `model_id: 6`, adds the Quantispect model hyperparameters to `config_public.yaml`, and enables explicit `.pt` checkpoint loading through `model_checkpoint_file`.
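Before copying anything, you may want to verify that the downloaded patch package is complete. A minimal sketch, using only the relative paths listed above (the helper name `missing_patch_files` is ours, not part of the patch):

```python
from pathlib import Path

# Relative paths taken from the patch listing in this README.
PATCH_FILES = [
    "conf/config_public.yaml",
    "code/model/predecoder_fasthyper_rf13_v1.py",
    "code/model/factory.py",
    "code/model/registry.py",
    "code/workflows/config_validator.py",
    "code/workflows/run.py",
    "code/scripts/local_run.sh",
]


def missing_patch_files(patch_root: str) -> list:
    """Return the expected patch files that are absent under patch_root."""
    root = Path(patch_root)
    return [rel for rel in PATCH_FILES if not (root / rel).is_file()]
```

An empty return value means the package matches the layout above; any listed path is missing and the copy step below would silently produce an incomplete installation.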
### Apply the patch

From the directory containing both the clean NVIDIA Ising-Decoding repository and this downloaded patch package:

```bash
cp -r code/* Ising-Decoding/code/
cp -r conf/* Ising-Decoding/conf/
```

Then place the Quantispect checkpoint under the repository model directory:

```bash
mkdir -p Ising-Decoding/models
cp Quantispect_RF13_v1.0.10.pt Ising-Decoding/models/Quantispect_RF13_v1.0.10.pt
```

Expected directory layout:

```text
Ising-Decoding/
├── code/
│   ├── model/
│   │   ├── predecoder_fasthyper_rf13_v1.py
│   │   ├── factory.py
│   │   └── registry.py
│   ├── workflows/
│   │   ├── config_validator.py
│   │   └── run.py
│   └── scripts/
│       └── local_run.sh
├── conf/
│   └── config_public.yaml
├── models/
│   └── Quantispect_RF13_v1.0.10.pt
└── README.md
```

## Inference Deployment

Configure the NVIDIA Ising-Decoding repository for inference, apply the Quantispect patch files above, and place the downloaded model checkpoint at `models/Quantispect_RF13_v1.0.10.pt`. Run from the repository root:

```bash
cd Ising-Decoding
CUDA_VISIBLE_DEVICES=0,1,2,3 \
PYTHONUNBUFFERED=1 \
PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True \
WORKFLOW=inference \
EXPERIMENT_NAME=infer_quantispect \
TORCH_COMPILE=0 \
EXTRA_PARAMS="+model_checkpoint_file=models/Quantispect_RF13_v1.0.10.pt" \
bash code/scripts/local_run.sh \
2>&1 | tee infer_quantispect.log
```
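As an optional sanity check before launching inference, you can compare the checkpoint's parameter count against the ~0.663M reported in the model summary. A minimal sketch: it assumes the `.pt` file stores a plain `state_dict` (tensor values keyed by name); if the actual checkpoint wraps the `state_dict` in an outer dict, unwrap it first.

```python
import torch


def checkpoint_param_count(path: str) -> int:
    """Sum the element counts of all tensors in a .pt state_dict file.

    Assumes a flat state_dict; non-tensor entries are ignored.
    """
    state = torch.load(path, map_location="cpu")
    return sum(t.numel() for t in state.values() if torch.is_tensor(t))


# Example (hypothetical path, after applying the patch):
# n = checkpoint_param_count("Ising-Decoding/models/Quantispect_RF13_v1.0.10.pt")
# print(f"{n / 1e6:.3f}M parameters")
```

For `Quantispect_RF13_v1.0.10.pt` the printed count should land near 0.663M; a large deviation suggests a corrupted download or a mismatched checkpoint file.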