---
library_name: qiskit
tags:
- quantum
- qcnn
- nisq
- ibm-quantum
- variational-quantum-algorithm
license: apache-2.0
---

# Nighthawk-QCNN

![night](https://cdn-uploads.huggingface.co/production/uploads/67329d3f69fded92d56ab41a/Knx2N1lGc6flPfK2mSeQy.jpeg)

**96-qubit Quantum Convolutional Neural Network (QCNN)**

Trained end-to-end on real IBM Quantum Heron r2/r3 hardware (backends: ibm_fez/ibm_kingston) on February 4, 2026.

## Task

Binary classification of the parity of random Pauli-X excitations in a 1D cluster state (0 — even number → trivial state, 1 — odd number → non-trivial state).

## Technical Details

- **Qubits**: 96 (actively used in ansatz + state preparation)
- **Architecture**: QCNN with 3 layers (conv → pool → conv → pool → conv → readout)
- **Convolution operator**: 4-parameter 2-qubit block (RY, RZ, CZ), shared parameters
- **Pooling**: static (measure + CZ; no conditional X due to compiler limitations)
- **Readout**: Z-probability on the final qubit → MSE loss
- **Trainable parameters**: 72
- **Dataset**: 24 samples (generated on the fly)
- **Shots per evaluation**: 384
- **Optimizer**: SPSA, 12 iterations
- **Final loss (MSE)**: 0.2704 (after 36 evaluations)
- **QPU time**: ~7 minutes (IBM Heron r2/r3)
- **Backend**: ibm_fez (156 qubits, heavy-hex lattice, tunable couplers)

## Training Convergence

![loss_curve](https://cdn-uploads.huggingface.co/production/uploads/67329d3f69fded92d56ab41a/LDHkVXtixkQm8-ifEBvFv.jpeg)

MSE loss starts at ~0.268, dips to ~0.243 around evaluation 10, then rises again due to noise accumulation.
| Run | Qubits | Samples | Shots | Iterations | Final Loss | QPU Time |
|-----|--------|---------|-------|------------|------------|----------|
| 1   | 96     | 16      | 256   | 8          | 0.29       | ~2 min   |
| 2   | 96     | 24      | 384   | 12         | 0.2704     | ~7 min   |

## Repository Files

- `Nighthawk.npy` — trained parameters (72 values)
- `qcnn.qasm` — QASM3 description of the ansatz (unbound parameters)
- `results.csv` — final training metrics
- `training_log.txt` — full log of loss evaluations and transpilation
- `requirements.txt` — dependencies for reproduction

![nighthawk banner](https://cdn-uploads.huggingface.co/production/uploads/67329d3f69fded92d56ab41a/nJQdXnlVcCaY-y4uFwjQA.png)

## Usage / Inference

```python
from qiskit import qasm3
import numpy as np

# Load trained parameters and the ansatz
theta = np.load("Nighthawk.npy")
with open("qcnn.qasm") as f:
    qcnn = qasm3.loads(f.read())

# assign_parameters returns a new bound circuit; it does not modify in place
qcnn = qcnn.assign_parameters(theta)

print("Model loaded. Number of parameters:", len(theta))
# Next: compose with the preparation circuit and run via Sampler
```

## Notes

- Proof-of-concept for scaling QCNNs on NISQ hardware in 2026.
- The final loss is near random guessing (0.25) due to high noise on Heron r2 — typical for NISQ.
- **Why better results are expected on ibm_miami (Nighthawk r1)**:
  - Square-lattice topology (vs heavy-hex on Heron r2) → much better natural locality for convolutional layers
  - Higher CLOPS and lower gate errors → deeper circuits with less decoherence
  - Improved connectivity → fewer SWAPs during transpilation → lower overall error accumulation
  - Expected: noticeably lower final loss and higher effective classification accuracy
- Possible improvements: more shots, error mitigation (twirling / M3), running on Nighthawk (square lattice).
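For reproduction, the on-the-fly dataset described in the Task section can be sketched with plain NumPy: each sample places random Pauli-X excitations on the 96 cluster-state qubits, and the label is the parity of the excitation count. The helper names and the per-qubit excitation probability here are illustrative assumptions, not code from this repository.

```python
import numpy as np

N_QUBITS = 96

def make_sample(rng):
    # Each qubit independently receives an X excitation with probability 0.5.
    excitations = rng.integers(0, 2, size=N_QUBITS)
    # Label: 0 = even number of excitations (trivial state),
    #        1 = odd number (non-trivial state).
    label = int(excitations.sum() % 2)
    return excitations, label

rng = np.random.default_rng(0)
dataset = [make_sample(rng) for _ in range(24)]  # 24 samples, as in Run 2
```

The excitation bitmask would then drive which qubits get an X gate prepended to the cluster-state preparation circuit before composing with the QCNN ansatz.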
**Pro Mundi Vita**