---
library_name: qiskit
tags:
- quantum
- qcnn
- nisq
- ibm-quantum
- variational-quantum-algorithm
license: apache-2.0
---
# Nighthawk-QCNN

**96-qubit Quantum Convolutional Neural Network (QCNN)**
Trained end-to-end on real IBM Quantum Heron r2/r3 hardware
(backends: ibm_fez / ibm_kingston) on February 4, 2026.
## Task
Binary classification of the parity of random Pauli-X excitations in a 1D cluster state
(0 → even number of excitations → trivial state, 1 → odd number → non-trivial).
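
For reference, on-the-fly sample generation can follow this pattern (a minimal sketch, not the exact training-data generator; `make_sample` and the uniform excitation distribution are illustrative assumptions):

```python
import numpy as np
from qiskit import QuantumCircuit

def make_sample(n_qubits: int, rng: np.random.Generator):
    """Illustrative: 1D cluster state with random Pauli-X excitations; label = parity."""
    qc = QuantumCircuit(n_qubits)
    qc.h(range(n_qubits))                       # |+> on every qubit
    for i in range(n_qubits - 1):
        qc.cz(i, i + 1)                         # nearest-neighbour CZ -> cluster state
    flips = rng.integers(0, 2, size=n_qubits)   # random X excitations (assumed uniform)
    for i in np.flatnonzero(flips):
        qc.x(i)
    label = int(flips.sum() % 2)                # 0 = even (trivial), 1 = odd (non-trivial)
    return qc, label

prep, label = make_sample(96, np.random.default_rng(0))
```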
## Technical Details
- **Qubits**: 96 (actively used in ansatz + preparation)
- **Architecture**: QCNN with 3 layers (conv → pool → conv → pool → conv → readout)
- **Convolution operator**: 4-parameter 2-qubit block (RY, RZ, CZ), shared parameters (see the sketch after this list)
- **Pooling**: static (measure + CZ; no conditional X due to compiler limitations)
- **Readout**: Z-probability on the final qubit → MSE loss
- **Trainable parameters**: 72 (24 per layer × 3 layers)
- **Dataset**: 24 samples (on-the-fly generation)
- **Shots per evaluation**: 384
- **Optimizer**: SPSA, 12 iterations
- **Final loss (MSE)**: 0.2704 (after 36 evaluations)
- **QPU time**: ~7 minutes (IBM Heron r2/r3)
- **Backend**: ibm_fez (156 qubits, heavy-hex lattice, tunable couplers)
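
The 2-qubit convolution block referenced above could be arranged like this (a sketch of one plausible gate ordering; the `qcnn.qasm` file in this repo is the authoritative definition):

```python
from qiskit import QuantumCircuit
from qiskit.circuit import ParameterVector

def conv_block(params) -> QuantumCircuit:
    """Illustrative 4-parameter 2-qubit convolution block built from RY, RZ and CZ."""
    block = QuantumCircuit(2, name="conv")
    block.ry(params[0], 0)
    block.ry(params[1], 1)
    block.cz(0, 1)               # entangling gate
    block.rz(params[2], 0)
    block.rz(params[3], 1)
    return block

theta = ParameterVector("t", 4)  # one shared parameter set per layer
block = conv_block(theta)
```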
## Training Convergence

MSE loss starts at ~0.268, dips to ~0.243 around evaluation 10, then rises again as noise accumulates.
| Run | Qubits | Samples | Shots | Iterations | Final Loss | QPU Time |
|-----|--------|---------|-------|------------|------------|----------|
| 1 | 96 | 16 | 256 | 8 | 0.29 | ~2 min |
| 2 | 96 | 24 | 384 | 12 | 0.2704 | ~7 min |
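
Schematically, the SPSA loop behind these runs looks as follows (a minimal sketch, assuming a `loss(theta)` callable that submits the shot-based MSE evaluation to the backend; the gain schedules and the per-iteration logging evaluation are assumptions, though three evaluations per iteration would match the 36 reported):

```python
import numpy as np

def spsa_minimize(loss, theta0, iterations=12, a=0.1, c=0.1, seed=0):
    """Plain SPSA: two loss evaluations per step plus one logging evaluation."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    history = []
    for k in range(iterations):
        ak = a / (k + 1) ** 0.602                        # standard SPSA gain decay
        ck = c / (k + 1) ** 0.101
        delta = rng.choice([-1.0, 1.0], size=theta.shape)
        diff = loss(theta + ck * delta) - loss(theta - ck * delta)
        theta = theta - ak * (diff / (2 * ck)) * delta   # 1/delta_i == delta_i for +-1
        history.append(loss(theta))                      # third evaluation, for the log
    return theta, history
```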
## Repository Files
- `Nighthawk.npy` – trained parameters (72 values)
- `qcnn.qasm` – QASM3 description of the ansatz (parameters unbound)
- `results.csv` – final training metrics
- `training_log.txt` – full log of loss evaluations and transpilation
- `requirements.txt` – dependencies for reproduction

## Usage / Inference
```python
from qiskit import qasm3
import numpy as np

# Load the trained parameters and the ansatz
theta = np.load("Nighthawk.npy")
with open("qcnn.qasm") as f:
    qcnn = qasm3.loads(f.read())
qcnn = qcnn.assign_parameters(theta)  # assign_parameters returns a bound copy
print("Model loaded. Number of parameters:", len(theta))
# Next: compose with the preparation circuit and run via Sampler
```
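
The composition step mentioned in the final comment could look like this with the Qiskit Runtime primitives, continuing from the block above (a sketch; the placeholder preparation circuit, the backend choice, and measuring all qubits rather than only the readout qubit are illustrative assumptions):

```python
from qiskit import QuantumCircuit
from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager
from qiskit_ibm_runtime import QiskitRuntimeService, SamplerV2 as Sampler

service = QiskitRuntimeService()
backend = service.backend("ibm_fez")

prep = QuantumCircuit(qcnn.num_qubits)  # placeholder: substitute the cluster-state preparation
full = prep.compose(qcnn)               # preparation followed by the bound ansatz
full.measure_all()                      # illustrative; the model reads out one qubit's Z-probability

pm = generate_preset_pass_manager(optimization_level=3, backend=backend)
isa_circuit = pm.run(full)

sampler = Sampler(mode=backend)
counts = sampler.run([isa_circuit], shots=384).result()[0].data.meas.get_counts()
```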
## Notes
- Proof of concept for scaling QCNNs on NISQ hardware in 2026.
- Loss is near the random-guess level (0.25) due to high noise on Heron r2, which is typical for NISQ.
- **Why better results are expected on ibm_miami (Nighthawk r1)**:
  - Square-lattice topology (vs. heavy-hex on Heron r2) → much better natural locality for convolutional layers
  - Higher CLOPS and lower gate errors → deeper circuits with less decoherence
  - Improved connectivity → fewer SWAPs during transpilation → lower overall error accumulation
  - Expected: noticeably lower final loss and higher effective classification accuracy
- Improvements: more shots, error mitigation (twirling/M3; see the sketch below), running on Nighthawk (square lattice).
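
As a concrete example of that mitigation path, gate twirling is available through the Runtime sampler options and readout correction through the mthree (M3) package (a sketch of the general pattern; the backend name and qubit index are placeholders):

```python
import mthree
from qiskit_ibm_runtime import QiskitRuntimeService, SamplerV2 as Sampler

service = QiskitRuntimeService()
backend = service.backend("ibm_fez")

sampler = Sampler(mode=backend)
sampler.options.twirling.enable_gates = True    # Pauli-twirl the 2-qubit gates
sampler.options.twirling.enable_measure = True  # twirl measurements as well

# M3 readout-error mitigation for the readout qubit (index 95 is a placeholder)
mit = mthree.M3Mitigation(backend)
mit.cals_from_system([95])
# after a run: mitigated = mit.apply_correction(raw_counts, [95])
```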
<div align="center">
**Pro Mundi Vita**
</div>