Update README.md

README.md
CHANGED

license: apache-2.0
---

# Nighthawk-QCNN 🦅



96-qubit Quantum Convolutional Neural Network (QCNN) trained end-to-end on real IBM Quantum Heron r2/r3 hardware (backends: ibm_fez / ibm_kingston) on February 4, 2026.

## Task

Binary classification of the parity of random Pauli-X excitations in 1D cluster states.
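
The data-generation recipe is not spelled out in this excerpt, so the following is only a minimal sketch of one plausible reading: prepare a 1D cluster state (H on every qubit, CZ between nearest neighbours), flip a random subset of qubits with Pauli-X, and label the sample by the parity of the number of flips. The function name `make_sample` and the uniform excitation distribution are illustrative assumptions.

```python
import numpy as np
from qiskit import QuantumCircuit

def make_sample(n_qubits: int, rng: np.random.Generator):
    """Sketch: 1D cluster state with random Pauli-X excitations, labelled by their parity."""
    qc = QuantumCircuit(n_qubits)
    qc.h(range(n_qubits))                       # put every qubit in |+>
    for q in range(n_qubits - 1):
        qc.cz(q, q + 1)                         # nearest-neighbour CZs -> 1D cluster state
    flips = rng.integers(0, 2, size=n_qubits)   # random excitation pattern (assumed uniform)
    for q in np.flatnonzero(flips):
        qc.x(q)                                 # apply the Pauli-X excitations
    label = int(flips.sum() % 2)                # class label = parity of the excitation count
    return qc, label

prep, y = make_sample(96, np.random.default_rng(0))
```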

- **Shots per evaluation**: 384
- **Optimizer**: SPSA, 12 iterations (loop sketched below)
- **Final loss (MSE)**: 0.2704 (after 36 evaluations)
- **QPU time**: ~7 minutes (IBM Heron r2/r3)
- **Backend**: ibm_fez (156 qubits, heavy-hex lattice, tunable couplers)
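
For reference, this is roughly what an SPSA loop at those settings looks like. The step sizes `a` and `c`, the seed, and the extra tracking evaluation are illustrative assumptions; a `loss(theta)` for this model would bind `theta` into the QCNN, execute it with 384 shots per evaluation, and return the MSE against the parity labels.

```python
import numpy as np

def spsa_minimize(loss, theta0, iterations=12, a=0.1, c=0.1, seed=0):
    """Vanilla SPSA: estimate the gradient from two randomly perturbed loss evaluations."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    history = []
    for _ in range(iterations):
        delta = rng.choice([-1.0, 1.0], size=theta.shape)         # random +/-1 perturbation
        l_plus, l_minus = loss(theta + c * delta), loss(theta - c * delta)
        theta = theta - a * (l_plus - l_minus) / (2 * c) * delta  # SPSA gradient step
        history.append(loss(theta))                               # 3 evaluations/iteration -> 36 over 12 iterations
    return theta, history
```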

## Training Convergence



MSE loss starts at ~0.268, dips to ~0.243 early in the run, then rises again due to noise accumulation.

| Run | Qubits | Samples | Shots | Iterations | Final Loss | QPU Time |
|-----|--------|---------|-------|------------|------------|----------|
| 1   | 96     | 16      | 256   | 8          | 0.29       | ~2 min   |
| 2   | 96     | 24      | 384   | 12         | 0.2704     | ~7 min   |

## Repository Files

- `Nighthawk.npy` — trained parameters (72 values)

```python
import numpy as np
from qiskit import qasm3  # note: qasm3.loads requires the optional qiskit-qasm3-import package

theta = np.load("Nighthawk.npy")              # the 72 trained parameter values
qcnn = qasm3.loads(open("qcnn.qasm").read())  # parameterised QCNN circuit
qcnn = qcnn.assign_parameters(theta)          # assign_parameters returns the bound circuit

print("Model loaded. Number of parameters:", len(theta))
# Next: compose with preparation circuit + run via Sampler
```
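
The closing comment above is the remaining wiring. Below is a minimal sketch of that step, assuming the bound `qcnn` from the block above, a 96-qubit preparation circuit `prep` (for example the cluster-state sketch in the Task section), and an IBM Quantum account; `measure_all`, `optimization_level=3`, and the shot count are illustrative choices rather than the exact training harness.

```python
from qiskit.transpiler.preset_passmanagers import generate_preset_pass_manager
from qiskit_ibm_runtime import QiskitRuntimeService, SamplerV2 as Sampler

service = QiskitRuntimeService()
backend = service.backend("ibm_fez")

circuit = prep.compose(qcnn)    # state preparation followed by the parameter-bound QCNN
circuit.measure_all()           # in practice only the QCNN's readout qubit(s) need measuring

pm = generate_preset_pass_manager(optimization_level=3, backend=backend)
isa_circuit = pm.run(circuit)   # route onto the heavy-hex coupling map and native gate set

sampler = Sampler(mode=backend)
job = sampler.run([isa_circuit], shots=384)
counts = job.result()[0].data.meas.get_counts()
```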

## Notes

- Improved connectivity → fewer SWAPs during transpilation → lower overall error accumulation
- Expected: noticeably lower final loss and higher effective classification accuracy
- Improvements: more shots, error mitigation (twirling/M3, sketched below), run on Nighthawk (square lattice).
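
As one concrete version of the mitigation ideas above, `SamplerV2` in `qiskit-ibm-runtime` exposes gate/measurement twirling and dynamical decoupling as options. The sketch below shows how they could be switched on for a follow-up run; none of this is claimed to have been enabled in the run reported here, `isa_circuit` is the transpiled circuit from the earlier sketch, and M3 readout mitigation would be applied separately to the returned counts.

```python
from qiskit_ibm_runtime import QiskitRuntimeService, SamplerV2 as Sampler

service = QiskitRuntimeService()
backend = service.backend("ibm_fez")

sampler = Sampler(mode=backend)
sampler.options.twirling.enable_gates = True           # Pauli-twirl the two-qubit gates
sampler.options.twirling.enable_measure = True         # twirl the measurements as well
sampler.options.dynamical_decoupling.enable = True     # suppress idle-qubit errors

job = sampler.run([isa_circuit], shots=1024)           # more shots than the 384 used above
```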