---
license: apache-2.0
tags:
- quantum-computing
- quantum-machine-learning
- qiskit
- quantum-neural-network
- qnn
- classification
- hardware-efficient-ansatz
- nisq
- variational-quantum-algorithm
datasets:
- two_moons
metrics:
- accuracy
library_name: qiskit
pipeline_tag: tabular-classification
---
# Quantum Neural Network - Two Moons Classification
<div align="center">
<img src="https://cdn-uploads.huggingface.co/production/uploads/67329d3f69fded92d56ab41a/X7Hq7qSzx0TIM43duGFHP.jpeg" width="50%" alt="twomoons">
**A 2-qubit Quantum Neural Network (QNN) trained for binary classification on the Two Moons dataset**
[Qiskit](https://qiskit.org/) • [License: Apache 2.0](LICENSE) • [Python](https://www.python.org/)
</div>
## Model Overview
This is a **Quantum Neural Network (QNN)** designed for binary classification tasks, demonstrating quantum machine learning on real quantum hardware. The model uses a hardware-efficient ansatz with 2 qubits and has been tested on IBM Quantum's `ibm_fez` backend.
### Key Features
- **Pure Quantum Model**: Uses quantum circuits for feature encoding and classification
- **Hardware-Efficient**: Optimized for NISQ-era quantum devices
- **Binary Classification**: Trained on the Two Moons dataset
- **IBM Quantum**: Compatible with real quantum hardware
- **Easy to Use**: Simple inference API with pre-trained weights
## Model Architecture
### Specifications
| Feature | Value |
|---------|-------|
| **Qubits** | 2 |
| **Circuit Depth** | 4 layers |
| **Total Parameters** | 6 (2 input + 4 trainable) |
| **Trainable Parameters** | 4 |
| **Gates** | 6× Ry + 1× CNOT |
| **Entanglement** | Linear topology |
| **Ansatz Type** | Hardware-Efficient |
| **Backend** | IBM Quantum (ibm_fez) |
### Circuit Diagram
```
     ┌──────────┐┌──────────┐     ┌──────────┐
q_0: ┤ Ry(x[0]) ├┤ Ry(w[0]) ├──■──┤ Ry(w[2]) ├
     ├──────────┤├──────────┤┌─┴─┐├──────────┤
q_1: ┤ Ry(x[1]) ├┤ Ry(w[1]) ├┤ X ├┤ Ry(w[3]) ├
     └──────────┘└──────────┘└───┘└──────────┘
```
### Layer Breakdown
1. **Encoding Layer** (`Ry(x[0])`, `Ry(x[1])`): Encodes 2D classical data into quantum states
2. **Variational Layer 1** (`Ry(w[0])`, `Ry(w[1])`): First trainable rotation gates
3. **Entanglement Layer** (`CNOT`): Creates quantum correlations between qubits
4. **Variational Layer 2** (`Ry(w[2])`, `Ry(w[3])`): Second trainable rotation gates
5. **Measurement**: Parity measurement on both qubits for classification
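For intuition, the five layers above can be simulated exactly with plain NumPy, since a 2-qubit statevector is just a length-4 complex vector. This is a minimal sketch (the function name `qnn_probs` is illustrative, not part of the repository), using Qiskit's little-endian basis ordering `|q1 q0⟩`:

```python
import numpy as np

def ry(theta):
    """Matrix for a single-qubit Ry(theta) rotation."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

# CNOT with q0 as control and q1 as target, in little-endian
# basis ordering |q1 q0>: it swaps the |01> and |11> amplitudes.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0]], dtype=float)

def qnn_probs(x, w):
    """Return [P(class 0), P(class 1)] under the parity readout."""
    psi = np.zeros(4)
    psi[0] = 1.0                                 # start in |00>
    psi = np.kron(ry(x[1]), ry(x[0])) @ psi      # 1. encoding layer
    psi = np.kron(ry(w[1]), ry(w[0])) @ psi      # 2. variational layer 1
    psi = CNOT @ psi                             # 3. entanglement layer
    psi = np.kron(ry(w[3]), ry(w[2])) @ psi      # 4. variational layer 2
    probs = np.abs(psi) ** 2                     # 5. measurement probabilities
    p_odd = probs[1] + probs[2]                  # odd-parity outcomes |01>, |10>
    return np.array([1.0 - p_odd, p_odd])
```

With all angles at zero every Ry is the identity, so the state stays `|00⟩` (even parity) and the model outputs class 0 with certainty.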
## Quick Start
### Installation
```bash
pip install qiskit qiskit-machine-learning numpy huggingface-hub
```
### Basic Usage
```python
from huggingface_hub import hf_hub_download
from qiskit import qpy
import numpy as np

# Download model files
circuit_path = hf_hub_download(
    repo_id="squ11z1/Two-Moons",
    filename="circuit.qpy"
)
weights_path = hf_hub_download(
    repo_id="squ11z1/Two-Moons",
    filename="weights.npy"
)

# Load quantum circuit
with open(circuit_path, 'rb') as f:
    circuit = qpy.load(f)[0]

# Load trained weights
weights = np.load(weights_path)

print(f"Loaded QNN with {circuit.num_qubits} qubits")
print(f"Trained weights: {weights}")
```
### Inference Example
```python
import numpy as np
from huggingface_hub import hf_hub_download
from qiskit_machine_learning.neural_networks import SamplerQNN
from qiskit_machine_learning.algorithms.classifiers import NeuralNetworkClassifier
from qiskit.primitives import StatevectorSampler as Sampler

# Load test data (circuit and weights are loaded as in Basic Usage above)
X_test = np.load(hf_hub_download(
    repo_id="squ11z1/Two-Moons",
    filename="X_test.npy"
))

# Separate input and weight parameters by name
input_params = [p for p in circuit.parameters if p.name.startswith('x')]
weight_params = [p for p in circuit.parameters if p.name.startswith('w')]

# Parity interpretation function
def parity(x):
    """Convert a measured bitstring (as an integer) to a class label (0 or 1)."""
    return bin(x).count("1") % 2

# Create QNN
sampler = Sampler()
qnn = SamplerQNN(
    circuit=circuit,
    input_params=input_params,
    weight_params=weight_params,
    interpret=parity,
    output_shape=2,
    sampler=sampler
)

# Create classifier and inject the pre-trained weights
classifier = NeuralNetworkClassifier(
    neural_network=qnn,
    optimizer=None  # Weights already trained
)
# Workaround: set the fit result directly instead of calling fit()
classifier._fit_result = type('obj', (object,), {'x': weights})

# Make predictions
predictions = classifier.predict(X_test)
print(f"Predictions: {predictions}")
```
### Using the Helper Module
```python
from qnn_inference import load_qnn_model, create_qnn_classifier
import numpy as np
# Load model
circuit, weights = load_qnn_model(repo_id="squ11z1/Two-Moons")
# Create classifier
classifier = create_qnn_classifier(circuit, weights)
# Predict on new data
X_new = np.array([[0.5, 0.2], [-0.5, 0.5]])
predictions = classifier.predict(X_new)
print(f"Predictions: {predictions}")
```
## Training Details
### Dataset
- **Name:** Two Moons (sklearn.datasets.make_moons)
- **Type:** Synthetic binary classification dataset
- **Features:** 2D coordinates (x, y)
- **Classes:** 2 (crescent-shaped clusters)
- **Train samples:** 8
- **Test samples:** 4
- **Total:** 12 samples
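A dataset with this shape can be regenerated with scikit-learn. This is an illustrative sketch: the noise level and random seeds are assumptions, not the exact values used to produce the checked-in `.npy` files.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split

# 12 samples total, 2D features, 2 crescent-shaped classes.
# noise and random_state here are illustrative assumptions.
X, y = make_moons(n_samples=12, noise=0.1, random_state=42)

# 8 training samples / 4 test samples, matching the repository split
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=4, random_state=42
)
```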
### Training Configuration
- **Optimizer:** COBYLA (Constrained Optimization BY Linear Approximation)
- **Loss Function:** Cross-entropy
- **Epochs:** Variable (convergence-based)
- **Training Backend:** IBM Quantum (ibm_fez)
- **Testing Backend:** IBM Quantum (ibm_fez)
### Performance Metrics
| Metric | Value | Notes |
|--------|-------|-------|
| **Test Accuracy** | 0-75% | Varies by noise and seed |
| **Train Accuracy** | ~87.5% | On 8 training samples |
| **Baseline (Random)** | 50% | Random guessing |
| **Classical MLP** | ~100% | For comparison |
**Note:** The low test accuracy (0% in the visualization) is expected given:
- Small training dataset (only 8 samples)
- Quantum noise from real hardware
- Limited model capacity (2 qubits)
- Early-stage NISQ device limitations
This is a **proof-of-concept** model demonstrating quantum ML workflows, not production-ready accuracy.
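The "Classical MLP ~100%" row in the table above can be reproduced along these lines. The sketch trains on a larger Two Moons sample than the 12-point quantum experiment so the baseline converges stably; the sample size, noise level, and network width are illustrative assumptions.

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Larger sample than the quantum experiment, purely for a stable baseline
X, y = make_moons(n_samples=200, noise=0.1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Small classical MLP for comparison with the 4-parameter QNN
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X_tr, y_tr)
acc = mlp.score(X_te, y_te)
print(f"Classical MLP test accuracy: {acc:.2f}")
```

On a sample of this size the classical baseline separates the two moons almost perfectly, which is the comparison point the table reports.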
## Technical Deep Dive
### Why Hardware-Efficient Ansatz?
The hardware-efficient ansatz is chosen to:
1. **Minimize gate count**: Fewer gates = less noise accumulation
2. **Use native gates**: Ry and CNOT are native to IBM Quantum hardware
3. **Avoid compilation overhead**: Circuit runs directly on hardware
4. **Reduce circuit depth**: Depth 4 is shallow enough for NISQ devices
### Barren Plateau Mitigation
This architecture avoids the barren plateau problem through:
- ✅ **Small qubit count** (n=2): Gradient variance ∝ 1/2^n = 1/4 (good!)
- ✅ **Shallow depth** (4 layers): Limits exponential gradient decay
- ✅ **Local connectivity**: Linear entanglement structure
- ✅ **Parameter efficiency**: Only 4 trainable parameters

**Expected gradient variance:** `Var[∂L/∂θ] ≈ 0.25`
### Quantum Advantage?
For this small problem, **no quantum advantage** is expected or claimed. However, this model serves as:
1. **Educational tool**: Demonstrates QML concepts
2. **Research platform**: Tests quantum algorithms on real hardware
3. **Proof of concept**: Shows end-to-end quantum workflow
4. **Benchmark**: Compares quantum vs classical performance
### Measurement Strategy
The model uses **parity measurement**:
```python
def parity(x):
    """
    Measures both qubits and computes parity.
    Example:
    - |00⟩ → 0 (even parity) → Class 0
    - |01⟩ → 1 (odd parity)  → Class 1
    - |10⟩ → 1 (odd parity)  → Class 1
    - |11⟩ → 0 (even parity) → Class 0
    """
    return bin(x).count("1") % 2
```
This creates a **nonlinear decision boundary** in feature space.
## Repository Contents
```
.
├── README.md          # This file
├── circuit.qpy        # Quantum circuit (Qiskit QPY format, 712 bytes)
├── weights.npy        # Trained weights (4 parameters, 160 bytes)
├── config.json        # Model configuration metadata
├── qnn_inference.py   # Helper functions for loading and inference
├── requirements.txt   # Python dependencies
├── X_train.npy        # Training input data (8 samples)
├── X_test.npy         # Test input data (4 samples)
├── y_train.npy        # Training labels
└── y_test.npy         # Test labels
```
## Use Cases
### Educational
- Learn quantum machine learning fundamentals
- Understand variational quantum algorithms
- Explore quantum circuit design
### Research
- Benchmark quantum vs classical models
- Study quantum noise effects on ML
- Test new quantum ML algorithms
- Investigate NISQ-era limitations
### Development
- Template for quantum ML projects
- Starting point for larger QNN models
- Integration example for Hugging Face + Qiskit
## Limitations
### Model Limitations
- **Small dataset**: Only 12 samples total (not scalable)
- **Low capacity**: 2 qubits limit expressiveness
- **Binary only**: Can't handle multi-class problems as-is
- **Fixed input**: Requires exactly 2D input features
### Quantum Hardware Limitations
- **NISQ noise**: Quantum errors degrade performance
- **Decoherence**: Qubits lose quantum state over time
- **Gate errors**: Imperfect quantum operations
- **Limited connectivity**: Hardware topology constraints
### Practical Limitations
- **Slow inference**: Quantum circuits are slower than classical NNs
- **Requires quantum access**: Needs IBM Quantum account for hardware runs
- **No gradients**: Can't fine-tune (weights are pre-trained)
- **Stochastic**: Results vary due to quantum sampling
## Future Improvements
### Immediate Next Steps
- [ ] Increase dataset size to 100+ samples
- [ ] Add data augmentation for better generalization
- [ ] Test on multiple quantum backends
- [ ] Implement error mitigation techniques
### Long-term Goals
- [ ] Scale to 4-16 qubits for more complex patterns
- [ ] Multi-class classification support
- [ ] Hybrid quantum-classical architecture
- [ ] Deploy on IBM Quantum Runtime
- [ ] Compare with classical ML benchmarks
## Citation
If you use this model in your research, please cite:
```bibtex
@misc{qnn-two-moons-2025,
author = {squ11z1},
title = {Quantum Neural Network for Two Moons Classification},
year = {2025},
publisher = {Hugging Face},
howpublished = {\url{https://huggingface.co/squ11z1/Two-Moons}},
note = {2-qubit QNN with hardware-efficient ansatz}
}
```
## References
### Quantum Machine Learning
- [Qiskit Machine Learning Documentation](https://qiskit.org/ecosystem/machine-learning/)
- [Quantum Neural Networks (arXiv:1802.06002)](https://arxiv.org/abs/1802.06002)
- [Supervised learning with quantum enhanced feature spaces (Nature 2019)](https://www.nature.com/articles/s41586-019-0980-2)
### Variational Algorithms
- [Variational Quantum Algorithms (arXiv:2012.09265)](https://arxiv.org/abs/2012.09265)
- [Hardware-efficient variational quantum eigensolver (arXiv:1704.05018)](https://arxiv.org/abs/1704.05018)
### Barren Plateaus
- [Barren plateaus in quantum neural network training (Nature 2018)](https://www.nature.com/articles/s41467-018-07090-4)
- [The effect of data encoding on barren plateaus (arXiv:2008.08605)](https://arxiv.org/abs/2008.08605)
### IBM Quantum
- [IBM Quantum Platform](https://quantum.ibm.com/)
- [Qiskit Documentation](https://docs.quantum.ibm.com/)
## Contributing
This is an experimental research model. Contributions welcome!
### How to Contribute
1. Test the model on different datasets
2. Report issues or bugs
3. Suggest architectural improvements
4. Share your results and findings
Open an issue or discussion on the [Hugging Face model page](https://huggingface.co/squ11z1/Two-Moons).
## License
**Apache License 2.0**
This model and all associated code are released under the Apache 2.0 license. You are free to use, modify, and distribute this model for any purpose, including commercial applications.
See [LICENSE](LICENSE) for full details.
---
<div align="center">
**Built with ❤️ using Qiskit and IBM Quantum**
*Like this model if you find it useful!*
</div>