# 🌌 QSBench: Complete User Guide

Welcome to the **QSBench Analytics Hub**.
This platform is designed to bridge the gap between quantum circuit topology and machine learning, allowing researchers to study how structural characteristics influence quantum simulation outcomes.

---
## ⚠️ Important: Demo Dataset Notice

The datasets currently loaded in this hub are **v1.0.0-demo versions**.

- **Scale**: These are small *shards* (subsets) of the full QSBench library.
- **Accuracy**: Because the training data is limited in size, ML models trained here will show lower accuracy and higher variance compared to models trained on full-scale production datasets.
- **Purpose**: These sets are intended for **demonstration and prototyping** of analytical pipelines before moving to high-performance computing (HPC) environments.

---
## 📂 1. Dataset Architecture & Selection

QSBench provides high-fidelity simulation data for the Quantum Machine Learning (QML) community.
We provide four distinct environments to test how different noise models affect the data:
### Core (Clean)
Ideal state-vector simulations.
Used as a **"Golden Reference"** to understand the theoretical limits of a circuit's expressivity without physical interference.

### Depolarizing Noise
Simulates the effect of qubits losing their state toward the maximally mixed state.
This is the standard **"white noise"** of quantum computing.

### Amplitude Damping
Represents **T1 relaxation (energy loss)**.
This is an asymmetric noise model in which qubits decay from |1⟩ to |0⟩, critical for studying superconducting hardware.

### Transpilation (10q)
Circuits are mapped to a **hardware topology (heavy-hex or grid)**.
Used to study how SWAP gates and routing overhead affect the final results.
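The contrast between the depolarizing and amplitude-damping families can be seen directly on a single qubit's Bloch vector. The NumPy sketch below is illustrative only (it is not the Hub's generator code); it applies each channel with its standard single-qubit definition:

```python
import numpy as np

# Pauli matrices, used to read out the Bloch vector (x, y, z) of a density matrix.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch(rho):
    return np.real([np.trace(rho @ P) for P in (X, Y, Z)])

def depolarize(rho, p):
    # Isotropic "white noise": shrinks the whole Bloch vector by (1 - p).
    return (1 - p) * rho + p * np.eye(2) / 2

def amplitude_damp(rho, gamma):
    # T1 relaxation: Kraus operators push population from |1> toward |0>.
    K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]], dtype=complex)
    K1 = np.array([[0, np.sqrt(gamma)], [0, 0]], dtype=complex)
    return K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T

rho1 = np.array([[0, 0], [0, 1]], dtype=complex)  # |1><1|, Bloch vector (0, 0, -1)
print(bloch(depolarize(rho1, 0.2)))       # z shrinks toward 0: -0.8
print(bloch(amplitude_damp(rho1, 0.2)))   # z biased toward |0> (z = +1): -0.6
```

Note the asymmetry: depolarizing noise contracts the vector toward the origin, while amplitude damping drags it toward the |0⟩ pole.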
---
## 📊 2. Feature Engineering: Structural Metrics

Why do we extract these specific features?
In QML, the **structure ("shape") of a circuit directly impacts performance**.

- **gate_entropy**
  Measures the distribution of gate types.
  High entropy → complex, less repetitive circuits → harder for classical models to learn.

- **meyer_wallach**
  Quantifies **global entanglement**.
  Entanglement provides quantum advantage but increases sensitivity to noise.

- **adjacency**
  Represents qubit interaction graph density.
  High adjacency → faster information spread, but higher risk of cross-talk errors.

- **cx_count (Two-Qubit Gates)**
  The most critical complexity metric.
  On NISQ devices, CNOT gates are **10x–100x noisier** than single-qubit gates.
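The exact definitions used by the QSBench generator live in its source code; the sketch below shows one common convention for two of these metrics, with `gate_entropy` as the Shannon entropy of the gate-type distribution and `meyer_wallach` computed from single-qubit reduced purities:

```python
import numpy as np

def gate_entropy(gate_counts):
    """Shannon entropy (bits) of a circuit's gate-type distribution."""
    counts = np.array(list(gate_counts.values()), dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]  # avoid log2(0) for absent gate types
    return float(-(p * np.log2(p)).sum())

def meyer_wallach(state, n_qubits):
    """Meyer-Wallach global entanglement: Q = 2 * (1 - mean single-qubit purity)."""
    psi = np.asarray(state, dtype=complex).reshape([2] * n_qubits)
    purities = []
    for k in range(n_qubits):
        # Reduced state of qubit k: bring its axis to the front, then rho_k = M M^dagger.
        M = np.moveaxis(psi, k, 0).reshape(2, -1)
        rho_k = M @ M.conj().T
        purities.append(np.real(np.trace(rho_k @ rho_k)))
    return float(2 * (1 - np.mean(purities)))

# A uniform mix over 4 gate types gives the maximal entropy of 2 bits.
print(gate_entropy({"h": 5, "cx": 5, "rz": 5, "x": 5}))  # -> 2.0

bell = np.array([1, 0, 0, 1]) / np.sqrt(2)  # maximally entangled 2-qubit state
print(meyer_wallach(bell, 2))               # -> 1.0
```

Product states score Q = 0 and maximally entangled states score Q = 1, which is what makes `meyer_wallach` a convenient global-entanglement feature.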
---
## 🎯 3. Multi-Target Regression (The Bloch Vector)

Unlike traditional benchmarks that focus on a single observable, QSBench targets the **full global Bloch vector**:

[⟨X⟩_global, ⟨Y⟩_global, ⟨Z⟩_global]

---
### Why predict all three?

A quantum state is a point on (or inside) the **Bloch sphere**.

- Predicting only Z gives an incomplete picture
- Multi-target regression learns correlations between:
  - circuit structure
  - full quantum state orientation
  - behavior in Hilbert space
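As a concrete illustration of the three targets, the sketch below computes per-qubit ⟨X⟩, ⟨Y⟩, ⟨Z⟩ from a statevector and averages them. Here "global" is assumed to mean the qubit-averaged expectation value; check the dataset card for the exact convention used by the shards.

```python
import numpy as np

PAULIS = (
    np.array([[0, 1], [1, 0]], dtype=complex),    # X
    np.array([[0, -1j], [1j, 0]], dtype=complex), # Y
    np.array([[1, 0], [0, -1]], dtype=complex),   # Z
)

def global_bloch_vector(state, n_qubits):
    """Qubit-averaged [<X>, <Y>, <Z>] of an n-qubit statevector (assumed convention)."""
    psi = np.asarray(state, dtype=complex).reshape([2] * n_qubits)
    vec = np.zeros(3)
    for k in range(n_qubits):
        # Single-qubit reduced density matrix of qubit k.
        M = np.moveaxis(psi, k, 0).reshape(2, -1)
        rho_k = M @ M.conj().T
        for i, P in enumerate(PAULIS):
            vec[i] += np.real(np.trace(rho_k @ P))
    return vec / n_qubits

plus_plus = np.ones(4) / 2  # |+>|+> : every qubit points along +X
print(global_bloch_vector(plus_plus, 2))  # -> [1. 0. 0.]
```

States that look identical in the Z basis (here ⟨Z⟩ = 0) can still differ sharply in X and Y, which is exactly why the benchmark regresses all three components.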
---
## 🤖 4. Using the ML Analytics Module

The Hub uses a **Random Forest Regressor** to establish a baseline of predictability.

### Workflow
1. **Select Dataset**
   Choose a noise model and observe how it affects predictability.

2. **Select Features**
   Recommended starting set:
   - `gate_entropy`
   - `meyer_wallach`
   - `depth`
   - `cx_count`

3. **Execute Baseline**
   Performs an **80/20 train-test split**.

4. **Analyze the Triple Parity Plot**
   - 🔴 **Diagonal Red Line** → perfect prediction
   - 📈 **Clustering near line** → strong predictive signal
   - 🔍 **Basis comparison**:
     - Z often easier to predict
     - X/Y depend more on circuit structure
     - reveals architectural biases (HEA vs QFT, etc.)
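The workflow above can be reproduced outside the Hub. The scikit-learn sketch below uses synthetic stand-in data (the feature columns mirror the recommended starting set; on a real shard you would load the structural features and the three Bloch-vector target columns instead):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for a QSBench shard: 4 structural features, 3 Bloch targets.
n = 400
X = rng.random((n, 4))           # gate_entropy, meyer_wallach, depth, cx_count
Y = np.column_stack([
    X[:, 0] - X[:, 3],           # stand-in for <X>_global
    0.5 * X[:, 1],               # stand-in for <Y>_global
    1 - 2 * X[:, 2],             # stand-in for <Z>_global
]) + 0.05 * rng.standard_normal((n, 3))

# Step 3: 80/20 train-test split, then a multi-output Random Forest baseline.
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=42)
model = RandomForestRegressor(n_estimators=200, random_state=42).fit(X_tr, Y_tr)

Y_pred = model.predict(X_te)     # shape (n_test, 3): one column per Pauli basis
print(Y_pred.shape)              # -> (80, 3)
print(model.score(X_te, Y_te))   # aggregate R^2 across the three targets
```

`RandomForestRegressor` handles multi-output targets natively, so one model predicts all three Bloch components; plotting each `Y_pred` column against the matching `Y_te` column gives the triple parity plot described above.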
---
## 🔗 5. Project Resources

- 🤗 Hugging Face Datasets — download dataset shards
- 💻 GitHub Repository — QSBench generator source code
- 🌐 Official Website — documentation and benchmarking leaderboards

---

*QSBench — Synthetic Quantum Dataset Benchmarks*