# CNOT Count Regression Guide
Welcome to the **CNOT Count Regression Hub**.
This tool demonstrates how machine learning can predict the number of **CNOT (CX) gates**, the most noise-prone two-qubit operations, using only structural features of quantum circuits.
---
## Important: Demo Dataset Notice
The datasets used here are **v1.0.0-demo** shards.
* **Constraint:** Reduced dataset size.
* **Impact:** Model accuracy may fluctuate depending on split and features.
* **Goal:** Demonstrate how circuit topology correlates with entangling gate usage.
---
## 1. What Is Being Predicted?
The model predicts:
### `cx_count`
The total number of **CNOT gates** in a circuit.
Why this matters:
* CNOT gates are the **main source of noise** in NISQ devices
* They dominate **error rates and decoherence**
* Reducing them is key to **hardware-efficient quantum algorithms**
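Since the target is just the number of `cx` instructions in a circuit, it can be read directly from OpenQASM 2.0 text. A minimal sketch, using a hypothetical three-qubit circuit for illustration:

```python
# Count CX gates by scanning OpenQASM 2.0 source line by line.
# The circuit below is a made-up example, not from the dataset.
qasm = """\
OPENQASM 2.0;
include "qelib1.inc";
qreg q[3];
creg c[3];
h q[0];
cx q[0],q[1];
cx q[1],q[2];
measure q -> c;
"""

# Each CX instruction occupies one line starting with "cx ".
cx_count = sum(
    1 for line in qasm.splitlines()
    if line.strip().startswith("cx ")
)
print(cx_count)  # 2
```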
---
## 2. How the Model Works
We train a **Random Forest Regressor** to map circuit features → `cx_count`.
### Input Features (examples):
* **Topology:**
* `adj_density` β connectivity density
* `adj_degree_mean` β average qubit interaction
* **Complexity:**
* `depth` β circuit depth
* `total_gates` β total number of operations
* **Structure:**
* `gate_entropy` β randomness vs regularity
* **QASM-derived:**
* `qasm_length`, `qasm_line_count`
* `qasm_gate_keyword_count`
The model learns how **structural patterns imply entangling cost**.
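The training setup described above can be sketched with scikit-learn. The feature columns mirror the names listed in this guide, but the data here is synthetic stand-in data, not the real demo shards:

```python
# Sketch of the Random Forest regression described above,
# on synthetic data (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(0, 1, n),      # adj_density
    rng.uniform(1, 4, n),      # adj_degree_mean
    rng.integers(5, 50, n),    # depth
    rng.integers(10, 200, n),  # total_gates
])
# Toy target: cx_count loosely tied to connectivity density and gate count.
y = (0.5 * X[:, 0] * X[:, 3] + rng.normal(0, 2, n)).clip(0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=100, max_depth=10, random_state=0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print("MAE:", mean_absolute_error(y_te, pred))
```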
---
## 3. Understanding the Output
After training, you'll see:
### A. Actual vs Predicted Plot
* Each point = one circuit
* Diagonal line = perfect prediction
* Spread = prediction error
Tight clustering = good model
---
### B. Residual Distribution
* Shows prediction errors (`actual - predicted`)
* Centered around 0 = unbiased model
* Wide spread = instability
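The residual check above is just `actual - predicted`, summarized by its center and spread. A small sketch with hypothetical values (not real model output):

```python
# Residuals = actual - predicted; mean near 0 suggests an unbiased
# model, while a large standard deviation signals instability.
import numpy as np

actual = np.array([12.0, 30.0, 7.0, 45.0, 22.0])
predicted = np.array([14.0, 27.0, 8.0, 44.0, 20.0])

residuals = actual - predicted
print("mean residual:", residuals.mean())
print("std residual:", residuals.std())
```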
---
### C. Feature Importance
Top features driving predictions:
* High importance = strong influence on `cx_count`
* Helps identify:
* what increases entanglement cost
* which metrics matter most
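With scikit-learn, the importance ranking comes from the fitted model's `feature_importances_` attribute. A hedged sketch on synthetic data, where the toy target is deliberately driven by `total_gates` so that feature should rank first:

```python
# Rank features by importance from a fitted Random Forest.
# Feature names follow the guide; the data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(1)
names = ["adj_density", "adj_degree_mean", "depth", "total_gates"]
X = rng.uniform(0, 1, size=(150, len(names)))
y = 10 * X[:, 3] + rng.normal(0, 0.1, 150)  # driven mostly by total_gates

model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
ranked = sorted(zip(names, model.feature_importances_), key=lambda t: -t[1])
for name, score in ranked:
    print(f"{name:16s} {score:.3f}")
```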
---
## 4. Explorer Tab
Inspect real circuits:
* View dataset slices (`train`, etc.)
* See raw and transpiled QASM
* Understand how circuits differ structurally
---
## 5. Tips for Better Results
* Use **diverse features** (topology + QASM)
* Avoid datasets that become too small after filtering
* Tune:
* `max_depth`
* `n_estimators`
* Try different datasets:
  * Noise changes structure → changes predictions
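The two hyperparameters named above can be tuned with a standard grid search; a minimal sketch using scikit-learn's `GridSearchCV` on synthetic stand-in data:

```python
# Grid search over max_depth and n_estimators with 3-fold CV.
# Data is synthetic; real runs would use the extracted circuit features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, size=(120, 4))
y = 5 * X[:, 0] + rng.normal(0, 0.2, 120)

grid = GridSearchCV(
    RandomForestRegressor(random_state=0),
    param_grid={"n_estimators": [50, 100], "max_depth": [5, 10, None]},
    cv=3,
    scoring="neg_mean_absolute_error",
)
grid.fit(X, y)
print(grid.best_params_)
```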
---
## 6. Why This Matters
This tool helps answer:
* How expensive is a circuit in terms of **entangling operations**?
* Can we estimate noise **before execution**?
* Which designs are more **hardware-friendly**?
---
## 7. Project Resources
* [Hugging Face](https://huggingface.co/QSBench)
* [GitHub](https://github.com/QSBench)
* [Website](https://qsbench.github.io)