🌌 Quantum Noise Robustness Benchmark Guide

Welcome to the Quantum Noise Robustness Benchmark.
This tool demonstrates how Machine Learning can predict the impact of noise on quantum circuits using only structural and topological features, without running any expensive noisy simulations.


⚠️ Important: Demo Dataset Notice

This Hub uses v1.0.0-demo shards of the QSBench dataset family.

  • Limited Scale: Only a small subset of circuits is loaded for fast demonstration.
  • Complexity: Predicting quantum observables from pure structure is a non-trivial mapping.
  • Goal: Showcase the correlation between circuit topology and noise sensitivity, not to achieve production-level $R^2$ on a limited sample.

🎯 1. What is Being Predicted?

The model performs multi-target regression to estimate how much noise distorts the final signal.

Targets (The Error Vector)

  • error_Z_global: deviation in Z-basis expectation value
  • error_X_global: deviation in X-basis expectation value
  • error_Y_global: deviation in Y-basis expectation value

Formula: error = noisy_expval - ideal_expval

Unlike predicting the state itself, predicting the error shift allows us to understand the "noise fingerprint" left by the circuit's architecture.
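The target formula above can be sketched in a few lines of Python (the expectation values below are hypothetical, invented for illustration):

```python
def error_vector(ideal, noisy):
    """error_<P>_global = noisy_expval - ideal_expval for each Pauli basis P."""
    return {f"error_{p}_global": noisy[p] - ideal[p] for p in ("Z", "X", "Y")}

# Hypothetical expectation values for a single circuit:
ideal = {"Z": 0.92, "X": 0.10, "Y": -0.05}
noisy = {"Z": 0.81, "X": 0.07, "Y": -0.02}

errors = error_vector(ideal, noisy)  # the 3-component regression target
```

A value near zero means noise barely shifted that observable; a large magnitude means the circuit is noise-sensitive along that axis.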


🧩 2. How the Model "Sees" a Circuit

The model never simulates quantum states. It uses structural proxies to guess the noise impact:

🔹 Topology Features

  • adj_density: how densely qubits are connected (proxy for crosstalk risk).
  • adj_degree_mean: average connectivity (proxy for entanglement speed).

🔹 Complexity & Entanglement

  • depth / total_gates: length of the decoherence window.
  • cx_count / two_qubit_gates: the most noise-sensitive components in NISQ hardware.
  • gate_entropy: measures circuit regularity vs. randomness.

🔹 QASM Signals

  • qasm_length & gate_keyword_count: capture the overall "instruction weight".
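A few of these features can be computed directly from a raw OpenQASM 2.0 string. The sketch below is illustrative (the actual QSBench extraction code may differ in details such as which keywords are filtered out):

```python
import math
import re
from collections import Counter

GATE_RE = re.compile(r"^\s*([a-z][a-z0-9]*)\b", re.MULTILINE)
NON_GATES = {"include", "qreg", "creg", "barrier", "measure"}  # non-gate keywords

def structural_features(qasm: str) -> dict:
    """Derive simple structural proxies from an OpenQASM 2.0 string."""
    ops = [g for g in GATE_RE.findall(qasm) if g not in NON_GATES]
    counts = Counter(ops)
    total = sum(counts.values())
    # gate_entropy: Shannon entropy of the gate-type distribution
    entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return {
        "qasm_length": len(qasm),
        "gate_keyword_count": total,
        "cx_count": counts.get("cx", 0),
        "gate_entropy": entropy,
    }

example = """OPENQASM 2.0;
include "qelib1.inc";
qreg q[2];
h q[0];
cx q[0],q[1];
"""
feats = structural_features(example)
```

For the two-gate example above, the gate distribution is uniform over {h, cx}, so gate_entropy is exactly 1 bit.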

🤖 3. Technical Overview: The ML Pipeline

To handle the non-linear nature of quantum data, we use:

  • HistGradientBoostingRegressor: A high-performance boosting algorithm designed for large tabular data.
  • MultiOutput Wrapper: Ensures all three axes ($X, Y, Z$) are learned in a unified context.
  • Robust Preprocessing: Median imputation for missing values and Standard Scaling for feature parity.

📊 4. Interpreting the Analytics

A. Physics Emulation Plot (Crucial!)

  • Gray Points: Actual simulated noisy values.
  • Red Points: ML-predicted noisy values ($Ideal + Predicted Error$).
  • Insight: If red points follow the trend of gray points, the model has successfully "learned" the physics of the noise channel without a simulator.
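The red points are reconstructed from the model output rather than from any simulation. A minimal sketch, with invented numbers standing in for real expectation values:

```python
import numpy as np

# The model predicts the error shift, so the emulated noisy value is
# simply ideal + predicted error. All values below are hypothetical.
ideal_expvals = np.array([0.90, 0.40, -0.20])      # ideal <Z> per circuit
predicted_errors = np.array([-0.08, -0.05, 0.03])  # model output per circuit

emulated_noisy = ideal_expvals + predicted_errors  # plotted as red points
```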

B. Why is my $R^2$ near Zero?

Even with 200,000+ samples, structural metrics alone (like depth or entropy) provide a "complexity baseline" but do not capture specific gate rotation angles.

  1. The Result: Standard regressors (Random Forest/XGBoost) will hit a performance ceiling near $R^2 \approx 0$, as they see the circuit's skeleton but not its parameters.

  2. The Opportunity: This makes QSBench the perfect playground for Graph Neural Networks (GNNs) and Geometric Deep Learning, where models can integrate gate parameters as node features to break this "structural ceiling."


🧪 5. Experimentation Tips

  • Isolate Topology: Select only adj_* features to see how much qubit mapping alone affects noise.
  • The "CX" Test: Remove cx_count and see how much the MAE increases. This quantifies the "cost" of entanglement in your noise model.
  • Iteration Scaling: Increase Max Iterations (400 -> 800) to see if the model can find deeper patterns in the demo data.

🔬 6. Key Insight

Noise is not purely random. To a large extent it is a deterministic function of circuit complexity and hardware topology. Even without a quantum simulator, ML can "guess" the fidelity of a result just by looking at the circuit diagram.


🔗 7. Project Resources