DaMsTaR committed on
Commit 7ba99c3 · verified · 1 Parent(s): 430420a

Upload 9 files
01_data_generation.ipynb ADDED
02_stage1_training.ipynb ADDED
03_stage2_and_evaluation.ipynb ADDED
04_fitting_pipeline.ipynb ADDED
05_transmon_qubit.ipynb ADDED
06_data_test.ipynb ADDED
Fluxonium ML Replication Guide.md ADDED

# Step-by-Step Guide to Replicating "Automatic Characterization of Fluxonium Superconducting Qubits Parameters with Deep Transfer Learning"

## Overview

This guide provides a complete, actionable roadmap for replicating the results of Kung et al. (arXiv:2503.12099), which presents a machine learning approach to automatic characterization of the fluxonium superconducting qubit parameters (E_J, E_C, E_L) using a Swin Transformer V2 model trained via deep transfer learning. The paper reports ~95.6% average accuracy across the three energy parameters. The authors state that source code will be released on GitHub after publication; since it may not yet be available, this guide reconstructs every detail needed for a from-scratch replication.

***
## Phase 1: Environment Setup

### Hardware Requirements

- A GPU with at least 8 GB VRAM (NVIDIA RTX 3060 or better recommended). Swin Transformer V2 Tiny has ~28M parameters and is relatively lightweight.[^1]
- Sufficient CPU/RAM for generating 15,000+ spectrum simulations via QuTiP.

### Software Dependencies

Install the following Python packages:

- **PyTorch** (≥1.12) with CUDA support
- **torchvision** — provides pre-built Swin Transformer V2 models (`swin_v2_t`, `swin_v2_b`)[^2][^1]
- **timm** (PyTorch Image Models) — alternative source for `swinv2_tiny_window8_256` and other variants[^3]
- **QuTiP** — Quantum Toolbox in Python for Hamiltonian diagonalization and spectrum computation[^4]
- **scqubits** — optional but helpful for fluxonium simulation and validation[^5][^6]
- **prodigyopt** — the Prodigy optimizer (`pip install prodigyopt`)[^7][^8]
- **scipy** — for `find_peaks_cwt` peak detection[^9]
- **numpy, matplotlib, PIL/Pillow**

```bash
pip install torch torchvision timm qutip scqubits prodigyopt scipy numpy matplotlib pillow
```

***
## Phase 2: Understanding the Fluxonium Hamiltonian

### The Model Hamiltonian

The fluxonium qubit Hamiltonian is:

H = 4 * E_C * n^2 - E_J * cos(phi + phi_ext) + 0.5 * E_L * phi^2

where:
- **E_C** = charging energy (set by the capacitance)
- **E_J** = Josephson energy
- **E_L** = inductive energy
- **phi** = phase operator across the inductor
- **n** = charge (number) operator conjugate to phi
- **phi_ext** = external magnetic flux (varied over one flux-quantum period)

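As a cross-check on these conventions, the Hamiltonian can be diagonalized directly in the harmonic-oscillator basis of its quadratic part. The following NumPy/SciPy sketch (function name is mine) mirrors what scqubits does internally with its `cutoff` parameter:

```python
import numpy as np
from scipy.linalg import cosm, eigh

def fluxonium_evals(EC, EL, EJ, phi_ext, n_levels=6, cutoff=110):
    """Lowest eigenenergies (GHz) of H = 4*EC*n^2 - EJ*cos(phi + phi_ext) + EL*phi^2/2.

    Works in the harmonic-oscillator basis of the quadratic part, for which
    omega = sqrt(8*EC*EL), phi_zpf = (2*EC/EL)**0.25, n_zpf = (EL/(32*EC))**0.25.
    """
    a = np.diag(np.sqrt(np.arange(1, cutoff)), k=1)    # annihilation operator
    phi = (2.0 * EC / EL) ** 0.25 * (a + a.T)          # phase operator
    n = 1j * (EL / (32.0 * EC)) ** 0.25 * (a.T - a)    # charge operator
    H = (4.0 * EC * (n @ n)
         - EJ * cosm(phi + phi_ext * np.eye(cutoff))
         + 0.5 * EL * (phi @ phi))
    return np.sort(eigh(H, eigvals_only=True).real)[:n_levels]
```

A quick sanity check on the operator conventions: for E_J = 0 the spectrum reduces to a harmonic ladder with level spacing sqrt(8 * E_C * E_L).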
### Parameter Ranges

The training data spans these experimentally relevant ranges:

| Parameter | Range (GHz) | Span |
|-----------|-------------|------|
| E_C | 0.5 – 3.0 | 2.5 GHz |
| E_L | 0.1 – 2.0 | 1.9 GHz |
| E_J | 2.0 – 10.0 | 8.0 GHz |

### Transitions Considered

The energy transitions used are **0→1, 0→2, 0→3, 0→4, 0→5, 1→2, and 1→3**, all within the frequency window of **4.0–8.0 GHz**.

***
## Phase 3: Generating Training Data

This is the most computationally intensive phase. There are two distinct datasets to generate.

### Dataset 1: Pure Spectrum Dataset (N = 15,392)

This dataset contains only the bare transition energies (no coupling/readout effects), making it fast to compute.

**For each parameter combination (E_C, E_L, E_J):**

1. **Sample parameters** randomly or on a grid within the ranges above. The paper uses 15,392 unique combinations.
2. **Sweep phi_ext** with 256 points per flux period (0 to 2π).
3. **Diagonalize the Hamiltonian** at each flux point. Use `scqubits.Fluxonium` or build the Hamiltonian matrix directly in QuTiP with a sufficiently large cutoff (typically 110 states).[^5]
4. **Compute transition energies** between all relevant level pairs (0–1, 0–2, ..., 1–3).
5. **Filter transitions** to retain only those within 4.0–8.0 GHz.
6. **Render as an image**: Plot each valid transition point as a black dot on a 2D image (x-axis = phi_ext, y-axis = frequency in GHz). The image serves as input to the Swin Transformer.

**Example code sketch for a single spectrum:**

```python
import numpy as np
import scqubits as scq

def generate_pure_spectrum(EC, EL, EJ, n_flux=256, cutoff=110):
    """Collect (flux, frequency) points for the allowed transitions."""
    fluxonium = scq.Fluxonium(EJ=EJ, EC=EC, EL=EL, flux=0.0, cutoff=cutoff)
    flux_vals = np.linspace(0.0, 1.0, n_flux)  # in units of Phi_0

    transitions = [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5), (1, 2), (1, 3)]
    spectrum_points = []

    for flux in flux_vals:
        fluxonium.flux = flux
        evals = fluxonium.eigenvals(evals_count=6)
        for (i, j) in transitions:
            if j < len(evals):
                freq = evals[j] - evals[i]
                if 4.0 <= freq <= 8.0:  # keep only the measured window
                    spectrum_points.append((flux, freq))

    return spectrum_points
```

**Image generation**: Convert each spectrum into a fixed-resolution image (e.g., 256×256 pixels); Swin Transformer V2 Tiny expects 256×256 input. Plot flux on the x-axis and frequency on the y-axis, with black dots on a white background. Save as PNG or convert directly to a tensor.[^1]

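The rendering step can be sketched with plain NumPy (function name and dot size are my own choices; the array can be saved with Pillow or stacked to 3 channels for the network):

```python
import numpy as np

def spectrum_to_image(points, size=256, f_min=4.0, f_max=8.0, dot_radius=1):
    """Rasterize (flux, freq) points as black dots on a white size x size image.

    Assumes flux in [0, 1] (units of Phi_0) and frequency in [f_min, f_max] GHz;
    returns a uint8 array with 255 = white background, 0 = black dot.
    """
    img = np.full((size, size), 255, dtype=np.uint8)
    for flux, freq in points:
        x = int(round(flux * (size - 1)))
        y = int(round((f_max - freq) / (f_max - f_min) * (size - 1)))  # row 0 = f_max
        img[max(0, y - dot_radius):y + dot_radius + 1,
            max(0, x - dot_radius):x + dot_radius + 1] = 0
    return img
```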
### Dataset 2: Dispersive Readout Dataset (N = 469)

This dataset simulates a more realistic measurement scenario that includes dispersive readout effects:

1. **Readout resonator** at 6.00 GHz with a 7 MHz linewidth and coupling strength g = 100 MHz.
2. **Compute the dispersive shift** for each transition using second-order perturbation theory.
3. **Calculate the voltage change** in the readout response caused by the dispersive shift for a saturation drive at every transition and flux value.
4. **Threshold**: Exclude data points where the readout voltage change is < 10% of the maximum magnitude at readout resonance.
5. **Render as an image** as for the pure spectrum, but with transition points carrying varying intensities based on signal magnitude.

This computation is >100× slower per spectrum than the pure dataset, which is why only 469 samples are used. The dispersive readout dataset is critical for the transfer learning step.

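Step 2 can be sketched with the standard second-order perturbative formula (my own helper, not the paper's code; it takes precomputed qubit eigenenergies and charge matrix elements as inputs):

```python
import numpy as np

def dispersive_shift(evals, n_mat, level, omega_r=6.0, g=0.1):
    """Second-order dispersive shift (GHz) of the readout resonator when the
    qubit occupies `level`.

    evals: qubit eigenenergies (GHz); n_mat[i, j] = <i|n|j> charge matrix
    elements; omega_r and g in GHz (paper values: 6.0 GHz and 0.1 GHz).
    chi_i = sum_j g^2 |n_ij|^2 * 2*w_ij / (w_ij^2 - omega_r^2), w_ij = E_i - E_j.
    """
    chi = 0.0
    for j in range(len(evals)):
        if j == level:
            continue
        w_ij = evals[level] - evals[j]
        chi += g**2 * abs(n_mat[level, j])**2 * 2.0 * w_ij / (w_ij**2 - omega_r**2)
    return chi
```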
***

## Phase 4: Model Architecture — Swin Transformer V2

### Model Selection

The paper uses **Swin Transformer V2**, chosen for its lightweight architecture compared to ResNet and DenseNet alternatives. The exact variant isn't specified, but **Swin V2 Tiny** is the most practical choice:[^10][^11]

| Property | Swin V2 Tiny |
|----------|-------------|
| Parameters | ~28.3M[^1] |
| Input resolution | 256 × 256 |
| GFLOPs | 5.94[^1] |
| Embed dim | 96 |
| Depths | [2, 2, 6, 2] |
| Num heads | [3, 6, 12, 24] |
| Window size | 8 |

### Loading the Model

```python
import torch.nn as nn
import torchvision.models as models

# Load Swin V2 Tiny pretrained on ImageNet
model = models.swin_v2_t(weights=models.Swin_V2_T_Weights.IMAGENET1K_V1)

# Replace the classification head with a regression head (3 outputs: EC, EL, EJ)
model.head = nn.Linear(model.head.in_features, 3)
```

Alternatively, using `timm`:

```python
import timm

model = timm.create_model('swinv2_tiny_window8_256', pretrained=True, num_classes=3)
```

### Input Preprocessing

The spectrum images should be converted to 3-channel (RGB) tensors of size 256×256. Apply the standard ImageNet normalization (mean = [0.485, 0.456, 0.406], std = [0.229, 0.224, 0.225]), since the model is pretrained on ImageNet.[^2][^1]

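A NumPy-only sketch of this preprocessing (helper name is mine; torchvision's `transforms.Normalize` does the same on tensors):

```python
import numpy as np

# Standard ImageNet statistics
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406]).reshape(3, 1, 1)
IMAGENET_STD = np.array([0.229, 0.224, 0.225]).reshape(3, 1, 1)

def preprocess(img_uint8):
    """Grayscale uint8 image (H, W) -> normalized float array (3, H, W)."""
    x = img_uint8.astype(np.float32) / 255.0   # scale to [0, 1]
    x = np.repeat(x[None, :, :], 3, axis=0)    # replicate to 3 channels
    return (x - IMAGENET_MEAN) / IMAGENET_STD
```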
***

## Phase 5: Two-Stage Transfer Learning Training

This is the core methodological contribution. Training proceeds in two stages.

### Stage 1: Pre-train on the Pure Spectrum Dataset

- **Dataset**: 15,392 pure spectrum images
- **Labels**: Corresponding [E_C, E_L, E_J] vectors (continuous values)
- **Loss function**: Mean squared error (MSE):

  Loss = (1/N) * Σ_i ‖F_NN(S_E^i) - E^i‖^2

- **Optimizer**: Prodigy with the default lr = 1.0. Prodigy is parameter-free and adaptively estimates the learning rate.[^8][^7]

```python
from prodigyopt import Prodigy

optimizer = Prodigy(model.parameters(), lr=1.0, weight_decay=0.01)
```

- **Training details**: Train until convergence, using a validation split (~10–15%) of the pure dataset to monitor overfitting. The paper does not specify epoch counts, so train until the validation loss plateaus (likely 50–200 epochs depending on batch size).
- **Batch size**: Not explicitly stated; start with 32 or 64.
- **Scheduler**: Cosine annealing is recommended with Prodigy.[^7]

```python
# total_iterations = epochs * len(train_loader)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=total_iterations)
```

### Stage 2: Fine-tune on the Dispersive Readout Dataset

- **Dataset**: 469 dispersive readout spectrum images
- **Initialization**: Load all weights from Stage 1
- **Loss function**: Same MSE loss
- **Optimizer**: Prodigy (reinitialized for the new stage)
- **Training**: Fine-tune the entire model on the smaller, more realistic dataset. This transfer learning step is critical — the pure-spectrum pre-training provides a strong initialization, and the dispersive dataset aligns the model with experimental conditions.
- **Caution**: With only 469 samples, overfitting is a risk. Use aggressive data augmentation (random horizontal flips, small rotations, slight noise injection) and early stopping.

***
## Phase 6: Evaluation and Validation

### Test Dataset

Generate **512 test spectra** with non-repeating parameter combinations, distinct from the training data but within the same parameter ranges.

### Accuracy Metric

The paper defines accuracy per parameter as:

Acc(E_ν) = (1/N_test) * Σ_i (1 - |E_ν^i - E_ν^{true,i}| / R(E_ν))

where R(E_ν) is the training range (2.5 GHz for E_C, 1.9 GHz for E_L, 8.0 GHz for E_J). This differs from standard classification accuracy — it measures how close predictions are relative to the parameter range.

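In code, the metric is a one-liner (sketch; the helper name is mine):

```python
import numpy as np

def parameter_accuracy(pred, true, param_range):
    """Acc(E_nu) = mean over the test set of 1 - |pred - true| / range."""
    pred, true = np.asarray(pred), np.asarray(true)
    return float(np.mean(1.0 - np.abs(pred - true) / param_range))
```

For intuition: predictions off by 0.25 GHz on average over a 2.5 GHz range give Acc = 90%.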
221
+ ### Target Accuracies
222
+
223
+ | Parameter | Target Accuracy | Implied Average Deviation |
224
+ |-----------|----------------|--------------------------|
225
+ | E_C | 94.5% | 0.125 GHz |
226
+ | E_L | 97.1% | 0.095 GHz |
227
+ | E_J | 95.3% | 0.4 GHz |
228
+ | **Overall** | **95.6%** | — |
229
+
230
+ These are the benchmarks from the paper.
231
+
### Error and Cost Metrics

The combined error function:

Error = 1 - (1/3) * Σ_ν Acc(E_ν), for ν = C, L, J

The cost function measures spectral fit quality:

Cost = (1/N) * Σ_i (f(phi_i) - f_i)^2

where f(phi_i) is the transition frequency calculated from the predicted parameters and f_i is the measured frequency at flux phi_i.

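Both metrics as small helpers (names are mine):

```python
import numpy as np

def combined_error(acc_c, acc_l, acc_j):
    """Error = 1 - (1/3) * (Acc(E_C) + Acc(E_L) + Acc(E_J))."""
    return 1.0 - (acc_c + acc_l + acc_j) / 3.0

def spectrum_cost(f_model, f_measured):
    """Cost = mean squared deviation between model and measured frequencies (GHz^2)."""
    return float(np.mean((np.asarray(f_model) - np.asarray(f_measured)) ** 2))
```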
***

## Phase 7: Automatic Fitting Pipeline (End-to-End)

Once the ML model is trained, the full automatic characterization pipeline works as follows:

### Step 1: Preprocess Experimental Data

- Apply a **band-pass filter**: keep data points whose signal magnitude is > 2.5 standard deviations above the background average and < 20% of the maximum measured magnitude.
- Use **`scipy.signal.find_peaks_cwt`** to detect transition spectrum peaks at magnitude extrema.[^9]

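A sketch of the peak-extraction step on a 2D magnitude map with one column per flux value (the helper name and data layout are assumptions):

```python
import numpy as np
from scipy.signal import find_peaks_cwt

def extract_peaks(magnitude, freqs, widths=np.arange(1, 6)):
    """Return (flux_index, frequency) candidates from a [n_freq, n_flux] map."""
    points = []
    for k in range(magnitude.shape[1]):        # one column per flux value
        for idx in find_peaks_cwt(magnitude[:, k], widths):
            points.append((k, freqs[idx]))
    return points
```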
### Step 2: ML Initial Guess

- Feed the preprocessed spectrum image into the trained Swin Transformer V2 model.
- Obtain the initial guesses E_C^0, E_L^0, E_J^0.

### Step 3: Transition Identification

- Simulate a spectrum using the ML-predicted parameters.
- Label each experimental data point by associating it with the nearest simulated transition, provided that transition is within **0.3 GHz**.
- Exclude points that are far from any simulated transition or fall in regions where multiple transitions overlap within 0.3 GHz.

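The labeling rules above can be sketched as (helper name and array layout are mine):

```python
import numpy as np

def label_points(exp_points, sim_freqs, threshold=0.3):
    """Assign each experimental (flux_index, freq) point to the nearest
    simulated transition.

    sim_freqs: array [n_flux, n_transitions] of simulated frequencies (use
    np.nan where a transition was filtered out). Points farther than
    `threshold` GHz from every transition, or with two transitions inside
    the threshold (ambiguous), are dropped.
    """
    labeled = []
    for k, f in exp_points:
        dist = np.abs(sim_freqs[k] - f)
        order = np.argsort(dist)              # NaNs sort to the end
        ambiguous = len(dist) > 1 and dist[order[1]] <= threshold
        if dist[order[0]] <= threshold and not ambiguous:
            labeled.append((k, f, int(order[0])))
    return labeled
```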
### Step 4: Least-Squares Fitting

- Use the ML predictions as initial guesses for a least-squares fit (e.g., `scipy.optimize.least_squares` or `scipy.optimize.curve_fit`).
- Fit the labeled data points to the fluxonium Hamiltonian model.
- Constrain the fit to **5 iterations**, as in the paper's benchmarks.
- Output the final refined values of E_C, E_L, E_J.

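The refinement step as a sketch around `scipy.optimize.least_squares` (the `freq_model` callback wrapping the spectrum simulator is an assumption, and the 5-iteration cap is approximated via `max_nfev`):

```python
import numpy as np
from scipy.optimize import least_squares

def refine_parameters(freq_model, labeled_points, p0, max_iter=5):
    """Refine [EC, EL, EJ] from the ML initial guess p0.

    freq_model(params, flux, transition_idx) -> frequency in GHz.
    labeled_points: (flux, freq, transition_idx) triples from the labeling step.
    """
    def residuals(params):
        return np.array([freq_model(params, flux, tr) - f
                         for flux, f, tr in labeled_points])

    result = least_squares(residuals, np.asarray(p0, dtype=float),
                           max_nfev=max_iter * (len(p0) + 1))
    return result.x
```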
***

## Phase 8: Reproducing Key Results

### Result 1: Prediction Accuracy (Figure 4)

- Run inference on the 512 test spectra.
- Plot predicted vs. true values for each of E_C, E_L, E_J.
- Compute the average accuracy using the custom metric. Target: ~95.6% overall.

### Result 2: Error and Cost Landscapes (Figures 5–6)

- Choose a test case, e.g., (E_C = 1.28, E_J = 6.50, E_L = 0.70) GHz.
- Generate a 2D grid of initial parameter guesses.
- For each initial guess, run 5 fitting iterations and compute the Error and Cost.
- Plot heatmaps showing that the ML prediction falls in the darkest (lowest error/cost) region.

### Result 3: ML vs. Random Initial Guess (Table 1)

- For 60 parameter sets, compare:
  - 512 random initial guesses → 5 fitting iterations → average Error and Cost
  - ML initial guess → 5 fitting iterations → Error and Cost

| Method | Avg Error | Std Error | Avg Cost | Std Cost |
|--------|-----------|-----------|----------|----------|
| Random initial values | 0.218 | 0.098 | 0.146 | 0.130 |
| ML prediction | 0.037 | 0.088 | 0.024 | 0.083 |

The ML approach should yield nearly an order-of-magnitude improvement.

### Result 4: Real Experimental Data (Figure 7)

- If real fluxonium measurement data is available, apply the full pipeline.
- The paper demonstrates successful characterization with only partial spectra (4.0–5.9 GHz instead of 4.0–8.0 GHz) and even with half-period symmetrized data.

***

## Phase 9: Practical Tips and Troubleshooting

### Data Generation Optimization

- **Parallelization**: Use Python's `multiprocessing` to generate spectra in parallel; each spectrum is independent.
- **Caching**: Save computed eigenvalues to disk (HDF5 or NumPy arrays) so you don't recompute them if training is restarted.
- **scqubits cutoff**: Use cutoff = 110 for the fluxonium Hilbert space. Lower cutoffs may miss higher transitions; higher cutoffs waste computation time.[^5]

### Image Representation

- The paper plots spectra as black dots on a white background. Ensure consistent resolution (256×256) and normalization.
- Use a fixed pixel grid: map phi_ext ∈ [0, 2π] to x ∈ [0, 255] and frequency ∈ [4.0, 8.0] GHz to y ∈ [0, 255].
- Each dot should be at least 1–2 pixels wide for visibility.

### Training Stability

- Prodigy with lr = 1.0 is recommended. If training is unstable, reduce `d_coef` to 0.5.[^7]
- For the fine-tuning stage (469 samples), consider freezing the early layers of the Swin Transformer and fine-tuning only the later layers and the regression head.
- Monitor for overfitting by tracking the validation loss closely in Stage 2.

### Label Normalization

- Normalize target values to [0, 1] by dividing by the parameter range (e.g., E_C_normalized = (E_C - 0.5) / 2.5). This helps the MSE loss treat all three parameters equally.[^16]
- At inference time, denormalize predictions back to physical units.

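The normalization bullets as a sketch (ranges taken from the Phase 2 table; helper names are mine):

```python
import numpy as np

# (min, span) in GHz for [E_C, E_L, E_J], from the training ranges
PARAM_MIN = np.array([0.5, 0.1, 2.0])
PARAM_SPAN = np.array([2.5, 1.9, 8.0])

def normalize_labels(params):
    """[E_C, E_L, E_J] in GHz -> [0, 1] per parameter."""
    return (np.asarray(params) - PARAM_MIN) / PARAM_SPAN

def denormalize_labels(norm):
    """Network outputs -> GHz."""
    return np.asarray(norm) * PARAM_SPAN + PARAM_MIN
```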
***

## Complete Replication Checklist

| Step | Task | Status |
|------|------|--------|
| 1 | Install all dependencies (PyTorch, QuTiP, scqubits, prodigyopt, timm) | ☐ |
| 2 | Implement the fluxonium Hamiltonian spectrum generator | ☐ |
| 3 | Generate 15,392 pure spectrum images + labels | ☐ |
| 4 | Generate 469 dispersive readout spectrum images + labels | ☐ |
| 5 | Generate 512 test spectrum images + labels | ☐ |
| 6 | Set up the Swin V2 Tiny model with a 3-output regression head | ☐ |
| 7 | Stage 1: Train on the pure spectrum dataset with the Prodigy optimizer | ☐ |
| 8 | Stage 2: Fine-tune on the dispersive readout dataset | ☐ |
| 9 | Evaluate on the test set — target ~95.6% accuracy | ☐ |
| 10 | Implement the automatic fitting pipeline (filter → ML → label → fit) | ☐ |
| 11 | Reproduce the Error/Cost comparison (Table 1) | ☐ |
| 12 | (Optional) Apply to real experimental data | ☐ |

***
## Key References and Resources

- **Paper**: arXiv:2503.12099 — Kung et al., "Automatic Characterization of Fluxonium Superconducting Qubits Parameters with Deep Transfer Learning"
- **Swin Transformer V2**: Liu et al., CVPR 2022 — architecture details and pretrained weights[^10]
- **Prodigy optimizer**: Mishchenko & Defazio, arXiv:2306.06101 — parameter-free adaptive optimizer[^8]
- **scqubits**: Groszkowski & Koch, Quantum 5, 583 (2021) — Python package for superconducting qubit simulation[^6]
- **QuTiP**: Quantum Toolbox in Python — used for Hamiltonian diagonalization[^4]
- **torchvision SwinV2**: Official PyTorch implementation with ImageNet-pretrained weights[^1]

---
## References

1. [swin_v2_t — Torchvision main documentation](https://docs.pytorch.org/vision/main/models/generated/torchvision.models.swin_v2_t.html)
2. [swin_v2_b — Torchvision main documentation](https://docs.pytorch.org/vision/main/models/generated/torchvision.models.swin_v2_b.html)
3. [Loading a pre-trained SwinV2 transformer and modifying the architecture — huggingface pytorch-image-models, Discussion #1843](https://github.com/huggingface/pytorch-image-models/discussions/1843)
4. [Accelerate Qubit Research with NVIDIA cuQuantum Integrations in QuTiP and scqubits](https://developer.nvidia.com/blog/accelerate-qubit-research-with-nvidia-cuquantum-integrations-in-qutip-and-scqubits/)
5. [Fluxonium Qubit — scqubits documentation](https://scqubits.readthedocs.io/en/v2.0_a/guide/qubits/fluxonium.html)
6. [Scqubits: a Python package for superconducting qubits](https://arxiv.org/abs/2107.08552)
7. [prodigyopt — PyPI](https://pypi.org/project/prodigyopt/)
8. [The Prodigy optimizer and its variants for training neural networks](https://github.com/konstmish/prodigy)
9. [find_peaks_cwt — SciPy Manual](https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.find_peaks_cwt.html)
10. [Swin Transformer V2: Scaling Up Capacity and Resolution](https://ieeexplore.ieee.org/document/9879380/)
11. [Swin Transformer V2: Advancing Computer Vision with Scalable Neural Architectures](https://www.raulartigues.com/en/post/swin-transformer-v2-advancing-computer-vision-with-scalable-neural-architectures)
12. [SwinCNet: leveraging Swin Transformer V2 and CNN for underwater image restoration](https://www.frontiersin.org/articles/10.3389/fmars.2025.1523729/full)
13. [Retinal vessel segmentation using a Swin Transformer-based encoder-decoder architecture](https://link.springer.com/10.1007/s11760-025-05089-1)
14. [DUSFormer: Dual-Swin Transformer V2 Aggregate Network for Polyp Segmentation](https://ieeexplore.ieee.org/document/10387670/)
15. [Leveraging Swin Transformer for Local-to-Global Weakly Supervised Semantic Segmentation](https://arxiv.org/pdf/2401.17828.pdf)
16. [An Image Denoising Method Based on Swin Transformer V2 and U-Net Architecture](https://ieeexplore.ieee.org/document/10807930/)
README.md ADDED

# Quantum-ML
fluxonium_validation_one_sample.csv ADDED
1
+ flux,f01_physical,f01_generated,abs_error
2
+ 0.0,6.425163508243808,6.437397628799654,0.012234120555845607
3
+ 0.01,6.42352852345965,6.381774083760494,0.041754439699156265
4
+ 0.02,6.418618309617855,6.448748259695395,0.030129950077539824
5
+ 0.03,6.410417084759853,6.448179924112928,0.037762839353074895
6
+ 0.04,6.398898536037083,6.320566203218969,0.07833233281811403
7
+ 0.05,6.384025812588874,6.331744460046562,0.05228135254231159
8
+ 0.06,6.365751530392332,6.370884209221877,0.005132678829545156
9
+ 0.07,6.344017804617133,6.331320944950644,0.012696859666489146
10
+ 0.08,6.3187563304278145,6.3180817788518056,0.0006745515760089305
11
+ 0.09,6.289888539233692,6.25563958194591,0.03424895728778221
12
+ 0.1,6.25732586411557,6.2926329130335645,0.03530704891799452
13
+ 0.11,6.220970155484,6.25219781201402,0.03122765653002002
14
+ 0.12,6.180714295734788,6.183365369691658,0.0026510739568701425
15
+ 0.13,6.1364430693532865,6.181700804274734,0.04525773492144758
16
+ 0.14,6.088034351913282,6.106804437458552,0.018770085545270376
17
+ 0.15,6.0353606867822,6.000860856342387,0.034499830439813195
18
+ 0.16,5.978291320795938,5.9930963378011,0.014805017005162568
19
+ 0.17,5.916694768168125,5.878196481997561,0.03849828617056428
20
+ 0.18,5.850441963708047,5.885710964373099,0.03526900066505245
21
+ 0.19,5.77941005039397,5.777405569337777,0.002004481056193441
22
+ 0.2,5.703486821218059,5.696064761241395,0.007422059976663675
23
+ 0.21,5.622575800693071,5.595237083273659,0.027338717419412184
24
+ 0.22,5.536601908615457,5.585685859321129,0.04908395070567195
25
+ 0.23,5.44551760078898,5.439313379107066,0.006204221681914035
26
+ 0.24,5.349309333757697,5.332112351501273,0.017196982256423965
27
+ 0.25,5.248004160551023,5.23386631073997,0.014137849811053371
28
+ 0.26,5.141676240449154,5.163047982244172,0.021371741795017662
29
+ 0.27,5.030453045867871,5.045125301008447,0.01467225514057624
30
+ 0.28,4.914521079268568,4.931091927649183,0.0165708483806144
31
+ 0.29,4.79413097385096,4.811428055107568,0.017297081256607783
32
+ 0.3,4.669601939440017,4.755587191590312,0.08598525215029529
33
+ 0.31,4.541325619532735,4.525008416997969,0.016317202534765762
34
+ 0.32,4.4097695325001505,4.389203441139709,0.02056609136044152
35
+ 0.33,4.2754803625695725,4.242808109112474,0.03267225345709868
36
+ 0.34,4.139087427255185,4.163818454875238,0.02473102762005297
37
+ 0.35000000000000003,4.001306661143093,4.046633897622543,0.04532723647945058
38
+ 0.36,3.8629454058473467,3.858370516461083,0.004574889386263603
39
+ 0.37,3.724908165759098,3.6911766281858926,0.03373153757320546
40
+ 0.38,3.5882032579420233,3.5551010682094177,0.03310218973260559
41
+ 0.39,3.4539499246596552,3.480070648202377,0.02612072354272188
42
+ 0.4,3.323384955738428,3.3532259516598444,0.029840995921416535
43
+ 0.41000000000000003,3.1978671564876935,3.219674318738733,0.021807162251039625
44
+ 0.42,3.0788770947478112,3.0521574701156218,0.026719624632189465
45
+ 0.43,2.968008541861015,2.97732961312191,0.009321071260894875
46
+ 0.44,2.866947105311197,2.8716319370814882,0.004684831770291442
47
+ 0.45,2.777431172770884,2.786211326015748,0.008780153244864142
48
+ 0.46,2.7011911230764225,2.7361782158046775,0.034987092728254954
49
+ 0.47000000000000003,2.6398655697776823,2.648842732796127,0.008977163018444756
50
+ 0.48,2.5948985945820935,2.6221563721386185,0.02725777755652503
51
+ 0.49,2.567428870836973,2.570142110553779,0.0027132397168063704
52
+ 0.5,2.558188090753175,2.5697959782449344,0.011607887491759339
53
+ 0.51,2.56742887083697,2.5927745333335346,0.025345662496564447
54
+ 0.52,2.594898594582096,2.53639508049246,0.058503514089636166
55
+ 0.53,2.639865569777613,2.627031053897175,0.012834515880438158
56
+ 0.54,2.701191123076481,2.6823060780917083,0.018885044984772836
57
+ 0.55,2.7774311727708847,2.751780793664577,0.025650379106307852
58
+ 0.56,2.8669471053112785,2.855900387896466,0.011046717414812335
59
+ 0.5700000000000001,2.9680085418609643,3.0280291099246073,0.060020568063642976
60
+ 0.58,3.078877094747741,3.0441147431955082,0.03476235155223284
61
+ 0.59,3.1978671564876926,3.2367426738404923,0.038875517352799704
62
+ 0.6,3.3233849557385104,3.255819226628977,0.06756572910953329
63
+ 0.61,3.453949924659657,3.440504587652307,0.013445337007349956
64
+ 0.62,3.588203257942026,3.5947376491908942,0.006534391248868232
65
+ 0.63,3.7249081657590954,3.7484444725527877,0.02353630679369223
66
+ 0.64,3.862945405847344,3.8915005223728754,0.028555116525531332
67
+ 0.65,4.001306661143093,4.033158849165846,0.03185218802275358
68
+ 0.66,4.1390874272551,4.125086424823035,0.014001002432064702
69
+ 0.67,4.2754803625695725,4.256917348066978,0.018563014502594122
70
+ 0.68,4.4097695325001505,4.444216503355105,0.03444697085495463
71
+ 0.6900000000000001,4.541325619532735,4.533644920520581,0.0076806990121536245
72
+ 0.7000000000000001,4.669601939440012,4.618384264724381,0.05121767471563121
73
+ 0.71,4.79413097385096,4.748630497108106,0.045500476742853735
74
+ 0.72,4.91452107926857,4.877605885274992,0.03691519399357812
75
+ 0.73,5.030453045867816,5.05041360890317,0.01996056303535454
76
+ 0.74,5.1416762404492085,5.147394507461831,0.005718267012622569
77
+ 0.75,5.248004160550968,5.2757265352551315,0.0277223747041635
78
+ 0.76,5.3493093337577395,5.332155518857381,0.017153814900358277
79
+ 0.77,5.445517600788897,5.451882828806578,0.006365228017681801
80
+ 0.78,5.536601908615451,5.561718808214723,0.025116899599271214
81
+ 0.79,5.6225758006930295,5.610155811424555,0.012419989268474652
82
+ 0.8,5.703486821218055,5.721825941977896,0.018339120759840455
83
+ 0.81,5.7794100503939685,5.752834310797826,0.02657573959614279
84
+ 0.8200000000000001,5.85044196370805,5.83586567369267,0.014576290015380522
85
+ 0.8300000000000001,5.916694768168124,5.901368330215668,0.015326437952456473
86
+ 0.84,5.978291320795938,5.930279419390916,0.04801190140502154
87
+ 0.85,6.035360686782202,6.054912200081395,0.019551513299193246
88
+ 0.86,6.088034351913363,6.069188264178917,0.018846087734445405
89
+ 0.87,6.1364430693532865,6.1369446971415185,0.0005016277882319287
90
+ 0.88,6.180714295734734,6.200015847805969,0.019301552071235406
91
+ 0.89,6.2209701554839985,6.238897986257043,0.017927830773044384
92
+ 0.9,6.2573258641155665,6.284040486235855,0.026714622120288745
93
+ 0.91,6.289888539233692,6.285934434358901,0.003954104874790865
94
+ 0.92,6.318756330427871,6.301761278542997,0.016995051884874712
95
+ 0.93,6.344017804617188,6.340817189136391,0.0032006154807966425
96
+ 0.9400000000000001,6.3657515303923295,6.298006549048625,0.06774498134370432
97
+ 0.9500000000000001,6.384025812588874,6.325925529993099,0.05810028259577482
98
+ 0.96,6.398898536037082,6.345793319454688,0.0531052165823942
99
+ 0.97,6.410417084759766,6.3703785089023075,0.040038575857458625
100
+ 0.98,6.418618309617854,6.434668890336192,0.016050580718337315
101
+ 0.99,6.423528523459574,6.3871743422860225,0.03635418117355105
102
+ 1.0,6.425163508243822,6.40998061702042,0.015182891223401995