
Quantum Zeno Dragging on IBM Quantum Hardware

This dataset contains experimental results from quantum Zeno dragging experiments conducted on IBM Quantum superconducting processors. The Zeno dragging protocol transfers quantum states between basis states using sequences of projective measurements rather than unitary gates, exploiting the quantum Zeno effect to guide state evolution through measurement backaction. Unlike conventional quantum gates that rotate a qubit's state through coherent Hamiltonian evolution, Zeno dragging achieves the same transformation by repeatedly measuring the qubit in a slowly-rotating basis, post-selecting on trajectories where each measurement yields the "correct" outcome. This measurement-based approach to quantum control has attracted theoretical interest because it offers a fundamentally different route to implementing quantum operations, one that may have different error characteristics than unitary gates and that exploits the act of measurement itself as a computational resource.

Background

The quantum Zeno effect, named after the ancient Greek philosopher's paradox about motion, describes the phenomenon whereby frequent measurement of a quantum system inhibits its evolution by repeatedly collapsing the state to a measurement eigenstate. In the limiting case of continuous measurement, a quantum system becomes "frozen" in its initial state because each infinitesimally-spaced measurement projects it back before any evolution can occur. This effect was first predicted theoretically in the 1970s and has since been observed in numerous experimental platforms including trapped ions, superconducting qubits, and optical systems. While the basic Zeno effect merely freezes evolution, Zeno dragging extends this principle in a powerful way: by incrementally rotating the measurement basis between successive measurements, one can transfer a quantum state along a trajectory on the Bloch sphere. Each measurement collapses the state toward the current basis eigenstate, and by choosing basis rotations that track the desired trajectory, the state is "dragged" toward the target. The state follows the measurement basis like a ball rolling down a slowly-tilting bowl.

For a transfer from |0⟩ to |1⟩ using N measurements, the protocol proceeds as follows. At step k (where k runs from 1 to N), the measurement basis is rotated by angle θ_k = kπ/N from the Z-axis toward the X-axis. Operationally, the qubit is first rotated by Ry(-θ_k) to align the tilted basis with the computational basis, then measured in the computational basis, then rotated back by Ry(θ_k) to restore the reference frame. If the measurement yields outcome 0, the state has been successfully projected onto the current "dragged" eigenstate and the protocol continues. If the measurement yields outcome 1, the state has "flipped" to the wrong eigenstate and the trajectory has failed. Post-selection retains only trajectories where all intermediate measurements yield the outcome corresponding to the dragged eigenstate. The theoretical success probability—the probability that all N measurements yield the correct outcome—approaches unity as N increases, scaling as cos²(π/2N)^N. In the limit of large N, this approaches 1, meaning that with sufficiently fine discretization, Zeno dragging can transfer quantum states with arbitrarily high probability.
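The scaling claim is easy to check numerically. A minimal sketch in plain Python (no quantum SDK required; the function name is ours):

```python
import math

def zeno_success_theory(n_meas: int) -> float:
    """Ideal success probability for dragging |0> to |1> with n_meas
    equally spaced measurements: each step succeeds with probability
    cos^2(pi / (2 * n_meas)), and all n_meas steps must succeed."""
    per_step = math.cos(math.pi / (2 * n_meas)) ** 2
    return per_step ** n_meas

for n in (2, 4, 8, 16, 32):
    # prints 0.25, 0.531, 0.733, 0.857, 0.926 -- the Theory column
    # of the N-sweep results below
    print(n, round(zeno_success_theory(n), 3))
```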

Experimental Overview

All experiments were conducted on IBM Quantum's ibm_torino backend, a 133-qubit processor based on the Heron r2 architecture using fixed-frequency transmon qubits with tunable couplers. The experiments span January 2026 and were submitted using Qiskit Runtime's Batch execution mode, which allows multiple circuits to be submitted together for efficient scheduling. The backend's native gate set consists of CZ (controlled-Z), RZ (Z-rotation), SX (√X), X, and I (identity), meaning that the Ry rotations required for Zeno dragging are decomposed into sequences of these native gates during transpilation. Typical qubit coherence times on this device are T1 = 150–250 μs and T2 = 140–200 μs, with single-qubit gates taking approximately 30 ns and measurements taking approximately 1.5 μs. These timescales are important context for understanding the results: a Zeno circuit with N = 8 measurements requires roughly 8 × 1.5 μs = 12 μs of measurement time alone, which is a small fraction of the coherence time and suggests that decoherence should not be the dominant error source.


Results

N-Dependence of Zeno Dragging

The central parameter in Zeno dragging is N, the number of intermediate measurements used to transfer the state from |0⟩ to |1⟩. Theory predicts that success probability increases monotonically with N: more measurements mean smaller basis rotations between each step (π/N per step instead of π/2 for N=2), which increases the probability that each individual measurement yields the "correct" outcome. The probability of success at each step is cos²(π/2N), and since all N measurements must succeed, the total success probability is cos²(π/2N)^N. For N = 2, this gives cos²(π/4)² = 0.5² = 0.25, while for N = 32 it gives cos²(π/64)³² ≈ 0.93. However, this theoretical analysis assumes perfect gates and measurements. On real hardware, each measurement requires additional gates (basis rotations before and after the measurement), and these gates introduce errors. The question motivating this experiment is whether there exists an optimal N where the theoretical improvement from finer discretization balances against the accumulated hardware noise from additional gates.

To answer this question empirically, we swept N from 2 to 32 in powers of 2, running 2048 shots at each value to obtain statistically meaningful success rates and fidelities. The circuit for each N consists of N rotation-measurement-unrotation blocks arranged in sequence, with the final computational-basis measurement determining whether the qubit successfully reached |1⟩. We recorded not just the final outcome but the full trajectory of intermediate measurement results, enabling detailed analysis of where and how trajectories fail. The transpiled circuit depths range from 11 gates at N = 2 to 161 gates at N = 32, a 15× increase in circuit complexity that provides ample opportunity for errors to accumulate.
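The per-shot behavior of an ideal, noise-free version of the protocol can be mimicked with a toy Born-rule simulator. This is an illustrative sketch, not the analysis code behind the dataset; the seed and shot count are arbitrary:

```python
import math
import random

def simulate_drag(n_meas: int, rng: random.Random) -> bool:
    """One ideal Zeno-drag trajectory from |0> toward |1>.

    The state cos(phi/2)|0> + sin(phi/2)|1> is tracked by phi.
    At step k the basis sits at theta_k = k*pi/n_meas, and the
    'dragged' outcome occurs with probability cos^2((theta_k - phi)/2).
    Returns True only if every step yields the dragged outcome
    (hard post-selection)."""
    phi = 0.0
    for k in range(1, n_meas + 1):
        theta = k * math.pi / n_meas
        p_drag = math.cos((theta - phi) / 2) ** 2
        if rng.random() < p_drag:
            phi = theta          # projected onto the dragged eigenstate
        else:
            return False         # flipped: trajectory discarded
    return True

rng = random.Random(7)
shots = 20000
wins = sum(simulate_drag(8, rng) for _ in range(shots))
print(wins / shots)  # close to the ideal 73.3% for N = 8
```

With noise absent, the simulated success rate sits near the ideal 73.3% for N = 8; the gap to the measured 41.9% in the table is the hardware contribution.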

N Success Rate 95% CI Theory Theory-Exp Gap Fidelity 95% CI Depth
2 27.2% [25.4, 29.2] 25.0% -2.2pp 87.6% [84.6, 90.1] 11
4 37.7% [35.6, 39.8] 53.1% +15.4pp 92.2% [90.1, 93.9] 21
8 41.9% [39.8, 44.0] 73.3% +31.4pp 92.4% [90.5, 94.0] 41
16 32.5% [30.5, 34.5] 85.7% +53.2pp 91.4% [89.1, 93.3] 81
32 15.7% [14.2, 17.4] 92.6% +76.9pp 91.9% [88.4, 94.4] 161

Confidence intervals computed using Wilson score method (appropriate for binomial proportions). 2048 shots per condition.
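For reference, the quoted Wilson intervals can be reproduced in a few lines. A sketch, assuming the N = 2 row corresponds to roughly 557 successes out of 2048 (inferred from the 27.2% point estimate, so the last digit of the interval may differ):

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return center - half, center + half

lo, hi = wilson_ci(557, 2048)   # ~27.2% observed success at N = 2
print(f"[{100 * lo:.1f}, {100 * hi:.1f}]")
```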

The data reveals a clear optimal point at N = 8, where the measured success rate peaks at 41.9%. This optimum arises from the competition between two effects. Below N = 8, the protocol is too coarse-grained: at N = 2, the measurement basis must advance by a full 90° per step, and the probability of the correct outcome at each step is only cos²(π/4) = 0.5. Two such measurements in sequence give 0.5 × 0.5 = 0.25 theoretical success probability, and we measure 27.2%—actually slightly above theory, likely due to measurement bias. As N increases to 4 and 8, the finer discretization pays off and success rates climb. Above N = 8, however, hardware errors begin to dominate the picture. At N = 32, the 161-gate circuit accumulates so many gate errors, measurement errors, and decoherence effects that the success rate collapses to 15.7% despite theory predicting 92.6%. The gap between measured and theoretical success rates grows dramatically with N: 2 percentage points at N = 2, 15 points at N = 4, 31 points at N = 8, 53 points at N = 16, and 77 points at N = 32. This growing gap directly quantifies the cumulative effect of hardware imperfections.

The fidelity conditional on successful post-selection remains high (roughly 88-92%) across all N values tested, even at N = 32 where the success rate has collapsed. Post-selection is working as intended: trajectories corrupted by errors are filtered out, and the surviving trajectories retain high fidelity regardless of circuit depth. The post-selection mechanism acts as a form of error detection, sacrificing success rate to maintain output quality. N = 8 represents the sweet spot for this hardware: fine enough discretization to achieve reasonable success rates, but not so many gates that errors overwhelm the protocol. Different hardware with lower gate error rates would likely have a higher optimal N.

Control Experiments

Any claim that Zeno dragging "works" requires careful controls to rule out alternative explanations. Perhaps the improved fidelity comes from some artifact of circuit structure rather than the Zeno mechanism itself. Perhaps the protocol only appears to work because of measurement bias or state preparation errors. To validate that Zeno dragging works for the reasons theory predicts, we designed four control experiments that isolate different aspects of the protocol. The "freeze" control repeatedly measures in the Z-basis without any rotation, testing whether repeated measurement preserves the initial |0⟩ state as the Zeno effect predicts it should. The "forward" control implements the standard |0⟩→|1⟩ drag, while the "reverse" control drags |1⟩→|0⟩, testing whether the protocol works symmetrically in both directions as theory requires. The critical "random" control applies random measurement bases at each step instead of the structured rotation sequence, testing whether the specific structure of basis rotation matters or whether any sequence of measurements would suffice.

Experiment Description Success Rate 95% CI Fidelity 95% CI
Freeze Repeated Z-basis, no rotation 64.6% [62.5, 66.6] 96.0% [94.8, 96.9]
Forward Drag |0⟩ → |1⟩ 42.9% [40.8, 45.1]
Reverse Drag |1⟩ → |0⟩ 42.6% [40.5, 44.7]
Random Random measurement bases 1.5% [1.0, 2.1] 66.7% [48.8, 80.8]

2048 shots per condition. N=8 for all experiments.

The freeze control achieves the highest success rate (64.6%) and highest fidelity (96.0%) because it requires no state transfer—the qubit simply needs to survive repeated Z-basis measurements while remaining in |0⟩. Each measurement has high probability of yielding 0 when the state is |0⟩, and the 35.4% failure rate reflects accumulated errors that occasionally flip the state or corrupt the readout. The forward and reverse drags show symmetric performance (42.9% vs 42.6% success), confirming that the protocol works equally well in both directions as theory requires; the small residual asymmetry falls well within the overlapping confidence intervals and likely reflects minor calibration differences in preparing |0⟩ versus |1⟩.

The random control is the most important validation and produces the most dramatic result. With random measurement bases replacing the structured rotation sequence, the success rate collapses to 1.5%—a roughly 30-fold drop from the forward drag—and the fidelity of the few surviving shots falls to 66.7%, with a confidence interval so wide ([48.8, 80.8]) that it is consistent with random guessing between |0⟩ and |1⟩. This collapse is exactly what theory predicts: the Zeno effect requires measurements that track the desired trajectory. If you measure in random bases, each measurement has roughly 50% probability of projecting the state onto either eigenstate, and the state performs a random walk on the Bloch sphere rather than following a directed path. The probability of accidentally ending up at the target after N random projections is negligible. This control definitively rules out the hypothesis that any sequence of measurements would work—the structured basis rotation is essential.
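The random-walk intuition can be checked with the same kind of idealized toy model. The uniform basis distribution and the success criterion below are our modeling assumptions, not a description of the experiment's control circuits:

```python
import math
import random

def random_basis_trajectory(n_meas: int, rng: random.Random) -> bool:
    """One ideal trajectory with a *random* basis angle at each step.

    Success requires every intermediate measurement to yield the
    'dragged' outcome and the final measurement to land on |1>."""
    phi = 0.0
    for _ in range(n_meas):
        theta = rng.uniform(0.0, math.pi)       # random basis each step
        p_keep = math.cos((theta - phi) / 2) ** 2
        if rng.random() < p_keep:
            phi = theta
        else:
            return False
    # final computational-basis measurement: P(|1>) = sin^2(phi / 2)
    return rng.random() < math.sin(phi / 2) ** 2

rng = random.Random(3)
shots = 20000
wins = sum(random_basis_trajectory(8, rng) for _ in range(shots))
print(wins / shots)  # a few percent at most, far below the ~73% ideal
                     # for structured dragging at N = 8
```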

Single-Qubit Gate Comparison

Zeno dragging can implement any Ry(θ) rotation by choosing the total angle swept across all measurements: to implement Ry(θ), use N measurements with basis angles θ_k = kθ/N instead of kπ/N. This raises an important practical question: how does Zeno-implemented rotation compare to standard unitary gates in terms of fidelity? If Zeno gates are more accurate than standard gates, they might be worth using despite their post-selection overhead. If they are less accurate or merely equivalent, the added complexity serves no purpose. To answer this fairly, we need three comparison points rather than just two. Obviously we compare (1) the standard single-gate implementation of Ry(θ) against (2) the Zeno implementation with N = 8 measurements. But we also need (3) a "depth-matched" control that has the same circuit structure as Zeno—the same sequence of rotations, the same circuit depth—but without the intermediate measurements that enable post-selection.
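Under the same noise-free idealization as in the Background, the partial-rotation variant simply rescales the per-step angle. A sketch (the function name is ours):

```python
import math

def zeno_ry_success_theory(theta: float, n_meas: int) -> float:
    """Ideal post-selection success probability for a Zeno-implemented
    Ry(theta): n_meas steps of theta / n_meas each, every step must
    yield the dragged outcome (probability cos^2(theta / (2 * n_meas)))."""
    return math.cos(theta / (2 * n_meas)) ** (2 * n_meas)

# Smaller total rotations succeed more often: identity is easiest,
# the full X gate (theta = pi) hardest.
for frac in (0, 1/8, 1/4, 1/2, 3/4, 1):
    print(frac, round(zeno_ry_success_theory(frac * math.pi, 8), 3))
```

The measured Zeno Success column in the table below sits well under these ideal values at every angle, quantifying the hardware contribution.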

The depth-matched control is critical for isolating the mechanism of any Zeno advantage. Suppose we found that Zeno gates have higher fidelity than standard gates. There are two possible explanations. First, the Zeno post-selection mechanism might filter out errors, keeping only high-quality trajectories. Second, the circuit structure of Zeno gates might somehow be beneficial—perhaps the rotation-unrotation pairs that bracket each measurement cancel certain coherent errors, or perhaps the longer circuit provides more opportunities for dynamical decoupling effects. The depth-matched control distinguishes these hypotheses: it has the same circuit structure as Zeno, so if circuit structure were responsible for the improvement, the depth-matched control would show similar fidelity. If instead post-selection is responsible for Zeno's advantage, the depth-matched control—which lacks measurements and therefore cannot post-select—should perform poorly, likely worse than even the simple standard gate.

Gate Standard 95% CI Zeno 95% CI Zeno Success Depth-Matched 95% CI
I (identity) 89.2% [88.2, 90.1] 96.0% [95.2, 96.7] 66.3% 90.8% [89.8, 91.6]
Ry(π/8) 90.8% [89.9, 91.7] 95.6% [94.7, 96.3] 63.3% 87.3% [86.3, 88.3]
Ry(π/4) 91.1% [90.2, 91.9] 96.0% [95.2, 96.7] 62.9% 78.0% [76.7, 79.3]
Ry(π/2) 90.8% [89.9, 91.7] 96.0% [95.1, 96.7] 58.1% 50.9% [49.4, 52.5]
Ry(3π/4) 90.1% [89.1, 90.9] 95.4% [94.4, 96.2] 51.6% 23.1% [21.9, 24.5]
X (Ry(π)) 90.5% [89.6, 91.4] 95.2% [94.2, 96.1] 44.7% 11.7% [10.7, 12.7]

4096 shots per circuit. Confidence intervals computed using Wilson score method.

The results are striking and unambiguous. Zeno achieves 95-96% fidelity across all six rotations tested, compared to 89-91% for standard gates—a consistent 5-6 percentage point improvement that holds regardless of rotation angle. Meanwhile, the depth-matched controls collapse toward random outcomes as rotation angle increases, decisively ruling out circuit structure as the explanation. At Ry(π/2), the depth-matched fidelity is 50.9%, which is random guessing between |0⟩ and |1⟩—the circuit has completely scrambled the quantum information. At the full X gate (Ry(π)), depth-matched fidelity drops to 11.7%, which is actually worse than random because the circuit structure systematically biases the output in the wrong direction. The depth-matched circuits demonstrate what happens when you run a Zeno-structured circuit without the error-filtering benefit of post-selection: the additional gates accumulate errors with no mechanism to detect or discard corrupted trajectories.

This comparison confirms that post-selection, not circuit structure, is responsible for Zeno's improved fidelity. The intermediate measurements act as checkpoints throughout the computation. When an error occurs—whether from gate imperfections, decoherence, or measurement noise—it typically causes a subsequent measurement to yield the "wrong" outcome, flagging that trajectory for discard. Trajectories where no errors occurred (or where errors happened to cancel) yield the correct outcome at every checkpoint and are retained. The cost of this error filtering is success rate: as rotation angle increases, each trajectory must pass more checkpoints where the correct outcome has lower probability, so success rate drops from 66.3% for identity (where the correct outcome is overwhelmingly likely at each step) to 44.7% for the X gate (where each step has an effective ~90% probability of the correct outcome, compounding to ~45% over 8 steps).

Comparison with Error Mitigation Techniques

The previous section established that Zeno achieves higher per-shot fidelity than standard gates, but this comes at the cost of discarding 32-56% of shots through post-selection. Standard error mitigation techniques—dynamical decoupling (DD) that inserts refocusing pulses during idle periods, gate twirling that randomizes coherent errors into incoherent ones—take a different approach: they attempt to reduce errors on all shots rather than filtering out erroneous shots. This raises a practical question that matters for real applications: which approach extracts more useful information from a fixed amount of quantum processing time? If you have 10 minutes of QPU access, should you run Zeno circuits and discard half the shots, or should you run standard circuits with error mitigation and keep all the shots?

To compare these approaches fairly, we need a metric that accounts for both fidelity and success rate. We define "effective yield" as the product of success rate and fidelity: effective_yield = success_rate × fidelity. This metric captures the total amount of correct information extracted per shot. A method with 90% fidelity and 50% success rate has effective yield of 45%—it extracts 45 "units" of correct information per 100 shots. A method with 45% fidelity and 100% success rate also has effective yield of 45%, extracting the same total information despite very different operating characteristics. By comparing effective yields, we can determine which approach makes better use of limited QPU time.
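The metric itself is one line, but writing it out removes any ambiguity about what is being maximized. The numbers below are taken from the X-gate table (a sketch):

```python
def effective_yield(fidelity: float, success_rate: float) -> float:
    """Fraction of submitted shots that are both retained by
    post-selection and correct: success_rate * fidelity."""
    return fidelity * success_rate

methods = {
    "zeno_n8":       (0.904, 0.442),   # high fidelity, heavy discarding
    "no_mitigation": (0.880, 1.000),   # lower fidelity, keeps every shot
}
for name, (f, s) in methods.items():
    print(name, round(effective_yield(f, s), 3))
```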

X Gate (|0⟩ → |1⟩)

Method Fidelity Success Rate Effective Yield
Zeno (N=8) 90.4% 44.2% 40.0%
No mitigation 88.0% 100% 88.0%
Dynamical decoupling (XX) 87.1% 100% 87.1%
Dynamical decoupling (XY4) 88.0% 100% 88.0%
Gate twirling 88.5% 100% 88.5%
DD + Twirling 86.9% 100% 86.9%

|+⟩ State Preservation

Method Fidelity Success Rate Effective Yield
Zeno (N=8) 96.5% 68.4% 66.0%
No mitigation 89.8% 100% 89.8%
Dynamical decoupling (XX) 89.6% 100% 89.6%
Dynamical decoupling (XY4) 89.7% 100% 89.7%
Gate twirling 90.6% 100% 90.6%
DD + Twirling 91.2% 100% 91.2%

When measured by effective yield, standard error mitigation techniques win decisively over Zeno with hard post-selection. For the X gate, standard approaches achieve 87-88% effective yield compared to Zeno's 40%—more than double. For |+⟩ state preservation, standard approaches achieve 90-91% versus Zeno's 66%—still a substantial margin. Zeno's hard post-selection discards too many shots. Even though Zeno achieves the highest per-shot fidelity in both tests (90.4% and 96.5%), the 44-68% success rates mean that roughly a third to over half of the quantum data is thrown away. The standard mitigation techniques achieve lower fidelity but keep all the data, and the math favors quantity over quality in this regime.

However, this comparison assumes hard post-selection, where any trajectory with even a single "flip" is discarded entirely. Later sections of this document show that trajectory weighting—keeping all trajectories but weighting them by quality—can recover most of the discarded data while maintaining Zeno's fidelity advantage. When trajectory weighting is applied, the effective yield comparison changes dramatically, and Zeno becomes competitive with or superior to standard mitigation techniques. The takeaway is that Zeno with hard post-selection is inefficient, but Zeno with intelligent trajectory weighting is a serious contender for practical error mitigation.

Two-Qubit Gates

Single-qubit Zeno gates show clear advantages: 5-6 percentage point fidelity improvements over standard gates, at the cost of reduced success rate. A natural question is whether this advantage extends to two-qubit gates. Two-qubit gates like CNOT are the primary source of errors in most quantum algorithms—they have 10× higher error rates than single-qubit gates on typical hardware—so any technique that improves two-qubit gate fidelity would have substantial practical value. We tested three different approaches to implementing CNOT via Zeno-style protocols: (1) "adaptive" Zeno that conditions target qubit rotation on control qubit state, essentially implementing controlled-Zeno-drag; (2) "X-freeze" that uses Zeno measurements to preserve the control qubit while applying a standard CNOT, attempting to protect the more vulnerable qubit; and (3) "Bell" that attempts to create entanglement through joint Zeno measurements on both qubits, a more speculative approach based on measurement-induced entanglement.

Method Average Fidelity Success Rate
Standard CNOT 82.5% 100%
Zeno Adaptive 78.8% 51.8%
Zeno X-Freeze 34.0% 36.4%
Zeno Bell 7.5% 27.2%

The results are unambiguously negative: standard CNOT wins on every metric, and the gap is substantial. The adaptive Zeno approach comes closest, achieving 78.8% fidelity, but this is still 3.7 percentage points worse than standard CNOT while also discarding half the shots. The X-freeze approach fails dramatically (34.0% fidelity), and the Bell approach fails catastrophically (7.5% fidelity, barely better than random chance at 6.25% for a two-qubit system).

The fundamental problem is circuit overhead. Single-qubit Zeno gates have a simple structure: rotations and measurements, nothing more. The rotations are native single-qubit operations with low error rates, and the measurements, while imperfect, provide the error-filtering mechanism that makes Zeno work. Two-qubit Zeno gates, however, require controlled rotations—rotations on the target qubit conditioned on the state of the control qubit—and these controlled rotations must themselves be implemented using CNOT gates plus single-qubit rotations. So the Zeno "implementation" of CNOT requires multiple CNOT gates internally, plus additional measurements, plus post-selection overhead. The additional gates introduce more errors than post-selection can filter, resulting in net negative value. The Zeno approach only works when the error-filtering benefit of post-selection exceeds the error-introduction cost of additional circuit complexity, and for two-qubit gates on current hardware, this inequality goes the wrong direction.
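The overhead argument can be made concrete. A standard decomposition of a controlled-Ry uses two CNOTs plus two single-qubit rotations, which a plain-Python matrix check verifies (no SDK assumed; basis order |00⟩, |01⟩, |10⟩, |11⟩ with the control qubit first):

```python
import math

def matmul(a, b):
    """4x4 matrix product over plain nested lists (no numpy)."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def ry_on_target(theta):
    """I (control) tensor Ry(theta) (target)."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return [[c, -s, 0, 0], [s, c, 0, 0], [0, 0, c, -s], [0, 0, s, c]]

CNOT = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]]

def controlled_ry_via_cnots(theta):
    """Circuit Ry(theta/2) -> CNOT -> Ry(-theta/2) -> CNOT as a matrix
    (matrix product is applied in reverse of circuit order)."""
    m = ry_on_target(theta / 2)
    for gate in (CNOT, ry_on_target(-theta / 2), CNOT):
        m = matmul(gate, m)
    return m

theta = math.pi / 3
got = controlled_ry_via_cnots(theta)
c, s = math.cos(theta / 2), math.sin(theta / 2)
want = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, c, -s], [0, 0, s, c]]
ok = all(abs(got[i][j] - want[i][j]) < 1e-12 for i in range(4) for j in range(4))
print(ok)  # True: two CNOTs per controlled rotation, before any Zeno overhead
```

Every controlled rotation inside a Zeno-style CNOT therefore costs two native CNOTs on its own, which is why the internal overhead compounds so quickly.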

Entanglement Studies

Zeno dragging uses local projective measurements—measurements on individual qubits in bases determined by that qubit's trajectory alone. This locality raises a fundamental question about the scope of Zeno protocols: what happens when we apply Zeno measurements to entangled states? Entanglement is the quintessential non-local quantum phenomenon, where measurements on one qubit instantaneously affect the state of distant entangled partners. Local Zeno measurements might preserve entanglement if they're gentle enough, or they might destroy it by collapsing the non-local correlations. To find out, we prepared maximally entangled states—Bell pairs (2 qubits), GHZ states (3 and 4 qubits)—and applied Zeno measurement sequences to individual qubits while attempting to preserve the entanglement, comparing against "passive" preservation where no intermediate measurements occur.

State Passive Fidelity Zeno Fidelity
Bell (2-qubit) 84.0% 47%
GHZ (3-qubit) 83.1% 44%
GHZ (4-qubit) 74.5% 45%

The results reveal a fundamental incompatibility between Zeno protocols and entanglement preservation. Passive preservation—simply waiting without intermediate measurements—maintains 74-84% fidelity depending on state size, with the expected degradation for larger states due to accumulated decoherence. Zeno measurements, however, destroy the entanglement entirely, dropping all states to approximately 45% fidelity regardless of size. This 45% is barely above the 37.5-50% that random measurement outcomes would produce, indicating that the entanglement has been completely lost.

This destruction is not a failure of our implementation but a fundamental consequence of the physics. Entanglement is a non-local correlation between qubits: in a Bell state (|00⟩ + |11⟩)/√2, measuring one qubit instantly determines the state of the other, even if they are physically separated. This correlation exists in the joint quantum state, not in either individual qubit. Zeno dragging, however, requires projecting each qubit onto local eigenstates—states that can be written as products of single-qubit states. When you measure qubit A in some local basis, you collapse the joint state onto a product state, severing the entanglement with qubit B. Subsequent measurements cannot restore what has been lost because the correlation was encoded in the quantum state, not in any classical record. You cannot use local measurements to preserve non-local properties; this is not a hardware limitation but a mathematical impossibility.
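The severing argument can be made concrete for pure two-qubit states, where the determinant of the 2×2 coefficient matrix vanishes exactly when the state is a product state. A sketch with real amplitudes (the basis angle chosen is arbitrary):

```python
import math

def coeff_det(state):
    """state = [c00, c01, c10, c11]; det of [[c00, c01], [c10, c11]].
    Zero iff the pure two-qubit state is a product state."""
    return state[0] * state[3] - state[1] * state[2]

def project_qubit_a(state, theta):
    """Project qubit A onto cos(t/2)|0> + sin(t/2)|1> and renormalize."""
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    # amplitudes of qubit B conditioned on the measured outcome on A
    b0 = c * state[0] + s * state[2]
    b1 = c * state[1] + s * state[3]
    norm = math.hypot(b0, b1)
    # post-measurement state: (measured A state) tensor (collapsed B state)
    return [c * b0 / norm, c * b1 / norm, s * b0 / norm, s * b1 / norm]

bell = [1 / math.sqrt(2), 0, 0, 1 / math.sqrt(2)]
print(abs(coeff_det(bell)))                 # ~0.5: maximally entangled
after = project_qubit_a(bell, math.pi / 7)  # any local Zeno-style projection
print(abs(coeff_det(after)))                # ~0: product state, entanglement gone
```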

This result defines a hard boundary for Zeno protocols in quantum computing: they are useful only for single-qubit operations on separable (non-entangled) states. Any quantum algorithm that relies on entanglement—which includes essentially all algorithms that achieve quantum speedups over classical computation—cannot straightforwardly use Zeno gates. This does not mean Zeno is useless, but it does mean that Zeno's applications are restricted to specific scenarios: state preparation, single-qubit rotations in separable registers, and perhaps error-protected memory for individual qubits.

Qubit Quality Correlation

Different qubits on a quantum processor have different error rates due to manufacturing variations, calibration differences, and local noise environments. Some qubits are "good" (long coherence times, low gate errors) and some are "bad" (shorter coherence, higher errors). A natural question is whether Zeno protocols help more on bad qubits or good qubits. One might hypothesize that Zeno helps more on bad qubits because there are more errors to filter, so post-selection has more impact. Alternatively, one might hypothesize that Zeno helps more on good qubits because the protocol itself introduces overhead, and that overhead might overwhelm the benefits on qubits that are already struggling. To test this empirically, we ran identical Zeno protocols on four qubits spanning the quality range available on ibm_torino, characterized by their T1 coherence times which serve as a proxy for overall qubit quality.

Qubit T1 (μs) Passive Fidelity Zeno Fidelity Improvement
Q0 156 91.3% 95.9% +4.5%
Q10 232 93.7% 98.4% +4.7%
Q50 245 98.1% 99.8% +1.8%
Q100 189 95.0% 97.0% +1.9%
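With only four qubits the trend is suggestive rather than conclusive, but it can be quantified by correlating the table's Improvement column against the passive baseline and against T1 (plain-Python Pearson correlation; numbers transcribed from the table above):

```python
import math

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

passive = [91.3, 93.7, 98.1, 95.0]      # Q0, Q10, Q50, Q100
improvement = [4.5, 4.7, 1.8, 1.9]      # Improvement column, percentage points
t1 = [156, 232, 245, 189]               # T1 in microseconds

print(round(pearson_r(passive, improvement), 2))  # strongly negative
print(round(pearson_r(t1, improvement), 2))       # noticeably weaker
```

On these four points the improvement anticorrelates much more strongly with the passive baseline than with T1, consistent with reading "quality" as baseline fidelity rather than coherence time alone.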

The data supports the first hypothesis, with the caveat that "quality" is better captured by the passive baseline than by T1 alone: the two qubits with the lowest passive fidelity, Q0 and Q10 (T1 = 156 μs and 232 μs), show improvements of 4.5-4.7 percentage points when Zeno is applied, while Q50—the best qubit tested, with T1 = 245 μs and 98.1% passive fidelity—gains only 1.8 points and Q100 (T1 = 189 μs) gains 1.9. Zeno's error-filtering mechanism has more errors to filter on noisier qubits, so it provides more value there; on a near-perfect qubit like Q50 there are few errors to catch, and post-selection provides only marginal improvement.

Zeno protocols provide the most value on marginal hardware—qubits that are functional but noisy. On the best qubits, the overhead of additional gates and reduced success rate may not be worth the small fidelity gain. But on noisy qubits that would otherwise produce unreliable results, Zeno can substantially improve output quality at the cost of reduced throughput. This suggests a potential use case in heterogeneous quantum processors: use standard gates on the best qubits where they are sufficient, and deploy Zeno protocols on the worst qubits where error filtering provides the most benefit.

Trajectory Error Analysis

Throughout the preceding analyses, we have used "hard" post-selection: trajectories with zero flips are kept, and trajectories with any flips are discarded entirely. But this binary classification might be throwing away useful information. Each Zeno trajectory consists of N intermediate measurements, each of which can either succeed (yield the "dragged" outcome, recorded as 0) or flip (yield the opposite outcome, recorded as 1). Standard post-selection treats all non-zero-flip trajectories as equally worthless, but intuition suggests that a trajectory with exactly one flip should be better than a trajectory with five flips. If 1-flip trajectories retain substantial fidelity, hard post-selection is wasting valuable quantum data by discarding them alongside the truly corrupted multi-flip trajectories.

To investigate this, we recorded full trajectory data for 4096 shots and categorized the results by flip count. For each flip count category, we computed the fidelity of the final state—the probability that shots in that category yielded the correct final outcome.

Flips Count Fidelity Interpretation
0 2747 96.0% Perfect trajectory
1 875 94.2% Single error, largely recoverable
2 160 88.1% Degraded but still usable
3+ <100 <82% Severely degraded

The data confirms that trajectory quality is continuous rather than binary, and that hard post-selection is indeed wasteful. One-flip trajectories retain 94.2% fidelity—only 1.8 percentage points below the 96.0% fidelity of perfect trajectories. Yet hard post-selection discards these 875 shots entirely, treating them as worthless alongside the truly corrupted 3+ flip trajectories that have <82% fidelity. This means hard post-selection throws away 21% of the data (875/4096 shots) that has nearly as much information content as the data it keeps. Two-flip trajectories are more degraded at 88.1% fidelity but still carry substantial information—certainly more than zero, which is what hard post-selection extracts from them.
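This categorization is straightforward to reproduce from raw trajectory records. A minimal sketch, assuming each shot is stored as a string of intermediate-measurement bits (with '1' marking a flip) plus a final outcome bit; the exact record layout in the dataset files may differ:

```python
from collections import defaultdict

def fidelity_by_flip_count(trajectories, expected_final="1"):
    """Group shots by mid-circuit flip count and compute per-group fidelity.

    Each trajectory is (intermediate_bits, final_bit); a '1' among the
    intermediate bits records a flip away from the dragged outcome.
    Returns {n_flips: (n_shots, fidelity)}.
    """
    groups = defaultdict(lambda: [0, 0])  # n_flips -> [n_shots, n_correct]
    for mid, final in trajectories:
        n_flips = mid.count("1")
        groups[n_flips][0] += 1
        groups[n_flips][1] += final == expected_final
    return {k: (n, correct / n) for k, (n, correct) in sorted(groups.items())}

# Toy data: two perfect trajectories, and two 1-flip trajectories of which
# only one still ends in the correct final outcome.
shots = [("00000000", "1"), ("00000000", "1"),
         ("01000000", "1"), ("01000000", "0")]
stats = fidelity_by_flip_count(shots)
# stats -> {0: (2, 1.0), 1: (2, 0.5)}
```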

This observation motivates the trajectory weighting analysis that occupies the next major section of this document. Instead of binary keep/discard, we can assign weights to trajectories based on their flip count and compute weighted averages. Trajectories with zero flips get high weight, trajectories with one flip get slightly lower weight, and trajectories with many flips get low weight. This approach extracts signal from all trajectories rather than discarding the imperfect ones, dramatically improving effective yield while maintaining most of Zeno's fidelity advantage.

Measurement Strength Landscape

All previous experiments used projective (strong) measurements that fully collapse the quantum state onto one of two outcomes. But quantum mechanics allows a continuum of measurement strengths, from fully projective (strength = 1) to nearly unitary (strength → 0). Weak measurements disturb the state less than strong measurements, providing partial information about the quantum state without fully collapsing it. In the context of Zeno dragging, weaker measurements might allow higher success rates (because each measurement is less likely to "kick" the state to the wrong outcome) at the cost of less precise steering (because each measurement provides less collapse toward the target basis). We implemented tunable measurement strength via partial ancilla coupling—a standard technique where the system qubit is partially entangled with an ancilla that is then measured, with the entanglement strength controlling measurement strength—and swept strength from 0.05 (nearly unitary, minimal disturbance) to 1.0 (fully projective) across N = 4, 8, and 12 measurements.
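One standard way to model a tunable-strength measurement is a two-outcome Kraus pair that interpolates between the identity and the Z-basis projectors. The symmetric parametrization below is an illustrative modeling choice, not necessarily the exact ancilla-coupling calibration used in these experiments:

```python
import numpy as np

def weak_z_kraus(s):
    """Kraus pair for a Z-basis measurement of strength s in [0, 1].

    s = 1 reduces to the projectors |0><0| and |1><1| (fully projective);
    s = 0 gives two identical operators (no information gained). This
    symmetric parametrization is an illustrative choice.
    """
    m_plus = np.diag([np.sqrt((1 + s) / 2), np.sqrt((1 - s) / 2)])
    m_minus = np.diag([np.sqrt((1 - s) / 2), np.sqrt((1 + s) / 2)])
    return m_plus, m_minus

# Completeness M+†M+ + M-†M- = I holds for every strength in the sweep.
for s in (0.05, 0.25, 0.5, 0.75, 1.0):
    mp, mm = weak_z_kraus(s)
    assert np.allclose(mp.T @ mp + mm.T @ mm, np.eye(2))

# At s = 1 the pair is exactly the projective Z measurement.
mp, mm = weak_z_kraus(1.0)
assert np.allclose(mp, np.diag([1.0, 0.0])) and np.allclose(mm, np.diag([0.0, 1.0]))
```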

Strength N=4 Success N=4 Fidelity N=8 Success N=8 Fidelity N=12 Success N=12 Fidelity
0.05 76.4% 92.3% 57.5% 91.2% 44.8% 89.1%
0.25 73.3% 93.2% 52.1% 92.8% 38.2% 91.5%
0.50 71.7% 94.6% 45.1% 88.7% 31.4% 85.2%
0.75 70.8% 94.8% 34.5% 86.4% 24.1% 82.8%
1.00 71.9% 95.2% 28.9% 80.9% 18.7% 78.4%

Effective yield (success × fidelity) reveals two regimes in this sweep. At N = 4 the yield is nearly constant at 67-70% across the full strength range: weaker measurements give up a few points of fidelity (92.3% at strength 0.05 versus 95.2% at 1.0) but recover a few points of success rate, and the product barely moves. At N = 8 and N = 12, by contrast, weak measurements decisively outperform strong ones: effective yield falls from 52% to 23% across the strength sweep at N = 8, and from 40% to 15% at N = 12, because the success rate collapses under many projective measurements while weakening the measurements costs only a modest amount of fidelity. This pattern fits the underlying physics of the Zeno effect: the steering depends on the total "measurement dose", roughly the integrated strength of all measurements, so weakening each individual measurement sacrifices little steering, whereas each additional projective measurement adds another independent chance of kicking the state off the trajectory.

The practical implication is flexibility at low measurement counts and a clear design rule at high ones. If a protocol needs only a few measurements, the strength can be chosen freely with little effect on effective yield. If a protocol requires many measurements, weak measurements preserve far more usable data than projective ones. This flexibility could be particularly valuable in near-term devices, where different error sources dominate in different operating regimes.
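Since effective yield is simply the product success × fidelity, the landscape can be recomputed directly from the tabulated corner points (values transcribed from the table above):

```python
# (N, strength) -> (success, fidelity), transcribed from the table above
# (corner points of the sweep only).
sweep = {
    (4, 0.05): (0.764, 0.923), (4, 1.00): (0.719, 0.952),
    (8, 0.05): (0.575, 0.912), (8, 1.00): (0.289, 0.809),
    (12, 0.05): (0.448, 0.891), (12, 1.00): (0.187, 0.784),
}

def effective_yield(success, fidelity):
    """Effective yield: fraction of shots that are both kept and correct."""
    return success * fidelity

for (n, s), (succ, fid) in sorted(sweep.items()):
    print(f"N={n:2d} strength={s:.2f} -> effective yield {effective_yield(succ, fid):.3f}")
# N=4 is nearly flat across strengths (~0.68-0.71), while at N=8 and N=12
# the weak-measurement end retains far more yield than the projective end.
```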

Cross-Qubit Correlations

Modern quantum processors contain tens to hundreds of qubits sharing the same chip, the same control electronics, and the same physical environment. This proximity creates opportunities for correlated errors: a noise fluctuation might affect multiple qubits simultaneously, or crosstalk during one qubit's operation might disturb its neighbors. If Zeno trajectory errors are correlated across qubits—if knowing that qubit A's trajectory flipped tells you something about whether qubit B's trajectory flipped—we might be able to exploit this correlation for error correction or redundancy schemes. Conversely, if errors are independent, such schemes would provide no benefit. To investigate error correlations, we ran 20 parallel single-qubit Zeno chains on physically separated qubits of ibm_torino and measured the correlation between their trajectory outcomes.

Metric Value
Mean flip correlation r = 0.009
Mean outcome correlation r = 0.012

The correlations are negligible: |r| < 0.02 for both flip counts and final outcomes. Knowing the trajectory outcome of one qubit therefore provides essentially no information about the trajectory outcomes of other qubits; errors are statistically independent across the parallel Zeno chains.

This finding has both negative and positive implications. On the negative side, it rules out simple spatial redundancy schemes where we might run multiple Zeno trajectories and majority-vote or correlate their outcomes to improve reliability. If errors were correlated, failed trajectories would tend to cluster, leaving other trajectories intact; we could identify and use the intact clusters. But with independent errors, failures are randomly distributed with no clustering to exploit. On the positive side, the independence confirms that Zeno performance on one qubit is not affected by operations on neighboring qubits. There is no "collateral damage" from running Zeno on qubit A that degrades performance on qubit B. This independence is important for scaling Zeno protocols to larger systems: the performance characterized in our single-qubit experiments should transfer directly to multi-qubit algorithms where Zeno gates are applied in parallel.


Trajectory-Weighted Estimation: Beyond Hard Post-Selection

The experimental results above consistently show that hard post-selection—keeping only trajectories with zero flips, discarding everything else—throws away substantial amounts of useful data. One-flip trajectories retain 94% fidelity, nearly as good as zero-flip trajectories, yet hard post-selection treats them as worthless. This section systematically investigates whether smarter use of trajectory information can improve effective yield without sacrificing Zeno's fidelity advantage.

The Problem with Hard Post-Selection

Hard post-selection embodies a binary classification: a trajectory is either "perfect" (zero flips, keep it with full weight) or "garbage" (one or more flips, discard it entirely with zero weight). This all-or-nothing approach made sense as a first-pass analysis and has the virtue of simplicity. However, the trajectory error analysis revealed that trajectory quality is continuous, not binary. Trajectories with one flip have 94% fidelity, trajectories with two flips have 88% fidelity, and only trajectories with three or more flips have truly degraded fidelity below 82%. By treating one-flip and five-flip trajectories identically—both discarded—hard post-selection fails to extract the substantial information content present in low-flip-count trajectories.

Flip Count Percentage of Shots Fidelity Information Value
0 68.8% 96.8% High
1 21.9% 95.7% High (only 1.1 points worse than 0-flip)
2 2.8% 90.3% Moderate
3+ 6.5% <65% Low

The numbers are stark. Hard post-selection keeps 68.8% of shots (the zero-flip trajectories) and discards 31.2%. But among the discarded shots, 21.9 percentage points are one-flip trajectories with 95.7% fidelity, only 1.1 points worse than the zero-flip trajectories that hard post-selection keeps. This means hard post-selection discards trajectories nearly as good as the ones it retains, simply because they are not perfect. It is as if a factory discarded products with minor blemishes alongside products that are 50% defective, treating the two failure modes identically. The waste is substantial: nearly a third of the quantum data is thrown away, and most of it (21.9 of the 31.2 discarded percentage points) carries information almost as valuable as the retained data.

Soft Post-Selection Strategies

The simplest improvement over hard post-selection is "soft" post-selection: accept trajectories with up to k flips instead of requiring exactly zero flips. Setting k = 0 recovers hard post-selection; setting k = 1 accepts both zero-flip and one-flip trajectories; setting k = 2 also accepts two-flip trajectories; and so on. This approach is easy to implement and provides a sliding scale between maximum fidelity (hard selection, k = 0) and maximum data utilization (no selection, k = ∞).
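Soft post-selection reduces to a one-line filter on flip counts. A minimal sketch, using hypothetical (n_flips, outcome) shot records:

```python
def soft_postselect(shots, k):
    """Keep shots whose trajectory has at most k flips.

    shots: iterable of (n_flips, outcome) pairs. k = 0 recovers hard
    post-selection; large k approaches "no selection".
    Returns (kept_outcomes, acceptance_rate).
    """
    kept = [outcome for n_flips, outcome in shots if n_flips <= k]
    return kept, len(kept) / len(shots)

# Hypothetical shot records: (n_flips, final outcome).
shots = [(0, 1), (0, 1), (1, 1), (2, 0), (3, 0)]
kept, rate = soft_postselect(shots, k=1)
# Hard selection (k=0) would accept 2/5 shots; k=1 accepts 3/5.
assert kept == [1, 1, 1] and rate == 0.6
```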

X-Freeze (|+⟩ State Preservation, N=8)

We first tested soft post-selection on the X-freeze circuit, which prepares the |+⟩ state and uses Zeno measurements to preserve it against noise. This is a "freeze" protocol rather than a "drag" protocol: the measurement basis remains fixed at the X-basis throughout, and success means the qubit remains in |+⟩ despite the repeated measurements.

Strategy Fidelity Success Rate Effective Yield
Hard (k=0) 96.8% 68.8% 66.6%
Soft (k≤1) 96.5% 90.7% 87.6%
Soft (k≤2) 96.3% 93.5% 90.1%
No selection 91.2% 100% 91.2%

The improvement from soft post-selection is dramatic. Moving from hard (k=0) to soft k≤1 recovers 22 percentage points of success rate (from 68.8% to 90.7%) while sacrificing only 0.3 points of fidelity (from 96.8% to 96.5%). Effective yield jumps from 66.6% to 87.6%—a 32% improvement from the same quantum data, simply by changing how we analyze it. Extending to k≤2 pushes effective yield even higher, to 90.1%, with fidelity still at 96.3%. This approaches the performance of "no selection" (keeping all trajectories) while maintaining Zeno's fidelity advantage.

Zeno Drag (|0⟩ to |1⟩, N=8)

We next tested soft post-selection on the Zeno drag circuit, which transfers the qubit from |0⟩ to |1⟩ through a sequence of rotating-basis measurements. This is a more demanding test because each flip during a drag represents actual state corruption (the qubit moved away from the trajectory), not just measurement noise.

Strategy Fidelity Success Rate Effective Yield
Hard (k=0) 91.5% 45.2% 41.3%
Soft (k≤1) 89.2% 63.9% 57.0%
Soft (k≤2) 85.1% 72.4% 61.6%
No selection 65.1% 100% 65.1%

For state transfer, the fidelity-yield tradeoff is steeper than for state preservation. Each flip during a drag represents a moment where the qubit jumped to the wrong side of the Bloch sphere, and this corruption propagates through subsequent evolution. Nevertheless, soft post-selection still provides substantial improvement. Moving from hard (k=0) to soft k≤1 improves effective yield from 41.3% to 57.0%—a 38% improvement—while sacrificing only 2.3 points of fidelity. The k≤2 threshold provides the best balance: 61.6% effective yield with 85.1% fidelity, competitive with the "no selection" approach but with higher fidelity.

Trajectory-Weighted Estimation

Soft post-selection improves upon hard post-selection by keeping more trajectories, but it still employs a threshold: trajectories above the threshold get full weight, trajectories below get zero weight. A more sophisticated approach assigns continuous weights based on trajectory quality, with no sharp threshold. The general weighted estimator computes expectation values as:

<O>_weighted = sum(w_i * O_i) / sum(w_i)

where w_i is the weight assigned to trajectory i and O_i is the measurement outcome. Higher-quality trajectories (fewer flips) get higher weights and contribute more to the final estimate; lower-quality trajectories (more flips) get lower weights and contribute less. The question is how to choose the weighting function that maps flip count to weight.

We tested several weighting functions, each motivated by different assumptions about how errors affect trajectory quality:

Weight Function Formula Rationale
Inverse w = 1/(1+n_flips) Linear penalty proportional to flip count
Exponential w = exp(-n_flips) Rapid decay, strongly penalizes multiple flips
Empirical w = fidelity(n_flips) Data-driven weights based on measured fidelity
Threshold w = 1 if n_flips≤k else 0 Equivalent to soft post-selection
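The weighted estimator and the candidate weight functions above can be sketched in a few lines (shot records here are hypothetical (n_flips, outcome) pairs):

```python
import math

# Candidate weight functions from the table above (threshold shown for k = 1).
WEIGHTS = {
    "inverse": lambda n: 1.0 / (1.0 + n),
    "exponential": lambda n: math.exp(-n),
    "threshold_k1": lambda n: 1.0 if n <= 1 else 0.0,
}

def weighted_expectation(shots, weight):
    """<O>_weighted = sum(w_i * O_i) / sum(w_i) over (n_flips, outcome) shots."""
    num = sum(weight(n) * o for n, o in shots)
    den = sum(weight(n) for n, _ in shots)
    return num / den

# Hypothetical shots: clean trajectories report +1, corrupted ones drift to -1.
shots = [(0, +1.0), (0, +1.0), (1, +1.0), (2, -1.0), (5, -1.0)]
est = weighted_expectation(shots, WEIGHTS["exponential"])
# High-flip shots contribute almost nothing, so the estimate stays near +1
# (a uniform average over the same shots would give only +0.2).
```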

Results (X-Freeze N=8)

Weighting Estimated Fidelity Weight Utilization
None (uniform) 91.2% 100%
Hard post-select 96.8% 68.8%
Inverse 95.8% 81.6%
Exponential 96.6% 77.3%
Empirical 95.5% 91.2%

Exponential weighting emerges as the best overall choice, achieving 96.6% estimated fidelity (within 0.2 percentage points of hard post-selection's 96.8%) while utilizing 77.3% of shots instead of 68.8%. The exponential function strongly down-weights high-flip trajectories (3+ flips contribute negligibly) while retaining substantial weight for one-flip and two-flip trajectories. This matches the empirical observation that trajectory quality degrades exponentially with flip count, not linearly. Inverse weighting and empirical weighting also beat hard post-selection on utilization, but at slightly lower estimated fidelity.

Hard post-selection—which corresponds to threshold weighting with k=0—is suboptimal for every circuit we tested. In all eleven test cases, at least one continuous weighting scheme outperformed hard post-selection on effective yield while matching or nearly matching its fidelity. Exponential weighting extracts nearly all the signal that hard post-selection captures, plus additional signal from the trajectories that hard post-selection discards as worthless.

Bias Correction for Expectation Values

The trajectory weighting results above focused on fidelity—the probability that the final measurement outcome matches the expected outcome. For many quantum computing applications, however, we care about expectation values: the average value of some observable measured across many shots. VQE (Variational Quantum Eigensolver) algorithms, for example, estimate energy expectations by measuring Pauli observables. Trajectory weighting introduces a subtlety for expectation value estimation: different trajectories have different fidelities, and these different fidelities introduce different biases that must be corrected.

A measurement with fidelity f has expected value that differs from the true value by a factor related to f. If the true expectation value is z_true (which can range from -1 to +1 for a Pauli Z measurement), the measured expectation value is:

E[z_measured] = (2f - 1) * z_true

This formula reflects the fact that a measurement with fidelity f correctly identifies the state with probability f and misidentifies it with probability (1-f). The factor (2f - 1) ranges from +1 (perfect fidelity, no bias) to 0 (random guessing, 50% fidelity) to -1 (perfectly wrong, 0% fidelity). To recover the true expectation value from a biased measurement, we divide by this factor:

z_corrected = z_measured / (2f - 1)

This correction works straightforwardly when f > 0.5 (better than random) but encounters a critical subtlety for superposition states. When the true state is |+⟩ (equal superposition of |0⟩ and |1⟩), the correct Z-measurement outcome is random: 50% probability of +1, 50% probability of -1, with true expectation value z_true = 0. But "fidelity" in our operational definition—the probability of the "correct" outcome—approaches 0.5 because the 50/50 split is the correct answer, not an error. When we try to apply bias correction with f = 0.5, we divide by (2×0.5 - 1) = 0, causing numerical instability.

The solution is eigenstate calibration rather than per-angle calibration. Instead of trying to estimate fidelity separately for each rotation angle (which fails for superpositions), we run one calibration circuit at θ = 0, where the state is a known eigenstate |0⟩ and fidelity can be cleanly measured. We record how fidelity varies with flip count in this calibration circuit, then apply those same correction factors to all angles. This works because the relationship between flip count and error rate is determined by hardware properties (gate errors, measurement errors, decoherence), not by the target state. A one-flip trajectory has the same relationship between flip count and error probability regardless of what rotation angle it was implementing.
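The bias correction itself is a single division, guarded against the f ≈ 0.5 singularity. A minimal sketch, where f is assumed to come from the θ = 0 eigenstate calibration described above:

```python
def bias_correct(z_measured, f, eps=1e-6):
    """Invert E[z_measured] = (2f - 1) * z_true for a readout of fidelity f.

    f is taken from the theta = 0 eigenstate calibration (per flip-count
    category), so the f = 0.5 singularity that per-angle calibration hits
    on superposition states never arises here.
    """
    factor = 2.0 * f - 1.0
    if abs(factor) < eps:
        raise ValueError("fidelity too close to 0.5; correction is undefined")
    return z_measured / factor

# A 96%-fidelity readout shrinks a true <Z> of 0.92 by the factor (2*0.96 - 1);
# dividing by the same factor recovers the true value.
z_true = 0.92
z_measured = z_true * (2 * 0.96 - 1)
assert abs(bias_correct(z_measured, 0.96) - z_true) < 1e-12
```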

VQE Validation Experiment

To validate that trajectory weighting works in a realistic application, we implemented a full VQE energy landscape measurement using Zeno gates. VQE is a leading candidate algorithm for near-term quantum computers, using quantum circuits to estimate the ground state energy of molecular Hamiltonians. The algorithm requires measuring expectation values of Pauli operators at many different variational parameter settings. We implemented Ry(θ) rotations using Zeno dragging for θ ranging from 0 to π in nine steps, then measured ⟨Z⟩ to trace out the energy landscape ⟨Z⟩ = cos(θ).

Results (True value: ⟨Z⟩ = cos(θ))

θ/π True ⟨Z⟩ Standard Hard PS Calibrated
0.000 +1.000 +0.809 +0.939 +1.000
0.125 +0.924 +0.757 +0.842 +0.805
0.250 +0.707 +0.564 +0.611 +0.643
0.375 +0.383 +0.311 +0.330 +0.341
0.500 +0.000 +0.076 +0.137 +0.012
0.625 -0.383 -0.315 -0.333 -0.316
0.750 -0.707 -0.544 -0.580 -0.616
0.875 -0.924 -0.697 -0.761 -0.784
1.000 -1.000 -0.749 -0.784 -0.803

Root Mean Square Error (RMSE)

Method RMSE Improvement vs Standard
Standard 0.164
Hard PS 0.122 26% better
Calibrated 0.101 38% better

The calibrated soft post-selection method—using soft k≤1 thresholds with eigenstate-calibrated bias correction—achieves the lowest RMSE at 0.101. This represents a 38% improvement over standard gates (RMSE 0.164) and a 17% improvement over hard post-selection (RMSE 0.122). Equally importantly, the calibrated method uses 37% more shots than hard post-selection, extracting more information from the same quantum data. For VQE applications where accuracy of expectation values matters more than per-shot fidelity, trajectory weighting provides substantial improvement over both standard approaches and naive Zeno with hard post-selection.

Flip Position Analysis

The trajectory weighting analysis so far has treated all flips as equally damaging: a trajectory with one flip gets the same weight regardless of whether that flip occurred at the first measurement or the last. But physical intuition suggests that flip position should matter. A flip at the first measurement corrupts all subsequent evolution—the qubit starts the remaining N-1 measurements on the wrong side of the Bloch sphere. A flip at the last measurement only affects final readout—the qubit was on the correct trajectory for N-1 measurements and only erred at the end. If early flips are more damaging than late flips, position-aware weighting schemes might outperform position-blind schemes like exponential weighting.

To test this, we analyzed fidelity conditioned on flip position for the X-freeze circuit with N = 8 measurements.

Fidelity Impact by Flip Position (X-Freeze N=8)

Position Fidelity if Flip Fidelity if No Flip Impact
0 (early) 38.6% 96.1% +57.5%
1 43.8% 96.0% +52.3%
2 44.8% 95.7% +51.0%
3 46.9% 95.9% +48.9%
4 44.5% 95.6% +51.1%
5 47.0% 95.8% +48.8%
6 46.5% 95.4% +49.0%
7 (late) 49.1% 95.5% +46.4%

The data confirms the position-dependence hypothesis. Early flips (position 0) reduce fidelity by 57.5 percentage points relative to trajectories with no flip at that position; late flips (position 7) reduce fidelity by only 46.4 points. This 11-point asymmetry is substantial and reflects the error propagation structure of the Zeno protocol: an early flip corrupts the entire remaining trajectory (seven subsequent measurements on the wrong track), while a late flip corrupts at most the final readout. The intermediate positions show a roughly decreasing impact with position, though the trend is noisy rather than strictly monotonic (positions 3 through 6 fluctuate between 48.8 and 51.1 points).

This position dependence suggests that weighting schemes can be improved by penalizing early flips more heavily than late flips. We implemented position-aware weighting as w = prod(1/(1 + penalty(pos))) where penalty is larger for early positions and smaller for late positions. In practice, this position-aware scheme achieved results comparable to but not significantly better than simple exponential weighting, suggesting that the position information provides modest additional value beyond what flip count alone captures.
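A position-aware weight of the form w = prod(1/(1 + penalty(pos))) can be sketched as follows; the linear penalty schedule and the alpha parameter are illustrative choices, not the exact scheme used in the analysis:

```python
def position_aware_weight(flip_positions, n_meas, alpha=0.5):
    """w = prod(1 / (1 + penalty(pos))), penalizing early flips more.

    penalty(pos) = alpha * (n_meas - pos) / n_meas is an illustrative
    schedule: a flip at position 0 costs a penalty of alpha, a flip at the
    last position only about alpha / n_meas. alpha is a free tuning knob.
    """
    w = 1.0
    for pos in flip_positions:
        w /= 1.0 + alpha * (n_meas - pos) / n_meas
    return w

# An early flip is down-weighted more heavily than a late flip.
assert position_aware_weight([0], 8) < position_aware_weight([7], 8)
# A flip-free trajectory keeps full weight.
assert position_aware_weight([], 8) == 1.0
```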

Recommended Post-Selection Strategies

Based on the comprehensive experimental data presented in this section, we can now offer specific recommendations for how to analyze Zeno trajectory data depending on the application:

Application Recommended Strategy Rationale
Maximum data extraction Soft k≤2 90%+ yield, 96%+ fidelity, no tuning required
Position-sensitive Position-aware thresholding Rejects pos-0 multi-flip trajectories, keeps the rest
VQE/QAOA estimation Soft k≤2 Best balance of fidelity and yield (see W5 weakness test)
Quick benchmarking Soft k≤1 88% yield, 96.5% fidelity
Maximum fidelity Hard k=0 Highest per-shot fidelity (but wasteful)

Hard post-selection is never optimal. In all eleven circuits tested across multiple experiments, at least one alternative strategy outperformed hard post-selection on effective yield while maintaining comparable fidelity.

Note on exponential weighting: Earlier versions of this document recommended w = exp(−n_flips) as the best general strategy. Weakness test W5 showed this is suboptimal: soft k≤2 thresholding achieves 90.1% effective yield vs. 74.7% for exponential weighting on the same data. The exponential function assigns weight 0.37 to one-flip trajectories that have 96.5% fidelity — penalizing them far more than their quality warrants. Soft thresholds that accept low-flip trajectories at full weight extract more signal. Exponential weighting remains useful for composition analysis (where it eliminates the composition penalty, as shown in the Gate Composition section) but is not recommended as the default strategy for single-gate analysis.


Gate Composition: Chaining Zeno Gates

All experiments so far have examined individual Zeno gates in isolation: a single Zeno drag, a single Zeno freeze, a single Zeno rotation. But real quantum algorithms require sequences of gates—ten, a hundred, a thousand operations chained together. A critical question for the practical utility of Zeno protocols is whether they can be composed: can multiple Zeno gates be chained in sequence while maintaining their fidelity advantage? If the post-selection penalty compounds multiplicatively—if a gate with 58% success rate, when chained with another 58% gate, yields only 34% joint success—then Zeno would be limited to isolated single-gate benchmarks and could never be used in real algorithms where circuits contain many gates.

Experimental Design

To test gate composition directly, we designed six circuits that all implement the same net rotation Ry(π/2) but through different means. By comparing these circuits, we can isolate the effect of Zeno composition from other factors like total rotation angle or circuit depth:

  1. Standard Ry(π/2) — A single standard unitary gate implementing the full rotation. This provides the baseline for what standard quantum computing achieves.

  2. Zeno Ry(π/2) — A single Zeno gate implementing the full rotation using N=8 intermediate measurements. This shows what Zeno achieves for a single gate.

  3. Zeno Ry(π/4) → Zeno Ry(π/4) — Two Zeno gates chained in sequence, each implementing half the rotation (π/4) with N=8 measurements each, for a total of 16 intermediate measurements. This directly tests composition.

  4. Standard Ry(π/4) → Standard Ry(π/4) — Two standard gates chained in sequence. This controls for any effect of breaking the rotation into two steps.

  5. Mixed: Zeno Ry(π/4) → Standard Ry(π/4) — A hybrid approach with one Zeno gate followed by one standard gate, testing whether Zeno and standard gates can be mixed.

  6. Depth-matched control — The same circuit structure as the composed Zeno (same rotations, same depth) but without the intermediate measurements. This isolates the effect of post-selection from circuit structure.

Results with Hard Post-Selection

Circuit Fidelity Success Rate Effective Yield
Standard Ry(π/2) 89.6% 100% 89.6%
Zeno Ry(π/2) 96.4% 57.6% 55.5%
Zeno+Zeno (π/4+π/4) 95.7% 40.0% 38.3%
Standard+Standard 90.9% 100% 90.9%
Mixed (Zeno+Standard) 95.7% 62.1% 59.4%
Depth-matched 50.3% 100% 50.3%

With hard post-selection, the composition penalty is severe. A single Zeno gate achieves a 57.6% success rate; two chained Zeno gates achieve only 40.0%, a 31% reduction. This is close to the prediction from multiplying independent success probabilities: if each gate succeeds with probability 0.576 and the gates were independent, joint success would be 0.576² ≈ 0.33; the observed 40.0% is slightly better than that, but the compounding is clearly multiplicative in character. At this rate, a 10-gate Zeno circuit would have success probability 0.58^10 ≈ 0.4%, and a 100-gate circuit would have success probability essentially zero. This makes Zeno appear completely impractical for real algorithms, which routinely require hundreds or thousands of gates.

The fidelity of the composed Zeno circuit remains high (95.7%, nearly matching the single Zeno gate's 96.4%), confirming that post-selection continues to work—the trajectories that survive are high quality. But the success rate collapse means that almost no trajectories survive. The effective yield of 38.3% for composed Zeno is less than half the 89.6% achieved by a simple standard gate, despite the higher fidelity.

Results with Trajectory Weighting

The grim picture above assumes hard post-selection. Applying trajectory weighting to the same raw data reveals a completely different story—one where Zeno composition is not only feasible but actually advantageous.

Single Zeno Ry(π/2), N=8:

Weighting Fidelity Utilization Effective Yield
Hard 96.4% 57.6% 55.5%
Soft k≤1 94.5% 78.7% 74.4%
Soft k≤2 92.5% 83.9% 77.7%
Exponential 95.0% 100% 95.0%

Composed Zeno Ry(π/4)+Ry(π/4), N=16:

Weighting Fidelity Utilization Effective Yield
Hard 95.7% 40.0% 38.3%
Soft k≤1 94.5% 67.8% 64.1%
Soft k≤2 93.7% 77.4% 72.5%
Exponential 94.8% 100% 94.8%

Composition Penalty by Analysis Method

Metric Hard Post-Selection Exponential Weighting
Single Zeno yield 55.5% 95.0%
Composed Zeno yield 38.3% 94.8%
Composition penalty −31.1% −0.3%

The composition penalty vanishes under trajectory weighting. With hard post-selection, composing two Zeno gates costs 31% of effective yield. With exponential weighting, the cost is only 0.3%—within statistical noise of zero. Trajectory weighting gracefully handles the increased flip counts in longer circuits. A single 8-measurement Zeno gate might produce trajectories with 0, 1, 2, or more flips. A composed 16-measurement Zeno sequence will produce trajectories with roughly twice as many flips on average. Hard post-selection rejects all non-zero-flip trajectories, and with 16 measurements there are many more ways to accumulate a flip, so the success rate crashes. Exponential weighting, however, simply assigns lower weights to higher-flip trajectories and extracts whatever signal they contain. With twice as many measurements, the typical trajectory might have one flip instead of zero, but one-flip trajectories still carry 94%+ of the signal, and exponential weighting captures this.

The practical consequence is transformative: with trajectory weighting, composed Zeno gates (94.8% effective yield) outperform standard gate sequences (90.9% effective yield). The Zeno approach—which looked hopelessly impractical under hard post-selection—becomes the superior choice when analyzed properly.

Scaling to Three Gates

To verify that the composition result extends beyond two gates, we implemented three-gate chains, each implementing net rotation Ry(π/2) using gates of equal angle.

Gates Standard Yield Zeno Hard Yield Zeno Exp Yield
1 90.7% 54.1% 94.7%
2 91.1% 38.0% 94.4%
3 91.2% 25.7% 94.6%

The pattern holds: hard post-selection success rates compound multiplicatively (54% → 38% → 26%), making Zeno look increasingly hopeless as circuit depth grows. But exponential weighting maintains approximately 94.5% effective yield regardless of gate count, consistently outperforming standard gates at every chain length.

This result is perhaps the most important finding in the dataset. It establishes that Zeno protocols, when combined with proper trajectory analysis, can scale to multi-gate circuits without composition penalty. The limitation of Zeno is not composition but rather the specific domains where it applies (single-qubit operations on separable states, as established in the entanglement studies). Within its domain of applicability, Zeno with trajectory weighting provides a genuine advantage over standard gates that persists as circuits grow deeper.


Causal Position-Weighted Model

Analysis of 45,956 trajectories reveals that flip position matters more than flip count. A flip at measurement position 0 causes 51.6% fidelity degradation, while a flip at position 15 causes only 17.4% degradation. The correlation between position and impact is r = 0.98, suggesting a simple causal model: early flips corrupt all downstream evolution, while late flips only affect the final readout.

Position Impact on Fidelity

Position Fidelity With Flip Fidelity Without Impact N samples
0 30.2% 81.8% +51.6% 9327
1 34.2% 80.3% +46.1% 8923
2 36.8% 79.1% +42.3% 8427
4 38.6% 76.2% +37.6% 5988
8 40.4% 72.9% +32.5% 2210
15 54.2% 71.6% +17.4% 766

Causal Weighting Derivation

If a flip at position p corrupts the remaining N - p measurement steps, and each corrupted step contributes an independent error rate e, then the fidelity given a flip at position p is:

F(p) = F_0 × (1-e)^(N-p)

For k flips at positions p_1, ..., p_k, using the small-e approximation (1-e)^m ≈ exp(-e × m):

F(p_1,...,p_k) = F_0 × exp(-e × Σᵢ(N - pᵢ))

The optimal weight is therefore:

w(trajectory) = exp(-e × Σᵢ(N - pᵢ))

This is position-weighted exponential, not flip-count exponential. The standard exp(-n) weighting treats all flips equally; the causal formula penalizes early flips more heavily.
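The causal weight is a one-line function of the flip positions. A minimal sketch, with an assumed per-step error rate e:

```python
import math

def causal_weight(flip_positions, n_meas, e):
    """Position-weighted exponential: w = exp(-e * sum_i (N - p_i)).

    e is the per-step error rate (assumed here; in practice it would be
    fit to calibration data). A flip-free trajectory keeps weight 1.
    """
    return math.exp(-e * sum(n_meas - p for p in flip_positions))

N, e = 16, 0.05
# A flip at position 0 corrupts all 16 remaining steps; a flip at position 15
# corrupts only the final step, so it is penalized far less.
assert causal_weight([0], N, e) < causal_weight([15], N, e)
# Flip-count weighting exp(-n) would have treated the two flips identically.
assert abs(causal_weight([0], N, e) - math.exp(-0.8)) < 1e-12
```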

Empirical Validation

Training ML models (Logistic Regression, Random Forest, Gradient Boosting) on trajectory features confirms that position is the dominant predictor:

Feature Importance
pos_0 (first flip) 27.0%
flip_rate 15.1%
early_flips 14.3%
n_flips 12.3%
late_flips 1.3%

Practical Result: Hardware Noise Dominates

Despite the theoretical correctness of causal weighting, on current NISQ hardware the practical strategy collapses to simple soft thresholding; the causal formula achieves its higher effective yield only by degenerating toward accepting nearly every shot:

| Strategy | Fidelity | Utilization | Effective yield |
|----------|----------|-------------|-----------------|
| Hard (k=0) | 81.3% | 53.6% | 43.6% |
| Exp(−n) | 80.2% | 63.0% | 50.5% |
| Soft (k≤2) | 78.3% | 82.9% | 64.9% |
| Soft (k≤3) | 76.6% | 86.3% | 66.1% |
| Causal (optimal e) | 70.3% | 99.1% | 69.7% |

The causal formula with optimized e approaches "accept everything" because the fidelity floor on ibm_torino is approximately 70%. When even high-flip trajectories retain 70% fidelity, discarding them costs more in utilization than it gains in fidelity.
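Effective yield is simply per-shot fidelity times the fraction of shots retained; the tradeoff in the table can be reproduced directly (a sketch using the tabulated values):

```python
def effective_yield(fidelity, utilization):
    """Effective yield = per-shot fidelity x fraction of shots retained."""
    return fidelity * utilization

# Rows from the strategy table above (fidelity, utilization):
strategies = {
    "hard_k0": (0.813, 0.536),
    "exp_n": (0.802, 0.630),
    "soft_k2": (0.783, 0.829),
    "soft_k3": (0.766, 0.863),
    "causal": (0.703, 0.991),
}

# "causal" tops raw effective yield only because it accepts nearly everything.
best = max(strategies, key=lambda s: effective_yield(*strategies[s]))
```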

This finding has implications for future hardware: the causal weighting formula will become useful when hardware fidelity floors exceed 85-90%. On current NISQ devices, the noise is too high for sophisticated weighting to help.

Model Files

The models/ directory contains:

  • train_trajectory_model.py — Script to train trajectory quality predictors
  • train_position_aware_model.py — Script for position-aware model training
  • trajectory_model_results.json — Results from basic model comparison
  • position_aware_model_results.json — Results including causal weighting analysis

Dataset Contents

Core Zeno Dragging Studies

zeno_drag.json — The foundational experiment characterizing Zeno dragging as a function of N, the number of intermediate measurements. This file contains results for N = 2, 4, 8, 16, and 32, with 2048 shots per condition. Data includes raw bitstrings (the full trajectory record for each shot), success rates, fidelities conditional on success, theoretical success rate predictions, and survival curves showing how the surviving population decreases through each measurement step. This is the primary dataset for understanding the N-dependence of Zeno dragging.

zeno_controls.json — The four control experiments that validate the Zeno mechanism: freeze (repeated Z-basis measurement with no rotation), forward drag (|0⟩ → |1⟩), reverse drag (|1⟩ → |0⟩), and random (random measurement bases). Each control uses N = 8 measurements with 2048 shots. The random control is particularly important as a null hypothesis test, demonstrating that structured basis rotation is essential—random measurements destroy rather than transfer quantum states.

zeno_not_gate.json — A focused comparison of Zeno-implemented X gate against the standard unitary X gate, with additional controls. Contains 5 circuits with 4096 shots each, providing high statistical precision on the single most important gate.

Gate Implementation and Error Mitigation

zeno_gates_corrected.json — Systematic comparison of Zeno against standard gates across six rotation angles: identity (Ry(0)), Ry(π/8), Ry(π/4), Ry(π/2), Ry(3π/4), and X (Ry(π)). For each angle, three circuits are tested: standard unitary implementation, Zeno implementation with N=8, and depth-matched control (same structure as Zeno but without measurements). 18 circuits total with 4096 shots each. This dataset enables clean separation of the post-selection effect from circuit structure effects.

zeno_vs_mitigation.json — Head-to-head comparison of Zeno against standard error mitigation techniques: no mitigation, dynamical decoupling with XX and XY4 pulse sequences, gate twirling, and combined DD+twirling. Tests three circuits: X gate (|0⟩ → |1⟩), |+⟩ state preservation, and Bell state creation. 6 job submissions with 4096 shots each. This dataset addresses the practical question of whether Zeno is competitive with established error mitigation approaches.

Two-Qubit and Entanglement Studies

zeno_cnot.json — Investigation of three approaches to Zeno-based CNOT implementation: adaptive (controlled Zeno drag), X-freeze (Zeno protection of control qubit with standard CNOT), and Bell (joint Zeno measurements for entanglement). All approaches are compared against standard CNOT. 20 circuits total with 2048 shots each. The results establish that Zeno protocols do not extend advantageously to two-qubit gates on current hardware.

zeno_entanglement_trajectories.json — Comprehensive study of Zeno applied to entangled states: Bell pairs, 3-qubit GHZ, and 4-qubit GHZ. Also includes qubit quality correlation analysis (how Zeno improvement varies with qubit T1 time) and detailed trajectory error analysis (fidelity as a function of flip count and flip position). 34 circuits with 4096 shots each. This dataset establishes the fundamental incompatibility between local Zeno measurements and entanglement preservation.

Parameter Space Exploration

zeno_megabatch.json — Comprehensive parameter sweep exploring multiple aspects of Zeno physics: schedule optimization (different N values), cross-qubit correlations (20 parallel Zeno chains to measure error independence), feedback protocols (classical feedback based on trajectory outcomes), parity-check stabilization (using parity measurements for error detection). 42 circuits with 2048 shots each. This is the "kitchen sink" experiment covering multiple research directions.

zeno_strength_landscape.json — Fine-grained exploration of the measurement strength × number of measurements parameter space. Measurement strength sweeps from 0.05 (nearly unitary) to 1.0 (fully projective) in five steps, crossed with N = 4, 8, and 12 measurements. 45 circuits with 4096 shots each. The results demonstrate that effective yield is approximately constant across this parameter space, reflecting the physics of total measurement dose.

Trajectory Analysis

trajectory_estimation.json — Raw trajectory data designed for offline analysis of post-selection strategies. Includes X-freeze and drag circuits at N = 4, 8, 12, and 16, plus partial rotation circuits at θ = π/4, π/2, and 3π/4. 11 circuits with 4096 shots each, all with full trajectory recording. This dataset supports research on trajectory weighting schemes, flip position analysis, and bias correction.

vqe_trajectory_validation.json — Full VQE energy landscape validation with Zeno and standard circuits. Implements Ry(θ) rotations for θ from 0 to π in nine steps, measuring ⟨Z⟩ at each point. Enables comparison of standard gates, hard post-selection Zeno, and calibrated soft post-selection Zeno for expectation value estimation. 18 circuits with 2048 shots each.

trajectory_weight_analysis.json — Offline analysis results comparing weighting strategies. Contains empirical fidelity measurements by flip count, strategy comparison (hard, soft k≤1, soft k≤2, inverse, exponential, empirical), flip position impact analysis, and optimal threshold determination. Derived from trajectory_estimation.json through post-processing.

Gate Composition

zeno_composition/ — Contains three files for the gate composition experiment. standard_circuits.json has standard gate circuits: single Ry(π/2), composed Ry(π/4)+Ry(π/4), and depth-matched control. zeno_circuits.json has the corresponding Zeno circuits with full trajectory recording. trajectory_weighting_analysis.json contains the composition analysis showing how trajectory weighting eliminates the composition penalty. 4096 shots per circuit.

zeno_scaling/ — Extends composition to three gates. Contains standard and Zeno versions of 1-gate, 2-gate, and 3-gate chains, each implementing net Ry(π/2). Demonstrates that exponential weighting maintains ~94.5% effective yield regardless of gate count. 4096 shots per circuit.

Measurement Duration Analysis

measurement_duration/ — Motivated by a question from Dr. Philippe Lewalle (MIT Lincoln Laboratory) on whether measurement duration overhead negates Zeno's fidelity advantage in practice. On ibm_torino, measurements take 1.56 μs versus 32 ns for single-qubit gates — a 49× overhead per measurement step. A Zeno circuit with N=8 measurements accumulates 12.5 μs of measurement time, roughly 10.8% of the qubit's T1 (115.2 μs at time of experiment).

To isolate the decoherence cost of this duration overhead, we introduce a delay-matched control: a standard Ry(θ)·Ry(-θ) circuit with an inserted idle delay equal to the total measurement time of the corresponding Zeno circuit. This ensures the delay-matched circuit experiences the same wall-clock decoherence as Zeno but without the measurement-based error suppression. All circuits target |0⟩ as the ideal output, following the same protocol as zeno_gates_corrected.

The experiment tests 6 rotation angles (I, Ry(π/8), Ry(π/4), Ry(π/2), Ry(3π/4), X) across 7 values of N (2, 4, 8, 12, 16, 24, 32), with standard, Zeno, delay-matched, and depth-matched variants. 99 circuits total, 4096 shots each.

Results for Ry(π/2):

| N | Time (μs) | T1 fraction | Zeno (hard PS) | Delay-matched | Zeno − Delay | z-score | p-value |
|---|-----------|-------------|----------------|---------------|--------------|---------|---------|
| 2 | 3.12 | 2.7% | 94.1% | 89.0% | +5.1pp | 7.6 | <10⁻¹³ |
| 4 | 6.24 | 5.4% | 95.0% | 87.0% | +8.0pp | 11.8 | <10⁻³¹ |
| 8 | 12.48 | 10.8% | 95.6% | 83.3% | +12.4pp | 17.1 | <10⁻⁶⁵ |
| 12 | 18.72 | 16.2% | 96.2% | 78.1% | +18.1pp | 23.2 | <10⁻¹¹⁸ |
| 16 | 24.96 | 21.7% | 95.6% | 71.6% | +24.0pp | 27.5 | <10⁻¹⁶⁶ |
| 24 | 37.44 | 32.5% | 96.6% | 57.8% | +38.8pp | 40.9 | <10⁻³⁶⁶ |
| 32 | 49.92 | 43.3% | 95.5% | 45.9% | +49.6pp | 45.6 | <10⁻⁴⁵⁴ |

Zeno fidelity remains flat at 95.5% ± 0.7% across all N values (linear regression slope not significantly different from zero, p = 0.17). The delay-matched circuits decay as expected from T1 decoherence. The fidelity gap between Zeno and delay-matched widens with increasing circuit time, reaching 49.6 percentage points at N=32 where 43% of T1 has elapsed. The answer is that measurement duration is not an obstacle — the longer the circuit runs, the more Zeno outperforms, because the measurements actively suppress the decoherence that the time overhead causes.

99 circuits, 4096 shots each, job ID d6jh8akgmsgc73bv2uu0, 113s QPU time. March 3, 2026.

Zeno High-N Sweep

zeno_high_n_sweep/ — Extends the measurement duration experiment to circuit times well beyond the qubit's natural coherence lifetime. Tests N = 8, 16, 32, 48, 64, 96, 128, 192, 256 intermediate measurements for the X gate (θ = π), corresponding to total circuit times of 12.5μs to 399μs. Qubit 0 had T1 = 115.2μs at time of experiment, so N=256 corresponds to 3.47× T1. Nineteen circuits were run at 4096 shots each on ibm_torino (job d6jiid060irc7394imlg, 32s QPU time, March 3 2026).

At these N values, hard post-selection is not viable: the probability of all intermediate measurements returning 0 is (1−0.116)^256 ≈ 10⁻¹⁴, and zero shots pass at N ≥ 192. Four trajectory weighting schemes are compared: exp(−k), calibrated exp(−αk) tuned to the measurement error rate, an excess-flip scheme that penalizes only flips beyond the expected measurement error count, and a binomial likelihood ratio. The excess-flip scheme is reported below as the most conservative estimate that accounts for the high expected flip count from measurement error at large N.
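The four schemes can be sketched as weight functions of the flip count k (illustrative forms; the exact implementations in the analysis scripts may differ):

```python
import math

def w_exp(k):
    """Plain flip-count exponential exp(-k)."""
    return math.exp(-k)

def w_calibrated(k, alpha):
    """Calibrated exponential exp(-alpha*k), with alpha tuned to the readout error."""
    return math.exp(-alpha * k)

def w_excess(k, n_meas, p_err):
    """Excess-flip: penalize only flips beyond the count expected from readout error."""
    expected = n_meas * p_err  # e.g. 256 * 0.116 ≈ 29.7 expected flips at N = 256
    return math.exp(-max(0.0, k - expected))

def w_binom_lr(k, n_meas, p_err, p_bad=0.5):
    """Binomial likelihood ratio: a clean trajectory with flips ~ Binom(n, p_err)
    versus a fully randomized one (p_bad = 0.5)."""
    good = math.comb(n_meas, k) * p_err**k * (1 - p_err) ** (n_meas - k)
    bad = math.comb(n_meas, k) * p_bad**k * (1 - p_bad) ** (n_meas - k)
    return good / (good + bad)
```

At N = 256 with the 11.6% readout error on qubit 0, `w_excess` leaves trajectories with up to ~30 flips entirely unpenalized, which is what keeps utilization high where hard post-selection retains zero shots.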

Results (X gate, 4096 shots per circuit):

| N | Time (μs) | ×T1 | Zeno (excess-flip) | Delay-matched | Gap |
|---|-----------|-----|--------------------|---------------|-----|
| 8 | 12.5 | 0.11 | 88.1% | 74.6% | +13.5pp |
| 32 | 49.9 | 0.43 | 91.6% | 55.7% | +35.9pp |
| 64 | 99.8 | 0.87 | 90.2% | 40.6% | +49.6pp |
| 128 | 199.7 | 1.73 | 86.5% | 25.0% | +61.5pp |
| 256 | 399.4 | 3.47 | 85.1% | 18.3% | +66.8pp |

At 3.47× T1 the delay-matched control reads 18.3%, approaching the thermal floor. Zeno excess-flip weighted fidelity is 85.1%. The gap between Zeno and delay-matched grows monotonically with circuit time. Zeno fidelity declines by approximately 3 percentage points from N=8 to N=256, while the delay-matched control declines by 56 percentage points over the same range.

Zeno Extended N Sweep

zeno_extended_n_sweep/ — Continues the high-N sweep to N = 512 and N = 1024, reaching circuit times of 799μs (6.93× T1) and 1597μs (13.86× T1). At N=1024 the qubit has thermalized nearly fourteen times over. The same four trajectory weighting schemes from the high-N sweep are applied, with the excess-flip scheme reported as the most conservative defensible estimate. Seven circuits were run at 4096 shots each on ibm_torino (job d6jjbc4mmeis739riovg, 32s QPU time, March 3 2026).

Results (X gate, 4096 shots per circuit):

| N | Time (μs) | ×T1 | Zeno (excess-flip) | Delay-matched | Gap |
|---|-----------|-----|--------------------|---------------|-----|
| 256 | 399.4 | 3.47 | 84.5% | 19.6% | +64.9pp |
| 512 | 798.7 | 6.93 | 82.7% | 17.1% | +65.5pp |
| 1024 | 1597.4 | 13.86 | 77.0% | 16.8% | +60.2pp |

The delay-matched control reaches the thermal floor (~17%) by 3× T1 and remains there through 14× T1. Zeno excess-flip weighted fidelity declines at approximately 2.5 percentage points per doubling of N, consistent with logarithmic degradation. At 13.86× T1, the Zeno-protected qubit maintains 77% fidelity while the delay-matched control reads 16.8%.

Zeno Fidelity Decline Diagnostic

zeno_diagnostic/ — Identifies the source of fidelity decline observed at high N. The X gate Zeno protocol requires Ry rotation gates between each measurement; at N=1024 this amounts to 2048 rotations decomposed into ~5000 native gates. To determine whether the decline from 92% (N=32) to 77% (N=1024) originates from accumulated gate errors or from the measurement mechanism itself, we compare identity Zeno (repeated measurement with no rotations) against X Zeno (measurement with Ry rotations) at matched N values. Fifteen circuits were run at 4096 shots each on ibm_torino (job d6jjtl860irc7394k8rg, 55s QPU time, March 3 2026).

Results (excess-flip weighting, 4096 shots per circuit):

| N | ×T1 | Identity Zeno | X Zeno | Delay-matched | I − X |
|---|-----|---------------|--------|---------------|-------|
| 32 | 0.43 | 95.4% | 92.5% | 89.0% | +2.9pp |
| 128 | 1.73 | 94.4% | 91.1% | 86.6% | +3.2pp |
| 256 | 3.47 | 93.7% | 88.6% | 87.6% | +5.2pp |
| 512 | 6.93 | 92.6% | 82.8% | 88.8% | +9.7pp |
| 1024 | 13.86 | 91.2% | 82.3% | 87.3% | +8.9pp |

Identity Zeno declines by 4.2 percentage points from N=32 to N=1024. X Zeno declines by 10.2 percentage points over the same range. The measurement mechanism accounts for roughly 40% of the total decline; accumulated Ry gate errors account for the remaining 60%. The Zeno projection mechanism itself remains effective at 91.2% fidelity after 1024 consecutive measurements spanning 14× T1.

Zeno Interval — Inter-Measurement Gap Sweep

zeno_interval/ — Maps the phase boundary between Zeno-protected coherence and unprotected thermalization by varying the idle time between measurements. All previous experiments used back-to-back measurements (1.56μs apart). Here we fix N=32 and insert deliberate delays of 0, 5, 10, 20, 50, and 100μs between each measurement, increasing the inter-measurement interval from 1.6μs (1.4% of T1) to 101.6μs (88% of T1). Both identity Zeno (no rotations) and X Zeno (with Ry rotations) are tested alongside delay-matched controls at each gap value. Eighteen circuits, 4096 shots each, job d6jk2n4gmsgc73bv64v0, 95s QPU time, March 3 2026.

Results (excess-flip weighting, X gate, N=32):

| Gap (μs) | Interval (μs) | Interval/T1 | X Zeno | I Zeno | Delay | X − Delay |
|----------|---------------|-------------|--------|--------|-------|-----------|
| 0 | 1.6 | 0.014 | 92.9% | 95.4% | 89.3% | +3.7pp |
| 5 | 6.6 | 0.057 | 83.1% | 94.8% | 86.2% | −3.2pp |
| 10 | 11.6 | 0.100 | 76.7% | 93.4% | 85.7% | −9.0pp |
| 20 | 21.6 | 0.187 | 72.0% | 92.5% | 85.3% | −13.3pp |
| 50 | 51.6 | 0.447 | 70.1% | 91.0% | 85.3% | −15.2pp |
| 100 | 101.6 | 0.881 | 50.9% | 89.4% | 85.2% | −34.4pp |

The X Zeno protocol loses its advantage over delay-matched controls at just 5μs of inserted gap. The Ry rotations between measurements require coherence to function correctly; when the qubit partially dephases during the gap, the next rotation acts on a decohered state and accumulates errors. By gap=100μs (88% of T1), X Zeno has collapsed to 50.9%.

Identity Zeno tells a different story. With no rotations to corrupt, the pure measurement-based freeze declines only 6 percentage points across the entire sweep — from 95.4% at gap=0 to 89.4% at gap=100μs. Even when the inter-measurement interval approaches T1, the projective measurement still partially resets the decoherence clock. The measurement mechanism is robust; the gate operations between measurements are the fragile component.

Weakness Tests

tests/weakness_tests.json — Systematic stress-testing of six identified methodological weaknesses in the dataset's claims. Three tests run offline against existing data (zero QPU cost); three run new hardware experiments on ibm_torino (51 circuits, 4096 shots each, job d6jkidm33pjc73dm3upg, 56s QPU, March 3 2026). The test script is at tests/test_weaknesses.py.

W1: Single-qubit generalization. The original gate comparison experiments all used qubit 0, which has an unusually high measurement error rate (11.6%) compared to the chip median. To test whether the Zeno advantage generalizes, we ran identical Zeno vs. standard comparisons on 5 qubits spanning the full quality range: Q0 (T1=115μs, meas_err=11.6%), Q37 (T1=60μs, meas_err=1.0%), Q85 (T1=3.6μs, meas_err=4.6%), Q95 (T1=174μs, meas_err=2.7%), and Q131 (T1=333μs, meas_err=1.8%).

| Qubit | T1 (μs) | Meas err | Std I | Zeno I | Std X | Zeno X | Zeno−Std I | Zeno−Std X |
|-------|---------|----------|-------|--------|-------|--------|------------|------------|
| Q0 | 115.2 | 11.6% | 90.2% | 94.7% | 89.8% | 88.6% | +4.5pp | −1.2pp |
| Q37 | 59.9 | 1.0% | 100.0% | 100.0% | 100.0% | 97.6% | −0.0pp | −2.4pp |
| Q85 | 3.6 | 4.6% | 98.2% | 98.9% | 98.6% | 96.4% | +0.7pp | −2.2pp |
| Q95 | 174.3 | 2.7% | 99.4% | 99.8% | 98.9% | 96.7% | +0.5pp | −2.1pp |
| Q131 | 332.5 | 1.8% | 99.4% | 99.7% | 99.5% | 96.9% | +0.2pp | −2.6pp |

Result: FAIL. Zeno improves identity gate fidelity on all qubits (+0.2 to +4.5pp), but hurts X gate fidelity on every qubit tested (−1.2 to −2.6pp). The original dataset's Q0 results flatter Zeno because Q0 has the highest measurement error on the chip — more errors to filter means more post-selection benefit. On low-error qubits where standard gates already achieve 99–100% fidelity, Zeno's overhead (extra Ry gates, mid-circuit measurements) introduces more errors than post-selection removes. The Zeno gate improvement claim is qubit-dependent and should not be generalized from Q0 alone.

W2: Measurement error independence. The excess-flip weighting scheme assumes measurement errors are independent Bernoulli events. We tested this assumption against identity Zeno data (N=32, 4096 shots) where there are no Ry rotations — any observed flips are purely from measurement backaction.

  • Transition autocorrelation: Lag-1 autocorrelation of transition events (0→1 or 1→0) is +0.41 (t=53.8, p≈10⁻¹⁷⁸). Transitions cluster rather than occurring independently.
  • Binomial goodness-of-fit: Chi-squared = 15,323 (p≈0). Observed 958 zero-flip shots vs. 75 expected under Binomial(32, 0.116). Measurement backaction creates long runs of correlated outcomes.
  • Monotonicity: Fidelity decreases monotonically with flip count through k=7 (96.9% → 36.4%), but 4 violations occur at k≥8 where bin sizes are <10 shots. The monotonic regime (k=0 through k=7, covering 98.3% of all shots) validates flip-count-based weighting for practical purposes despite the independence violation.

Result: FAIL on statistical assumptions, PASS on practical utility. The Binomial independence model is wrong — backaction creates correlated flip runs. But flip count remains a monotonically decreasing predictor of fidelity across the range that matters (0–7 flips, 98% of data). The excess-flip weighting formula works in practice even though its theoretical justification is incorrect.
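The two test statistics are simple to reproduce on any trajectory set (a sketch; `lag1_autocorr` and `binom_chi2` are illustrative helpers, not the dataset's own test code):

```python
import math
import numpy as np

def lag1_autocorr(events):
    """Lag-1 autocorrelation of a binary event sequence (1 = transition)."""
    x = np.asarray(events, dtype=float)
    return np.corrcoef(x[:-1], x[1:])[0, 1]

def binom_chi2(flip_counts, n_meas, p_err):
    """Chi-squared of the observed flip-count histogram vs Binomial(n, p)."""
    counts = np.asarray(flip_counts)
    obs = np.bincount(counts, minlength=n_meas + 1)
    probs = np.array([math.comb(n_meas, k) * p_err**k * (1 - p_err) ** (n_meas - k)
                      for k in range(n_meas + 1)])
    exp = probs * counts.size
    return float(((obs - exp) ** 2 / exp).sum())

# Duplicated (clustered) events raise the lag-1 autocorrelation well above zero,
# which is the signature the W2 test detected in the hardware transitions.
clustered = np.repeat([1, 0, 0, 1, 0, 1, 0, 0, 1, 0], 3)
```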

W3: Zero-noise extrapolation comparison. The original error mitigation comparison tested only dynamical decoupling and gate twirling. We added ZNE (zero-noise extrapolation) via unitary folding at 1×, 3×, 5×, and 7× noise levels, with Richardson extrapolation to zero noise.

| Gate | Standard | ZNE (linear) | ZNE (quadratic) | Zeno (excess-flip) | Zeno (hard PS) |
|------|----------|--------------|-----------------|--------------------|----------------|
| I | 89.8% | 89.9% | 89.6% | 94.7% | 95.6% |
| X | 89.3% | 88.9% | 88.5% | 88.6% | 94.9% |

Result: MIXED. ZNE provides negligible improvement on this hardware (identity: +0.1pp, X: −0.4pp over standard). The noise model may be too non-Markovian for Richardson extrapolation to help. Zeno hard PS beats ZNE on per-shot fidelity for both gates. On effective yield at 100% utilization, ZNE (89.9%) and Zeno excess-flip (88.6%) are comparable for X gate. Zeno wins on identity (94.7% vs 89.9%). Neither method dominates the other across all metrics.

W4: VQE fidelity constancy. The VQE bias correction formula z_corrected = z_measured / (2f − 1) assumes constant fidelity f across all rotation angles. We extracted per-theta hard-PS fidelity from the VQE data:

| θ/π | Hard-PS fidelity |
|------|------------------|
| 0.000 | 100.0% |
| 0.125 | 89.5% |
| 0.250 | 82.8% |
| 0.375 | 67.9% |
| 0.500 | 55.6% |
| 0.625 | 36.5% |
| 0.750 | 19.6% |
| 0.875 | 4.8% |
| 1.000 | 8.0% |

Result: FAIL. Fidelity ranges from 4.8% to 100% — a 95pp spread. The constant-fidelity assumption is catastrophically wrong. The VQE correction formula divides by (2f−1), and when f drops below 50% the correction factor flips sign. Per-theta calibration reduces RMSE from 18.99 to 1.25 — a 93.4% improvement. The published VQE RMSE numbers (0.101 for "calibrated") used eigenstate calibration which partially mitigates this, but the bias correction discussion in the README should be updated to note that per-theta fidelity variation is extreme.
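The correction formula and its failure mode below f = 50% can be seen in a few lines (a sketch; the helper name is illustrative):

```python
def correct_z(z_measured, f):
    """VQE bias correction <Z>_corrected = <Z>_measured / (2f - 1).

    Assumes a symmetric readout/depolarizing error with constant fidelity f.
    The factor diverges at f = 0.5 and flips sign below it, which is why the
    extreme per-theta fidelity spread breaks the constant-f assumption.
    """
    denom = 2.0 * f - 1.0
    if abs(denom) < 1e-9:
        raise ValueError("f ≈ 50%: correction undefined")
    return z_measured / denom

# At theta = 0.250*pi (f = 82.8%) the correction mildly inflates <Z>;
# at theta = 0.625*pi (f = 36.5%) the same formula flips the sign of the estimate.
z_ok = correct_z(0.5, 0.828)
z_flipped = correct_z(0.5, 0.365)
```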

W5: ML model vs. simple heuristics. The README recommends exponential weighting w = exp(−n_flips) as the best strategy. We compared 8 strategies on x_freeze_n8 trajectory data (4096 trajectories):

| Strategy | Fidelity | Utilization | Effective yield |
|----------|----------|-------------|-----------------|
| Hard (k=0) | 96.8% | 68.8% | 66.6% |
| Soft (k≤1) | 96.5% | 90.7% | 87.6% |
| Soft (k≤2) | 96.3% | 93.5% | 90.1% |
| exp(−k) | 96.6% | 77.3% | 74.7% |
| Excess-flip | 96.6% | 77.5% | 74.9% |
| Early-penalty | 96.6% | 79.2% | 76.5% |
| Pos0-aware | 96.4% | 92.0% | 88.7% |
| Uniform | 91.2% | 100.0% | 91.2% |

Result: exp(−k) is suboptimal. Soft k≤2 (90.1% yield) beats exp(−k) (74.7% yield) by 15pp. Position-aware thresholding (88.7%) also beats exp(−k). The exponential weighting penalizes 1-flip trajectories too aggressively — assigning them weight 0.37 when they have 96.5% fidelity. Soft thresholds that accept these trajectories at full weight extract far more signal. The recommended strategy table should be updated: soft k≤2 for most applications, not exponential weighting.

W6: Computation between measurements. The inter-measurement gap experiment showed X Zeno loses its advantage at 5μs of idle delay. But idle delay is not computation — real algorithms perform gate operations between measurements. We tested Zeno with 0, 1, 2, and 4 identity-equivalent gate pairs (Ry(ε)Ry(−ε), ε=0.01) interleaved between each measurement step:

| Work gates | Zeno (excess-flip) | Zeno (hard PS) | Depth-matched | Standard | Zeno − Standard |
|------------|--------------------|----------------|---------------|----------|-----------------|
| 0 | 93.1% | 96.1% | 88.8% | 89.0% | +4.1pp |
| 1 | 91.9% | 95.4% | 88.2% | 89.0% | +2.8pp |
| 2 | 91.4% | 93.8% | 88.1% | 89.0% | +2.4pp |
| 4 | 91.8% | 95.5% | 87.8% | 89.0% | +2.7pp |

Result: REFUTED. The Zeno advantage survives interleaved computation. With 4 gate pairs between each measurement (32 extra gates total), Zeno still beats standard by +2.7pp and beats depth-matched by +3.9pp. The advantage decays 33% from the zero-work baseline but does not vanish. The idle-gap experiment measured decoherence during idle time, not the effect of active computation. Gate-based work between measurements preserves coherence (the gates keep the qubit driven) while idle gaps allow T1/T2 decay.

Multi-Axis Validation

results/zeno_multiaxis/zeno_multiaxis.json — 48 circuits testing Zeno dragging across qubits, rotation angles, measurement counts, measurement schedules, and idle gap mitigation. Single Batch on ibm_torino, 4096 shots each, job d6jl6lkgmsgc73bv7ia0, 55s QPU, March 3 2026. Script: zeno_multiaxis.py.

Multi-qubit gate comparison (18 circuits). Repeats the gate comparison on Q37 (1.0% meas err), Q95 (2.7%), Q131 (1.8%), replacing Q0 (11.6%). Zeno improves identity gate fidelity on 2/3 qubits (+0.1 to +1.1pp). Zeno hurts X gate fidelity on all 3 (−4.1 to −6.9pp). Confirms W1: the gate comparison advantage depends on qubit measurement error rate.

Adaptive-N (8 circuits). N=4 and N=8 produce identical fidelity at θ=π/8 (99.5%). At θ=π, N=16 reaches 97.9% vs 93.2% at N=8. Optimal N scales with rotation angle.

Sinusoidal measurement schedule (6 circuits). θ_k = θ·sin²(kπ/2N) vs uniform θ_k = kθ/N, three angles, N=8. The sinusoidal schedule produces higher soft k≤2 fidelity at every angle: +1.2pp at π/2, +2.1pp at 3π/4, +4.7pp at π.

| θ | Uniform k≤2 | Sinusoidal k≤2 | Difference |
|---|-------------|----------------|------------|
| π/2 | 97.6% | 98.9% | +1.2pp |
| 3π/4 | 96.0% | 98.1% | +2.1pp |
| π | 92.0% | 96.7% | +4.7pp |

Zeno-DD hybrid (4 circuits). Hahn echo DD in 10μs inter-measurement gaps recovers +5.0pp fidelity over bare gaps for X Zeno (81.5% → 86.6%). Identity Zeno is unaffected by gaps (99.9%).

Two-qubit (2 circuits). Standard CNOT round-trip on Q37-Q52: 99.6%. Zeno CNOT with soft k≤2: 91.5%. Two-qubit Zeno overhead still exceeds the post-selection benefit.

State Tomography

results/zeno_tomography/zeno_tomography.json — Full single-qubit quantum state tomography of Zeno-dragged states at N = 32, 128, 256, 512, and 1024 intermediate measurements. For each N, the qubit is dragged from |0⟩ to |1⟩ via the standard Zeno protocol (no undo rotation), then measured in the X, Y, and Z Pauli bases using separate circuits. The three basis measurements yield expectation values ⟨X⟩, ⟨Y⟩, ⟨Z⟩, from which the single-qubit density matrix is reconstructed as ρ = (I + ⟨X⟩σ_X + ⟨Y⟩σ_Y + ⟨Z⟩σ_Z)/2. No post-selection or trajectory weighting is applied to the raw tomographic reconstruction. Delay-matched controls (Ry(π)|0⟩ + equivalent idle time) are measured identically at every N. Thirty circuits total (5 N values × 3 bases × 2 categories), 8192 shots each, submitted as a single Batch on ibm_torino qubit 0 (job d6jm6ko60irc7394n5u0, 217s QPU, March 3 2026). Script: zeno_tomography.py.
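The linear-inversion reconstruction is a few lines of NumPy. The sketch below uses ⟨Z⟩ = −0.394, the value implied by the reported 69.7% fidelity at N = 1024 (helper names are illustrative):

```python
import numpy as np

PAULI = {
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]], dtype=complex),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def reconstruct(ex, ey, ez):
    """Linear-inversion density matrix rho = (I + <X>X + <Y>Y + <Z>Z) / 2."""
    return (np.eye(2, dtype=complex)
            + ex * PAULI["X"] + ey * PAULI["Y"] + ez * PAULI["Z"]) / 2

def fidelity_with_one(rho):
    """F(|1>) = <1|rho|1>, which equals (1 - <Z>)/2 for a single qubit."""
    return float(rho[1, 1].real)

def purity(rho):
    """Tr(rho^2) = (1 + |r|^2)/2 for Bloch vector r."""
    return float(np.trace(rho @ rho).real)

# Zeno state at N = 1024: <X> ≈ 0, <Y> ≈ 0, <Z> ≈ -0.394
rho = reconstruct(0.0, 0.0, -0.394)
```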

Raw tomographic fidelity with |1⟩ (no selection, no weighting):

| N | ×T1 | Zeno F(\|1⟩) | Zeno purity | Delay F(\|1⟩) | Delay purity | Zeno − Delay |
|---|-----|--------------|-------------|---------------|--------------|--------------|
| 32 | 0.4 | 71.7% | 59.4% | 58.0% | 51.7% | +13.7pp |
| 128 | 1.7 | 70.5% | 58.4% | 26.3% | 61.4% | +44.2pp |
| 256 | 3.5 | 70.2% | 58.2% | 16.4% | 72.9% | +53.8pp |
| 512 | 6.9 | 70.1% | 58.2% | 12.8% | 78.1% | +57.3pp |
| 1024 | 13.9 | 69.7% | 57.8% | 13.2% | 77.4% | +56.6pp |

The Zeno-dragged state maintains 69.7% fidelity with |1⟩ at N = 1024 (13.9× T1) without any trajectory selection. The delay-matched control thermalizes to ~13% fidelity with |1⟩ by 3.5× T1 and remains there, consistent with relaxation toward the thermal state near |0⟩ (the measured ⟨Z⟩ ≈ +0.74 gives F(|1⟩) = (1 − ⟨Z⟩)/2 ≈ 13%), with readout asymmetry on qubit 0 setting the exact floor. The fidelity gap between Zeno and delay increases with N as the control decays while the Zeno-protected state remains approximately constant, declining by only 2 percentage points from N = 32 to N = 1024.

The Bloch vector components provide additional information. Across all N, the Zeno state has ⟨X⟩ ≈ 0 and ⟨Y⟩ ≈ 0 with ⟨Z⟩ between −0.39 and −0.43, indicating the state lies near the −Z axis (consistent with |1⟩) but with significant depolarization. The Bloch vector length |r| ≈ 0.40 at all N values indicates a substantially mixed state. By comparison, the delay-matched control's Bloch vector rotates toward +Z (the thermal equilibrium state |0⟩) with increasing N, reaching ⟨Z⟩ = +0.74 at N = 512.

Per-bin tomography. The same shots are binned by intermediate flip count, and density matrices are reconstructed independently for each bin. This provides the true state fidelity at each flip count without relying on the final Z-basis measurement alone. Representative results at N = 32:

| Flips | Shots | ⟨X⟩ | ⟨Y⟩ | ⟨Z⟩ | F(\|1⟩) | Purity |
|-------|-------|------|------|------|---------|--------|
| 0 | 1269 | +0.017 | +0.132 | −0.830 | 91.5% | 85.3% |
| 1 | 1872 | +0.051 | +0.109 | −0.829 | 91.5% | 85.1% |
| 2 | 1365 | +0.081 | +0.003 | −0.804 | 90.2% | 82.6% |
| 3 | 658 | +0.068 | +0.115 | −0.647 | 82.4% | 71.9% |
| 4 | 312 | −0.020 | +0.041 | −0.455 | 72.8% | 60.5% |
| 5 | 173 | −0.138 | −0.065 | −0.133 | 56.7% | 52.0% |
| ≥6 | — | — | — | ~0 to +0.3 | ~40% | ~52% |

Fidelity decreases monotonically with flip count through k = 5, consistent with the interpretation that intermediate flips indicate trajectory corruption. The k = 0 and k = 1 bins have nearly identical tomographic fidelity (91.5%), suggesting that a single flip at N = 32 is more likely a measurement readout error than a genuine state disturbance. By k = 5, the state has depolarized to near the maximally mixed state (purity ≈ 0.52, fidelity ≈ 0.57).

Weighting scheme ground-truth calibration. The per-bin tomographic fidelities provide a model-free reference for evaluating trajectory weighting schemes. For each scheme, the weighted average of per-bin fidelities is computed using the scheme's weight function and the empirical bin populations. Selected results:

| N | Scheme | Tomographic F | Utilization | Effective yield |
|---|--------|---------------|-------------|-----------------|
| 32 | Unweighted | 71.4% | 100% | 71.4% |
| 32 | Hard PS (k=0) | 91.5% | 15.5% | 14.1% |
| 32 | Soft k≤2 | 91.1% | 54.7% | 49.8% |
| 32 | Excess-flip | 88.8% | 66.3% | 58.9% |
| 128 | Unweighted | 70.7% | 100% | 70.7% |
| 128 | Hard PS (k=0) | 93.3% | 0.2% | 0.2% |
| 128 | Soft k≤2 | 92.6% | 3.0% | 2.8% |
| 128 | Excess-flip | 86.9% | 45.2% | 39.2% |
| 1024 | Unweighted | 70.0% | 100% | 70.0% |

At N = 32, the unweighted tomographic fidelity (71.4%) represents the assumption-free baseline. Hard post-selection achieves the highest per-shot fidelity (91.5%) but retains only 15.5% of shots. The excess-flip scheme provides the best tradeoff at moderate N (58.9% yield at N = 32). At N ≥ 128, hard post-selection and soft k≤2 retain too few shots for reliable estimation; the excess-flip and calibrated exponential schemes maintain higher utilization but with reduced fidelity.
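The scheme metrics can be recomputed from the N = 32 per-bin table (a sketch; the k ≥ 6 bin is omitted for lack of exact counts, so soft-k≤2 utilization comes out slightly above the quoted 54.7%):

```python
shots = [1269, 1872, 1365, 658, 312, 173]           # k = 0..5 bins, N = 32 table
fid = [0.915, 0.915, 0.902, 0.824, 0.728, 0.567]    # per-bin tomographic F(|1>)
total = 8192                                         # shots per basis circuit

def scheme_metrics(weights):
    """Weighted fidelity, utilization, and effective yield for per-bin weights."""
    wn = [w * n for w, n in zip(weights, shots)]
    f = sum(w * F for w, F in zip(wn, fid)) / sum(wn)
    u = sum(wn) / total
    return f, u, f * u

hard = scheme_metrics([1, 0, 0, 0, 0, 0])   # hard post-selection, k = 0 only
soft = scheme_metrics([1, 1, 1, 0, 0, 0])   # soft k <= 2
```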

At N = 1024, only the unweighted estimator retains meaningful utilization (100%, by definition). The per-bin fidelities at N = 1024 show no systematic dependence on flip count within the range where bins are populated (k ≈ 120–530), as the flip count distribution has shifted far from zero due to the 11.6% measurement error rate on qubit 0. The raw unweighted fidelity of 70.0% at 13.9× T1, compared to the delay-matched control at 13.2%, confirms that the Zeno mechanism preserves state information well beyond the qubit's natural coherence time. This result does not depend on any trajectory weighting or post-selection.


Data Format

Each JSON file follows a consistent schema designed for both human readability and programmatic access:

  • experiment: String identifier for the experiment type (e.g., "zeno_drag", "zeno_controls")
  • timestamp: ISO 8601 timestamp of job submission (e.g., "2026-01-24T22:48:13.288791Z")
  • backend: IBM Quantum backend name (e.g., "ibm_torino")
  • shots: Number of measurement shots per circuit (typically 2048 or 4096)
  • job_id: IBM Quantum job identifier enabling reproducibility and verification
  • results: Nested structure containing experimental results, circuit-specific data, raw bitstrings, and computed metrics
  • usage_seconds: Quantum processing time consumed (for resource tracking)

Raw bitstrings are included in all files to enable reanalysis with different post-selection or weighting schemes. Bitstring format follows Qiskit convention: the rightmost bit corresponds to the first classical register. For trajectory data, bitstrings encode the full sequence of intermediate measurements, enabling reconstruction of the trajectory and analysis of flip positions.
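A sketch of trajectory extraction under this convention (the helper is illustrative; here "flips" are transitions between consecutive intermediate results, as in the W2 analysis, and the exact register layout varies by experiment, so check each file's circuit metadata):

```python
def trajectory_flips(bitstring, n_meas):
    """Extract the intermediate-measurement trajectory from a Qiskit bitstring.

    Qiskit orders classical registers right-to-left: the rightmost bit is the
    first measurement, so reversing puts the trajectory in chronological order.
    """
    traj = bitstring[::-1][:n_meas]
    flips = [i for i in range(1, n_meas) if traj[i] != traj[i - 1]]
    return traj, flips

# Example: 8 intermediate measurements recorded as "00010011"
traj, flips = trajectory_flips("00010011", 8)
```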


Hardware

All experiments were conducted on the same IBM Quantum backend to ensure consistency across the dataset:

| Parameter | Value |
|-----------|-------|
| Backend | IBM Quantum ibm_torino |
| Processor | 133-qubit Heron r2 superconducting transmon |
| Native gates | CZ, RZ, SX, X, I |
| Typical T1 | 150–250 μs |
| Typical T2 | 140–200 μs |
| Measurement duration | ~1.5 μs |
| Single-qubit gate duration | ~30 ns |
| Two-qubit gate duration | ~100 ns |

The Heron r2 processor uses fixed-frequency transmon qubits with tunable couplers, representing the current generation of IBM Quantum hardware. Gate decomposition during transpilation converts Ry rotations (required for Zeno basis changes) into sequences of native RZ and SX gates.

Transpilation Settings

All circuits were transpiled using Qiskit's generate_preset_pass_manager with:

  • Optimization level: 1 (light optimization, preserves circuit structure)
  • Target backend: ibm_torino
  • Basis gates: CZ, RZ, SX, X, I

Ry(θ) rotations decompose into RZ-SX-RZ sequences. For example, Ry(π/2) becomes the circuit RZ(−π/2) → SX → RZ(π/2), up to a global phase.
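The single-SX form can be checked numerically (a sketch; Qiskit's transpiler may emit an equivalent sequence differing by global phase or RZ angle convention):

```python
import numpy as np

RZ = lambda phi: np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])
SX = 0.5 * np.array([[1 + 1j, 1 - 1j], [1 - 1j, 1 + 1j]])  # sqrt(X)
RY = lambda t: np.array([[np.cos(t / 2), -np.sin(t / 2)],
                         [np.sin(t / 2), np.cos(t / 2)]])

# Circuit order RZ(-pi/2) -> SX -> RZ(pi/2); matrix products apply right-to-left.
U = RZ(np.pi / 2) @ SX @ RZ(-np.pi / 2)
target = RY(np.pi / 2)
phase = U[0, 0] / target[0, 0]  # strip the global phase before comparing
```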

Calibration Snapshot

Qubit properties at time of experiment (January 26, 2026), from zeno_megabatch.json:

| Qubit | T1 (μs) | T2 (μs) | Quality |
|-------|---------|---------|---------|
| 0 | 156 | 164 | Typical |
| 1 | 215 | 194 | Good |
| 6 | 46 | 81 | Low T1 |
| 8 | 246 | 214 | Good |
| 10 | 232 | 252 | Good |
| 15 | 192 | 34 | Low T2 |

Full calibration data for all 133 qubits is embedded in results/zeno_megabatch/zeno_megabatch.json. Calibration drift is a known limitation: IBM recalibrates daily, but parameters can drift significantly within a calibration window.

Execution Details

  • Shots per circuit: 2048–8192 depending on experiment
  • Execution mode: IBM Quantum Runtime Batch (parallel compilation)
  • Total QPU time: ~200 seconds across January 2026 experiments; 155 seconds for March 2026 measurement duration experiments; 56 seconds for weakness tests; 217 seconds for state tomography
  • Temporal window: January experiments completed within 24 hours (single calibration epoch); measurement duration, weakness test, and state tomography experiments conducted March 3, 2026

Citation

@dataset{qiskit-zenodragging,
  title={Quantum Zeno Dragging on IBM Quantum Hardware},
  author={Norton, Charles C.},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/phanerozoic/qiskit-zenodragging}
}

References

Hacohen-Gourgy, S., et al. "Incoherent Qubit Control Using the Quantum Zeno Effect." Physical Review Letters 120, 020505 (2018).

Lewalle, P., et al. "A Multi-Qubit Quantum Gate Using the Zeno Effect." Quantum 7, 1100 (2023).

Lewalle, P., et al. "Optimal Zeno Dragging for Quantum Control." PRX Quantum 5, 020366 (2024).

License

CC-BY-4.0
