Quantum Zeno Dragging on IBM Quantum Hardware

This dataset contains experimental results from quantum Zeno dragging experiments conducted on IBM Quantum superconducting processors. The Zeno dragging protocol transfers quantum states between basis states using sequences of projective measurements rather than unitary gates, exploiting the quantum Zeno effect to guide state evolution through measurement backaction. Unlike conventional quantum gates that rotate a qubit's state through coherent Hamiltonian evolution, Zeno dragging achieves the same transformation by repeatedly measuring the qubit in a slowly-rotating basis, post-selecting on trajectories where each measurement yields the "correct" outcome. This measurement-based approach to quantum control has attracted theoretical interest because it offers a fundamentally different route to implementing quantum operations, one that may have different error characteristics than unitary gates and that exploits the act of measurement itself as a computational resource.

Background

The quantum Zeno effect, named after the ancient Greek philosopher's paradox about motion, describes the phenomenon whereby frequent measurement of a quantum system inhibits its evolution by repeatedly collapsing the state to a measurement eigenstate. In the limiting case of continuous measurement, a quantum system becomes "frozen" in its initial state because each infinitesimally-spaced measurement projects it back before any evolution can occur. This effect was first predicted theoretically in the 1970s and has since been observed in numerous experimental platforms including trapped ions, superconducting qubits, and optical systems. While the basic Zeno effect merely freezes evolution, Zeno dragging extends this principle in a powerful way: by incrementally rotating the measurement basis between successive measurements, one can transfer a quantum state along a trajectory on the Bloch sphere. Each measurement collapses the state toward the current basis eigenstate, and by choosing basis rotations that track the desired trajectory, the state is "dragged" toward the target. The state follows the measurement basis like a ball rolling down a slowly-tilting bowl.

For a transfer from |0⟩ to |1⟩ using N measurements, the protocol proceeds as follows. At step k (where k runs from 1 to N), the measurement basis is rotated by angle θ_k = kπ/N from the Z-axis toward the X-axis. Operationally, the qubit is first rotated by Ry(-θ_k) to align the tilted basis with the computational basis, then measured in the computational basis, then rotated back by Ry(θ_k) to restore the reference frame. If the measurement yields outcome 0, the state has been successfully projected onto the current "dragged" eigenstate and the protocol continues. If the measurement yields outcome 1, the state has "flipped" to the wrong eigenstate and the trajectory has failed. Post-selection retains only trajectories where all intermediate measurements yield the outcome corresponding to the dragged eigenstate. Because each step rotates the basis by π/N, each measurement succeeds with probability cos²(π/(2N)), so the theoretical success probability that all N measurements yield the correct outcome is [cos²(π/(2N))]^N, which approaches 1 as N grows. With sufficiently fine discretization, Zeno dragging can therefore transfer quantum states with arbitrarily high probability.
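The ideal, noiseless protocol can be simulated with a two-amplitude state vector in a few lines. The sketch below is illustrative only (it is not the experimental code behind this dataset): it applies the rotate-measure-unrotate block N times and reports the fraction of trajectories in which every measurement yields the dragged outcome.

```python
import math, random

def ry(theta, state):
    # Apply Ry(theta) to a real two-amplitude state a|0> + b|1>.
    a, b = state
    c, s = math.cos(theta / 2), math.sin(theta / 2)
    return (c * a - s * b, s * a + c * b)

def zeno_drag(n_steps, rng):
    """One shot of the ideal |0> -> |1> Zeno drag with n_steps measurements.
    Returns True iff every measurement yields the 'dragged' outcome
    (in which case the final state is exactly |1>)."""
    state = (1.0, 0.0)                      # start in |0>
    for k in range(1, n_steps + 1):
        theta = k * math.pi / n_steps       # basis angle at step k
        state = ry(-theta, state)           # align tilted basis with Z
        p0 = state[0] ** 2                  # probability of outcome 0
        if rng.random() < p0:
            state = (1.0, 0.0)              # project onto the dragged eigenstate
        else:
            return False                    # flip: trajectory post-selected out
        state = ry(theta, state)            # rotate back to the lab frame
    return True

rng = random.Random(0)
shots = 20000
wins = sum(zeno_drag(8, rng) for _ in range(shots))
print(wins / shots)   # close to [cos^2(pi/16)]^8 ~ 0.733 for N = 8
```

The Monte Carlo estimate converges to the closed-form success probability, since each successful measurement leaves the state exactly on the current basis eigenstate.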

Experimental Overview

All experiments were conducted on IBM Quantum's ibm_torino backend, a 133-qubit processor based on the Heron r2 architecture using fixed-frequency transmon qubits with tunable couplers. The experiments span January 2026 and were submitted using Qiskit Runtime's Batch execution mode, which allows multiple circuits to be submitted together for efficient scheduling. The backend's native gate set consists of CZ (controlled-Z), RZ (Z-rotation), SX (√X), X, and I (identity), meaning that the Ry rotations required for Zeno dragging are decomposed into sequences of these native gates during transpilation. Typical qubit coherence times on this device are T1 = 150–250 μs and T2 = 140–200 μs, with single-qubit gates taking approximately 30 ns and measurements taking approximately 1.5 μs. These timescales are important context for understanding the results: a Zeno circuit with N = 8 measurements requires roughly 8 × 1.5 μs = 12 μs of measurement time alone, which is a small fraction of the coherence time and suggests that decoherence should not be the dominant error source.


Results

N-Dependence of Zeno Dragging

The central parameter in Zeno dragging is N, the number of intermediate measurements used to transfer the state from |0⟩ to |1⟩. Theory predicts that success probability increases monotonically with N: more measurements mean smaller basis rotations between each step (π/N per step instead of π/2 for N = 2), which increases the probability that each individual measurement yields the "correct" outcome. The probability of success at each step is cos²(π/(2N)), and since all N measurements must succeed, the total success probability is [cos²(π/(2N))]^N. For N = 2, this gives [cos²(π/4)]² = 0.5² = 0.25, while for N = 32 it gives [cos²(π/64)]^32 ≈ 0.926. However, this theoretical analysis assumes perfect gates and measurements. On real hardware, each measurement requires additional gates (basis rotations before and after the measurement), and these gates introduce errors. The question motivating this experiment is whether there exists an optimal N where the theoretical improvement from finer discretization balances against the accumulated hardware noise from additional gates.
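The theory column in the table below follows directly from this formula; a few lines of Python reproduce it:

```python
import math

def zeno_theory(n):
    # All n measurements must yield the dragged outcome;
    # each succeeds with probability cos^2(pi/(2n)).
    return math.cos(math.pi / (2 * n)) ** (2 * n)

for n in (2, 4, 8, 16, 32):
    print(n, round(zeno_theory(n), 3))   # 0.25, 0.531, 0.733, 0.857, 0.926
```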

To answer this question empirically, we swept N from 2 to 32 in powers of 2, running 2048 shots at each value to obtain statistically meaningful success rates and fidelities. The circuit for each N consists of N measurement-rotation-unmeasurement blocks arranged in sequence, with the final computational-basis measurement determining whether the qubit successfully reached |1⟩. We recorded not just the final outcome but the full trajectory of intermediate measurement results, enabling detailed analysis of where and how trajectories fail. The transpiled circuit depths range from 11 gates at N = 2 to 161 gates at N = 32, representing a 15× increase in circuit complexity that provides ample opportunity for errors to accumulate.
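The quoted transpiled depths (11 at N = 2 through 161 at N = 32) fit a simple linear model, depth = 5N + 1, suggesting roughly five transpiled operations per measurement block plus the final computational-basis measurement. This is an observation about the reported numbers, not a transpiler guarantee:

```python
def transpiled_depth(n):
    # Empirical fit to the depths reported in this section: about five
    # operations per measurement block plus one final measurement.
    return 5 * n + 1

print([transpiled_depth(n) for n in (2, 4, 8, 16, 32)])  # [11, 21, 41, 81, 161]
```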

| N | Success Rate | 95% CI | Theory | Theory-Exp Gap | Fidelity | 95% CI | Depth |
|---|---|---|---|---|---|---|---|
| 2 | 27.2% | [25.4, 29.2] | 25.0% | -2.2 pp | 87.6% | [84.6, 90.1] | 11 |
| 4 | 37.7% | [35.6, 39.8] | 53.1% | +15.4 pp | 92.2% | [90.1, 93.9] | 21 |
| 8 | 41.9% | [39.8, 44.0] | 73.3% | +31.4 pp | 92.4% | [90.5, 94.0] | 41 |
| 16 | 32.5% | [30.5, 34.5] | 85.7% | +53.2 pp | 91.4% | [89.1, 93.3] | 81 |
| 32 | 15.7% | [14.2, 17.4] | 92.6% | +76.9 pp | 91.9% | [88.4, 94.4] | 161 |

Confidence intervals computed using Wilson score method (appropriate for binomial proportions). 2048 shots per condition.
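For reference, the Wilson score interval can be computed in a few lines; applied to the N = 8 row (41.9% of 2048 shots), this minimal sketch reproduces the quoted bounds:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score interval for a binomial proportion (z = 1.96 for 95%)."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# N = 8 row: 41.9% success over 2048 shots
lo, hi = wilson_ci(round(0.419 * 2048), 2048)
print(f"[{100*lo:.1f}, {100*hi:.1f}]")   # matches the [39.8, 44.0] quoted above
```

Unlike the naive normal-approximation interval, the Wilson interval stays inside [0, 1] and behaves well for proportions near 0 or 1, which matters for rows like the random control below.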

The data reveals a clear optimal point at N = 8, where the measured success rate peaks at 41.9%. This optimum arises from the competition between two effects. Below N = 8, the protocol is too coarse-grained: at N = 2, each measurement must "jump" the state through a full 90° basis rotation, and the probability of the correct outcome at each step is only cos²(π/4) = 0.5. Two such measurements in sequence give 0.5 × 0.5 = 0.25 theoretical success probability, and we measure 27.2%, actually slightly above theory, likely due to measurement bias. As N increases to 4 and 8, the finer discretization pays off and success rates climb. Above N = 8, however, hardware errors begin to dominate the picture. At N = 32, the 161-gate circuit accumulates so many gate errors, measurement errors, and decoherence effects that the success rate collapses to 15.7% despite theory predicting 92.6%. The gap between measured and theoretical success rates grows dramatically with N: 2 percentage points at N = 2, 15 points at N = 4, 31 points at N = 8, 53 points at N = 16, and 77 points at N = 32. This growing gap directly quantifies the cumulative effect of hardware imperfections.

Crucially, the fidelity conditional on successful post-selection remains high across all N values tested (87.6% at N = 2 and 91-92% for N ≥ 4), even at N = 32 where the success rate has collapsed. This demonstrates that post-selection is working as intended: trajectories corrupted by errors are filtered out, and the surviving trajectories retain high fidelity regardless of circuit depth. The post-selection mechanism acts as a form of error detection, sacrificing success rate to maintain output quality. The practical implication is that N = 8 represents the sweet spot for this hardware: fine enough discretization to achieve reasonable success rates, but not so many gates that errors overwhelm the protocol. Different hardware with lower gate error rates would likely have a higher optimal N.

Control Experiments

Any claim that Zeno dragging "works" requires careful controls to rule out alternative explanations. Perhaps the improved fidelity comes from some artifact of circuit structure rather than the Zeno mechanism itself. Perhaps the protocol only appears to work because of measurement bias or state preparation errors. To validate that Zeno dragging works for the reasons theory predicts, we designed four control experiments that isolate different aspects of the protocol. The "freeze" control repeatedly measures in the Z-basis without any rotation, testing whether repeated measurement preserves the initial |0⟩ state as the Zeno effect predicts it should. The "forward" control implements the standard |0⟩→|1⟩ drag, while the "reverse" control drags |1⟩→|0⟩, testing whether the protocol works symmetrically in both directions as theory requires. The critical "random" control applies random measurement bases at each step instead of the structured rotation sequence, testing whether the specific structure of basis rotation matters or whether any sequence of measurements would suffice.

| Experiment | Description | Success Rate | 95% CI | Fidelity | 95% CI |
|---|---|---|---|---|---|
| Freeze | Repeated Z-basis, no rotation | 64.6% | [62.5, 66.6] | 96.0% | [94.8, 96.9] |
| Forward | Drag \|0⟩ → \|1⟩ | 42.9% | [40.8, 45.1] | n/a | n/a |
| Reverse | Drag \|1⟩ → \|0⟩ | 42.6% | [40.5, 44.7] | n/a | n/a |
| Random | Random measurement bases | 1.5% | [1.0, 2.1] | 66.7% | [48.8, 80.8] |

2048 shots per condition. N=8 for all experiments.

The freeze control achieves the highest success rate (64.6%) and highest fidelity (96.0%) because it requires no state transfer: the qubit simply needs to survive repeated Z-basis measurements while remaining in |0⟩. Each measurement has high probability of yielding 0 when the state is |0⟩, and the 35.4% failure rate reflects accumulated errors that occasionally flip the state or cause measurement errors. The forward and reverse drags show symmetric performance (42.9% vs 42.6% success, 91.2% vs 93.9% fidelity), confirming that the protocol works equally well in both directions as theory requires. The slight asymmetry (reverse shows marginally higher fidelity) likely reflects small calibration differences in state preparation for |0⟩ versus |1⟩.

The random control is the most important validation and produces the most dramatic result. With random measurement bases replacing the structured rotation sequence, the success rate collapses to 1.5%, and the fidelity of the few surviving trajectories falls to 66.7% with a confidence interval ([48.8, 80.8]) so wide that it is consistent with chance-level discrimination between |0⟩ and |1⟩. This failure is exactly what theory predicts: the Zeno effect requires measurements that track the desired trajectory. If you measure in random bases, each measurement projects the state onto an essentially arbitrary eigenstate, and the state performs a random walk on the Bloch sphere rather than following a directed path. The probability that a trajectory passes every post-selection checkpoint after N random projections is negligible. This control definitively rules out the hypothesis that any sequence of measurements would work: the structured basis rotation is essential.

Single-Qubit Gate Comparison

Zeno dragging can implement any Ry(θ) rotation by choosing the total angle swept across all measurements: to implement Ry(θ), use N measurements with basis angles θ_k = kθ/N instead of kπ/N. This raises an important practical question: how does Zeno-implemented rotation compare to standard unitary gates in terms of fidelity? If Zeno gates are more accurate than standard gates, they might be worth using despite their post-selection overhead. If they are less accurate or merely equivalent, the added complexity serves no purpose. To answer this fairly, we need three comparison points rather than just two. Obviously we compare (1) the standard single-gate implementation of Ry(θ) against (2) the Zeno implementation with N = 8 measurements. But we also need (3) a "depth-matched" control that has the same circuit structure as Zeno—the same sequence of rotations, the same circuit depth—but without the intermediate measurements that enable post-selection.

The depth-matched control is critical for isolating the mechanism of any Zeno advantage. Suppose we found that Zeno gates have higher fidelity than standard gates. There are two possible explanations. First, the Zeno post-selection mechanism might filter out errors, keeping only high-quality trajectories. Second, the circuit structure of Zeno gates might somehow be beneficial—perhaps the rotation-unrotation pairs that bracket each measurement cancel certain coherent errors, or perhaps the longer circuit provides more opportunities for dynamical decoupling effects. The depth-matched control distinguishes these hypotheses: it has the same circuit structure as Zeno, so if circuit structure were responsible for the improvement, the depth-matched control would show similar fidelity. If instead post-selection is responsible for Zeno's advantage, the depth-matched control—which lacks measurements and therefore cannot post-select—should perform poorly, likely worse than even the simple standard gate.

| Gate | Standard | 95% CI | Zeno | 95% CI | Zeno Success | Depth-Matched | 95% CI |
|---|---|---|---|---|---|---|---|
| I (identity) | 89.2% | [88.2, 90.1] | 96.0% | [95.2, 96.7] | 66.3% | 90.8% | [89.8, 91.6] |
| Ry(π/8) | 90.8% | [89.9, 91.7] | 95.6% | [94.7, 96.3] | 63.3% | 87.3% | [86.3, 88.3] |
| Ry(π/4) | 91.1% | [90.2, 91.9] | 96.0% | [95.2, 96.7] | 62.9% | 78.0% | [76.7, 79.3] |
| Ry(π/2) | 90.8% | [89.9, 91.7] | 96.0% | [95.1, 96.7] | 58.1% | 50.9% | [49.4, 52.5] |
| Ry(3π/4) | 90.1% | [89.1, 90.9] | 95.4% | [94.4, 96.2] | 51.6% | 23.1% | [21.9, 24.5] |
| X (Ry(π)) | 90.5% | [89.6, 91.4] | 95.2% | [94.2, 96.1] | 44.7% | 11.7% | [10.7, 12.7] |

4096 shots per circuit. Confidence intervals computed using Wilson score method.

The results are striking and unambiguous. Zeno achieves 95-96% fidelity across all six rotations tested, compared to 89-91% for standard gates—a consistent 5-6 percentage point improvement that holds regardless of rotation angle. Meanwhile, the depth-matched controls collapse toward random outcomes as rotation angle increases, decisively ruling out circuit structure as the explanation. At Ry(π/2), the depth-matched fidelity is 50.9%, which is random guessing between |0⟩ and |1⟩—the circuit has completely scrambled the quantum information. At the full X gate (Ry(π)), depth-matched fidelity drops to 11.7%, which is actually worse than random because the circuit structure systematically biases the output in the wrong direction. The depth-matched circuits demonstrate what happens when you run a Zeno-structured circuit without the error-filtering benefit of post-selection: the additional gates accumulate errors with no mechanism to detect or discard corrupted trajectories.

This comparison confirms that post-selection, not circuit structure, is responsible for Zeno's improved fidelity. The intermediate measurements act as checkpoints throughout the computation. When an error occurs, whether from gate imperfections, decoherence, or measurement noise, it typically causes a subsequent measurement to yield the "wrong" outcome, flagging that trajectory for discard. Trajectories where no errors occurred (or where errors happened to cancel) yield the correct outcome at every checkpoint and are retained. The cost of this error filtering is success rate: as rotation angle increases, each trajectory must pass more "checkpoints" where the correct outcome has lower probability, so success rate drops from 66.3% for identity (where the correct outcome is overwhelmingly likely at each step) to 44.7% for the X gate (where the effective per-step probability of the correct outcome is only about 90%, compounding to roughly 45% over 8 steps).

Comparison with Error Mitigation Techniques

The previous section established that Zeno achieves higher per-shot fidelity than standard gates, but this comes at the cost of discarding roughly 32-56% of shots through post-selection. Standard error mitigation techniques take a different approach: dynamical decoupling (DD) inserts refocusing pulses during idle periods, and gate twirling randomizes coherent errors into incoherent ones; both attempt to reduce errors on all shots rather than filtering out erroneous shots. This raises a practical question that matters for real applications: which approach extracts more useful information from a fixed amount of quantum processing time? If you have 10 minutes of QPU access, should you run Zeno circuits and discard half the shots, or should you run standard circuits with error mitigation and keep all the shots?

To compare these approaches fairly, we need a metric that accounts for both fidelity and success rate. We define "effective yield" as the product of success rate and fidelity: effective_yield = success_rate × fidelity. This metric captures the total amount of correct information extracted per shot. A method with 90% fidelity and 50% success rate has effective yield of 45%—it extracts 45 "units" of correct information per 100 shots. A method with 45% fidelity and 100% success rate also has effective yield of 45%, extracting the same total information despite very different operating characteristics. By comparing effective yields, we can determine which approach makes better use of limited QPU time.
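The metric is trivial to compute; using the X-gate numbers reported below as inputs:

```python
def effective_yield(success_rate, fidelity):
    # Fraction of submitted shots that yield correct, retained information.
    return success_rate * fidelity

# X-gate comparison: Zeno keeps fewer shots, but each retained shot is better.
print(f"{effective_yield(0.442, 0.904):.3f}")   # Zeno (N=8): 0.400
print(f"{effective_yield(1.000, 0.880):.3f}")   # no mitigation: 0.880
```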

X Gate (|0⟩ → |1⟩)

| Method | Fidelity | Success Rate | Effective Yield |
|---|---|---|---|
| Zeno (N=8) | 90.4% | 44.2% | 40.0% |
| No mitigation | 88.0% | 100% | 88.0% |
| Dynamical decoupling (XX) | 87.1% | 100% | 87.1% |
| Dynamical decoupling (XY4) | 88.0% | 100% | 88.0% |
| Gate twirling | 88.5% | 100% | 88.5% |
| DD + Twirling | 86.9% | 100% | 86.9% |

|+⟩ State Preservation

| Method | Fidelity | Success Rate | Effective Yield |
|---|---|---|---|
| Zeno (N=8) | 96.5% | 68.4% | 66.0% |
| No mitigation | 89.8% | 100% | 89.8% |
| Dynamical decoupling (XX) | 89.6% | 100% | 89.6% |
| Dynamical decoupling (XY4) | 89.7% | 100% | 89.7% |
| Gate twirling | 90.6% | 100% | 90.6% |
| DD + Twirling | 91.2% | 100% | 91.2% |

When measured by effective yield, standard error mitigation techniques win decisively over Zeno with hard post-selection. For the X gate, standard approaches achieve 87-88% effective yield compared to Zeno's 40%, more than double. For |+⟩ state preservation, standard approaches achieve 90-91% versus Zeno's 66%, still a substantial margin. The core issue is that Zeno's hard post-selection discards too many shots. Even though Zeno achieves the highest per-shot fidelity in both tests (90.4% and 96.5%), the 44-68% success rates mean that roughly a third to over half of the quantum data is thrown away. The standard mitigation techniques achieve lower fidelity but keep all the data, and the math favors quantity over quality in this regime.

However, this comparison assumes hard post-selection, where any trajectory with even a single "flip" is discarded entirely. Later sections of this document show that trajectory weighting—keeping all trajectories but weighting them by quality—can recover most of the discarded data while maintaining Zeno's fidelity advantage. When trajectory weighting is applied, the effective yield comparison changes dramatically, and Zeno becomes competitive with or superior to standard mitigation techniques. The takeaway is that Zeno with hard post-selection is inefficient, but Zeno with intelligent trajectory weighting is a serious contender for practical error mitigation.

Two-Qubit Gates

Single-qubit Zeno gates show clear advantages: 5-6 percentage point fidelity improvements over standard gates, at the cost of reduced success rate. A natural question is whether this advantage extends to two-qubit gates. Two-qubit gates like CNOT are the primary source of errors in most quantum algorithms—they have 10× higher error rates than single-qubit gates on typical hardware—so any technique that improves two-qubit gate fidelity would have substantial practical value. We tested three different approaches to implementing CNOT via Zeno-style protocols: (1) "adaptive" Zeno that conditions target qubit rotation on control qubit state, essentially implementing controlled-Zeno-drag; (2) "X-freeze" that uses Zeno measurements to preserve the control qubit while applying a standard CNOT, attempting to protect the more vulnerable qubit; and (3) "Bell" that attempts to create entanglement through joint Zeno measurements on both qubits, a more speculative approach based on measurement-induced entanglement.

| Method | Average Fidelity | Success Rate |
|---|---|---|
| Standard CNOT | 82.5% | 100% |
| Zeno Adaptive | 78.8% | 51.8% |
| Zeno X-Freeze | 34.0% | 36.4% |
| Zeno Bell | 7.5% | 27.2% |

The results are unambiguously negative: standard CNOT wins on every metric, and the gap is substantial. The adaptive Zeno approach comes closest, achieving 78.8% fidelity, but this is still 3.7 percentage points worse than standard CNOT while also discarding half the shots. The X-freeze approach fails dramatically (34.0% fidelity), and the Bell approach fails catastrophically (7.5% fidelity, barely better than random chance at 6.25% for a two-qubit system).

The fundamental problem is circuit overhead. Single-qubit Zeno gates have a simple structure: rotations and measurements, nothing more. The rotations are native single-qubit operations with low error rates, and the measurements, while imperfect, provide the error-filtering mechanism that makes Zeno work. Two-qubit Zeno gates, however, require controlled rotations—rotations on the target qubit conditioned on the state of the control qubit—and these controlled rotations must themselves be implemented using CNOT gates plus single-qubit rotations. So the Zeno "implementation" of CNOT requires multiple CNOT gates internally, plus additional measurements, plus post-selection overhead. The additional gates introduce more errors than post-selection can filter, resulting in net negative value. The Zeno approach only works when the error-filtering benefit of post-selection exceeds the error-introduction cost of additional circuit complexity, and for two-qubit gates on current hardware, this inequality goes the wrong direction.

Entanglement Studies

Zeno dragging uses local projective measurements—measurements on individual qubits in bases determined by that qubit's trajectory alone. This locality raises a fundamental question about the scope of Zeno protocols: what happens when we apply Zeno measurements to entangled states? Entanglement is the quintessential non-local quantum phenomenon, where measurements on one qubit instantaneously affect the state of distant entangled partners. Local Zeno measurements might preserve entanglement if they're gentle enough, or they might destroy it by collapsing the non-local correlations. To find out, we prepared maximally entangled states—Bell pairs (2 qubits), GHZ states (3 and 4 qubits)—and applied Zeno measurement sequences to individual qubits while attempting to preserve the entanglement, comparing against "passive" preservation where no intermediate measurements occur.

| State | Passive Fidelity | Zeno Fidelity |
|---|---|---|
| Bell (2-qubit) | 84.0% | 47% |
| GHZ (3-qubit) | 83.1% | 44% |
| GHZ (4-qubit) | 74.5% | 45% |

The results reveal a fundamental incompatibility between Zeno protocols and entanglement preservation. Passive preservation—simply waiting without intermediate measurements—maintains 74-84% fidelity depending on state size, with the expected degradation for larger states due to accumulated decoherence. Zeno measurements, however, destroy the entanglement entirely, dropping all states to approximately 45% fidelity regardless of size. This 45% is barely above the 37.5-50% that random measurement outcomes would produce, indicating that the entanglement has been completely lost.

This destruction is not a failure of our implementation but a fundamental consequence of the physics. Entanglement is a non-local correlation between qubits: in a Bell state (|00⟩ + |11⟩)/√2, measuring one qubit instantly determines the state of the other, even if they are physically separated. This correlation exists in the joint quantum state, not in either individual qubit. Zeno dragging, however, requires projecting each qubit onto local eigenstates—states that can be written as products of single-qubit states. When you measure qubit A in some local basis, you collapse the joint state onto a product state, severing the entanglement with qubit B. Subsequent measurements cannot restore what has been lost because the correlation was encoded in the quantum state, not in any classical record. You cannot use local measurements to preserve non-local properties; this is not a hardware limitation but a mathematical impossibility.
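The impossibility is easy to see in the two-qubit case. The sketch below is an illustration (not part of the dataset's analysis code) using the concurrence C = 2|a00·a11 - a01·a10| of a pure two-qubit state, which equals 1 for a Bell state and 0 for any product state:

```python
import math

def concurrence(amps):
    # Pure two-qubit state with real amplitudes (a00, a01, a10, a11):
    # C = 2|a00*a11 - a01*a10|; 0 for product states, 1 for Bell states.
    a00, a01, a10, a11 = amps
    return 2 * abs(a00 * a11 - a01 * a10)

s = 1 / math.sqrt(2)
bell = (s, 0.0, 0.0, s)            # (|00> + |11>)/sqrt(2)
print(concurrence(bell))           # 1.0: maximally entangled

# Project qubit A onto |0> (one local measurement outcome) and renormalize:
a00, a01, _, _ = bell
norm = math.hypot(a00, a01)
after = (a00 / norm, a01 / norm, 0.0, 0.0)
print(concurrence(after))          # 0.0: the post-measurement state is a product state
```

Whatever local basis is chosen, the post-measurement state factorizes, so a sequence of local Zeno projections can never maintain a nonzero concurrence.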

This result defines a hard boundary for Zeno protocols in quantum computing: they are useful only for single-qubit operations on separable (non-entangled) states. Any quantum algorithm that relies on entanglement—which includes essentially all algorithms that achieve quantum speedups over classical computation—cannot straightforwardly use Zeno gates. This does not mean Zeno is useless, but it does mean that Zeno's applications are restricted to specific scenarios: state preparation, single-qubit rotations in separable registers, and perhaps error-protected memory for individual qubits.

Qubit Quality Correlation

Different qubits on a quantum processor have different error rates due to manufacturing variations, calibration differences, and local noise environments. Some qubits are "good" (long coherence times, low gate errors) and some are "bad" (shorter coherence, higher errors). A natural question is whether Zeno protocols help more on bad qubits or good qubits. One might hypothesize that Zeno helps more on bad qubits because there are more errors to filter, so post-selection has more impact. Alternatively, one might hypothesize that Zeno helps more on good qubits because the protocol itself introduces overhead, and that overhead might overwhelm the benefits on qubits that are already struggling. To test this empirically, we ran identical Zeno protocols on four qubits spanning the quality range available on ibm_torino, characterized by their T1 coherence times which serve as a proxy for overall qubit quality.

| Qubit | T1 (μs) | Passive Fidelity | Zeno Fidelity | Improvement |
|---|---|---|---|---|
| Q0 | 156 | 91.3% | 95.9% | +4.5 pp |
| Q10 | 232 | 93.7% | 98.4% | +4.7 pp |
| Q50 | 245 | 98.1% | 99.8% | +1.8 pp |
| Q100 | 189 | 95.0% | 97.0% | +1.9 pp |

The data supports the first hypothesis, with one refinement: the improvement tracks passive fidelity, a direct measure of baseline quality, more cleanly than T1 alone. Qubits Q0 and Q10, with passive fidelities of 91.3% and 93.7%, show improvements of 4.5-4.7 percentage points when Zeno is applied, while Q50 and Q100, which already reach 98.1% and 95.0% passively, gain only 1.8-1.9 points. (T1 is an imperfect proxy here: Q100's T1 of 189 μs is shorter than Q10's 232 μs, yet Q100 shows the smaller improvement.) The pattern is consistent with the error-filtering interpretation: Zeno has more errors to filter on noisier qubits, so it provides more value, while on near-perfect qubits like Q50 there are few errors to catch and post-selection provides only marginal improvement.

The practical implication is that Zeno protocols provide the most value on marginal hardware—qubits that are functional but noisy. On the best qubits, the overhead of additional gates and reduced success rate may not be worth the small fidelity gain. But on noisy qubits that would otherwise produce unreliable results, Zeno can substantially improve output quality at the cost of reduced throughput. This suggests a potential use case in heterogeneous quantum processors: use standard gates on the best qubits where they are sufficient, and deploy Zeno protocols on the worst qubits where error filtering provides the most benefit.

Trajectory Error Analysis

Throughout the preceding analyses, we have used "hard" post-selection: trajectories with zero flips are kept, and trajectories with any flips are discarded entirely. But this binary classification might be throwing away useful information. Each Zeno trajectory consists of N intermediate measurements, each of which can either succeed (yield the "dragged" outcome, recorded as 0) or flip (yield the opposite outcome, recorded as 1). Standard post-selection treats all non-zero-flip trajectories as equally worthless, but intuition suggests that a trajectory with exactly one flip should be better than a trajectory with five flips. If 1-flip trajectories retain substantial fidelity, hard post-selection is wasting valuable quantum data by discarding them alongside the truly corrupted multi-flip trajectories.

To investigate this, we recorded full trajectory data for 4096 shots and categorized the results by flip count. For each flip count category, we computed the fidelity of the final state—the probability that shots in that category yielded the correct final outcome.

| Flips | Count | Fidelity | Interpretation |
|-------|-------|----------|----------------|
| 0     | 2747  | 96.0%    | Perfect trajectory |
| 1     | 875   | 94.2%    | Single error, largely recoverable |
| 2     | 160   | 88.1%    | Degraded but still usable |
| 3+    | <100  | <82%     | Severely degraded |

The data confirms that trajectory quality is continuous rather than binary, and that hard post-selection is indeed wasteful. One-flip trajectories retain 94.2% fidelity—only 1.8 percentage points below the 96.0% fidelity of perfect trajectories. Yet hard post-selection discards these 875 shots entirely, treating them as worthless alongside the truly corrupted 3+ flip trajectories that have <82% fidelity. This means hard post-selection throws away 21% of the data (875/4096 shots) that has nearly as much information content as the data it keeps. Two-flip trajectories are more degraded at 88.1% fidelity but still carry substantial information—certainly more than zero, which is what hard post-selection extracts from them.

This observation motivates the trajectory weighting analysis that occupies the next major section of this document. Instead of binary keep/discard, we can assign weights to trajectories based on their flip count and compute weighted averages. Trajectories with zero flips get high weight, trajectories with one flip get slightly lower weight, and trajectories with many flips get low weight. This approach extracts signal from all trajectories rather than discarding the imperfect ones, dramatically improving effective yield while maintaining most of Zeno's fidelity advantage.

Measurement Strength Landscape

All previous experiments used projective (strong) measurements that fully collapse the quantum state onto one of two outcomes. But quantum mechanics allows a continuum of measurement strengths, from fully projective (strength = 1) to nearly unitary (strength → 0). Weak measurements disturb the state less than strong measurements, providing partial information about the quantum state without fully collapsing it. In the context of Zeno dragging, weaker measurements might allow higher success rates (because each measurement is less likely to "kick" the state to the wrong outcome) at the cost of less precise steering (because each measurement provides less collapse toward the target basis). We implemented tunable measurement strength via partial ancilla coupling—a standard technique where the system qubit is partially entangled with an ancilla that is then measured, with the entanglement strength controlling measurement strength—and swept strength from 0.05 (nearly unitary, minimal disturbance) to 1.0 (fully projective) across N = 4, 8, and 12 measurements.

| Strength | N=4 Success | N=4 Fidelity | N=8 Success | N=8 Fidelity | N=12 Success | N=12 Fidelity |
|----------|-------------|--------------|-------------|--------------|--------------|---------------|
| 0.05     | 76.4%       | 92.3%        | 57.5%       | 91.2%        | 44.8%        | 89.1%         |
| 0.25     | 73.3%       | 93.2%        | 52.1%       | 92.8%        | 38.2%        | 91.5%         |
| 0.50     | 71.7%       | 94.6%        | 45.1%       | 88.7%        | 31.4%        | 85.2%         |
| 0.75     | 70.8%       | 94.8%        | 34.5%       | 86.4%        | 24.1%        | 82.8%         |
| 1.00     | 71.9%       | 95.2%        | 28.9%       | 80.9%        | 18.7%        | 78.4%         |

The most striking feature of the landscape is how the success-fidelity tradeoff depends on N. At N = 4, effective yield (success × fidelity) stays in a narrow band of roughly 67-71% across the entire strength range: weakening the measurements raises the success rate by almost exactly as much as it lowers the fidelity. At larger N the balance tips decisively toward weak measurements: at N = 12, strength 0.05 gives roughly 40% effective yield, while fully projective measurements give only about 15%. Both observations fit the underlying physics of the Zeno effect. What steers the state toward the target trajectory is the total "measurement dose", roughly the integrated strength of all measurements, and that dose can be delivered through a few strong measurements or many weak ones, analogous to how a given medication dose can be administered as one large pill or several small pills. Once the dose is sufficient, additional projective strength mainly adds opportunities for flips, which is likely why strong measurements at large N perform worst.

The practical implication is flexibility in circuit design, at least at moderate N. If hardware constraints favor short circuits (few measurements), strong measurements at small N work well. If hardware constraints favor gentle operations, many weak measurements achieve comparable effective yield, and at large N weak measurements are clearly preferable. Experimenters can choose the measurement strength and count that best fit their specific hardware and application constraints. This flexibility could be particularly valuable in near-term devices where different error sources dominate in different operating regimes.
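As a sanity check, effective yield can be recomputed directly from the strength-sweep table. The sketch below transcribes the N = 4 and N = 12 columns (values from the table above) and prints success × fidelity per cell:

```python
# Success rate and fidelity transcribed from the N=4 and N=12 columns
# of the measurement-strength table above.
sweep = {
    4:  {0.05: (0.764, 0.923), 0.25: (0.733, 0.932), 0.50: (0.717, 0.946),
         0.75: (0.708, 0.948), 1.00: (0.719, 0.952)},
    12: {0.05: (0.448, 0.891), 0.25: (0.382, 0.915), 0.50: (0.314, 0.852),
         0.75: (0.241, 0.828), 1.00: (0.187, 0.784)},
}

for n, column in sweep.items():
    for strength, (success, fidelity) in column.items():
        print(f"N={n:2d}, strength={strength:.2f}: "
              f"effective yield = {success * fidelity:.3f}")
# N=4 stays in a narrow ~0.67-0.71 band at every strength, while N=12
# falls from ~0.40 (weak) to ~0.15 (fully projective).
```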

Cross-Qubit Correlations

Modern quantum processors contain tens to hundreds of qubits sharing the same chip, the same control electronics, and the same physical environment. This proximity creates opportunities for correlated errors: a noise fluctuation might affect multiple qubits simultaneously, or crosstalk during one qubit's operation might disturb its neighbors. If Zeno trajectory errors are correlated across qubits—if knowing that qubit A's trajectory flipped tells you something about whether qubit B's trajectory flipped—we might be able to exploit this correlation for error correction or redundancy schemes. Conversely, if errors are independent, such schemes would provide no benefit. To investigate error correlations, we ran 20 parallel single-qubit Zeno chains on physically separated qubits of ibm_torino and measured the correlation between their trajectory outcomes.

| Metric | Value |
|--------|-------|
| Mean flip correlation | r = 0.009 |
| Mean outcome correlation | r = 0.012 |

The correlations are negligible—less than 2% correlation for both flip counts and final outcomes. This means that knowing the trajectory outcome of one qubit provides essentially no information about the trajectory outcomes of other qubits. Errors are statistically independent across the parallel Zeno chains.

This finding has both negative and positive implications. On the negative side, it rules out simple spatial redundancy schemes where we might run multiple Zeno trajectories and majority-vote or correlate their outcomes to improve reliability. If errors were correlated, failed trajectories would tend to cluster, leaving other trajectories intact; we could identify and use the intact clusters. But with independent errors, failures are randomly distributed with no clustering to exploit. On the positive side, the independence confirms that Zeno performance on one qubit is not affected by operations on neighboring qubits. There is no "collateral damage" from running Zeno on qubit A that degrades performance on qubit B. This independence is important for scaling Zeno protocols to larger systems: the performance characterized in our single-qubit experiments should transfer directly to multi-qubit algorithms where Zeno gates are applied in parallel.


Trajectory-Weighted Estimation: Beyond Hard Post-Selection

The experimental results above consistently show that hard post-selection—keeping only trajectories with zero flips, discarding everything else—throws away substantial amounts of useful data. One-flip trajectories retain 94% fidelity, nearly as good as zero-flip trajectories, yet hard post-selection treats them as worthless. This section systematically investigates whether smarter use of trajectory information can improve effective yield without sacrificing Zeno's fidelity advantage.

The Problem with Hard Post-Selection

Hard post-selection embodies a binary classification: a trajectory is either "perfect" (zero flips, keep it with full weight) or "garbage" (one or more flips, discard it entirely with zero weight). This all-or-nothing approach made sense as a first-pass analysis and has the virtue of simplicity. However, the trajectory error analysis revealed that trajectory quality is continuous, not binary. Trajectories with one flip have 94% fidelity, trajectories with two flips have 88% fidelity, and only trajectories with three or more flips have truly degraded fidelity below 82%. By treating one-flip and five-flip trajectories identically—both discarded—hard post-selection fails to extract the substantial information content present in low-flip-count trajectories.

| Flip Count | Percentage of Shots | Fidelity | Information Value |
|------------|---------------------|----------|-------------------|
| 0          | 68.8%               | 96.8%    | High |
| 1          | 21.9%               | 95.7%    | High (only 1.1% worse than 0-flip) |
| 2          | 2.8%                | 90.3%    | Moderate |
| 3+         | 6.5%                | <65%     | Low |

The numbers are stark. Hard post-selection keeps 68.8% of shots (the zero-flip trajectories) and discards 31.2%. But among the discarded shots, 21.9 percentage points are one-flip trajectories with 95.7% fidelity—only 1.1 points worse than the zero-flip trajectories that hard post-selection keeps! This means hard post-selection discards trajectories that are 95.7% as good as the trajectories it keeps, simply because they are not 100% perfect. It is as if a factory discarded products that are 99% perfect alongside products that are 50% defective, treating minor blemishes the same as major failures. The waste is substantial: one-third of the quantum data, containing information that is nearly as valuable as the retained data, is simply thrown away.

Soft Post-Selection Strategies

The simplest improvement over hard post-selection is "soft" post-selection: accept trajectories with up to k flips instead of requiring exactly zero flips. Setting k = 0 recovers hard post-selection; setting k = 1 accepts both zero-flip and one-flip trajectories; setting k = 2 also accepts two-flip trajectories; and so on. This approach is easy to implement and provides a sliding scale between maximum fidelity (hard selection, k = 0) and maximum data utilization (no selection, k = ∞).
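In code, soft post-selection is a one-line filter over per-shot records. The sketch below assumes each shot is stored as a (flip_count, correct_outcome) pair, a simplification of the dataset's actual trajectory records:

```python
def soft_postselect(shots, k):
    """Keep shots whose trajectory has at most k flips.

    `shots` is a list of (n_flips, outcome) pairs, where outcome is 1
    if the final result was correct and 0 otherwise.  Returns
    (fidelity, success_rate) over the retained shots.  k=0 reproduces
    hard post-selection; k=None disables selection entirely.
    """
    kept = [outcome for n_flips, outcome in shots
            if k is None or n_flips <= k]
    if not kept:
        return 0.0, 0.0
    return sum(kept) / len(kept), len(kept) / len(shots)

# Toy data: 7 clean shots, 2 one-flip shots (one of them wrong), and
# 1 badly corrupted three-flip shot.
shots = [(0, 1)] * 7 + [(1, 1), (1, 0), (3, 0)]
print(soft_postselect(shots, 0))   # hard post-selection keeps 70% of shots
print(soft_postselect(shots, 1))   # soft k<=1 keeps 90% of shots
```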

X-Freeze (|+⟩ State Preservation, N=8)

We first tested soft post-selection on the X-freeze circuit, which prepares the |+⟩ state and uses Zeno measurements to preserve it against noise. This is a "freeze" protocol rather than a "drag" protocol: the measurement basis remains fixed at the X-basis throughout, and success means the qubit remains in |+⟩ despite the repeated measurements.

| Strategy | Fidelity | Success Rate | Effective Yield |
|----------|----------|--------------|-----------------|
| Hard (k=0) | 96.8% | 68.8% | 66.6% |
| Soft (k≤1) | 96.5% | 90.7% | 87.6% |
| Soft (k≤2) | 96.3% | 93.5% | 90.1% |
| No selection | 91.2% | 100% | 91.2% |

The improvement from soft post-selection is dramatic. Moving from hard (k=0) to soft k≤1 recovers 22 percentage points of success rate (from 68.8% to 90.7%) while sacrificing only 0.3 points of fidelity (from 96.8% to 96.5%). Effective yield jumps from 66.6% to 87.6%—a 32% improvement from the same quantum data, simply by changing how we analyze it. Extending to k≤2 pushes effective yield even higher, to 90.1%, with fidelity still at 96.3%. This approaches the performance of "no selection" (keeping all trajectories) while maintaining Zeno's fidelity advantage.

Zeno Drag (|0⟩ to |1⟩, N=8)

We next tested soft post-selection on the Zeno drag circuit, which transfers the qubit from |0⟩ to |1⟩ through a sequence of rotating-basis measurements. This is a more demanding test because each flip during a drag represents actual state corruption (the qubit moved away from the trajectory), not just measurement noise.

| Strategy | Fidelity | Success Rate | Effective Yield |
|----------|----------|--------------|-----------------|
| Hard (k=0) | 91.5% | 45.2% | 41.3% |
| Soft (k≤1) | 89.2% | 63.9% | 57.0% |
| Soft (k≤2) | 85.1% | 72.4% | 61.6% |
| No selection | 65.1% | 100% | 65.1% |

For state transfer, the fidelity-yield tradeoff is steeper than for state preservation. Each flip during a drag represents a moment where the qubit jumped to the wrong side of the Bloch sphere, and this corruption propagates through subsequent evolution. Nevertheless, soft post-selection still provides substantial improvement. Moving from hard (k=0) to soft k≤1 improves effective yield from 41.3% to 57.0%—a 38% improvement—while sacrificing only 2.3 points of fidelity. The k≤2 threshold provides the best balance: 61.6% effective yield with 85.1% fidelity, competitive with the "no selection" approach but with higher fidelity.

Trajectory-Weighted Estimation

Soft post-selection improves upon hard post-selection by keeping more trajectories, but it still employs a threshold: trajectories above the threshold get full weight, trajectories below get zero weight. A more sophisticated approach assigns continuous weights based on trajectory quality, with no sharp threshold. The general weighted estimator computes expectation values as:

<O>_weighted = sum(w_i * O_i) / sum(w_i)

where w_i is the weight assigned to trajectory i and O_i is the measurement outcome. Higher-quality trajectories (fewer flips) get higher weights and contribute more to the final estimate; lower-quality trajectories (more flips) get lower weights and contribute less. The question is how to choose the weighting function that maps flip count to weight.

We tested several weighting functions, each motivated by different assumptions about how errors affect trajectory quality:

| Weight Function | Formula | Rationale |
|-----------------|---------|-----------|
| Inverse | w = 1/(1+n_flips) | Linear penalty proportional to flip count |
| Exponential | w = exp(-n_flips) | Rapid decay, strongly penalizes multiple flips |
| Empirical | w = fidelity(n_flips) | Data-driven weights based on measured fidelity |
| Threshold | w = 1 if n_flips≤k else 0 | Equivalent to soft post-selection |
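The estimator and the four weighting families fit in a few lines of Python. The `_fidelity` lookup used for empirical weighting is illustrative, not a calibrated table:

```python
import math

def weighted_estimate(shots, weight_fn):
    """<O>_weighted = sum(w_i * O_i) / sum(w_i), where each shot is a
    (n_flips, outcome) pair and weight_fn maps flip count to weight."""
    weights = [weight_fn(n) for n, _ in shots]
    return sum(w * o for w, (_, o) in zip(weights, shots)) / sum(weights)

def inverse(n):        # linear penalty in flip count
    return 1.0 / (1.0 + n)

def exponential(n):    # rapid decay, strongly penalizes multiple flips
    return math.exp(-n)

def threshold(k):      # reproduces soft post-selection with cutoff k
    return lambda n: 1.0 if n <= k else 0.0

# "Empirical" weighting uses measured fidelity per flip count; the
# lookup below is a placeholder in the spirit of the trajectory tables.
_fidelity = {0: 0.96, 1: 0.94, 2: 0.88}
def empirical(n):
    return _fidelity.get(n, 0.80)

shots = [(0, 1), (0, 1), (1, 1), (1, 0), (4, 0)]
print(weighted_estimate(shots, threshold(0)))   # hard post-selection: 1.0
print(weighted_estimate(shots, exponential))    # down-weighted mixture
```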

Results (X-Freeze N=8)

| Weighting | Estimated Fidelity | Weight Utilization |
|-----------|--------------------|--------------------|
| None (uniform) | 91.2% | 100% |
| Hard post-select | 96.8% | 68.8% |
| Inverse | 95.8% | 81.6% |
| Exponential | 96.6% | 77.3% |
| Empirical | 95.5% | 91.2% |

Exponential weighting emerges as the best overall choice, achieving 96.6% estimated fidelity (within 0.2 points of hard post-selection's 96.8%) while utilizing 77.3% of shots instead of 68.8%. The exponential function strongly down-weights high-flip trajectories (3+ flips contribute negligibly) while retaining substantial weight for one-flip and two-flip trajectories. This matches the empirical observation that trajectory quality degrades exponentially with flip count, not linearly. Inverse weighting and empirical weighting also outperform hard post-selection, but not as dramatically as exponential weighting.

The key insight is that hard post-selection—which corresponds to threshold weighting with k=0—is suboptimal for every circuit we tested. In all eleven test cases, at least one continuous weighting scheme outperformed hard post-selection on effective yield while matching or nearly matching its fidelity. Exponential weighting extracts nearly all the signal that hard post-selection captures, plus additional signal from the trajectories that hard post-selection discards as worthless.

Bias Correction for Expectation Values

The trajectory weighting results above focused on fidelity—the probability that the final measurement outcome matches the expected outcome. For many quantum computing applications, however, we care about expectation values: the average value of some observable measured across many shots. VQE (Variational Quantum Eigensolver) algorithms, for example, estimate energy expectations by measuring Pauli observables. Trajectory weighting introduces a subtlety for expectation value estimation: different trajectories have different fidelities, and these different fidelities introduce different biases that must be corrected.

A measurement with fidelity f has expected value that differs from the true value by a factor related to f. If the true expectation value is z_true (which can range from -1 to +1 for a Pauli Z measurement), the measured expectation value is:

E[z_measured] = (2f - 1) * z_true

This formula reflects the fact that a measurement with fidelity f correctly identifies the state with probability f and misidentifies it with probability (1-f). The factor (2f - 1) ranges from +1 (perfect fidelity, no bias) to 0 (random guessing, 50% fidelity) to -1 (perfectly wrong, 0% fidelity). To recover the true expectation value from a biased measurement, we divide by this factor:

z_corrected = z_measured / (2f - 1)

This correction works straightforwardly when f > 0.5 (better than random) but encounters a critical subtlety for superposition states. When the true state is |+⟩ (equal superposition of |0⟩ and |1⟩), the correct Z-measurement outcome is random: 50% probability of +1, 50% probability of -1, with true expectation value z_true = 0. But "fidelity" in our operational definition—the probability of the "correct" outcome—approaches 0.5 because the 50/50 split is the correct answer, not an error. When we try to apply bias correction with f = 0.5, we divide by (2×0.5 - 1) = 0, causing numerical instability.

The solution is eigenstate calibration rather than per-angle calibration. Instead of trying to estimate fidelity separately for each rotation angle (which fails for superpositions), we run one calibration circuit at θ = 0, where the state is a known eigenstate |0⟩ and fidelity can be cleanly measured. We record how fidelity varies with flip count in this calibration circuit, then apply those same correction factors to all angles. This works because the relationship between flip count and error rate is determined by hardware properties (gate errors, measurement errors, decoherence), not by the target state. A one-flip trajectory has the same relationship between flip count and error probability regardless of what rotation angle it was implementing.
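A minimal sketch of the correction, with a guard for the f ≈ 0.5 regime (the 1e-3 cutoff is an arbitrary illustrative choice):

```python
def bias_corrected_z(z_measured, f):
    """Invert E[z_measured] = (2f - 1) * z_true.

    `f` is the fidelity for this trajectory category, estimated from the
    theta = 0 eigenstate calibration circuit where the correct outcome
    is known.  Near f = 0.5 the factor (2f - 1) vanishes and the
    correction diverges, so we refuse to apply it there.
    """
    factor = 2.0 * f - 1.0
    if abs(factor) < 1e-3:
        raise ValueError("fidelity too close to 0.5: correction unstable")
    return z_measured / factor

# A state with true <Z> = 0.707 measured through a 93%-fidelity channel
# appears as (2*0.93 - 1) * 0.707 = 0.608; the correction recovers it.
print(bias_corrected_z(0.608, 0.93))
```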

VQE Validation Experiment

To validate that trajectory weighting works in a realistic application, we implemented a full VQE energy landscape measurement using Zeno gates. VQE is a leading candidate algorithm for near-term quantum computers, using quantum circuits to estimate the ground state energy of molecular Hamiltonians. The algorithm requires measuring expectation values of Pauli operators at many different variational parameter settings. We implemented Ry(θ) rotations using Zeno dragging for θ ranging from 0 to π in nine steps, then measured ⟨Z⟩ to trace out the energy landscape ⟨Z⟩ = cos(θ).

Results (True value: ⟨Z⟩ = cos(θ))

| θ/π   | True ⟨Z⟩ | Standard | Hard PS | Calibrated |
|-------|----------|----------|---------|------------|
| 0.000 | +1.000   | +0.809   | +0.939  | +1.000     |
| 0.125 | +0.924   | +0.757   | +0.842  | +0.805     |
| 0.250 | +0.707   | +0.564   | +0.611  | +0.643     |
| 0.375 | +0.383   | +0.311   | +0.330  | +0.341     |
| 0.500 | +0.000   | +0.076   | +0.137  | +0.012     |
| 0.625 | -0.383   | -0.315   | -0.333  | -0.316     |
| 0.750 | -0.707   | -0.544   | -0.580  | -0.616     |
| 0.875 | -0.924   | -0.697   | -0.761  | -0.784     |
| 1.000 | -1.000   | -0.749   | -0.784  | -0.803     |

Root Mean Square Error (RMSE)

| Method | RMSE | Improvement vs Standard |
|--------|------|-------------------------|
| Standard | 0.164 | (baseline) |
| Hard PS | 0.122 | 26% better |
| Calibrated | 0.101 | 38% better |
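The RMSE values can be reproduced from the energy-landscape table; the sketch below transcribes the method columns and computes RMSE against cos(θ):

```python
import math

# Columns transcribed from the VQE landscape table above.
theta_over_pi = [0.000, 0.125, 0.250, 0.375, 0.500, 0.625, 0.750, 0.875, 1.000]
true_z     = [math.cos(t * math.pi) for t in theta_over_pi]
standard   = [0.809, 0.757, 0.564, 0.311, 0.076, -0.315, -0.544, -0.697, -0.749]
hard_ps    = [0.939, 0.842, 0.611, 0.330, 0.137, -0.333, -0.580, -0.761, -0.784]
calibrated = [1.000, 0.805, 0.643, 0.341, 0.012, -0.316, -0.616, -0.784, -0.803]

def rmse(estimates, reference):
    """Root mean square error between two equal-length sequences."""
    return math.sqrt(sum((e - r) ** 2 for e, r in zip(estimates, reference))
                     / len(reference))

for name, est in [("standard", standard), ("hard_ps", hard_ps),
                  ("calibrated", calibrated)]:
    print(f"{name:>10s}: RMSE = {rmse(est, true_z):.3f}")
# Reproduces the 0.164 / 0.122 / 0.101 values in the RMSE table.
```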

The calibrated soft post-selection method—using soft k≤1 thresholds with eigenstate-calibrated bias correction—achieves the lowest RMSE at 0.101. This represents a 38% improvement over standard gates (RMSE 0.164) and a 17% improvement over hard post-selection (RMSE 0.122). Equally importantly, the calibrated method uses 37% more shots than hard post-selection, extracting more information from the same quantum data. For VQE applications where accuracy of expectation values matters more than per-shot fidelity, trajectory weighting provides substantial improvement over both standard approaches and naive Zeno with hard post-selection.

Flip Position Analysis

The trajectory weighting analysis so far has treated all flips as equally damaging: a trajectory with one flip gets the same weight regardless of whether that flip occurred at the first measurement or the last. But physical intuition suggests that flip position should matter. A flip at the first measurement corrupts all subsequent evolution—the qubit starts the remaining N-1 measurements on the wrong side of the Bloch sphere. A flip at the last measurement only affects final readout—the qubit was on the correct trajectory for N-1 measurements and only erred at the end. If early flips are more damaging than late flips, position-aware weighting schemes might outperform position-blind schemes like exponential weighting.

To test this, we analyzed fidelity conditioned on flip position for the X-freeze circuit with N = 8 measurements.

Fidelity Impact by Flip Position (X-Freeze N=8)

| Position  | Fidelity if Flip | Fidelity if No Flip | Impact |
|-----------|------------------|---------------------|--------|
| 0 (early) | 38.6%            | 96.1%               | +57.5% |
| 1         | 43.8%            | 96.0%               | +52.3% |
| 2         | 44.8%            | 95.7%               | +51.0% |
| 3         | 46.9%            | 95.9%               | +48.9% |
| 4         | 44.5%            | 95.6%               | +51.1% |
| 5         | 47.0%            | 95.8%               | +48.8% |
| 6         | 46.5%            | 95.4%               | +49.0% |
| 7 (late)  | 49.1%            | 95.5%               | +46.4% |

The data confirms the position-dependence hypothesis. Early flips (position 0) reduce fidelity by 57.5 percentage points compared to trajectories with no flip at that position; late flips (position 7) reduce fidelity by only 46.4 points. This 11-point asymmetry is substantial and reflects the error propagation structure of the Zeno protocol: an early flip corrupts the entire remaining trajectory (7 subsequent measurements on the wrong track), while a late flip corrupts only a small portion (0-1 subsequent measurements). The intermediate positions show a noisy but generally decreasing impact, consistent with this picture, though the trend is not strictly monotonic at the shot-noise level.

This position dependence suggests that weighting schemes can be improved by penalizing early flips more heavily than late flips. We implemented position-aware weighting as w = prod(1/(1 + penalty(pos))) where penalty is larger for early positions and smaller for late positions. In practice, this position-aware scheme achieved results comparable to but not significantly better than simple exponential weighting, suggesting that the position information provides modest additional value beyond what flip count alone captures.
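One concrete instantiation of the w = prod(1/(1 + penalty(pos))) scheme uses a linear penalty that is largest for early flips; both the penalty shape and the base_penalty = 1.0 scale below are illustrative choices, not values fitted to the dataset:

```python
def position_weight(flip_positions, n_meas, base_penalty=1.0):
    """Position-aware trajectory weight: w = prod(1 / (1 + penalty(p))).

    The linear penalty penalty(p) = base_penalty * (n_meas - p) / n_meas
    is largest for a flip at position 0 and smallest at the final
    measurement, mirroring the measured impact asymmetry.
    """
    weight = 1.0
    for p in flip_positions:
        weight /= 1.0 + base_penalty * (n_meas - p) / n_meas
    return weight

print(position_weight([0], 8))   # early flip penalized hardest: 0.5
print(position_weight([7], 8))   # late flip barely matters: 8/9
```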

Recommended Post-Selection Strategies

Based on the comprehensive experimental data presented in this section, we can now offer specific recommendations for how to analyze Zeno trajectory data depending on the application:

| Application | Recommended Strategy | Rationale |
|-------------|----------------------|-----------|
| Maximum data extraction | Exponential weighting | +92% effective yield vs hard PS |
| Position-sensitive | Position-aware weighting | Penalizes early flips more |
| VQE/QAOA estimation | Exponential or soft k≤2 | Best balance of fidelity and yield |
| Quick benchmarking | Soft k≤2 | 90%+ yield, 96%+ fidelity |
| Maximum fidelity | Hard k=0 | Highest per-shot fidelity (but wasteful) |

The overarching insight is that hard post-selection is never optimal. In all eleven circuits tested across multiple experiments, at least one alternative strategy outperformed hard post-selection on effective yield while maintaining comparable fidelity. For most applications, exponential weighting with formula w = exp(-n_flips) provides the best combination of simplicity and performance: it is easy to implement, requires no calibration or fitting, and extracts 92% more effective signal than hard post-selection from the same quantum data. The only scenario where hard post-selection remains appropriate is when maximum per-shot fidelity is required regardless of throughput cost—a rare requirement in practical quantum computing where shot budgets are typically the limiting resource.


Gate Composition: Chaining Zeno Gates

All experiments so far have examined individual Zeno gates in isolation: a single Zeno drag, a single Zeno freeze, a single Zeno rotation. But real quantum algorithms require sequences of gates—ten, a hundred, a thousand operations chained together. A critical question for the practical utility of Zeno protocols is whether they can be composed: can multiple Zeno gates be chained in sequence while maintaining their fidelity advantage? If the post-selection penalty compounds multiplicatively—if a gate with 58% success rate, when chained with another 58% gate, yields only 34% joint success—then Zeno would be limited to isolated single-gate benchmarks and could never be used in real algorithms where circuits contain many gates.

Experimental Design

To test gate composition directly, we designed six circuits that all implement the same net rotation Ry(π/2) but through different means. By comparing these circuits, we can isolate the effect of Zeno composition from other factors like total rotation angle or circuit depth:

  1. Standard Ry(π/2) — A single standard unitary gate implementing the full rotation. This provides the baseline for what standard quantum computing achieves.

  2. Zeno Ry(π/2) — A single Zeno gate implementing the full rotation using N=8 intermediate measurements. This shows what Zeno achieves for a single gate.

  3. Zeno Ry(π/4) → Zeno Ry(π/4) — Two Zeno gates chained in sequence, each implementing half the rotation (π/4) with N=8 measurements each, for a total of 16 intermediate measurements. This directly tests composition.

  4. Standard Ry(π/4) → Standard Ry(π/4) — Two standard gates chained in sequence. This controls for any effect of breaking the rotation into two steps.

  5. Mixed: Zeno Ry(π/4) → Standard Ry(π/4) — A hybrid approach with one Zeno gate followed by one standard gate, testing whether Zeno and standard gates can be mixed.

  6. Depth-matched control — The same circuit structure as the composed Zeno (same rotations, same depth) but without the intermediate measurements. This isolates the effect of post-selection from circuit structure.

Results with Hard Post-Selection

| Circuit | Fidelity | Success Rate | Effective Yield |
|---------|----------|--------------|-----------------|
| Standard Ry(π/2) | 89.6% | 100% | 89.6% |
| Zeno Ry(π/2) | 96.4% | 57.6% | 55.5% |
| Zeno+Zeno (π/4+π/4) | 95.7% | 40.0% | 38.3% |
| Standard+Standard | 90.9% | 100% | 90.9% |
| Mixed (Zeno+Standard) | 95.7% | 62.1% | 59.4% |
| Depth-matched | 50.3% | 100% | 50.3% |

With hard post-selection, the composition penalty is severe. A single Zeno gate achieves 57.6% success rate; two chained Zeno gates achieve only 40.0%—a 31% reduction. This matches the prediction from multiplying independent success probabilities: if each gate has 58% success and the gates are independent, the joint success probability is 0.58 × 0.58 ≈ 0.34, close to the observed 40%. At this rate, a 10-gate Zeno circuit would have success probability 0.58^10 ≈ 0.4%, and a 100-gate circuit would have success probability essentially zero. This makes Zeno appear completely impractical for real algorithms, which routinely require hundreds or thousands of gates.

The fidelity of the composed Zeno circuit remains high (95.7%, nearly matching the single Zeno gate's 96.4%), confirming that post-selection continues to work—the trajectories that survive are high quality. But the success rate collapse means that almost no trajectories survive. The effective yield of 38.3% for composed Zeno is less than half the 89.6% achieved by a simple standard gate, despite the higher fidelity.

Results with Trajectory Weighting

The grim picture above assumes hard post-selection. Applying trajectory weighting to the same raw data reveals a completely different story—one where Zeno composition is not only feasible but actually advantageous.

Single Zeno Ry(π/2), N=8:

| Weighting | Fidelity | Utilization | Effective Yield |
|-----------|----------|-------------|-----------------|
| Hard | 96.4% | 57.6% | 55.5% |
| Soft k≤1 | 94.5% | 78.7% | 74.4% |
| Soft k≤2 | 92.5% | 83.9% | 77.7% |
| Exponential | 95.0% | 100% | 95.0% |

Composed Zeno Ry(π/4)+Ry(π/4), N=16:

| Weighting | Fidelity | Utilization | Effective Yield |
|-----------|----------|-------------|-----------------|
| Hard | 95.7% | 40.0% | 38.3% |
| Soft k≤1 | 94.5% | 67.8% | 64.1% |
| Soft k≤2 | 93.7% | 77.4% | 72.5% |
| Exponential | 94.8% | 100% | 94.8% |

Composition Penalty by Analysis Method

| Metric | Hard Post-Selection | Exponential Weighting |
|--------|---------------------|-----------------------|
| Single Zeno yield | 55.5% | 95.0% |
| Composed Zeno yield | 38.3% | 94.8% |
| Composition penalty | −31.1% | −0.3% |

The composition penalty vanishes under trajectory weighting. With hard post-selection, composing two Zeno gates costs 31% of effective yield. With exponential weighting, the cost is only 0.3%—within statistical noise of zero. The mathematical reason is that trajectory weighting gracefully handles the increased flip counts in longer circuits. A single 8-measurement Zeno gate might produce trajectories with 0, 1, 2, or more flips. A composed 16-measurement Zeno sequence will produce trajectories with roughly twice as many flips on average. Hard post-selection rejects all non-zero-flip trajectories, and with 16 measurements there are many more ways to accumulate a flip, so the success rate crashes. Exponential weighting, however, simply assigns lower weights to higher-flip trajectories and extracts whatever signal they contain. With twice as many measurements, the typical trajectory might have one flip instead of zero, but one-flip trajectories still carry 94%+ of the signal, and exponential weighting captures this.

The practical consequence is transformative: with trajectory weighting, composed Zeno gates (94.8% effective yield) outperform standard gate sequences (90.9% effective yield). The Zeno approach—which looked hopelessly impractical under hard post-selection—becomes the superior choice when analyzed properly.

Scaling to Three Gates

To verify that the composition result extends beyond two gates, we implemented three-gate chains, each implementing net rotation Ry(π/2) using gates of equal angle.

| Gates | Standard Yield | Zeno Hard Yield | Zeno Exp Yield |
|-------|----------------|-----------------|----------------|
| 1     | 90.7%          | 54.1%           | 94.7%          |
| 2     | 91.1%          | 38.0%           | 94.4%          |
| 3     | 91.2%          | 25.7%           | 94.6%          |

The pattern holds: hard post-selection success rates compound multiplicatively (54% → 38% → 26%), making Zeno look increasingly hopeless as circuit depth grows. But exponential weighting maintains approximately 94.5% effective yield regardless of gate count, consistently outperforming standard gates at every chain length.

This result is perhaps the most important finding in the dataset. It establishes that Zeno protocols, when combined with proper trajectory analysis, can scale to multi-gate circuits without composition penalty. The limitation of Zeno is not composition but rather the specific domains where it applies (single-qubit operations on separable states, as established in the entanglement studies). Within its domain of applicability, Zeno with trajectory weighting provides a genuine advantage over standard gates that persists as circuits grow deeper.


Causal Position-Weighted Model

Analysis of 45,956 trajectories reveals that flip position matters more than flip count. A flip at measurement position 0 causes 51.6% fidelity degradation, while a flip at position 15 causes only 17.4% degradation. The correlation between position and impact is r = 0.98, suggesting a simple causal model: early flips corrupt all downstream evolution, while late flips only affect the final readout.

Position Impact on Fidelity

| Position | Fidelity With Flip | Fidelity Without Flip | Impact | N Samples |
|----------|--------------------|-----------------------|--------|-----------|
| 0        | 30.2%              | 81.8%                 | +51.6% | 9327      |
| 1        | 34.2%              | 80.3%                 | +46.1% | 8923      |
| 2        | 36.8%              | 79.1%                 | +42.3% | 8427      |
| 4        | 38.6%              | 76.2%                 | +37.6% | 5988      |
| 8        | 40.4%              | 72.9%                 | +32.5% | 2210      |
| 15       | 54.2%              | 71.6%                 | +17.4% | 766       |

Causal Weighting Derivation

If a flip at position p corrupts (N-p)/N of the remaining evolution, and each corrupted step has error rate e, then the fidelity given a flip at position p is:

F(p) = F_0 × (1-e)^(N-p)

For k flips at positions p_1, ..., p_k:

F(p_1,...,p_k) = F_0 × exp(-e × Σᵢ(N - pᵢ))

The optimal weight is therefore:

w(trajectory) = exp(-e × Σᵢ(N - pᵢ))

This is position-weighted exponential, not flip-count exponential. The standard exp(-n) weighting treats all flips equally; the causal formula penalizes early flips more heavily.
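The causal weight is a one-liner; in practice the per-step error rate e would be fitted to calibration data for the target backend (the value below is illustrative):

```python
import math

def causal_weight(flip_positions, n_meas, e):
    """Causal position-weighted trajectory weight.

    w = exp(-e * sum(n_meas - p_i)), following the derivation above:
    a flip at position p corrupts the remaining (n_meas - p) steps,
    each with per-step error rate e.
    """
    return math.exp(-e * sum(n_meas - p for p in flip_positions))

# Illustrative per-step error rate, not a fitted backend value.
e = 0.05
print(causal_weight([], 16, e))    # clean trajectory: weight 1.0
print(causal_weight([0], 16, e))   # early flip: exp(-0.8), about 0.45
print(causal_weight([15], 16, e))  # late flip: exp(-0.05), about 0.95
```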

Empirical Validation

Training ML models (Logistic Regression, Random Forest, Gradient Boosting) on trajectory features confirms that position is the dominant predictor:

| Feature | Importance |
|---------|------------|
| pos_0 (first flip) | 27.0% |
| flip_rate | 15.1% |
| early_flips | 14.3% |
| n_flips | 12.3% |
| late_flips | 1.3% |

Practical Result: Hardware Noise Dominates

Despite the theoretical correctness of causal weighting, on current NISQ hardware the optimal strategy collapses to simple soft thresholding:

Strategy             Fidelity   Utilization   Effective yield
Hard (k=0)           81.3%      53.6%         43.6%
Exp(-n)              80.2%      63.0%         50.5%
Soft (k≤2)           78.3%      82.9%         64.9%
Soft (k≤3)           76.6%      86.3%         66.1%
Causal (optimal e)   70.3%      99.1%         69.7%

The causal formula with optimized e approaches "accept everything" because the fidelity floor on ibm_torino is approximately 70%. When even high-flip trajectories retain 70% fidelity, discarding them costs more in utilization than it gains in fidelity.
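The effective-yield column is simply fidelity × utilization. A sketch recomputing it from the table values above (the strategy labels are shorthand):

```python
# (fidelity, utilization) pairs copied from the strategy table above.
strategies = {
    "hard_k0": (0.813, 0.536),
    "exp_n": (0.802, 0.630),
    "soft_k2": (0.783, 0.829),
    "soft_k3": (0.766, 0.863),
    "causal": (0.703, 0.991),
}

effective_yield = {s: f * u for s, (f, u) in strategies.items()}
best = max(effective_yield, key=effective_yield.get)
print(best, round(effective_yield[best], 3))  # -> causal 0.697
```

The ranking inverts between the fidelity and effective-yield columns, which is exactly the trade-off the text describes.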

This finding has implications for future hardware: the causal weighting formula will become useful when hardware fidelity floors exceed 85-90%. On current NISQ devices, the noise is too high for sophisticated weighting to help.

Model Files

The models/ directory contains:

  • train_trajectory_model.py — Script to train trajectory quality predictors
  • train_position_aware_model.py — Script for position-aware model training
  • trajectory_model_results.json — Results from basic model comparison
  • position_aware_model_results.json — Results including causal weighting analysis

Dataset Contents

Core Zeno Dragging Studies

zeno_drag.json — The foundational experiment characterizing Zeno dragging as a function of N, the number of intermediate measurements. This file contains results for N = 2, 4, 8, 16, and 32, with 2048 shots per condition. Data includes raw bitstrings (the full trajectory record for each shot), success rates, fidelities conditional on success, theoretical success rate predictions, and survival curves showing how the surviving population decreases through each measurement step. This is the primary dataset for understanding the N-dependence of Zeno dragging.

zeno_controls.json — The four control experiments that validate the Zeno mechanism: freeze (repeated Z-basis measurement with no rotation), forward drag (|0⟩ → |1⟩), reverse drag (|1⟩ → |0⟩), and random (random measurement bases). Each control uses N = 8 measurements with 2048 shots. The random control is particularly important as a null hypothesis test, demonstrating that structured basis rotation is essential—random measurements destroy rather than transfer quantum states.

zeno_not_gate.json — A focused comparison of Zeno-implemented X gate against the standard unitary X gate, with additional controls. Contains 5 circuits with 4096 shots each, providing high statistical precision on the single most important gate.

Gate Implementation and Error Mitigation

zeno_gates_corrected.json — Systematic comparison of Zeno against standard gates across six rotation angles: identity (Ry(0)), Ry(π/8), Ry(π/4), Ry(π/2), Ry(3π/4), and X (Ry(π)). For each angle, three circuits are tested: standard unitary implementation, Zeno implementation with N=8, and depth-matched control (same structure as Zeno but without measurements). 18 circuits total with 4096 shots each. This dataset enables clean separation of the post-selection effect from circuit structure effects.

zeno_vs_mitigation.json — Head-to-head comparison of Zeno against standard error mitigation techniques: no mitigation, dynamical decoupling with XX and XY4 pulse sequences, gate twirling, and combined DD+twirling. Tests three circuits: X gate (|0⟩ → |1⟩), |+⟩ state preservation, and Bell state creation. 6 job submissions with 4096 shots each. This dataset addresses the practical question of whether Zeno is competitive with established error mitigation approaches.

Two-Qubit and Entanglement Studies

zeno_cnot.json — Investigation of three approaches to Zeno-based CNOT implementation: adaptive (controlled Zeno drag), X-freeze (Zeno protection of control qubit with standard CNOT), and Bell (joint Zeno measurements for entanglement). All approaches are compared against standard CNOT. 20 circuits total with 2048 shots each. The results establish that Zeno protocols do not extend advantageously to two-qubit gates on current hardware.

zeno_entanglement_trajectories.json — Comprehensive study of Zeno applied to entangled states: Bell pairs, 3-qubit GHZ, and 4-qubit GHZ. Also includes qubit quality correlation analysis (how Zeno improvement varies with qubit T1 time) and detailed trajectory error analysis (fidelity as a function of flip count and flip position). 34 circuits with 4096 shots each. This dataset establishes the fundamental incompatibility between local Zeno measurements and entanglement preservation.

Parameter Space Exploration

zeno_megabatch.json — Comprehensive parameter sweep exploring multiple aspects of Zeno physics: schedule optimization (different N values), cross-qubit correlations (20 parallel Zeno chains to measure error independence), feedback protocols (classical feedback based on trajectory outcomes), parity-check stabilization (using parity measurements for error detection). 42 circuits with 2048 shots each. This is the "kitchen sink" experiment covering multiple research directions.

zeno_strength_landscape.json — Fine-grained exploration of the measurement strength × number of measurements parameter space. Measurement strength sweeps from 0.05 (nearly unitary) to 1.0 (fully projective) in five steps, crossed with N = 4, 8, and 12 measurements. 45 circuits with 4096 shots each. The results demonstrate that effective yield is approximately constant across this parameter space, reflecting the physics of total measurement dose.

Trajectory Analysis

trajectory_estimation.json — Raw trajectory data designed for offline analysis of post-selection strategies. Includes X-freeze and drag circuits at N = 4, 8, 12, and 16, plus partial rotation circuits at θ = π/4, π/2, and 3π/4. 11 circuits with 4096 shots each, all with full trajectory recording. This dataset supports research on trajectory weighting schemes, flip position analysis, and bias correction.

vqe_trajectory_validation.json — Full VQE energy landscape validation with Zeno and standard circuits. Implements Ry(θ) rotations for θ from 0 to π in nine steps, measuring ⟨Z⟩ at each point. Enables comparison of standard gates, hard post-selection Zeno, and calibrated soft post-selection Zeno for expectation value estimation. 18 circuits with 2048 shots each.

trajectory_weight_analysis.json — Offline analysis results comparing weighting strategies. Contains empirical fidelity measurements by flip count, strategy comparison (hard, soft k≤1, soft k≤2, inverse, exponential, empirical), flip position impact analysis, and optimal threshold determination. Derived from trajectory_estimation.json through post-processing.
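The weighting strategies compared in that file reduce to different weight functions over the same shots. A minimal sketch of weighted ⟨Z⟩ estimation under exp(-n) weighting; the input format and values are illustrative, not the dataset's actual structure:

```python
import math

def weighted_expval_z(shots):
    """shots: (final_bit, n_flips) pairs; weight each shot by exp(-n_flips)."""
    num = den = 0.0
    for final_bit, n_flips in shots:
        w = math.exp(-n_flips)  # hard post-selection would use w = (n_flips == 0)
        num += w * (1 if final_bit == 0 else -1)
        den += w
    return num / den

# Clean shots dominate; the two-flip shot contributes with weight e^-2.
print(round(weighted_expval_z([(0, 0), (0, 1), (1, 2)]), 3))
```

Swapping the weight function (hard, soft k≤2, inverse, empirical) reproduces the strategy comparison without rerunning any circuits.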

Gate Composition

zeno_composition/ — Contains three files for the gate composition experiment. standard_circuits.json has standard gate circuits: single Ry(π/2), composed Ry(π/4)+Ry(π/4), and depth-matched control. zeno_circuits.json has the corresponding Zeno circuits with full trajectory recording. trajectory_weighting_analysis.json contains the composition analysis showing how trajectory weighting eliminates the composition penalty. 4096 shots per circuit.

zeno_scaling/ — Extends composition to three gates. Contains standard and Zeno versions of 1-gate, 2-gate, and 3-gate chains, each implementing net Ry(π/2). Demonstrates that exponential weighting maintains ~94.5% effective yield regardless of gate count. 4096 shots per circuit.


Data Format

Each JSON file follows a consistent schema designed for both human readability and programmatic access:

  • experiment: String identifier for the experiment type (e.g., "zeno_drag", "zeno_controls")
  • timestamp: ISO 8601 timestamp of job submission (e.g., "2026-01-24T22:48:13.288791Z")
  • backend: IBM Quantum backend name (e.g., "ibm_torino")
  • shots: Number of measurement shots per circuit (typically 2048 or 4096)
  • job_id: IBM Quantum job identifier enabling reproducibility and verification
  • results: Nested structure containing experimental results, circuit-specific data, raw bitstrings, and computed metrics
  • usage_seconds: Quantum processing time consumed (for resource tracking)

Raw bitstrings are included in all files to enable reanalysis with different post-selection or weighting schemes. Bitstring format follows Qiskit convention: the rightmost bit corresponds to the first classical register. For trajectory data, bitstrings encode the full sequence of intermediate measurements, enabling reconstruction of the trajectory and analysis of flip positions.
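A minimal sketch of decoding a trajectory bitstring under that convention (the example string is illustrative, not taken from the dataset):

```python
def flip_positions(bitstring):
    """Time-ordered flip indices from a Qiskit-convention bitstring.

    Qiskit places the first classical register rightmost, so reversing
    the string gives the intermediate measurements in time order."""
    time_ordered = bitstring[::-1]
    return [i for i, bit in enumerate(time_ordered) if bit == "1"]

# An 8-measurement trajectory whose third measurement (index 2) flipped:
print(flip_positions("00000100"))  # -> [2]
```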


Hardware

All experiments were conducted on the same IBM Quantum backend to ensure consistency across the dataset:

Parameter                    Value
Backend                      IBM Quantum ibm_torino
Processor                    133-qubit Heron r2 superconducting transmon
Native gates                 CZ, RZ, SX, X, I
Typical T1                   150–250 μs
Typical T2                   140–200 μs
Measurement duration         ~1.5 μs
Single-qubit gate duration   ~30 ns
Two-qubit gate duration      ~100 ns

The Heron r2 processor uses fixed-frequency transmon qubits with tunable couplers, representing the current generation of IBM Quantum hardware. Gate decomposition during transpilation converts Ry rotations (required for Zeno basis changes) into sequences of native RZ and SX gates.

Transpilation Settings

All circuits were transpiled using Qiskit's generate_preset_pass_manager with:

  • Optimization level: 1 (light optimization, preserves circuit structure)
  • Target backend: ibm_torino
  • Basis gates: CZ, RZ, SX, X, I

Ry(θ) rotations decompose into RZ-SX-RZ sequences. For example, Ry(π/2) is equivalent, up to a global phase, to applying RZ(-π/2), then SX, then RZ(π/2).
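The RZ-SX-RZ decomposition can be checked numerically without a backend. A sketch using Qiskit's RZ convention, RZ(φ) = diag(e^{-iφ/2}, e^{iφ/2}), in pure NumPy:

```python
import numpy as np

def rz(phi):
    """Qiskit RZGate matrix: diag(exp(-i*phi/2), exp(+i*phi/2))."""
    return np.diag([np.exp(-1j * phi / 2), np.exp(1j * phi / 2)])

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

SX = 0.5 * np.array([[1 + 1j, 1 - 1j],
                     [1 - 1j, 1 + 1j]])  # sqrt(X)

# Gate order: RZ(-pi/2) first, then SX, then RZ(pi/2); the first gate
# applied sits rightmost in the matrix product.
decomp = rz(np.pi / 2) @ SX @ rz(-np.pi / 2)

# Agreement with Ry(pi/2) up to a global phase:
phase = decomp[0, 0] / abs(decomp[0, 0])
assert np.allclose(decomp / phase, ry(np.pi / 2))
```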

Calibration Snapshot

Qubit properties at time of experiment (January 26, 2026), from zeno_megabatch.json:

Qubit   T1 (μs)   T2 (μs)   Quality
0       156       164       Typical
1       215       194       Good
6       46        81        Low T1
8       246       214       Good
10      232       252       Good
15      192       34        Low T2

Full calibration data for all 133 qubits is embedded in results/zeno_megabatch/zeno_megabatch.json. Calibration drift is a known limitation: IBM recalibrates daily, but parameters can drift significantly within a calibration window.

Execution Details

  • Shots per circuit: 2048–4096 depending on experiment
  • Execution mode: IBM Quantum Runtime Batch (parallel compilation)
  • Total QPU time: ~200 seconds across all experiments
  • Temporal window: All jobs completed within 24 hours (single calibration epoch)

Citation

@dataset{qiskit-zenodragging,
  title={Quantum Zeno Dragging on IBM Quantum Hardware},
  author={Norton, Charles C.},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/phanerozoic/qiskit-zenodragging}
}

References

Hacohen-Gourgy, S., et al. "Incoherent Qubit Control Using the Quantum Zeno Effect." Physical Review Letters 120, 020505 (2018).

Lewalle, P., et al. "A Multi-Qubit Quantum Gate Using the Zeno Effect." Quantum 7, 1100 (2023).

Lewalle, P., et al. "Optimal Zeno Dragging for Quantum Control." PRX Quantum 5, 020366 (2024).

License

CC-BY-4.0
