
qiskit-noise

NISQ noise characterization via quantum image encoding.

Contents

  • quantum_image_20260124_221920.json — 4x4 image encoding through identity, compression, and deep circuits

Experiment

4x4 images (plus, X, gradient, checkerboard) are encoded into 4-qubit amplitude vectors via StatePreparation. Three circuit types were tested:

  • Identity: encode → measure (baseline noise)
  • Compress: entangling circuit (information scrambling)
  • Deep: 50+ gate circuit that is mathematically the identity (noise accumulation)
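The encode → measure → reconstruct pipeline can be sketched in plain Python. This is a minimal sketch of the amplitude-encoding idea only; the function names and the example "plus" pattern are illustrative, not taken from the dataset's actual code, and the real experiment uses Qiskit's StatePreparation on hardware.

```python
import math

def encode_image(image):
    """Normalize 16 pixel values into a 4-qubit amplitude vector
    (the role StatePreparation plays in the experiment)."""
    flat = [p for row in image for p in row]
    norm = math.sqrt(sum(p * p for p in flat))
    return [p / norm for p in flat]

def reconstruct(counts, shots):
    """Invert the encoding: estimate amplitudes from measured counts.
    Phase information is lost, so only non-negative images are recoverable."""
    probs = [counts.get(format(i, "04b"), 0) / shots for i in range(16)]
    amps = [math.sqrt(p) for p in probs]
    return [amps[4 * r : 4 * r + 4] for r in range(4)]

# Illustrative "plus" pattern: a cross of bright pixels on a dark background.
plus = [
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 1, 0, 0],
    [0, 1, 0, 0],
]
amps = encode_image(plus)
assert abs(sum(a * a for a in amps) - 1.0) < 1e-12  # valid quantum state
```

On real hardware the counts fed to `reconstruct` come from noisy shots, which is exactly what the fidelity numbers below quantify.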

Key Results

Circuit    Mean Fidelity    Mechanism
Identity   0.957            State preparation + measurement error
Compress   0.399            Entanglement destroys local information
Deep       0.812            Gate errors compound: (0.995)^50 ≈ 0.78
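The Deep row's mechanism can be checked directly. Assuming an average per-gate fidelity of 0.995 (the figure from the table) compounding independently over 50 gates:

```python
def expected_fidelity(f_gate: float, n_gates: int) -> float:
    """Naive fidelity-decay model: errors compound multiplicatively,
    F ≈ f_gate ** n_gates. The 0.995 per-gate figure is an assumed average."""
    return f_gate ** n_gates

print(round(expected_fidelity(0.995, 50), 3))  # 0.778, matching the ≈ 0.78 estimate
```

The measured 0.812 sits slightly above this naive estimate, consistent with some gate errors partially cancelling rather than compounding independently.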

Finding

NISQ artifacts differ from classical compression:

  • JPEG: deterministic blockiness
  • NISQ: probabilistic amplitude leakage between basis states
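The contrast above can be made concrete with a toy model, not taken from the dataset's code: mix an ideal measurement distribution with a uniform "leakage" term and compute the classical (Bhattacharyya) fidelity. Unlike JPEG blocking, the damage spreads smoothly across every basis state.

```python
import math

def leak(probs, eps):
    """Toy model of amplitude leakage: mix the ideal distribution
    with a uniform distribution over all 16 basis states."""
    n = len(probs)
    return [(1 - eps) * p + eps / n for p in probs]

def classical_fidelity(p, q):
    """Bhattacharyya fidelity between two probability distributions."""
    return sum(math.sqrt(a * b) for a, b in zip(p, q)) ** 2

ideal = [1.0] + [0.0] * 15  # all weight on |0000>
for eps in (0.0, 0.1, 0.3):
    print(eps, round(classical_fidelity(ideal, leak(ideal, eps)), 3))
```

Fidelity degrades gradually with the leakage fraction, whereas a deterministic artifact would corrupt specific states entirely while leaving others untouched.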

Hardware

IBM Quantum ibm_torino; 15 seconds of QPU time.
