Dataset Preview

Preview of the per-array Zarr metadata (one row per array):

  shape             data_type   chunk_shape
  [60000, 16]       uint8       [30000, 8]
  [60000, 16]       uint8       [30000, 8]
  [60000, 16]       uint8       [30000, 8]
  [60000, 16]       uint8       [30000, 8]
  [60000, 1]        uint8       [60000, 1]
  [60000, 1]        uint8       [60000, 1]
  [60000, 100000]   int8        [30000, 20]

All arrays share the remaining fields: chunk_grid "regular", chunk_key_encoding "default" with separator "/", fill_value 0, codecs "bytes" followed by "zstd" (level 0, no checksum), attributes {}, zarr_format 3, node_type "array", and no storage_transformers.

ASCAD v1 Fixed Key Dataset

Dataset Summary

This repository contains the ASCAD v1 (Fixed Key) dataset, officially known as the ANSSI Side-Channel Analysis Database. It contains raw electromagnetic (EM) side-channel traces acquired from an 8-bit ATMega8515 microcontroller running a masked AES-128 software implementation. Because the data is stored as chunked Zarr arrays, it can be streamed directly into deep learning frameworks without downloading the entire dataset locally.

Dataset Generation

Pipeline Description:

The generation script downloads the original ASCAD v1 Fixed Key archive, extracts it, and uploads the optimized dataset to the Hugging Face Hub. The result contains the fixed-key traces and the metadata needed for side-channel analysis.

Chunking Parameters Used: The main traces array was written with the following chunk sizes; aligning your dataloader reads to these boundaries improves read throughput:

  • CHUNK_SIZE_Y = 30000
  • CHUNK_SIZE_X = 20
  • TOTAL_CHUNKS_Y = 2
  • TOTAL_CHUNKS_X = 5000
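
To illustrate, a minimal sketch of iterating over the traces in chunk-aligned row batches (the helper name `chunk_aligned_batches` is illustrative, not part of any library; the constants restate the parameters above):

```python
# Sketch: chunk-aligned batch indexing for the /traces array.
CHUNK_SIZE_Y = 30000   # rows (traces) per chunk
N_TRACES = 60000       # CHUNK_SIZE_Y * TOTAL_CHUNKS_Y

def chunk_aligned_batches(n_rows=N_TRACES, batch=CHUNK_SIZE_Y):
    """Yield (start, stop) row ranges aligned to chunk boundaries,
    so each batch read touches whole chunks instead of partial ones."""
    for start in range(0, n_rows, batch):
        yield start, min(start + batch, n_rows)

batches = list(chunk_aligned_batches())
# Two batches of 30000 rows each: [(0, 30000), (30000, 60000)]
```

Reading `traces[start:stop, :]` with these ranges downloads each compressed chunk exactly once.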

Pipeline Dependencies: If you need to reproduce the extraction environment, the following packages were used: zarr, huggingface-hub, hf-transfer, fsspec, hffs, rich, numpy, h5py, certifi

Dataset Structure

The data is structured into a root Zarr group containing the raw traces and a metadata sub-group containing the corresponding cryptographic variables.

  • /traces: int8 array containing the raw side-channel measurements.
  • /metadata/plaintext: Plaintext byte arrays.
  • /metadata/ciphertext: Ciphertext byte arrays.
  • /metadata/key: The fixed encryption key byte arrays.
  • /metadata/mask: Masking material used during the AES execution.
  • /metadata/rin: The r_in mask arrays.
  • /metadata/rout: The r_out mask arrays.
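
As a hedged sketch of how these arrays are typically combined in side-channel attacks, the usual first step of label derivation XORs a plaintext byte with the matching key byte (the full ASCAD target value also applies the AES S-box, omitted here; the byte values below are synthetic stand-ins for rows of /metadata/plaintext and /metadata/key):

```python
# Hedged sketch: XOR step of ASCAD-style label derivation.
byte_index = 2                        # hypothetical target byte
plaintext_bytes = [0x32, 0x88, 0x31]  # one plaintext byte per trace (stand-ins)
key_byte = 0x2B                       # fixed key, identical for every trace

# One label per trace: p XOR k for the chosen byte position
labels = [p ^ key_byte for p in plaintext_bytes]
```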

How to Use

You can stream or load this dataset directly from the Hugging Face Hub using zarr and fsspec; the hf:// protocol is provided by huggingface_hub (formerly via the hffs package).

Python Example

Ensure you have the required dependencies installed:

pip install zarr fsspec huggingface-hub

Then open the arrays:

import zarr
import fsspec

repo_id = "DLSCA/ascad-v1-fk"

# Create mappers pointing at the array directories inside the repo
# (the hf:// protocol is registered by huggingface_hub)
traces_store = fsspec.get_mapper(f"hf://datasets/{repo_id}/traces")
plaintext_store = fsspec.get_mapper(f"hf://datasets/{repo_id}/metadata/plaintext")
key_store = fsspec.get_mapper(f"hf://datasets/{repo_id}/metadata/key")

# Open the arrays lazily; only the chunks you index are downloaded
traces = zarr.open_array(traces_store, mode="r")
plaintext = zarr.open_array(plaintext_store, mode="r")
key = zarr.open_array(key_store, mode="r")

print(f"Traces shape: {traces.shape}")
print(f"Sample trace: {traces[0, :10]}")
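
The opened arrays can feed a training loop without being materialized in memory. As a sketch, a class following the `__len__`/`__getitem__` protocol used by `torch.utils.data.Dataset` (shown without importing torch, with plain lists standing in for the zarr arrays):

```python
class TraceDataset:
    """Map-style dataset over trace/label pairs. Anything supporting
    len() and integer indexing works, e.g. the zarr arrays opened
    above; only the chunks backing each requested index are fetched."""

    def __init__(self, traces, labels):
        self.traces = traces
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        return self.traces[i], self.labels[i]

# Plain-list stand-ins for illustration:
ds = TraceDataset([[0, 1], [2, 3]], [10, 11])
```

An instance of this class can be passed directly to a torch DataLoader for batched, shuffled reads.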