
Assemblage Linux Dataset

Please note: the Assemblage code is published under the MIT license, while the dataset records each binary's source-code repository license. Please obey the original repository's license.

This repository holds the public Linux dataset for Assemblage, and you can find the paper here.

We also make the licensed source code (with all Git info) for these binaries available upon request. The compressed archive is too large to host here (~1.5 TB even at the highest compression level), so please contact us and we can set up SFTP or another file-sharing service for the transfer.

Files

  • linux_licensed.duckdb.zst — DuckDB database (zstd-compressed). Decompress with zstd -d linux_licensed.duckdb.zst → 127 GiB DuckDB file. Open with the duckdb Python/CLI client. Schema: binaries, functions, rvas, lines, pdbs (same column layout as the previous SQLite release).
  • binaries.tar.xz — raw ELF binaries.

Changelog

2024 June 28th: Updated binaries with license info.

2025 Nov 13th: Added GCC -O2 binaries for the binaries previously misflagged as -Oz. Please refer to the dataset docs.

2026 Apr 5th: Added new binaries with source code.

2026 May 7th: Replaced linux_licensed.sqlite.tar.xz with linux_licensed.duckdb.zst. The DWARF extraction pipeline was rewritten end-to-end to fix several bugs in the previous SQLite release, and the data was re-extracted from scratch for every binary in the corpus. Concrete fixes:

  • Function names — prefer DW_AT_linkage_name, fall back to DW_AT_name, then follow DW_AT_specification / DW_AT_abstract_origin. The previous extractor stored 16-char-truncated names for ~5.9% of rows; now <0.7% are 16 chars (and those are real 16-char names).
  • RVA end — derived from DW_AT_high_pc in a form-aware way (DW_FORM_addr → absolute address, DW_FORM_data* → offset from DW_AT_low_pc); DW_AT_ranges is properly handled, including DWARF v5 BaseAddressEntry + offset-pair lists. The previous extractor's int(str(size), 16) bug inflated function sizes; ranges in v5 binaries with split text sections are now correct.
  • Line program — skip state.line == 0 synthetic entries; per-CU bisect-walk handles overlapping subprogram + inlined-subroutine ranges; line→function assignment is exhaustive.
  • Source-code text — populated from on-disk source, byte-exact with the source file at the recorded line. Path heuristics resolve /tmp/projects/... and <32-hex>/... build-tmpdir prefixes.
  • Section pseudo-symbols (.text, .bss, etc.) and zero-length RVAs are filtered.
  • binaries.optimization is the literal compiler flag (-O0..-O3, -Oz).
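The form-aware DW_AT_high_pc rule from the fixes above can be sketched as a small helper (a sketch only; the function name is ours, and the form strings follow the usual DWARF/pyelftools spelling, not code from this pipeline):

```python
def function_end_rva(low_pc: int, high_pc_value: int, high_pc_form: str) -> int:
    """End address of a subprogram from DW_AT_low_pc / DW_AT_high_pc.

    DWARF allows DW_AT_high_pc to be encoded two ways:
      * DW_FORM_addr               -> an absolute end address
      * DW_FORM_data1..data8 or
        DW_FORM_udata              -> an offset to add to DW_AT_low_pc
    Treating the offset forms as absolute (or round-tripping the value
    through a hex-string parse, as the old extractor did) inflates
    function sizes.
    """
    if high_pc_form == "DW_FORM_addr":
        return high_pc_value
    if high_pc_form.startswith("DW_FORM_data") or high_pc_form == "DW_FORM_udata":
        return low_pc + high_pc_value
    raise ValueError(f"unhandled DW_AT_high_pc form: {high_pc_form}")
```

With pyelftools, `die.attributes["DW_AT_high_pc"]` exposes both `.value` and `.form`, so the same two-way branch applies directly when walking subprogram DIEs.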

Re-extraction was validated bit-exact against an independent DWARF + source walk on a 160-binary stratified sample (zero discrepancies across ~280K functions, ~1.4M RVAs, ~2M lines, ~50K source-code text strings) plus full-corpus integrity checks (4.7B rows, zero PK/FK/range/null violations).

Row counts: 249,121 binaries · 613,573,055 functions · 685,044,264 rvas · 4,028,246,246 lines.
