---
annotations_creators: []
language:
- code
license: cc-by-4.0
multilinguality:
- multilingual
pretty_name: ComPile
size_categories:
- unknown
source_datasets: []
task_categories:
- text-generation
task_ids: []
---
# Dataset Card for ComPile: A Large IR Dataset from Production Sources

## Table of Contents

- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Changelog](#changelog)
  - [Dataset Summary](#dataset-summary)
  - [Languages](#languages)
  - [Dataset Usage](#dataset-usage)
- [Dataset Structure](#dataset-structure)
  - [Data Fields](#data-fields)
- [Dataset Size](#dataset-size)
- [Licensing](#licensing)

## Dataset Description

- **Homepage:** https://llvm-ml.github.io/ComPile/
- **Paper:** https://arxiv.org/abs/2309.15432
- **Leaderboard:** N/A
### Changelog

|Release|Programming Languages|Description|
|-|-|-|
|v1.0| C/C++, Rust, Swift, Julia | Fine-tuning-scale dataset of 564GB of deduplicated LLVM IR |
### Dataset Summary

ComPile contains over 500GB of permissively-licensed source code compiled to [LLVM](https://llvm.org) intermediate representation (IR), covering C/C++, Rust, Swift, and Julia. The dataset was created by hooking into LLVM code generation, either through each language's package manager or through the compiler directly, to extract intermediate representations from production-grade programs using our [dataset collection utility for the LLVM compilation infrastructure](https://doi.org/10.5281/zenodo.10155761).
### Languages

The dataset contains **5 programming languages** as of v1.0:

```
"c++", "c", "rust", "swift", "julia"
```
### Dataset Usage

To use ComPile we recommend HuggingFace's [datasets library](https://huggingface.co/docs/datasets/index). For example, to load the dataset:

```python
from datasets import load_dataset
ds = load_dataset('llvm-ml/ComPile', split='train')
```

By default this will download the entirety of the 550GB+ dataset and cache it locally in the directory specified by the environment variable `HF_DATASETS_CACHE`, which defaults to `~/.cache/huggingface`. To load the dataset in a streaming format, where the data is not saved locally:

```python
ds = load_dataset('llvm-ml/ComPile', split='train', streaming=True)
```

For further arguments of `load_dataset`, please take a look at the [loading a dataset](https://huggingface.co/docs/datasets/load_hub) and [streaming](https://huggingface.co/docs/datasets/stream) documentation. Bear in mind that streaming is significantly slower than loading the dataset from local storage. For experimentation that requires more performance but might not require the whole dataset, you can also specify a portion of the dataset to download. For example, the following code will download only the first 10% of the dataset:

```python
ds = load_dataset('llvm-ml/ComPile', split='train[:10%]')
```
Once the dataset has been loaded, the individual module files can be accessed by iterating through the dataset or accessing specific indices:

```python
# We can iterate through the dataset
next(iter(ds))
# We can also access modules at specific indices
ds[0]
```

Filtering and map operations can be performed with the primitives available within the HuggingFace `datasets` library.
## Dataset Structure

### Data Fields

Each row in the dataset consists of an individual LLVM-IR module along with some metadata. There are six columns associated with each row:

- `content` (string): The raw bitcode that composes the module. This can be written to a `.bc` file and manipulated using the standard LLVM utilities, or passed in directly through stdin if using something like Python's `subprocess`.
- `license_expression` (string): The SPDX expression describing the license of the project that the module came from.
- `license_source` (string): How the `license_expression` was determined. This might indicate an individual package ecosystem (e.g. `spack`), license detection (e.g. `go_license_detector`), or manual curation (`manual`).
- `license_files`: An array of license file names. These names map to licenses included in `/licenses/licenses-0.parquet`.
- `package_source` (string): Information on the package that the module was sourced from. This is typically a link to a tar archive or git repository from which the project was built, but might also contain a mapping to a specific package ecosystem that provides the source, such as Spack.
- `language` (string): The source language that the module was compiled from.
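For instance, a module's bitcode can be written out and handed to an LLVM tool. A minimal sketch (the bytes below are a hypothetical stand-in for a real `content` value, and the commented-out invocations assume `llvm-dis` is on `PATH`):

```python
import os
import subprocess
import tempfile

# Hypothetical stand-in for row["content"] from the dataset.
bitcode = b"BC\xc0\xde"  # LLVM bitcode files begin with this magic number

# Write the bitcode to a .bc file for use with standard LLVM utilities.
with tempfile.NamedTemporaryFile(suffix=".bc", delete=False) as f:
    f.write(bitcode)
    bc_path = f.name

# The module could then be disassembled to textual IR, e.g.:
#   subprocess.run(["llvm-dis", bc_path, "-o", "module.ll"], check=True)
# or passed directly through stdin:
#   subprocess.run(["llvm-dis", "-"], input=bitcode, capture_output=True)
```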
## Dataset Size

| Language | Raw Size | License Constraints | Deduplicated + License Constraints |
|----------|----------|---------------------|------------------------------------|
| C/C++    | 124GB    | 47GB                | 31GB                               |
| C        | N/A      | N/A                 | 3GB                                |
| C++      | N/A      | N/A                 | 28GB                               |
| Julia    | 201GB    | 179GB               | 153GB                              |
| Swift    | 8GB      | 7GB                 | 7GB                                |
| Rust     | 656GB    | 443GB               | 373GB                              |
| Total    | 989GB    | 676GB               | 564GB                              |

The raw size is the size obtained directly from building all of the projects. The license constraints column shows the size per language after license information is taken into account. The last column shows the size when both license constraints and deduplication are taken into account, which is what is included in the dataset.
## Licensing

The individual modules within the dataset are subject to the licenses of the projects that they come from. License information is available in each row, including the SPDX license expression, the license files, and a link to the package source where license information can be further validated.

The curation of these modules is licensed under a CC-BY-4.0 license.
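When assembling a training subset, the per-row SPDX expressions can be screened programmatically. A minimal sketch, assuming a hypothetical allow-list and that a simple token check over the expression suffices (a full SPDX parser would be more robust, e.g. for `WITH` exception clauses):

```python
# Hypothetical allow-list of permissive SPDX identifiers.
PERMISSIVE = {"MIT", "Apache-2.0", "BSD-3-Clause", "ISC"}

def is_permissive(expression: str) -> bool:
    """Return True only if every license operand in the expression is allow-listed."""
    tokens = expression.replace("(", " ").replace(")", " ").split()
    operands = [t for t in tokens if t not in {"AND", "OR", "WITH"}]
    return bool(operands) and all(t in PERMISSIVE for t in operands)
```

A row could then be kept or dropped based on `is_permissive(row["license_expression"])`, with the `license_files` and `package_source` columns available for manual validation of anything borderline.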