---
license: mit
dataset_info:
  features:
  - name: filename
    dtype: string
  - name: cuda_source
    dtype: string
  - name: cuda_host
    dtype: string
  - name: cuda_device
    dtype: string
  - name: hip_source
    dtype: string
  - name: hip_host
    dtype: string
  - name: hip_device
    dtype: string
  splits:
  - name: train
    num_bytes: 18979794237
    num_examples: 70694
  - name: stack
    num_bytes: 6087813411
    num_examples: 24170
  - name: synth
    num_bytes: 11766271412
    num_examples: 40591
  - name: bench
    num_bytes: 3676152
    num_examples: 40
  download_size: 10789629544
  dataset_size: 36837555212
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: stack
    path: data/stack-*
  - split: synth
    path: data/synth-*
  - split: bench
    path: data/bench-*
---
# 💻 CASS: CUDA–AMD Assembly and Source Mapping

[CASS](https://huggingface.co/datasets/MBZUAI/CASS) is the **first large-scale dataset** for cross-architecture GPU transpilation, providing semantically aligned CUDA–HIP source pairs and their corresponding host/device assemblies for **NVIDIA (SASS)** and **AMD (RDNA3)** platforms. It enables research in:

* 🔁 Source-to-source translation (CUDA ↔ HIP)
* ⚙️ Assembly-level translation (SASS ↔ RDNA3)
* 🧠 LLM-guided GPU code transpilation

---

## 📚 Dataset Structure

Each sample contains the following fields:

| Field         | Description                                |
| ------------- | ------------------------------------------ |
| `filename`    | Sample ID or file name                     |
| `cuda_source` | Original CUDA source code                  |
| `cuda_host`   | Compiled x86 host-side assembly from CUDA  |
| `cuda_device` | Compiled SASS (NVIDIA GPU) device assembly |
| `hip_source`  | Transpiled HIP source code (via HIPIFY)    |
| `hip_host`    | Compiled x86 host-side assembly from HIP   |
| `hip_device`  | Compiled RDNA3 (AMD GPU) device assembly   |

---
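
Because all seven fields are plain strings, each row behaves like a flat dict, and the CUDA/HIP fields line up by suffix (`source`, `host`, `device`). A minimal sketch using a hand-made stand-in row (the field contents below are illustrative placeholders, not actual dataset content):

```python
# Stand-in for one CASS row: all seven fields are string-typed.
# The values here are placeholders, not real dataset content.
sample = {
    "filename": "vector_add.cu",
    "cuda_source": "__global__ void add(float* a, float* b) { /* ... */ }",
    "cuda_host": "...x86 host assembly emitted for the CUDA build...",
    "cuda_device": "...SASS device assembly...",
    "hip_source": "__global__ void add(float* a, float* b) { /* ... */ }",
    "hip_host": "...x86 host assembly emitted for the HIP build...",
    "hip_device": "...RDNA3 device assembly...",
}

def aligned_pair(row: dict, level: str) -> tuple:
    """Return the (CUDA, HIP) pair at one alignment level:
    'source', 'host', or 'device'."""
    return row[f"cuda_{level}"], row[f"hip_{level}"]

# Assembly-level pair for this row
cuda_asm, hip_asm = aligned_pair(sample, "device")
```

The same accessor works unchanged on real rows loaded with the `datasets` library, since each row is a dict with exactly these keys.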

## 🔀 Dataset Splits

| Split   | Description                                                         | # Examples |
| ------- | ------------------------------------------------------------------- | ---------- |
| `train` | Union of `synth`, `stack`, and an OpenCL-derived subset             | 70,694     |
| `synth` | LLM-synthesized CUDA programs                                       | 40,591     |
| `stack` | Scraped and filtered CUDA from StackV2                              | 24,170     |
| `bench` | 40 curated eval tasks from 16 GPU domains                           | 40         |

---
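
These counts are internally consistent with the YAML header: `synth` and `stack` account for most of `train`, and the remainder corresponds to the OpenCL-derived samples, which are folded into `train` rather than released as a separate split. A quick sanity check on the arithmetic:

```python
# Example counts taken from this card's YAML header
num_train = 70_694
num_synth = 40_591
num_stack = 24_170

# Portion of `train` not covered by the released `synth`/`stack` splits
# (i.e. the OpenCL-derived samples folded into `train` only)
num_opencl = num_train - (num_synth + num_stack)
print(num_opencl)  # → 5933
```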

## 📦 How to Load

```python
from datasets import load_dataset

# 🧠 Load the full dataset (default config with all splits)
cass = load_dataset("MBZUAI/cass", name="default")

# Access a specific split
train_data = cass["train"]  # train = stack + synth + opencl
stack_data = cass["stack"]
synth_data = cass["synth"]
bench_data = cass["bench"]
```
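
Once loaded, each row can be turned into a source-to-source translation example. A minimal sketch of one possible formatting (the prompt wording is a hypothetical choice, and the row below is a hand-made stand-in; real rows come from `cass["train"]`):

```python
def to_translation_example(row: dict) -> dict:
    """Format one CASS row as a CUDA -> HIP translation pair.

    This works at the source level; swapping in the `cuda_device`/`hip_device`
    fields would give an assembly-level (SASS -> RDNA3) pair instead.
    """
    return {
        "input": f"Translate this CUDA code to HIP:\n{row['cuda_source']}",
        "target": row["hip_source"],
    }

# Stand-in row (illustrative placeholder, not real dataset content)
row = {
    "cuda_source": "__global__ void k() {}",
    "hip_source": "__global__ void k() {}",
}
example = to_translation_example(row)
```

In practice this function would be applied over a split, e.g. with `train_data.map(to_translation_example)`.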

---

## 📈 Benchmark and Evaluation

The `bench` split includes 40 samples across 16 domains, such as:

* 🧪 Physics Simulation
* 📊 Data Structures
* 📸 Image Processing
* 🧮 Linear Algebra

All samples have been manually verified for semantic equivalence across CUDA and HIP, and come with executable device/host binaries.

---

## 📄 License

Released under the **MIT license**.

---

## 🔗 Useful Links

* 🤗 Hugging Face Collection: [CASS on Hugging Face](https://huggingface.co/collections/MBZUAI/cass-6825b5bf7414503cf16f87b2)
* 📂 Code & Tools: [GitHub Repository](https://github.com/GustavoStahl/CASS)