mjbommar committed · verified · Commit a3f1af7 · 1 Parent(s): ef3a0f2

Add dataset card

Files changed (1):
  1. README.md +121 -39

README.md CHANGED
@@ -1,41 +1,123 @@
 ---
-dataset_info:
-  features:
-  - name: blake2b
-    dtype: string
-  - name: mime_type
-    dtype: string
-  - name: source_path
-    dtype: string
-  - name: source_root
-    dtype: string
-  - name: extension
-    dtype: string
-  - name: source_size_bytes
-    dtype: int64
-  - name: sample_bytes
-    dtype: int32
-  - name: tokens
-    list: int32
-  splits:
-  - name: train
-    num_bytes: 4159696849
-    num_examples: 37111
-  - name: validation
-    num_bytes: 508386684
-    num_examples: 4591
-  - name: test
-    num_bytes: 539592038
-    num_examples: 4748
-  download_size: 2315936056
-  dataset_size: 5207675571
-configs:
-- config_name: default
-  data_files:
-  - split: train
-    path: data/train-*
-  - split: validation
-    path: data/validation-*
-  - split: test
-    path: data/test-*
 ---
---
language:
- en
license: apache-2.0
task_categories:
- text-classification
tags:
- binary-analysis
- file-type-detection
- mime-type
- magic-bytes
- cybersecurity
size_categories:
- 10K<n<100K
---

# Magic-BERT Binary File Classification Dataset

This dataset contains tokenized binary file samples for training MIME-type classification models.
Each sample is a 64 KB chunk taken from the beginning of a file, tokenized with a byte-level BPE tokenizer.
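The extraction pipeline itself is not included in this card, but a minimal sketch of how one such record could be assembled might look like the following. This is an illustration only: the 64 KB cap and the field names come from this card, while `mimetypes.guess_type` stands in for whatever MIME detection the authors actually used, and the BPE tokenization step is omitted because the tokenizer is not published here.

```python
import hashlib
import mimetypes
from pathlib import Path

SAMPLE_CAP = 64 * 1024  # 64 KB, per the card's description


def make_sample(path: str, source_root: str) -> dict:
    """Build one record with this card's metadata fields.

    The `tokens` field is omitted: producing it requires the dataset's
    byte-level BPE tokenizer, which is not part of this sketch.
    """
    p = Path(path)
    data = p.read_bytes()[:SAMPLE_CAP]         # keep only the first 64 KB
    mime, _ = mimetypes.guess_type(p.name)     # placeholder for real detection
    return {
        "blake2b": hashlib.blake2b(data).hexdigest(),
        "mime_type": mime or "application/octet-stream",
        "source_path": str(p),
        "source_root": source_root,
        "extension": p.suffix,
        "source_size_bytes": p.stat().st_size,
        "sample_bytes": len(data),
    }
```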

## Dataset Description

- **Total Samples:** 46,450
- **Number of Classes:** 125
- **Sequence Length:** variable (each sample covers at most 64 KB of input)
- **Vocabulary Size:** 32,768

### Splits

| Split | Samples |
|-------|---------|
| Train | 37,111 |
| Validation | 4,591 |
| Test | 4,748 |

## Features

Each sample contains:

| Feature | Type | Description |
|---------|------|-------------|
| `blake2b` | string | BLAKE2b content hash (unique sample ID) |
| `mime_type` | string | MIME type label |
| `source_path` | string | Original file path |
| `source_root` | string | Dataset source |
| `extension` | string | File extension |
| `source_size_bytes` | int64 | Original file size in bytes |
| `sample_bytes` | int32 | Sample size (max 64 KB) |
| `tokens` | List[int32] | Byte-level BPE token IDs (variable length) |
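These fields imply a couple of invariants that are cheap to assert while iterating the data. A small sketch of such a check follows; the field names, types, and the 64 KB cap come from the table above, and nothing else is assumed:

```python
# Documented schema: field name -> expected Python type after loading.
EXPECTED_FIELDS = {
    "blake2b": str,
    "mime_type": str,
    "source_path": str,
    "source_root": str,
    "extension": str,
    "source_size_bytes": int,
    "sample_bytes": int,
    "tokens": list,
}


def check_record(rec: dict) -> None:
    """Assert one record matches the documented schema and invariants."""
    for name, typ in EXPECTED_FIELDS.items():
        assert isinstance(rec[name], typ), f"bad field {name!r}"
    # A sample is at most the first 64 KB of the source file.
    assert rec["sample_bytes"] <= 64 * 1024
    assert rec["sample_bytes"] <= rec["source_size_bytes"]
```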

## MIME Type Distribution (Top 20)

| MIME Type | Count | Percentage |
|-----------|-------|------------|
| application/octet-stream | 2,000 | 4.3% |
| application/vnd.ms-powerpoint | 2,000 | 4.3% |
| audio/flac | 2,000 | 4.3% |
| text/x-c | 2,000 | 4.3% |
| text/plain | 1,999 | 4.3% |
| image/png | 1,964 | 4.2% |
| application/gzip | 1,574 | 3.4% |
| image/svg+xml | 1,486 | 3.2% |
| text/html | 1,454 | 3.1% |
| application/vnd.ms-excel | 1,380 | 3.0% |
| application/javascript | 1,340 | 2.9% |
| application/x-7z-compressed | 1,200 | 2.6% |
| image/gif | 1,156 | 2.5% |
| text/csv | 1,128 | 2.4% |
| image/webp | 1,014 | 2.2% |
| application/zlib | 987 | 2.1% |
| text/x-lisp | 980 | 2.1% |
| application/x-gettext-translation | 965 | 2.1% |
| text/x-c++ | 800 | 1.7% |
| application/zstd | 781 | 1.7% |
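The head of the distribution is fairly balanced, but with 125 classes the tail is not, so inverse-frequency class weights are a common mitigation when training a classifier on data like this. A sketch using a few counts from the table above (the weighting scheme is a suggestion, not something this dataset prescribes):

```python
# Per-class counts, taken from the table above (a subset of the 125 classes).
counts = {
    "application/octet-stream": 2000,
    "application/gzip": 1574,
    "application/zstd": 781,
}

total = sum(counts.values())
num_classes = len(counts)

# Inverse-frequency weights, normalized so that a perfectly uniform
# class distribution would give every class a weight of 1.0.
weights = {mime: total / (num_classes * n) for mime, n in counts.items()}
```

Rarer classes receive proportionally larger weights, which can then be passed to a weighted loss function during training.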

## Usage

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("your-username/magic-bert-dataset")

# Access splits
train_data = dataset["train"]
val_data = dataset["validation"]
test_data = dataset["test"]

# Example: inspect a sample
sample = train_data[0]
print(f"MIME type: {sample['mime_type']}")
print(f"Tokens: {len(sample['tokens'])}")
```
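The `mime_type` labels are strings, so a deterministic string-to-id mapping is needed before they can feed a classification head. A minimal sketch follows; sorting the label set is an arbitrary but reproducible choice on our part, not an ordering the card specifies:

```python
def build_label_maps(mime_types):
    """Map MIME-type strings to contiguous integer ids and back."""
    label2id = {m: i for i, m in enumerate(sorted(set(mime_types)))}
    id2label = {i: m for m, i in label2id.items()}
    return label2id, id2label


# Usage with a few labels that appear in this dataset:
label2id, id2label = build_label_maps(
    ["image/png", "text/plain", "image/png", "application/gzip"]
)
```

On the real data you would pass the full `mime_type` column of the train split, yielding 125 contiguous ids.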

## Training

This dataset is designed for use with BERT-style models for binary file classification:

```python
from transformers import BertForSequenceClassification

model = BertForSequenceClassification.from_pretrained(
    "your-model",
    num_labels=125,  # one label per MIME-type class
)
```
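Because `tokens` is variable-length, batches need padding and an attention mask before they reach the model. A framework-free sketch of a collate step is below; the pad id of 0 and right-padding are assumptions on our part — check the actual tokenizer configuration before training:

```python
def pad_batch(token_lists, pad_id=0, max_length=None):
    """Right-pad token sequences to a common length and build attention masks."""
    limit = max_length or max(len(t) for t in token_lists)
    input_ids, attention_mask = [], []
    for toks in token_lists:
        toks = toks[:limit]                       # truncate if over the limit
        pad = [pad_id] * (limit - len(toks))      # right-pad the remainder
        input_ids.append(toks + pad)
        attention_mask.append([1] * len(toks) + [0] * len(pad))
    return input_ids, attention_mask
```

The resulting lists can be wrapped in tensors and passed to the model as `input_ids` and `attention_mask`.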

## Citation

If you use this dataset, please cite:

```bibtex
@dataset{magic_bert_dataset,
  title     = {Magic-BERT Binary File Classification Dataset},
  year      = {2025},
  publisher = {HuggingFace},
}
```

## License

This dataset is released under the Apache 2.0 License.