satwickkk honicky committed on
Commit 7320c7d · 0 parent(s)

Duplicate from honicky/hdfs-logs-encoded-blocks

Co-authored-by: RJ Honicky <honicky@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,59 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mds filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+ # Video files - compressed
+ *.mp4 filter=lfs diff=lfs merge=lfs -text
+ *.webm filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,98 @@
+ ---
+ language: en
+ tags:
+ - log-analysis
+ - hdfs
+ - anomaly-detection
+ license: mit
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: data/train-*
+   - split: validation
+     path: data/validation-*
+   - split: test
+     path: data/test-*
+ dataset_info:
+   features:
+   - name: event_encoded
+     dtype: string
+   - name: tokenized_block
+     sequence: int64
+   - name: block_id
+     dtype: string
+   - name: label
+     dtype: string
+   - name: __index_level_0__
+     dtype: int64
+   splits:
+   - name: train
+     num_bytes: 1159074302
+     num_examples: 460048
+   - name: validation
+     num_bytes: 145089712
+     num_examples: 57506
+   - name: test
+     num_bytes: 144844752
+     num_examples: 57507
+   download_size: 173888975
+   dataset_size: 1449008766
+ ---
+ 
+ # HDFS Logs Train/Val/Test Splits
+ 
+ This dataset contains preprocessed HDFS log sequences split into train, validation, and test sets for anomaly detection tasks.
+ 
+ ## Dataset Description
+ 
+ The dataset is derived from the HDFS log dataset, which contains system logs from a Hadoop Distributed File System (HDFS).
+ Each sequence represents a block of log messages, labeled as either normal or anomalous. The dataset has been preprocessed
+ using the Drain algorithm to extract structured fields and identify event types.
+ 
+ ### Data Fields
+ 
+ - `block_id`: Unique identifier for each HDFS block, used to group log messages into blocks
+ - `event_encoded`: The preprocessed log sequence with event IDs and parameters
+ - `tokenized_block`: The tokenized log sequence, used for training
+ - `label`: Classification label ('Normal' or 'Anomaly')
+ 
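To make the schema concrete, here is what a single record might look like. The values below are invented for illustration; only the field names and types follow the schema above (the auxiliary `__index_level_0__` column is omitted):

```python
# Hypothetical record: the values are invented, but the keys and types
# match the dataset fields described above.
example_record = {
    "block_id": "blk_-1608999687919862906",  # HDFS block identifier (invented)
    "event_encoded": "E5 E22 E5 E11",        # event-ID sequence (invented)
    "tokenized_block": [3, 17, 3, 42],       # token ids (invented)
    "label": "Normal",                       # 'Normal' or 'Anomaly'
}
```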
+ ### Data Splits
+ 
+ - Training set: 460,048 sequences (80%)
+ - Validation set: 57,506 sequences (10%)
+ - Test set: 57,507 sequences (10%)
+ 
+ The splits are stratified by the `label` field to maintain the class distribution across splits.
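The actual splits were produced upstream, but the stratification idea can be sketched with the standard library alone: shuffle each label group independently, then cut each group 80/10/10 so every split keeps the same class balance. All names below are illustrative, not the real pipeline.

```python
import random
from collections import defaultdict

def stratified_split(records, key="label", fractions=(0.8, 0.1, 0.1), seed=42):
    """Split records 80/10/10 while preserving the label distribution.

    A minimal stdlib sketch of the stratification described above;
    treat it as illustrative only.
    """
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for rec in records:
        by_label[rec[key]].append(rec)

    train, validation, test = [], [], []
    for group in by_label.values():
        rng.shuffle(group)
        n_train = int(len(group) * fractions[0])
        n_val = int(len(group) * fractions[1])
        train.extend(group[:n_train])
        validation.extend(group[n_train:n_train + n_val])
        test.extend(group[n_train + n_val:])
    return train, validation, test

# Toy usage with a 9:1 class imbalance, similar in spirit to HDFS logs
records = [{"label": "Normal"}] * 900 + [{"label": "Anomaly"}] * 100
train, val, test = stratified_split(records)
```

Each split ends up with the same 10% anomaly rate as the full toy set, which is the point of stratifying rather than shuffling globally.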
+ 
+ ## Source Data
+ 
+ Original data source: https://zenodo.org/records/8196385/files/HDFS_v1.zip?download=1
+ 
+ ## Preprocessing
+ 
+ We preprocess the logs using the Drain algorithm to extract structured fields and identify event types.
+ We then encode the logs using a pretrained tokenizer and add special tokens to separate event types. This
+ dataset should be immediately usable for training and testing models for log-based anomaly detection.
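The core idea behind Drain-style log parsing is to mask the variable parameters in each message (block ids, addresses, counters) so that messages collapse into a small set of event templates. The real pipeline uses the actual Drain algorithm; this regex-based sketch is illustrative only.

```python
import re

def mask_parameters(message):
    """Toy stand-in for Drain-style template extraction (illustrative only)."""
    message = re.sub(r"blk_-?\d+", "<BLK>", message)                 # HDFS block ids
    message = re.sub(r"\d+\.\d+\.\d+\.\d+(:\d+)?", "<IP>", message)  # IP[:port]
    message = re.sub(r"\b\d+\b", "<NUM>", message)                   # bare numbers
    return message

log = "Receiving block blk_-1608999687919862906 src: /10.250.19.102:54106"
template = mask_parameters(log)
# A second message with different parameters collapses to the same template,
# so both map to one event type
assert mask_parameters("Receiving block blk_123 src: /10.0.0.1:80") == template
```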
+ 
+ ## Intended Uses
+ 
+ This dataset is designed for:
+ - Training log anomaly detection models
+ - Evaluating log sequence prediction models
+ - Benchmarking different approaches to log-based anomaly detection
+ 
+ See [honicky/pythia-14m-hdfs-logs](https://huggingface.co/honicky/pythia-14m-hdfs-logs) for an example model.
+ 
+ ## Citation
+ 
+ If you use this dataset, please cite the original HDFS paper:
+ ```bibtex
+ @inproceedings{xu2009detecting,
+   title={Detecting large-scale system problems by mining console logs},
+   author={Xu, Wei and Huang, Ling and Fox, Armando and Patterson, David and Jordan, Michael I},
+   booktitle={Proceedings of the ACM SIGOPS 22nd symposium on Operating systems principles},
+   pages={117--132},
+   year={2009}
+ }
+ ```
data/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:d4b2b4cac21c9a654dac6a813f8dfdb2dc909f3923a0ee13d96a11010e986740
+ size 17380648
data/train-00000-of-00003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4b8cd43699bee87e23d0767f8d14f208bfabfa01a7fa03c5f53a1159e1c47f87
+ size 46331214
data/train-00001-of-00003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b3fed255722c3e83c47d9c91b4028468bd8cdc0c681d1a9ed2af9f1fba65e0a6
+ size 46359934
data/train-00002-of-00003.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b39a5834f22658e42106ee733f3be6af8f625d4e22c566aca41c19ed051e6580
+ size 46413917
data/validation-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:3bbbe37f34d282a3c27ac955645b567543124cb6eef1a0414251d98e0e93c7f8
+ size 17403262