anuj20 darkknight25 committed on
Commit 040c1b5 · verified · 0 Parent(s)

Duplicate from darkknight25/Advanced_SIEM_Dataset


Co-authored-by: Sunny thakur <darkknight25@users.noreply.huggingface.co>

Files changed (3)
  1. .gitattributes +60 -0
  2. README.md +313 -0
  3. advanced_siem_dataset.jsonl +3 -0
.gitattributes ADDED
@@ -0,0 +1,60 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.lz4 filter=lfs diff=lfs merge=lfs -text
*.mds filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
# Image files - uncompressed
*.bmp filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
# Image files - compressed
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
# Video files - compressed
*.mp4 filter=lfs diff=lfs merge=lfs -text
*.webm filter=lfs diff=lfs merge=lfs -text
advanced_siem_dataset.jsonl filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,313 @@
---
license: mit
language:
- en
tags:
- siem
- cybersecurity
pretty_name: Advanced SIEM Dataset
size_categories:
- 100K<n<1M
---
# Advanced SIEM Dataset

# Dataset Description

The advanced_siem_dataset is a synthetic dataset of 100,000 security event records designed for training machine learning (ML) and artificial intelligence (AI) models in cybersecurity.

It simulates logs from Security Information and Event Management (SIEM) systems, capturing diverse event types such as firewall activity, intrusion detection system (IDS) alerts, authentication attempts, endpoint activity, network traffic, cloud operations, IoT device events, and AI system interactions.

The dataset includes advanced metadata, MITRE ATT&CK techniques, threat actor associations, and unconventional indicators of compromise (IOCs), making it suitable for tasks such as anomaly detection, threat classification, predictive analytics, and user and entity behavior analytics (UEBA).
```
Paper: N/A
Point of Contact: Sunny Thakur, sunny48445@gmail.com
Size of Dataset: 100,000 records
File Format: JSON Lines (.jsonl)
License: MIT License
```
# Dataset Structure

The dataset is stored in a single train split in JSON Lines format, with each record representing a security event. Below is the schema:

| Field | Type | Description |
|---|---|---|
| event_id | String | Unique identifier (UUID) for the event. |
| timestamp | String | ISO 8601 timestamp of the event. |
| event_type | String | Event category: firewall, ids_alert, auth, endpoint, network, cloud, iot, ai. |
| source | String | Security tool and version (e.g., "Splunk v9.0.2"). |
| severity | String | Severity level: info, low, medium, high, critical, emergency. |
| description | String | Human-readable summary of the event. |
| raw_log | String | CEF-formatted raw log with optional noise. |
| advanced_metadata | Dict | Metadata including geo_location, device_hash, user_agent, session_id, risk_score, confidence. |
| behavioral_analytics | Dict | Optional; includes baseline_deviation, entropy, frequency_anomaly, sequence_anomaly (10% of records). |
| Event-specific fields | Varies | E.g., src_ip, dst_ip, alert_type (for ids_alert), user (for auth), action, etc. |
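
Given the schema above, individual records can be sanity-checked by asserting the core fields and their types. A minimal illustrative sketch (the required-field list comes from the table; the validator itself is not part of the dataset tooling):

```python
# Illustrative sanity check of one event against the core schema fields;
# the required-field list is taken from the schema table above.
REQUIRED_FIELDS = {
    "event_id": str,
    "timestamp": str,
    "event_type": str,
    "source": str,
    "severity": str,
    "description": str,
    "raw_log": str,
    "advanced_metadata": dict,
}

def validate_event(event: dict) -> list:
    """Return a list of schema problems; an empty list means the event passes."""
    problems = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in event:
            problems.append(f"missing field: {field}")
        elif not isinstance(event[field], expected_type):
            problems.append(f"bad type for {field}")
    return problems

# A minimal (hypothetical) event covering only the required fields:
event = {
    "event_id": "123e4567-e89b-12d3-a456-426614174000",
    "timestamp": "2025-07-11T11:27:00+00:00",
    "event_type": "ids_alert",
    "source": "Snort v2.9.20",
    "severity": "high",
    "description": "example",
    "raw_log": "CEF:0|...",
    "advanced_metadata": {"risk_score": 85.5, "confidence": 0.95},
}
print(validate_event(event))
```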
Sample Record:

```json
{
  "event_id": "123e4567-e89b-12d3-a456-426614174000",
  "timestamp": "2025-07-11T11:27:00+00:00",
  "event_type": "ids_alert",
  "source": "Snort v2.9.20",
  "severity": "high",
  "description": "Snort Alert: Zero-Day Exploit detected from 192.168.1.100 targeting N/A | MITRE Technique: T1059.001",
  "raw_log": "CEF:0|Snort v2.9.20|SIEM|1.0|100|ids_alert|high| desc=Snort Alert: Zero-Day Exploit detected from 192.168.1.100 targeting N/A | MITRE Technique: T1059.001",
  "advanced_metadata": {
    "geo_location": "United States",
    "device_hash": "a1b2c3d4e5f6",
    "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/91.0.4472.124",
    "session_id": "987fcdeb-1234-5678-abcd-426614174000",
    "risk_score": 85.5,
    "confidence": 0.95
  },
  "alert_type": "Zero-Day Exploit",
  "signature_id": "SIG-1234",
  "category": "Exploit",
  "additional_info": "MITRE Technique: T1059.001"
}
```
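
The raw_log field uses a pipe-delimited CEF header, so a record like the one above can be split into its components with plain string handling. A minimal sketch (header field names follow the usual CEF layout; this is not a full CEF parser and ignores escaping rules in the extension part):

```python
# Minimal sketch of splitting the pipe-delimited CEF header in raw_log.
# The field layout is assumed from the sample record above.
raw_log = ("CEF:0|Snort v2.9.20|SIEM|1.0|100|ids_alert|high| "
           "desc=Snort Alert: Zero-Day Exploit detected from 192.168.1.100 "
           "targeting N/A | MITRE Technique: T1059.001")

parts = raw_log.split("|")
header = {
    "version": parts[0],          # "CEF:0"
    "device_vendor": parts[1],    # "Snort v2.9.20"
    "device_product": parts[2],   # "SIEM"
    "device_version": parts[3],   # "1.0"
    "signature_id": parts[4],     # "100"
    "name": parts[5],             # "ids_alert"
    "severity": parts[6],         # "high"
}
print(header["device_vendor"], header["severity"])
```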
# Intended Use

This dataset is intended for:

- **Anomaly Detection**: Identify unusual patterns (e.g., zero-day exploits, beaconing) using unsupervised learning.
- **Threat Classification**: Classify events by severity or event_type for incident prioritization.
- **User and Entity Behavior Analytics (UEBA)**: Detect insider threats or compromised credentials by analyzing auth or endpoint events.
- **Predictive Analytics**: Forecast high-risk periods using time-series analysis of risk_score and timestamp.
- **Threat Hunting**: Leverage MITRE ATT&CK techniques and IOCs in additional_info for threat intelligence.
- **Red Teaming**: Simulate adversarial scenarios (e.g., APTs, DNS tunneling) for testing SIEM systems.
# Loading the Dataset

Install the datasets library:

```shell
pip install datasets
```

Load the dataset from the Hugging Face Hub:

```python
from datasets import load_dataset

dataset = load_dataset("your-username/advanced_siem_dataset", split="train")
print(dataset)
```

For large-scale processing, use streaming:

```python
dataset = load_dataset("your-username/advanced_siem_dataset", streaming=True)
for example in dataset["train"]:
    print(example["event_type"], example["severity"])
    break
```

# Preprocessing

Preprocessing is critical for ML/AI tasks. Below are recommended steps.

**Numerical Features**: Extract risk_score and confidence from advanced_metadata and normalize them using StandardScaler:

```python
from sklearn.preprocessing import StandardScaler
import pandas as pd

df = dataset.to_pandas()
df['risk_score'] = df['advanced_metadata'].apply(lambda x: x['risk_score'])
df['confidence'] = df['advanced_metadata'].apply(lambda x: x['confidence'])
scaler = StandardScaler()
X = scaler.fit_transform(df[['risk_score', 'confidence']])
```

**Categorical Features**: Encode event_type, severity, and other categorical fields using LabelEncoder or one-hot encoding:

```python
from sklearn.preprocessing import LabelEncoder

le = LabelEncoder()
df['event_type_encoded'] = le.fit_transform(df['event_type'])
```

**Text Features**: Tokenize description or raw_log for NLP tasks using Hugging Face Transformers:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
def preprocess_function(examples):
    return tokenizer(examples["description"], padding="max_length", truncation=True)
processed_dataset = dataset.map(preprocess_function, batched=True)
```

**Handling Missing Values**: Replace "N/A" in dst_ip (for some ids_alert events) with a placeholder (e.g., 0.0.0.0) or filter those records out.

**Feature Engineering**: Extract MITRE ATT&CK techniques and IOCs from additional_info using regex or string parsing, and convert timestamp to datetime for temporal analysis:

```python
df['timestamp'] = pd.to_datetime(df['timestamp'])
```

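As a concrete example of the string parsing mentioned above, MITRE ATT&CK technique IDs follow a T#### or T####.### pattern that a short regex can pull out of additional_info (a sketch; sub-technique nesting beyond one level is not handled):

```python
import re

# MITRE ATT&CK technique IDs look like T1059 or T1059.001; this regex
# captures both forms from free-text fields such as additional_info.
TECHNIQUE_RE = re.compile(r"\bT\d{4}(?:\.\d{3})?\b")

def extract_techniques(text: str) -> list:
    """Return all technique IDs found in a free-text field."""
    return TECHNIQUE_RE.findall(text)

print(extract_techniques("MITRE Technique: T1059.001"))
```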
# Training Examples

Below are example ML/AI workflows using Hugging Face and other libraries.

## Anomaly Detection (Isolation Forest)

Detect unusual events like zero-day exploits or beaconing:
```python
from datasets import load_dataset
from sklearn.ensemble import IsolationForest
import pandas as pd

dataset = load_dataset("darkknight25/advanced_siem_dataset", split="train")
df = dataset.to_pandas()
X = df['advanced_metadata'].apply(lambda x: [x['risk_score'], x['confidence']]).tolist()

model = IsolationForest(contamination=0.05, random_state=42)
df['anomaly'] = model.fit_predict(X)  # fit before predicting; -1 marks anomalies
anomalies = df[df['anomaly'] == -1]
print(f"Detected {len(anomalies)} anomalies")
```
## Threat Classification (Transformers)

Classify events by severity using a BERT model:

```python
from datasets import load_dataset
from transformers import AutoTokenizer, AutoModelForSequenceClassification, Trainer, TrainingArguments
from sklearn.preprocessing import LabelEncoder

dataset = load_dataset("your-username/advanced_siem_dataset", split="train")

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
le = LabelEncoder()
le.fit(dataset["severity"])  # fit once on the full column, not per example
dataset = dataset.map(lambda x: {"labels": int(le.transform([x["severity"]])[0])})
dataset = dataset.map(lambda x: tokenizer(x["description"], padding="max_length", truncation=True))

train_test = dataset.train_test_split(test_size=0.2)
train_dataset = train_test["train"]
eval_dataset = train_test["test"]

model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=len(le.classes_))
training_args = TrainingArguments(
    output_dir="./results",
    evaluation_strategy="epoch",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=3,
)
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
)
trainer.train()
```
## Time-Series Forecasting (Prophet)

Forecast risk_score trends:

```python
from datasets import load_dataset
from prophet import Prophet
import pandas as pd

dataset = load_dataset("darkknight25/advanced_siem_dataset", split="train")
df = dataset.to_pandas()
ts_data = df[['timestamp', 'advanced_metadata']].copy()
ts_data['ds'] = pd.to_datetime(ts_data['timestamp'])
ts_data['y'] = ts_data['advanced_metadata'].apply(lambda x: x['risk_score'])

model = Prophet()
model.fit(ts_data[['ds', 'y']])
future = model.make_future_dataframe(periods=30)
forecast = model.predict(future)
print(forecast[['ds', 'yhat', 'yhat_lower', 'yhat_upper']].tail())
```

## UEBA (K-Means Clustering)

Cluster auth events to detect insider threats:

```python
from datasets import load_dataset
from sklearn.cluster import KMeans
import pandas as pd

dataset = load_dataset("darkknight25/advanced_siem_dataset", split="train")
auth_df = dataset.filter(lambda x: x["event_type"] == "auth").to_pandas()
X = auth_df['advanced_metadata'].apply(lambda x: [x['risk_score'], x['confidence']]).tolist()

kmeans = KMeans(n_clusters=3, random_state=42)
auth_df['cluster'] = kmeans.fit_predict(X)
print(auth_df.groupby('cluster')[['user', 'action']].describe())
```
# Limitations

- **Synthetic Nature**: The dataset is synthetic and may not fully capture real-world SIEM log complexities, such as vendor-specific formats or noise patterns.
- **Class Imbalance**: Certain event_type (e.g., ai, iot) or severity (e.g., emergency) values may be underrepresented. Use data augmentation or reweighting for balanced training.
- **Missing Values**: Some dst_ip fields in ids_alert events are "N/A", requiring imputation or filtering.
- **Timestamp Anomalies**: 5% of records include intentional timestamp anomalies (future/past dates) to simulate time-based attacks, which may require special handling.
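
For the reweighting suggested above, scikit-learn's balanced class weights are one option; a sketch on toy labels (the real input would be the dataset's severity column):

```python
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

# Toy severity labels standing in for dataset["severity"]; "emergency"
# is deliberately rare to mirror the imbalance described above.
labels = ["info"] * 6 + ["high"] * 3 + ["emergency"] * 1
classes = np.unique(labels)
weights = compute_class_weight(class_weight="balanced", classes=classes, y=labels)
print(dict(zip(classes, weights)))
```

The resulting weights can be passed to a classifier's loss (e.g., as `class_weight` in scikit-learn estimators that support it).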
# Bias and Ethical Considerations

- The dataset includes synthetic geo_location data with a 5% chance of high-risk locations (e.g., North Korea, Russia). This is for anomaly simulation and not indicative of real-world biases. Ensure models do not inadvertently profile based on geo_location.
- User names and other PII-like fields are generated using faker and do not represent real individuals.
- Models trained on this dataset should be validated to avoid overfitting to synthetic patterns.
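
One simple audit for the geo_location concern is to compare the rate of model-flagged events per location; the sketch below uses a tiny made-up frame (column names mirror the dataset, the flagged column is a hypothetical model output):

```python
import pandas as pd

# Toy flagged-event frame: geo_location plus a hypothetical model flag.
# Large rate gaps between locations would suggest the model is keying
# on geography rather than behavior.
df = pd.DataFrame({
    "geo_location": ["United States", "United States", "Russia", "Russia"],
    "flagged": [0, 1, 1, 1],
})
rates = df.groupby("geo_location")["flagged"].mean()
print(rates)
```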
# Citation

If you use this dataset in your work, please cite:

```bibtex
@dataset{advanced_siem_dataset_2025,
  author = {Sunny Thakur},
  title = {Advanced SIEM Dataset for Cybersecurity ML},
  year = {2025},
  publisher = {Hugging Face},
  url = {https://huggingface.co/datasets/darkknight25/advanced_siem_dataset}
}
```

# Acknowledgments

- Generated using a custom Python script (datasetcreator.py) with faker and numpy.
- Inspired by real-world SIEM log formats (e.g., CEF) and the MITRE ATT&CK framework.
- Thanks to the Hugging Face community for providing tools to share and process datasets.
advanced_siem_dataset.jsonl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b30902649de9197376e7b245045c6f6721ba3039edc1b7cac3474cc2ad0c4f21
size 93724615