
# DB-Forensic-Hybrid Dataset

A large-scale, event-level MySQL database forensics benchmark dataset for multi-class attack detection and attribution using real MySQL log streams. Events are labeled at the individual log-event level against 14 MITRE ATT&CK techniques plus a Normal class, collected across 8 realistic attack scenarios.

This dataset accompanies the paper:

"DB-Forensic-Hybrid: A Hybrid Deep Learning Framework for Database Intrusion Detection and Forensic Attribution" by Hyeonseop Shin, submitted to Forensic Science International: Digital Investigation (FSI:DI).

## Dataset Summary

| Property | Value |
|---|---|
| Task | Multi-class event-level sequence labeling |
| Classes | 15 (Normal + 14 MITRE ATT&CK techniques) |
| Total events | ~14.5M log events |
| Log sources | MySQL general log, error log, slow query log, binary log |
| Attack scenarios | 8 (S01–S08); Leave-One-Scenario-Out splits provided |
| Feature dimensionality | 46-dim float32 vectors (pre-extracted) |
| License | CC BY 4.0 |

## Access

This dataset is gated: please request access via the button above. Access is approved manually. Once approved, download with:

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="golfoscar/db-forensic-hybrid",
    repo_type="dataset",
    local_dir="data/",
    token="YOUR_HF_TOKEN",
)
```

To download only the pre-extracted features (~2.2 GB, sufficient to re-run all evaluations without a GPU):

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="golfoscar/db-forensic-hybrid",
    repo_type="dataset",
    local_dir="data/",
    allow_patterns=["features_v3/*", "val_versions/*"],
    token="YOUR_HF_TOKEN",
)
```
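Once the feature arrays are on disk, the five aligned per-split arrays can be loaded together. A minimal sketch; the helper name `load_split` is mine, not part of the repository:

```python
import numpy as np
from pathlib import Path

def load_split(feature_dir, split):
    """Memory-map the five aligned per-split arrays in features_v3/
    so the 1.2 GB train feature matrix is paged in lazily rather
    than read up front. Returns a dict keyed by array name."""
    d = Path(feature_dir)
    names = ("features", "labels", "anomaly_labels",
             "boundary_labels", "source_ids")
    arrays = {n: np.load(d / f"{split}_{n}.npy", mmap_mode="r")
              for n in names}
    # All five arrays are aligned event-by-event.
    assert all(a.shape[0] == arrays["features"].shape[0]
               for a in arrays.values())
    return arrays
```

For example, `load_split("data/features_v3", "train")["features"]` should have shape (6,326,705, 46).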

## Repository Structure

```
golfoscar/db-forensic-hybrid/
├── labeled/                        Raw labeled log events (Parquet)
│   ├── train/events.parquet        10.0 GB – 14,497,692 events
│   ├── val/events.parquet           1.7 GB –  2,250,771 events
│   └── test/events.parquet          1.7 GB –  1,919,768 events (held-out)
│
├── features_v3/                    Pre-extracted 46-dim features (NumPy)
│   ├── train_features.npy           1.2 GB – (6,326,705 × 46) float32
│   ├── train_labels.npy            51 MB  – (6,326,705,) int64
│   ├── train_anomaly_labels.npy    51 MB  – binary anomaly flag
│   ├── train_boundary_labels.npy   51 MB  – sequence boundary flag
│   ├── train_source_ids.npy        51 MB  – scenario/source ID per event
│   ├── val_* / test_*                     – same arrays for val and test splits
│   └── metadata.json                      – extraction config and statistics
│
├── loso_v3/                        Leave-One-Scenario-Out pre-split folds
│   └── fold_1/ … fold_8/           One folder per held-out scenario
│       ├── train_features.npy
│       ├── train_labels.npy
│       ├── … (5 arrays × train/val/test splits)
│       └── metadata.json
│
├── augmented/                      Synthetic data used in the paper's training pipeline
│   └── tabddpm_v3/                 TabDDPM v3 (post-processed) – used in training
│       ├── events.parquet          170 KB – 2,328 synthetic events
│       │                           T1546/T1136/T1041/T1005/T1070 upsampled to 500/technique
│       │                           T1098 excluded (mode collapse)
│       └── metadata.json           generation config + post-processing steps
│
├── normal_only/                    Phase 3 pure normal traffic (pre-attack)
│   └── events.parquet              231 MB – 515,986 events, all is_attack=False
│                                   Collected March 6–7, 2026, before any attacks ran.
│                                   Use for normal-signal pretraining / anomaly detection.
│
├── metadata/                       Dataset-level integrity files
│   ├── checksums.sha256            SHA-256 checksums for all primary data files
│   └── statistics.json             Per-split event counts, class distributions, etc.
│
└── val_versions/                   Validation index arrays
    ├── v1_indices.npy              T1110-capped validation indices
    ├── test_v1_indices.npy         T1110-capped test indices
    ├── v0_indices.npy              Uncapped validation indices
    └── v2_indices.npy              Alternative split indices
```
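Downloads can be verified against `metadata/checksums.sha256`. A sketch assuming the conventional `sha256sum` line format of `<hex digest>  <relative path>`; the helper names are mine:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so multi-GB Parquet/NumPy
    files never have to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_checksums(checksum_file: Path, root: Path) -> dict:
    """Check every '<hex>  <relative-path>' line against files
    under root; return {relative_path: passed} per entry."""
    results = {}
    for line in checksum_file.read_text().splitlines():
        if not line.strip():
            continue
        expected, rel = line.split(maxsplit=1)
        target = root / rel
        results[rel] = target.exists() and sha256_of(target) == expected
    return results
```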

## Classes (15)

| Label index | MITRE ID | Technique | Train events (features_v3) |
|---|---|---|---|
| 0 | – | Normal | 4,581,535 |
| 1 | T1190 | Exploit Public-Facing Application (SQL Injection) | 473,215 |
| 2 | T1110 | Brute Force (capped at 50K in features) | 50,000 |
| 3 | T1078 | Valid Accounts | 1,270 |
| 4 | T1059.004 | Command and Scripting: Unix Shell (UDF exec) | 361 |
| 5 | T1136 | Create Account | 500 |
| 6 | T1098 | Account Manipulation | 106 |
| 7 | T1546 | Event Triggered Execution (Trigger backdoor) | 500 |
| 8 | T1213 | Data from Information Repositories | 1,216,674 |
| 9 | T1005 | Data from Local System | 500 |
| 10 | T1029 | Scheduled Transfer | 566 |
| 11 | T1041 | Exfiltration Over C2 Channel | 500 |
| 12 | T1486 | Data Encrypted for Impact (ransomware) | 352 |
| 13 | T1485 | Data Destruction | 126 |
| 14 | T1070 | Indicator Removal | 500 |

Note: T1110 (Brute Force) has ~8.2M raw events in labeled/. The features_v3/ split caps T1110 at 50,000 to reduce class imbalance during training.
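A per-class cap like this can be reproduced on the raw labels. A hypothetical sketch: the exact sampling of the 50,000 retained T1110 events is not documented here, so this version uses a seeded random subsample:

```python
import numpy as np

def cap_class(labels, class_id, cap, seed=0):
    """Return sorted indices keeping at most `cap` randomly chosen
    events of `class_id` and every event of all other classes."""
    rng = np.random.default_rng(seed)
    idx = np.arange(len(labels))
    cls_idx = idx[labels == class_id]
    kept_cls = rng.choice(cls_idx, size=min(cap, len(cls_idx)),
                          replace=False)
    return np.sort(np.concatenate([idx[labels != class_id], kept_cls]))
```

For example, `cap_class(train_labels, class_id=2, cap=50_000)` would yield an index array selecting at most 50K T1110 (label 2) events while leaving the other 14 classes untouched.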


## Data Splits

### Standard split (features_v3/)

| Split | Events | Description |
|---|---|---|
| Train | 6,326,705 | S01–S08 combined, T1110 capped at 50K |
| Validation | 2,250,771 | Held-out time window, T1110 capped (v1 indices) |
| Test | 1,919,768 | Separate held-out set (test_v1_indices) |
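The `val_versions/*.npy` files are plain positional index arrays into the full validation/test arrays. A minimal sketch of applying one; the helper name `apply_version` is mine:

```python
import numpy as np

def apply_version(arrays, index_file):
    """Select the rows named by a val_versions/*.npy index array
    (e.g. v1_indices.npy for the T1110-capped validation set)
    from every aligned per-split array."""
    idx = np.load(index_file)
    return {name: np.asarray(arr)[idx] for name, arr in arrays.items()}
```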

### Leave-One-Scenario-Out (loso_v3/)

8 folds, each holding out one complete attack scenario:

| Fold | Held-out | Test attack techniques | Test events |
|---|---|---|---|
| 1 | S01 | T1190, T1213 | 1,802,950 |
| 2 | S02 | T1110, T1078, T1098, T1213 | 10,227,922 |
| 3 | S03 | T1078, T1213, T1029 | 810,446 |
| 4 | S04 | T1190, T1098, T1486, T1485, T1070 | 808,394 |
| 5 | S05 | T1190, T1136, T1098, T1546 | 807,541 |
| 6 | S06 | T1190, T1213 | 1,737,982 |
| 7 | S07 | T1190, T1059.004, T1005 | 808,380 |
| 8 | S08 | T1190, T1059.004, T1213, T1041 | 821,660 |
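Because the folds are pre-split, a LOSO run reduces to iterating the eight fold directories. A sketch where `evaluate` is a caller-supplied, hypothetical callback returning whatever metric you want per fold:

```python
from pathlib import Path
import numpy as np

def run_loso(loso_root, evaluate):
    """Load train/test features and labels for each of the 8
    pre-split folds in loso_v3/ and collect the result of a
    user-provided evaluate(train, test) callback per fold."""
    results = {}
    for k in range(1, 9):
        fold = Path(loso_root) / f"fold_{k}"
        train = {n: np.load(fold / f"train_{n}.npy", mmap_mode="r")
                 for n in ("features", "labels")}
        test = {n: np.load(fold / f"test_{n}.npy", mmap_mode="r")
                for n in ("features", "labels")}
        results[f"fold_{k}"] = evaluate(train, test)
    return results
```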

## Synthetic Data (augmented/)

Six rare-class techniques (T1546, T1136, T1041, T1005, T1070, T1098) have fewer than 110 original training events each. augmented/tabddpm_v3/ provides the synthetic oversampling used in the paper's training pipeline: 500 events per technique generated with TabDDPM (Kotelnikov et al., ICML 2023). T1098 was excluded due to mode collapse, and constant-column artifacts were corrected so the synthetic events match the train Parquet schema.

See augmented/tabddpm_v3/metadata.json for the full post-processing log.


## Feature Schema (46-dim float32)

Each log event is encoded as a 46-dimensional float32 vector.

### Base features (dims 0–23, 24-dim)

| Dims | ID | Feature | Encoding |
|---|---|---|---|
| [0:8] | Q1 | query_type | One-hot: SELECT/INSERT/UPDATE/DELETE/CREATE/DROP/GRANT/OTHER |
| [8] | Q2 | query_length | log1p(len(query)) |
| [9] | Q5 | has_union | Binary |
| [10] | Q6 | has_subquery | Binary |
| [11] | Q7 | accessed_tables_count | count / 10 |
| [12] | Q10 | has_system_table | Binary: mysql.* / information_schema |
| [13] | Q15 | has_file_operation | Binary: LOAD DATA / INTO OUTFILE |
| [14] | Q17 | has_privilege_command | Binary: GRANT/REVOKE/CREATE USER |
| [15] | Q18 | has_user_management | Binary |
| [16] | S1 | time_since_last | log1p(seconds) |
| [17] | S3 | events_per_minute | count in last 60 s / 100 |
| [18:20] | T1 | hour_of_day | sin + cos cyclical |
| [20] | T2 | is_business_hours | Binary: 09:00–18:00 |
| [21:23] | T4 | day_of_week | sin + cos cyclical |
| [23] | U5 | is_known_attacker_ip | Binary (reserved, currently 0) |
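The cyclical encodings (T1, T4) map a periodic value to a point on the unit circle, so hour 23 and hour 0 end up adjacent in feature space instead of 23 apart. A sketch:

```python
import numpy as np

def cyclical(value, period):
    """sin/cos encoding for periodic features: period 24 for
    hour_of_day (T1), period 7 for day_of_week (T4)."""
    angle = 2.0 * np.pi * np.asarray(value, dtype=np.float64) / period
    return np.sin(angle), np.cos(angle)
```

For instance, the Euclidean distance between `cyclical(23, 24)` and `cyclical(0, 24)` is about 0.26, while the raw hour difference is 23.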

### Attack-signature features (dims 24–45, 22-dim)

| Dims | ID | Feature | Encoding |
|---|---|---|---|
| [24] | A1 | has_udf_function | Binary (T1059.004) |
| [25] | A2 | has_encryption_func | Binary: AES_ENCRYPT/ENCODE (T1486) |
| [26] | A3 | has_trigger_ddl | Binary: CREATE TRIGGER (T1546) |
| [27] | A4 | has_log_manipulation | Binary: SET GLOBAL general_log (T1070) |
| [28] | A5 | has_destructive_ddl | Binary: DROP TABLE/TRUNCATE (T1485) |
| [29] | A6 | has_hex_payload | Binary: hex-encoded ELF/shellcode |
| [30] | A7 | has_sleep_benchmark | Binary: SLEEP()/BENCHMARK() |
| [31] | A8 | has_outfile | Binary: INTO OUTFILE/DUMPFILE |
| [32:43] | A9 | event_type_onehot | 11-way one-hot: QUERY/DDL/INSERT/UPDATE/DELETE/AUTH_FAILURE/CONNECT_FAILED/CONNECT/QUIT/SERVER_WARNING/SERVER_ERROR |
| [43] | A10 | has_production_table | Binary |
| [44] | A11 | query_keyword_count | count of attack keywords / 10 |
| [45] | A12 | has_concat_hex | Binary: CONCAT with hex strings |
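The binary signatures are simple pattern checks over the query text. Below is a hypothetical re-implementation of two of them (A7 and A8): the table names the trigger keywords, but the exact matching rules are not given here, so these regexes are assumptions rather than the extractor's actual code:

```python
import re

# Assumed patterns; the real extractor's rules may differ.
_OUTFILE = re.compile(r"\bINTO\s+(OUTFILE|DUMPFILE)\b", re.IGNORECASE)
_TIMING = re.compile(r"\b(SLEEP|BENCHMARK)\s*\(", re.IGNORECASE)

def attack_flags(query: str) -> dict:
    """Two of the 22 attack-signature features: A7
    (has_sleep_benchmark) and A8 (has_outfile)."""
    return {
        "has_sleep_benchmark": int(bool(_TIMING.search(query))),
        "has_outfile": int(bool(_OUTFILE.search(query))),
    }
```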

## Parquet Schema (labeled/)

Each row represents a single MySQL log event:

| Column | Type | Description |
|---|---|---|
| timestamp | datetime64[us, UTC] | Event timestamp |
| log_source | str | general, error, slow, or binlog |
| event_type | str | QUERY, DDL, INSERT, UPDATE, DELETE, AUTH_FAILURE, etc. |
| source_ip | str | Client IP (nullable) |
| user | str | MySQL user (nullable) |
| thread_id | int64 | MySQL connection thread ID |
| db | str | Target database (nullable) |
| table | str | Target table (nullable) |
| query | str | SQL query text (nullable) |
| affected_rows | float64 | Rows affected (nullable) |
| query_time | float64 | Execution time in seconds (slow log) |
| lock_time | float64 | Lock wait time in seconds (slow log) |
| rows_examined | float64 | Rows examined (slow log) |
| rows_sent | float64 | Rows returned to client |
| bytes_sent | float64 | Response size in bytes |
| bytes_received | float64 | Request size in bytes |
| error_code | str | MySQL error code (nullable) |
| severity | str | Log severity level (nullable) |
| raw_log | str | Original unparsed log line |
| log_file | str | Source log file path |
| log_position | float64 | Byte offset in log file (binlog) |
| is_attack | bool | True if this event is part of an attack |
| scenario_id | str | S01–S08 (nullable for normal traffic) |
| run_id | str | Simulation run identifier |
| step | float64 | Attack step index within scenario |
| mitre_technique | str | MITRE ATT&CK technique ID (null = Normal) |
| mitre_tactic | str | MITRE ATT&CK tactic |
| attack_path_position | str | start, middle, end, or isolated |
| confidence | float64 | Label confidence (1.0 = verified) |
| group_id | str | Scenario group, or collected for normal traffic |
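With this schema, per-technique event counts for a split reduce to a value count over the attack rows. A sketch using pandas; the helper name `attack_summary` is mine, and passing `columns=` to `pd.read_parquet` keeps memory down by reading only the two columns needed:

```python
import pandas as pd

def attack_summary(events: pd.DataFrame) -> pd.Series:
    """Event count per MITRE technique among attack rows, using
    the is_attack and mitre_technique columns from the schema."""
    attacks = events[events["is_attack"]]
    return attacks["mitre_technique"].value_counts()
```

Typical usage: `attack_summary(pd.read_parquet("data/labeled/test/events.parquet", columns=["is_attack", "mitre_technique"]))`.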

## Log Source Distribution (train)

| Source | Events | Description |
|---|---|---|
| general | 6,884,967 | All executed queries |
| error | 6,658,525 | Server events, auth failures, warnings |
| slow | 930,509 | Queries exceeding the slow_query_log threshold |
| binlog | 23,691 | Binary log row-level change events |

## Attack Scenarios

| Scenario | Primary techniques | Kill chain description |
|---|---|---|
| S01 | T1190, T1213 | SQL injection reconnaissance + data exfiltration |
| S02 | T1110, T1078, T1098, T1213 | Brute force → valid account → privilege escalation → data collection |
| S03 | T1078, T1213, T1029 | Valid account abuse → bulk data collection → scheduled transfer |
| S04 | T1190, T1098, T1486, T1485, T1070 | Injection → account manipulation → ransomware → data destruction → cover tracks |
| S05 | T1190, T1136, T1098, T1546 | Injection → backdoor account creation → trigger implant |
| S06 | T1190, T1213 | Large-scale SQL injection + bulk data theft |
| S07 | T1190, T1059.004, T1005 | Injection → UDF shellcode execution → local file read |
| S08 | T1190, T1059.004, T1213, T1041 | Injection → UDF → data collection → exfiltration over C2 |

## Usage with the Paper Code

See the companion GitHub repository: https://github.com/Hyeonseop-Shin/DB-forensic-hybrid

```bash
git clone https://github.com/Hyeonseop-Shin/DB-forensic-hybrid
cd DB-forensic-hybrid

# Install package
SETUPTOOLS_USE_DISTUTILS=local pip install -e .

# Download pre-extracted features (2.2 GB, no GPU needed for evaluation)
python -c "
from huggingface_hub import snapshot_download
snapshot_download(
    repo_id='golfoscar/db-forensic-hybrid',
    repo_type='dataset',
    local_dir='data/',
    allow_patterns=['features_v3/*', 'val_versions/*'],
    token='YOUR_HF_TOKEN',
)
"

# Run test set evaluation with pre-trained checkpoints
python scripts/evaluate_test.py --config configs/E11.yaml --gpu 0

# Run LOSO evaluation
python scripts/evaluate_loso.py --gpu 0

# Re-extract features from raw logs (requires labeled/ + augmented/tabddpm_v3/, ~1 hour)
python scripts/extract_features.py --n-features 46
```

### Re-running augmentation from scratch

```bash
# Download labeled train split and generate synthetic data
python scripts/generate_synthetic.py --model tabddpm --gpu 0
# Output: data/augmented/tabddpm_v3/events.parquet (applied automatically by extract_features.py)
```

## Citation

```bibtex
@dataset{shin2026dbforensic,
  author    = {Shin, Hyeonseop},
  title     = {{DB-Forensic-Hybrid}: A Large-Scale {MySQL} Log Dataset
               for Event-Level Attack Detection and Attribution},
  year      = {2026},
  publisher = {HuggingFace},
  url       = {https://huggingface.co/datasets/golfoscar/db-forensic-hybrid},
}
```

## License

Released under CC BY 4.0. See DATASHEET.md for the full dataset card.
