---
license: mit
task_categories:
- text-classification
language:
- en
tags:
- security
- rl
- network-security
- anomaly-detection
- verifiers
- metadata-only
pretty_name: "Security Verifiers E1 - Network Log Anomaly Detection (Metadata)"
size_categories:
- n<1K
---

# 🔒 Security Verifiers E1: Network Log Anomaly Detection (Public Metadata)

> **⚠️ This is a PUBLIC metadata-only repository.** The full datasets are hosted privately to prevent training contamination. See below for access instructions.

## Overview

E1 is a network log anomaly detection environment with calibrated classification and abstention. This repository contains **only the sampling metadata** that describes how the private datasets were constructed.

### Why Private Datasets?

**Training contamination** is a critical concern for benchmark integrity. If datasets leak into public training corpora:

- Models can memorize answers instead of learning to reason
- Evaluation metrics become unreliable
- Research reproducibility suffers
- True capabilities become obscured

By keeping evaluation datasets private with gated access, we:

- ✅ Preserve benchmark validity over time
- ✅ Enable fair model comparisons
- ✅ Maintain research integrity
- ✅ Allow controlled access for legitimate research

### Dataset Composition

The private E1 datasets include:

#### Primary Dataset: IoT-23

- **Samples**: 1,800 network flows (train/dev/test splits)
- **Source**: IoT-23 botnet dataset
- **Features**: Network flow statistics, timestamps, protocols
- **Labels**: Benign vs. Malicious with confidence scores
- **Sampling**: Stratified by label and split

#### Out-of-Distribution Datasets

- **CIC-IDS-2017**: 600 samples (different attack patterns)
- **UNSW-NB15**: 600 samples (different network environment)
- **Purpose**: Test generalization and OOD detection

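The stratified draw described above can be sketched as follows. This is a minimal illustration of seeded, per-label sampling; the row schema, per-label counts, and seed are assumptions for the example, not the actual pipeline.

```python
import random
from collections import defaultdict

def stratified_sample(rows, per_label, seed):
    """Draw the same number of rows from each label group, reproducibly."""
    rng = random.Random(seed)  # fixed seed -> identical draw every run
    by_label = defaultdict(list)
    for row in rows:
        by_label[row["label"]].append(row)
    sample = []
    for _, group in sorted(by_label.items()):
        sample.extend(rng.sample(group, per_label))
    return sample

# Hypothetical flows: 10 malicious, 20 benign.
rows = [{"id": i, "label": "malicious" if i % 3 == 0 else "benign"}
        for i in range(30)]
picked = stratified_sample(rows, per_label=5, seed=42)
labels = [r["label"] for r in picked]
print(labels.count("benign"), labels.count("malicious"))  # 5 5
```

Because the seed is recorded in the metadata, anyone holding the private source data can regenerate exactly the same splits.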
### What's in This Repository?

This public repository contains:

1. **Sampling metadata** (`sampling-*.json`):
   - Dataset versions and sources
   - Sampling strategies and random seeds
   - Label distributions
   - Split ratios
   - Reproducibility parameters

2. **Tool versions** (referenced in the metadata):
   - Exact versions of all preprocessing tools
   - Dataset library versions
   - Python environment specifications

3. **This README**: Instructions for requesting access

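A consumer of the sampling metadata might read it like this. The field names and values below are assumptions about the schema, shown only to illustrate how seeds and split sizes support reproducibility; consult the actual `sampling-*.json` files for the real structure.

```python
import json

# Hypothetical sampling-*.json content; the real schema may differ.
raw = """
{
  "source": "IoT-23",
  "seed": 42,
  "splits": {"train": 1200, "dev": 300, "test": 300},
  "label_distribution": {"benign": 0.5, "malicious": 0.5}
}
"""

meta = json.loads(raw)

# The seed plus the split sizes are what make the private draw reproducible.
print(meta["seed"])                  # 42
print(sum(meta["splits"].values()))  # 1800
```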
### Reward Components

E1 uses composable reward functions:

- **Accuracy**: Correctness of malicious/benign classification
- **Calibration**: Alignment between confidence and actual accuracy
- **Abstention**: Reward for declining on uncertain examples
- **Asymmetric costs**: Higher penalty for false negatives (security context)

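One way the four components could compose, as a hedged sketch: the weights, penalty values, and function shape here are illustrative assumptions, not the environment's actual reward code.

```python
def e1_reward(pred, label, confidence, abstained,
              w_acc=1.0, w_cal=0.5, fn_penalty=2.0, abstain_bonus=0.3):
    """Toy composite reward for one classification episode (weights assumed)."""
    if abstained:
        # Abstention: a modest fixed reward for declining instead of guessing.
        return abstain_bonus
    accuracy = 1.0 if pred == label else 0.0
    # Calibration: confidence should track correctness (Brier-style score).
    calibration = 1.0 - (confidence - accuracy) ** 2
    reward = w_acc * accuracy + w_cal * calibration
    if pred != label and label == "malicious":
        # Asymmetric cost: a missed attack (false negative) is penalized harder.
        reward -= fn_penalty
    return reward

print(round(e1_reward("malicious", "malicious", 0.9, False), 3))  # 1.495
print(round(e1_reward("benign", "malicious", 0.9, False), 3))     # -1.905
```

Note the asymmetry: a confident false negative scores far below a confident false positive, matching the security framing.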
### Requesting Access

🔑 **To access the full private datasets:**

1. **Open an access request issue**: [Security Verifiers Issues](https://github.com/intertwine/security-verifiers/issues)
2. **Use the title**: "Dataset Access Request: E1"
3. **Include**:
   - Your name and affiliation
   - Research purpose / use case
   - HuggingFace username
   - Commitment not to redistribute or publish the raw data

**Approval criteria:**

- Legitimate research or educational use
- Understanding of contamination concerns
- Agreement to usage terms

We typically respond within 2-3 business days.

### Citation

If you use this environment or metadata in your research:

```bibtex
@misc{security-verifiers-2025,
  title={Open Security Verifiers: Composable RL Environments for AI Safety},
  author={intertwine},
  year={2025},
  url={https://github.com/intertwine/security-verifiers},
  note={E1: Network Log Anomaly Detection}
}
```

### Related Resources

- **GitHub Repository**: [intertwine/security-verifiers](https://github.com/intertwine/security-verifiers)
- **Documentation**: See `EXECUTIVE_SUMMARY.md` and `PRD.md` in the repo
- **Framework**: Built on [Prime Intellect Verifiers](https://github.com/PrimeIntellect-ai/verifiers)
- **Other Environments**: E2 (Config Verification), E3-E6 (in development)

### License

MIT License. See the repository for full terms.

### Contact

- **Issues**: [GitHub Issues](https://github.com/intertwine/security-verifiers/issues)
- **Discussions**: [GitHub Discussions](https://github.com/intertwine/security-verifiers/discussions)

---

**Built with ❤️ for the AI safety research community**