---
license: mit
task_categories:
- text-classification
- token-classification
language:
- en
tags:
- security
- rl
- kubernetes
- terraform
- config-verification
- verifiers
- metadata-only
pretty_name: "Security Verifiers E2 - Config Verification (Metadata)"
size_categories:
- n<1K
---

# 🔒 Security Verifiers E2: Security Configuration Verification (Public Metadata)

> **⚠️ This is a PUBLIC metadata-only repository.** The full datasets are hosted privately to prevent training contamination. See below for access instructions.

## Overview

E2 is a tool-grounded configuration-auditing environment for Kubernetes and Terraform. This repository contains **only the sampling metadata** that describes how the private datasets were constructed.

### Why Private Datasets?

**Training contamination** is a critical concern for benchmark integrity. If datasets leak into public training corpora:
- Models can memorize answers instead of learning to reason
- Evaluation metrics become unreliable
- Research reproducibility suffers
- True capabilities become obscured

By keeping evaluation datasets private with gated access, we:
- ✅ Preserve benchmark validity over time
- ✅ Enable fair model comparisons
- ✅ Maintain research integrity
- ✅ Allow controlled access for legitimate research

### Dataset Composition

The private E2 datasets include:

#### Kubernetes Configurations
- **Source**: Real-world K8s manifests from popular open-source projects
- **Scans**: KubeLinter, Semgrep, OPA/Rego policies
- **Violations**: Security misconfigurations and best-practice violations
- **Severity**: Categorized (high/medium/low) based on tool outputs

#### Terraform Configurations
- **Source**: Infrastructure-as-code from real projects
- **Scans**: Semgrep, OPA/Rego policies, custom rules
- **Violations**: Security risks and compliance issues
- **Severity**: Weighted scoring for reward computation

### What's in This Repository?

This public repository contains:

1. **Sampling Metadata** (`sampling-*.json`):
   - Source repository information
   - File selection criteria
   - Scan configurations
   - Label distributions
   - Reproducibility parameters

2. **Tool Versions** (`tools-versions.json`):
   - KubeLinter version (pinned)
   - Semgrep version (pinned)
   - OPA version (pinned)
   - Ensures reproducible scanning

3. **This README**: Instructions for requesting access
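
The metadata files can be consumed with any JSON tooling. A minimal Python sketch of reading the pinned tool versions — the field names and version strings below are illustrative assumptions, not the actual schema:

```python
import json

# Illustrative stand-in for tools-versions.json; the real file's keys
# and version strings may differ.
example = """
{
  "kube-linter": "0.6.8",
  "semgrep": "1.78.0",
  "opa": "0.64.1"
}
"""

versions = json.loads(example)
for tool, version in sorted(versions.items()):
    print(f"{tool}: pinned at {version}")
```

Pinning scanner versions this way is what makes the ground-truth labels reproducible: re-running the same tool versions over the same files should yield the same violations.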

### Reward Components

E2 uses tool-grounded reward functions:
- **Detection Precision/Recall/F1**: Measured against ground-truth violations
- **Severity Weighting**: Higher reward for catching critical issues
- **Patch Delta**: Reward for proposed fixes that eliminate violations
- **Re-scan Verification**: Patches must pass tool validation

**Multi-turn performance**: Models achieve ~0.93 reward with tool calling vs ~0.62 without tools.
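
The detection components above can be sketched as a severity-weighted F1. This is an illustrative reconstruction under assumed weights and data shapes, not the actual E2 reward function:

```python
# Hypothetical severity weights; the real environment's weighting
# scheme is part of the private tooling.
SEVERITY_WEIGHTS = {"high": 3.0, "medium": 2.0, "low": 1.0}

def weighted_f1(predicted, ground_truth):
    """Severity-weighted F1 over (violation_id, severity) pairs."""
    tp = sum(SEVERITY_WEIGHTS[sev] for vid, sev in predicted
             if (vid, sev) in ground_truth)
    pred_total = sum(SEVERITY_WEIGHTS[sev] for _, sev in predicted)
    truth_total = sum(SEVERITY_WEIGHTS[sev] for _, sev in ground_truth)
    if pred_total == 0 or truth_total == 0:
        return 0.0
    precision = tp / pred_total
    recall = tp / truth_total
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical violation IDs for illustration.
truth = {("runAsRoot", "high"), ("hostNetwork", "medium"), ("noLimits", "low")}
pred = [("runAsRoot", "high"), ("noLimits", "low"), ("falseAlarm", "medium")]
print(round(weighted_f1(pred, truth), 3))  # → 0.667
```

Under this scoring, missing a high-severity violation costs three times as much recall as missing a low-severity one; the patch-delta and re-scan components would then layer on top of the detection score.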

### Requesting Access

🔑 **To access the full private datasets:**

1. **Open an access request issue**: [Security Verifiers Issues](https://github.com/intertwine/security-verifiers/issues)
2. **Use the title**: "Dataset Access Request: E2"
3. **Include**:
   - Your name and affiliation
   - Research purpose / use case
   - HuggingFace username
   - A commitment not to redistribute or publish the raw data

**Approval criteria:**
- Legitimate research or educational use
- Understanding of contamination concerns
- Agreement to usage terms

We typically respond within 2-3 business days.

### Citation

If you use this environment or metadata in your research:

```bibtex
@misc{security-verifiers-2025,
  title={Open Security Verifiers: Composable RL Environments for AI Safety},
  author={intertwine},
  year={2025},
  url={https://github.com/intertwine/security-verifiers},
  note={E2: Security Configuration Verification}
}
```

### Related Resources

- **GitHub Repository**: [intertwine/security-verifiers](https://github.com/intertwine/security-verifiers)
- **Documentation**: See `EXECUTIVE_SUMMARY.md` and `PRD.md` in the repo
- **Framework**: Built on [Prime Intellect Verifiers](https://github.com/PrimeIntellect-ai/verifiers)
- **Other Environments**: E1 (Network Logs), E3-E6 (in development)

### Tools

The following security tools are used for ground-truth generation:
- **KubeLinter**: Kubernetes YAML linting and security checks
- **Semgrep**: Pattern-based static analysis for K8s and Terraform
- **OPA**: Policy-as-code validation with Rego
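
For flavor, a policy-as-code check of the kind OPA evaluates might look like the following. This is an illustrative sketch only — the package name, rule shape, and input schema are assumptions, and the project's actual Rego policies ship with the private tooling:

```rego
package main

# Illustrative policy: flag containers that request privileged mode.
deny[msg] {
  some i
  container := input.spec.template.spec.containers[i]
  container.securityContext.privileged == true
  msg := sprintf("container %q must not run privileged", [container.name])
}
```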

### License

MIT License - see the repository for full terms.

### Contact

- **Issues**: [GitHub Issues](https://github.com/intertwine/security-verifiers/issues)
- **Discussions**: [GitHub Discussions](https://github.com/intertwine/security-verifiers/discussions)

---

**Built with ❤️ for the AI safety research community**