---
license: mit
task_categories:
- text-classification
- token-classification
language:
- en
tags:
- security
- rl
- kubernetes
- terraform
- config-verification
- verifiers
- metadata-only
pretty_name: Security Verifiers E2 - Config Verification (Metadata)
size_categories:
- n<1K
configs:
- config_name: default
data_files:
- split: meta
path: data/meta-*
dataset_info:
features:
- name: section
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: payload_json
dtype: string
- name: version
dtype: string
- name: created_at
dtype: string
splits:
- name: meta
num_bytes: 2380
num_examples: 6
download_size: 5778
dataset_size: 2380
---
# 🔒 Security Verifiers E2: Security Configuration Verification (Public Metadata)
> **⚠️ This is a PUBLIC metadata-only repository.** The full datasets are hosted privately to prevent training contamination. See below for access instructions.
## Overview
E2 is a tool-grounded configuration auditing environment for Kubernetes and Terraform. This repository contains **only the sampling metadata** that describes how the private datasets were constructed.
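The `meta` split can be loaded with the `datasets` library. A minimal sketch follows; the repository id shown is a placeholder for this dataset's actual Hub path:

```python
from datasets import load_dataset

# Placeholder repo id -- substitute this dataset's actual Hub path.
meta = load_dataset("intertwine/security-verifiers-e2-metadata", split="meta")

# Each of the six records describes one piece of sampling metadata.
for row in meta:
    print(f"[{row['section']}] {row['name']} (v{row['version']})")
    print(f"  {row['description']}")
```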
### Why Private Datasets?
**Training contamination** is a critical concern for benchmark integrity. If datasets leak into public training corpora:
- Models can memorize answers instead of learning to reason
- Evaluation metrics become unreliable
- Research reproducibility suffers
- True capabilities become obscured
By keeping evaluation datasets private with gated access, we:
- ✅ Preserve benchmark validity over time
- ✅ Enable fair model comparisons
- ✅ Maintain research integrity
- ✅ Allow controlled access for legitimate research
### Dataset Composition
The private E2 datasets include:
#### Kubernetes Configurations
- **Source**: Real-world K8s manifests from popular open-source projects
- **Scans**: KubeLinter, Semgrep, OPA/Rego policies
- **Violations**: Security misconfigurations, best practice violations
- **Severity**: Categorized (high/medium/low) based on tool outputs
#### Terraform Configurations
- **Source**: Infrastructure-as-code from real projects
- **Scans**: Semgrep, OPA/Rego policies, custom rules
- **Violations**: Security risks, compliance issues
- **Severity**: Weighted scoring for reward computation
### What's in This Repository?
This public repository contains:
1. **Sampling Metadata** (`sampling-*.json`):
- Source repository information
- File selection criteria
- Scan configurations
- Label distributions
- Reproducibility parameters
2. **Tools Versions** (`tools-versions.json`):
- KubeLinter version (pinned)
- Semgrep version (pinned)
- OPA version (pinned)
- Ensures reproducible scanning
3. **This README**: Instructions for requesting access
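The metadata and version files above can be inspected with a few lines of plain Python. A minimal sketch, assuming the files sit in the current directory; their exact JSON schemas are not specified here, so the code simply reports whatever it finds:

```python
import glob
import json

# Pinned scanner versions (exact keys depend on the file's actual schema).
with open("tools-versions.json") as f:
    print("Tool versions:", json.load(f))

# Each sampling-*.json describes how one slice of the dataset was built:
# source repos, file selection criteria, scan configs, label distributions.
for path in sorted(glob.glob("sampling-*.json")):
    with open(path) as f:
        meta = json.load(f)
    print(path, "->", sorted(meta) if isinstance(meta, dict) else f"{len(meta)} entries")
```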
### Reward Components
E2 uses tool-grounded reward functions:
- **Detection Precision/Recall/F1**: Against ground-truth violations
- **Severity Weighting**: Higher reward for catching critical issues
- **Patch Delta**: Reward for proposed fixes that eliminate violations
- **Re-scan Verification**: Patches must pass tool validation
**Multi-turn performance**: Models achieve ~0.93 reward with tool calling vs ~0.62 without tools.
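As a rough illustration of the detection component, here is a minimal sketch of severity-weighted precision/recall/F1. The weights, violation IDs, and record shapes are assumptions for illustration; the actual reward functions live in the GitHub repository:

```python
# Illustrative severity weights; the environment's real weights may differ.
SEVERITY_WEIGHT = {"high": 3.0, "medium": 2.0, "low": 1.0}

def weighted_f1(predicted: dict, truth: dict) -> float:
    """Severity-weighted F1. Both args map violation IDs to severities."""
    tp = sum(SEVERITY_WEIGHT[s] for vid, s in truth.items() if vid in predicted)
    fn = sum(SEVERITY_WEIGHT[s] for vid, s in truth.items() if vid not in predicted)
    fp = sum(SEVERITY_WEIGHT[s] for vid, s in predicted.items() if vid not in truth)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical example: the model catches the high-severity issue, misses a low one.
truth = {"KLIN-001": "high", "SGRP-042": "low"}
predicted = {"KLIN-001": "high"}
print(round(weighted_f1(predicted, truth), 3))  # 0.857 -- weighted toward the critical catch
```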
### Requesting Access
🔑 **To access the full private datasets:**
1. **Open an access request issue**: [Security Verifiers Issues](https://github.com/intertwine/security-verifiers/issues)
2. **Use the title**: "Dataset Access Request: E2"
3. **Include**:
- Your name and affiliation
- Research purpose / use case
- HuggingFace username
- A commitment not to redistribute or publish the raw data
**Approval criteria:**
- Legitimate research or educational use
- Understanding of contamination concerns
- Agreement to usage terms
We typically respond within 2-3 business days.
### Citation
If you use this environment or metadata in your research:
```bibtex
@misc{security-verifiers-2025,
  title={Open Security Verifiers: Composable RL Environments for AI Safety},
  author={intertwine},
  year={2025},
  url={https://github.com/intertwine/security-verifiers},
  note={E2: Security Configuration Verification}
}
```
### Related Resources
- **GitHub Repository**: [intertwine/security-verifiers](https://github.com/intertwine/security-verifiers)
- **Documentation**: See `EXECUTIVE_SUMMARY.md` and `PRD.md` in the repo
- **Framework**: Built on [Prime Intellect Verifiers](https://github.com/PrimeIntellect-ai/verifiers)
- **Other Environments**: E1 (Network Logs), E3-E6 (in development)
### Tools
The following security tools are used for ground-truth generation:
- **KubeLinter**: Kubernetes YAML linting and security checks
- **Semgrep**: Pattern-based static analysis for K8s and Terraform
- **OPA**: Policy-as-code validation with Rego
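A minimal sketch of invoking these tools from Python, assuming they are installed at the pinned versions; the ruleset path, policy file, and Rego query below are placeholders:

```python
import subprocess

def scan(path: str) -> dict:
    """Run each pinned scanner against one config file; return raw JSON output."""
    cmds = {
        # KubeLinter: lint a Kubernetes manifest with JSON output.
        "kube-linter": ["kube-linter", "lint", "--format", "json", path],
        # Semgrep: pattern-based static analysis; "rules/" is a placeholder ruleset.
        "semgrep": ["semgrep", "scan", "--json", "--config", "rules/", path],
        # OPA: evaluate Rego policies; policy file and query are placeholders,
        # and a YAML manifest may need conversion to JSON first.
        "opa": ["opa", "eval", "--format", "json",
                "--data", "policy.rego", "--input", path, "data.main.deny"],
    }
    return {name: subprocess.run(cmd, capture_output=True, text=True).stdout
            for name, cmd in cmds.items()}
```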
### License
MIT License - See repository for full terms.
### Contact
- **Issues**: [GitHub Issues](https://github.com/intertwine/security-verifiers/issues)
- **Discussions**: [GitHub Discussions](https://github.com/intertwine/security-verifiers/discussions)
---
**Built with ❤️ for the AI safety research community**