---
license: mit
task_categories:
- text-classification
language:
- en
tags:
- security
- rl
- network-security
- anomaly-detection
- verifiers
- metadata-only
pretty_name: Security Verifiers E1 - Network Log Anomaly Detection (Metadata)
size_categories:
- n<1K
configs:
- config_name: default
data_files:
- split: meta
path: data/meta-*
dataset_info:
features:
- name: section
dtype: string
- name: name
dtype: string
- name: description
dtype: string
- name: payload_json
dtype: string
- name: version
dtype: string
- name: created_at
dtype: string
splits:
- name: meta
num_bytes: 2518
num_examples: 6
download_size: 5613
dataset_size: 2518
---
# 🔒 Security Verifiers E1: Network Log Anomaly Detection (Public Metadata)
> **⚠️ This is a PUBLIC metadata-only repository.** The full datasets are hosted privately to prevent training contamination. See below for access instructions.
## Overview
E1 is a network log anomaly detection environment with calibrated classification and abstention. This repository contains **only the sampling metadata** that describes how the private datasets were constructed.
### Why Private Datasets?
**Training contamination** is a critical concern for benchmark integrity. If datasets leak into public training corpora:
- Models can memorize answers instead of learning to reason
- Evaluation metrics become unreliable
- Research reproducibility suffers
- True capabilities become obscured
By keeping evaluation datasets private with gated access, we:
- ✅ Preserve benchmark validity over time
- ✅ Enable fair model comparisons
- ✅ Maintain research integrity
- ✅ Allow controlled access for legitimate research
### Dataset Composition
The private E1 datasets include:
#### Primary Dataset: IoT-23
- **Samples**: 1,800 network flows (train/dev/test splits)
- **Source**: IoT-23 botnet dataset
- **Features**: Network flow statistics, timestamps, protocols
- **Labels**: Benign vs Malicious with confidence scores
- **Sampling**: Stratified by label and split (see the sketch below)
#### Out-of-Distribution Datasets
- **CIC-IDS-2017**: 600 samples (different attack patterns)
- **UNSW-NB15**: 600 samples (different network environment)
- **Purpose**: Test generalization and OOD detection
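
The stratified sampling described above can be approximated in a few lines of pandas. This is a minimal sketch under assumed column names (`label`) and an assumed seed, not the project's actual preprocessing pipeline:

```python
# Minimal sketch of stratified sampling by label with a fixed seed.
# Column names, sizes, and the seed are illustrative assumptions, not the
# project's actual preprocessing parameters.
import pandas as pd

def stratified_sample(flows: pd.DataFrame, n_total: int, seed: int = 42) -> pd.DataFrame:
    """Draw roughly n_total flows while preserving the benign/malicious ratio."""
    per_label = n_total // flows["label"].nunique()
    return (
        flows.groupby("label", group_keys=False)
        .apply(lambda g: g.sample(n=min(per_label, len(g)), random_state=seed))
        .reset_index(drop=True)
    )

# Example: draw 1,800 flows from a hypothetical `iot23_flows` DataFrame.
# sample = stratified_sample(iot23_flows, n_total=1800)
```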
### What's in This Repository?
This public repository contains:
1. **Sampling Metadata** (`sampling-*.json`, see the loading sketch below):
- Dataset versions and sources
- Sampling strategies and random seeds
- Label distributions
- Split ratios
- Reproducibility parameters
2. **Tools Versions** (referenced in metadata):
- Exact versions of all preprocessing tools
- Dataset library versions
- Python environment specifications
3. **This README**: Instructions for requesting access
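
As a quick illustration, the public metadata split can be inspected with the `datasets` library. The repository ID below is a placeholder (use this dataset card's ID); each record stores its details as a JSON string in `payload_json`:

```python
# Sketch: inspect the public sampling metadata. The repo ID is a placeholder.
import json
from datasets import load_dataset

meta = load_dataset("YOUR_ORG/security-verifiers-e1-metadata", split="meta")
for record in meta:
    payload = json.loads(record["payload_json"])  # sampling strategy, seeds, tool versions, ...
    print(record["section"], record["name"], record["version"], record["created_at"])
    print(json.dumps(payload, indent=2))
```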
### Reward Components
E1 uses composable reward functions (sketched below):
- **Accuracy**: Correctness of malicious/benign classification
- **Calibration**: Alignment between confidence and actual accuracy
- **Abstention**: Reward for declining to classify uncertain examples
- **Asymmetric Costs**: Higher penalty for false negatives (security context)
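
As a rough illustration of how these components might compose, the sketch below combines accuracy, a simple calibration gap, an abstention reward, and an asymmetric false-negative penalty into one scalar. The weights and formulas are illustrative assumptions, not E1's actual reward definition:

```python
# Illustrative composable reward for calibrated classification with abstention.
# Weights and formulas are assumptions for exposition, not E1's actual reward.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str | None   # "malicious", "benign", or None to abstain
    confidence: float   # stated confidence in [0, 1]

def reward(pred: Prediction, truth: str,
           w_acc: float = 1.0, w_cal: float = 0.5,
           abstain_reward: float = 0.3, fn_penalty: float = 2.0) -> float:
    if pred.label is None:
        return abstain_reward  # small positive reward for declining when uncertain
    correct = pred.label == truth
    r = w_acc * (1.0 if correct else 0.0)
    r -= w_cal * abs(pred.confidence - (1.0 if correct else 0.0))  # calibration gap
    if not correct and truth == "malicious":
        r -= fn_penalty  # missed attack costs more than a false alarm
    return r
```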
### Requesting Access
🔐 **To access the full private datasets:**
1. **Open an access request issue**: [Security Verifiers Issues](https://github.com/intertwine/security-verifiers/issues)
2. **Use the title**: "Dataset Access Request: E1"
3. **Include**:
- Your name and affiliation
- Research purpose / use case
- HuggingFace username
- Commitment to not redistribute or publish the raw data
**Approval criteria:**
- Legitimate research or educational use
- Understanding of contamination concerns
- Agreement to usage terms
We typically respond within 2-3 business days.
### Citation
If you use this environment or metadata in your research:
```bibtex
@misc{security-verifiers-2025,
  title={Open Security Verifiers: Composable RL Environments for AI Safety},
  author={intertwine},
  year={2025},
  url={https://github.com/intertwine/security-verifiers},
  note={E1: Network Log Anomaly Detection}
}
```
### Related Resources
- **GitHub Repository**: [intertwine/security-verifiers](https://github.com/intertwine/security-verifiers)
- **Documentation**: See `EXECUTIVE_SUMMARY.md` and `PRD.md` in the repo
- **Framework**: Built on [Prime Intellect Verifiers](https://github.com/PrimeIntellect-ai/verifiers)
- **Other Environments**: E2 (Config Verification), E3-E6 (in development)
### License
MIT License - See repository for full terms.
### Contact
- **Issues**: [GitHub Issues](https://github.com/intertwine/security-verifiers/issues)
- **Discussions**: [GitHub Discussions](https://github.com/intertwine/security-verifiers/discussions)
---
**Built with ❤️ for the AI safety research community**