---
license: mit
task_categories:
  - text-classification
language:
  - en
tags:
  - security
  - rl
  - network-security
  - anomaly-detection
  - verifiers
  - metadata-only
pretty_name: Security Verifiers E1 - Network Log Anomaly Detection (Metadata)
size_categories:
  - n<1K
configs:
  - config_name: default
    data_files:
      - split: meta
        path: data/meta-*
dataset_info:
  features:
    - name: section
      dtype: string
    - name: name
      dtype: string
    - name: description
      dtype: string
    - name: payload_json
      dtype: string
    - name: version
      dtype: string
    - name: created_at
      dtype: string
  splits:
    - name: meta
      num_bytes: 2518
      num_examples: 6
  download_size: 5613
  dataset_size: 2518

# 🔒 Security Verifiers E1: Network Log Anomaly Detection (Public Metadata)

> ⚠️ **This is a PUBLIC metadata-only repository.** The full datasets are hosted privately to prevent training contamination. See below for access instructions.

## Overview

E1 is a network log anomaly detection environment with calibrated classification and abstention. This repository contains only the sampling metadata that describes how the private datasets were constructed.

## Why Private Datasets?

Training contamination is a critical concern for benchmark integrity. If datasets leak into public training corpora:

- Models can memorize answers instead of learning to reason
- Evaluation metrics become unreliable
- Research reproducibility suffers
- True capabilities become obscured

By keeping evaluation datasets private with gated access, we:

- ✅ Preserve benchmark validity over time
- ✅ Enable fair model comparisons
- ✅ Maintain research integrity
- ✅ Allow controlled access for legitimate research

## Dataset Composition

The private E1 datasets include:

### Primary Dataset: IoT-23

- **Samples**: 1,800 network flows (train/dev/test splits)
- **Source**: IoT-23 botnet dataset
- **Features**: Network flow statistics, timestamps, protocols
- **Labels**: Benign vs. Malicious, with confidence scores
- **Sampling**: Stratified by label and split (sketched below)
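
As a minimal illustration of the stratified sampling noted above (the real pipeline and its recorded seed live in the private metadata), a draw that preserves label/split proportions might look like this in pandas. The `label` and `split` column names and the seed are assumptions made for the sketch:

```python
import pandas as pd

def stratified_sample(flows: pd.DataFrame, n_total: int, seed: int = 0) -> pd.DataFrame:
    """Draw n_total rows while preserving (label, split) proportions."""
    frac = n_total / len(flows)  # overall sampling fraction
    return (
        flows.groupby(["label", "split"], group_keys=False)
        .apply(lambda g: g.sample(frac=frac, random_state=seed))  # per-group draw
        .reset_index(drop=True)
    )
```

Recording the seed and strategy in `sampling-*.json` is what makes a draw like this exactly reproducible.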

### Out-of-Distribution Datasets

- **CIC-IDS-2017**: 600 samples (different attack patterns)
- **UNSW-NB15**: 600 samples (different network environment)
- **Purpose**: Test generalization and OOD detection

## What's in This Repository?

This public repository contains:

1. **Sampling metadata** (`sampling-*.json`):
   - Dataset versions and sources
   - Sampling strategies and random seeds
   - Label distributions
   - Split ratios
   - Reproducibility parameters
2. **Tool versions** (referenced in the metadata):
   - Exact versions of all preprocessing tools
   - Dataset library versions
   - Python environment specifications
3. **This README**: Instructions for requesting access
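
Given the schema in the card header, each row of the `meta` split carries its payload as a JSON string. Here is a minimal sketch for inspecting it with the `datasets` library; the repository id below is a placeholder for this dataset's actual Hub path:

```python
import json
from datasets import load_dataset

# Placeholder repo id -- substitute this dataset's actual Hub path.
meta = load_dataset("intertwine/security-verifiers-e1-metadata", split="meta")
for row in meta:
    payload = json.loads(row["payload_json"])  # sampling parameters as a dict
    print(row["section"], row["name"], row["version"], sorted(payload))
```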

## Reward Components

E1 uses composable reward functions (a sketch of how they combine follows the list):

- **Accuracy**: Correctness of the malicious/benign classification
- **Calibration**: Alignment between stated confidence and actual accuracy
- **Abstention**: Reward for declining to answer on uncertain examples
- **Asymmetric costs**: Higher penalty for false negatives (security context)
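
The sketch below illustrates one way these components could combine. It is not the project's actual reward code: the function name, label strings, weights, and penalty value are all assumptions made for the example:

```python
def e1_reward(
    prediction: str,          # "Benign", "Malicious", or "Abstain"
    truth: str,               # "Benign" or "Malicious"
    confidence: float,        # model-reported confidence in [0, 1]
    w_acc: float = 1.0,
    w_cal: float = 0.5,
    w_abstain: float = 0.3,
    fn_penalty: float = 2.0,  # asymmetric cost for a missed malicious flow
) -> float:
    if prediction == "Abstain":
        # Abstention pays off only when the model was genuinely uncertain.
        return w_abstain * (1.0 - confidence)
    accuracy = 1.0 if prediction == truth else 0.0
    # Brier-style calibration term: confidence should track correctness.
    calibration = 1.0 - (confidence - accuracy) ** 2
    reward = w_acc * accuracy + w_cal * calibration
    if prediction != truth and truth == "Malicious":
        reward -= fn_penalty  # false negatives cost extra in a security setting
    return reward
```

Under these illustrative weights, a confident miss such as `e1_reward("Benign", "Malicious", 0.9)` scores about -1.9, well below the 0.18 earned by `e1_reward("Abstain", "Malicious", 0.4)`, reflecting the asymmetric-cost behavior described above.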

## Requesting Access

🔑 To access the full private datasets:

1. Open an access request issue on the [Security Verifiers issue tracker](https://github.com/intertwine/security-verifiers/issues)
2. Use the title: "Dataset Access Request: E1"
3. Include:
   - Your name and affiliation
   - Research purpose / use case
   - Hugging Face username
   - A commitment not to redistribute or publish the raw data

Approval criteria:

- Legitimate research or educational use
- Understanding of contamination concerns
- Agreement to usage terms

We typically respond within 2-3 business days.

## Citation

If you use this environment or metadata in your research:

```bibtex
@misc{security-verifiers-2025,
  title={Open Security Verifiers: Composable RL Environments for AI Safety},
  author={intertwine},
  year={2025},
  url={https://github.com/intertwine/security-verifiers},
  note={E1: Network Log Anomaly Detection}
}
```

## Related Resources

- [Security Verifiers on GitHub](https://github.com/intertwine/security-verifiers)

## License

MIT License. See the repository for full terms.

## Contact

For questions or access requests, please open an issue on the [Security Verifiers GitHub repository](https://github.com/intertwine/security-verifiers).
Built with ❤️ for the AI safety research community.