ITBench-Lite Dataset Card

Dataset Overview

Dataset Name: ITBench-Lite
Organization: IBM Research
License: Apache 2.0
Language: English
Paper: ITBench: Evaluating AI Agents across Diverse Real-World IT Automation Tasks
GitHub: ITBench

ITBench-Lite is drawn from ITBench, a systematic framework for benchmarking LLMs and AI agents on real-world IT automation tasks. This dataset contains 50 scenarios across two critical domains:

  • Site Reliability Engineering (SRE): 35 scenarios with environment snapshots for incident diagnosis
  • Financial Operations (FinOps): 15 scenarios with synthetic cost anomaly data for spend analysis

Summary

ITBench evaluates the reasoning and decision-making abilities of LLMs on IT automation tasks across three domains:

  1. Site Reliability Engineering (SRE): Ensures app resilience and performance through incident diagnosis
  2. Financial Operations (FinOps): Manages IT spend by performing cost anomaly analysis
  3. Compliance & Security Operations (CISO): Manages threats and assesses policies by conducting compliance posture assessment (coming soon)

This dataset represents a subset of the broader ITBench and includes selected SRE and FinOps scenarios in offline environments. We will continue enriching the scenario set and incorporating agentic evaluations into the accompanying leaderboard over time.

Dataset Structure

Directory Organization

ITBench-Lite/
├── snapshots/
│   ├── sre/
│   │   └── v0.2-B96DF826-4BB2-4B62-97AB-6D84254C53D7/
│   │       ├── Scenario-1/
│   │       │   ├── alerts/                # Alert snapshots over time
│   │       │   ├── metrics/               # Prometheus metrics
│   │       │   ├── ground_truth.yaml      # Ground truth entities
│   │       │   ├── k8s_events_raw.tsv     # Kubernetes events
│   │       │   ├── k8s_objects_raw.tsv    # Kubernetes resources
│   │       │   ├── otel_logs_raw.tsv      # OpenTelemetry logs
│   │       │   └── otel_traces_raw.tsv    # OpenTelemetry traces
│   │       ├── Scenario-2/
│   │       └── ... (35 scenarios total)
│   │
│   └── finops/
│       └── v0.1-finops-anomaly/
│           ├── Scenario-1/
│           │   └── anomaly.json           # Cost anomaly details
│           ├── Scenario-2/
│           └── ... (15 scenarios total)
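To work with these files locally, one option is to download the full repository with huggingface_hub. This is a minimal sketch: the repository id ibm-research/ITBench-Lite is taken from this card, and the directory names come from the tree above.

from pathlib import Path

from huggingface_hub import snapshot_download

# Download the dataset repository into the local Hugging Face cache.
local_dir = Path(
    snapshot_download(repo_id="ibm-research/ITBench-Lite", repo_type="dataset")
)

# List the SRE scenario directories under the v0.2 snapshot.
sre_root = local_dir / "snapshots" / "sre" / "v0.2-B96DF826-4BB2-4B62-97AB-6D84254C53D7"
for scenario in sorted(p.name for p in sre_root.iterdir() if p.is_dir()):
    print(scenario)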

Data Files by Domain

SRE Scenarios (35 scenarios)

Each SRE scenario contains observability data from orchestrated environments where faults were injected:

File                 Format      Description
alerts/              JSON        Time-series alert snapshots from monitoring systems
metrics/             Prometheus  Infrastructure and application metrics
ground_truth.yaml    YAML        Ground-truth faulty entities and metadata
k8s_events_raw.tsv   TSV         Kubernetes cluster events
k8s_objects_raw.tsv  TSV         Kubernetes resource states (pods, services, deployments)
otel_logs_raw.tsv    TSV         Application and system logs via OpenTelemetry
otel_traces_raw.tsv  TSV         Distributed traces via OpenTelemetry
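A minimal sketch of inspecting a scenario's files, reusing sre_root from the download snippet above. The TSV column names are not documented on this card, so the code prints the headers rather than assuming them:

import pandas as pd
import yaml

scenario_dir = sre_root / "Scenario-1"

# Ground-truth faulty entities and metadata for the scenario.
with open(scenario_dir / "ground_truth.yaml") as f:
    ground_truth = yaml.safe_load(f)
print(ground_truth)

# Raw observability exports are tab-separated; check the schema before use.
events = pd.read_csv(scenario_dir / "k8s_events_raw.tsv", sep="\t")
logs = pd.read_csv(scenario_dir / "otel_logs_raw.tsv", sep="\t")
print(events.columns.tolist())
print(logs.columns.tolist())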

FinOps Scenarios (15 scenarios)

Each FinOps scenario contains synthetic cost anomaly data:

File          Format  Description
anomaly.json  JSON    Cost anomaly details (date, account_id)

Example anomaly.json:

{
    "date": "2025-10-13",
    "account_id": "49tAGZz"
}
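Reading this file programmatically is straightforward; the sketch below assumes the finops snapshot path from the directory tree above and the local_dir handle from the download snippet:

import json

anomaly_path = (
    local_dir / "snapshots" / "finops" / "v0.1-finops-anomaly"
    / "Scenario-1" / "anomaly.json"
)
with open(anomaly_path) as f:
    anomaly = json.load(f)

# Each scenario pins the cost anomaly to a date and an account.
print(anomaly["date"], anomaly["account_id"])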

Tasks

SRE: Fault Localization

Identify the faulty entity or resource (e.g., a specific pod, service, or deployment) that caused the incident based on logs, traces, metrics, Kubernetes events and resources from the environment snapshot.
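For illustration only, a naive baseline for this task might rank entities by how many Warning-type Kubernetes events they emit. This is not the benchmark's evaluation method, and the column names below (type, involvedObject.name) are assumptions modeled on standard Kubernetes event fields; verify them against the actual TSV header first.

import pandas as pd

events = pd.read_csv(scenario_dir / "k8s_events_raw.tsv", sep="\t")

# Assumed columns; confirm with events.columns before relying on them.
warnings = events[events["type"] == "Warning"]
suspects = warnings["involvedObject.name"].value_counts()
print(suspects.head(10))  # entities implicated by the most warning events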

FinOps: Cost Anomaly Analysis

Identify the resource responsible for anomalous cost changes using synthetic cost anomaly details and workload configuration.

CISO: Compliance Assessment (Coming Soon)

Generate the correct Kyverno policy (Kubernetes) or OPA policy (RHEL) based on regulatory rules and natural-language Kubernetes/RHEL configurations.

Limitations

The SRE scenarios are exported snapshots from sandboxed live Kubernetes environments. While sourced from real running systems, the snapshot format inherently lacks the dynamic characteristics of live environments:

  • Real-time observability (streaming metrics, logs, traces)
  • Runtime state changes and non-deterministic behavior
  • Interactive debugging and human-in-the-loop investigation
  • Active remediation capabilities (deployments, rollbacks, scaling)

The static nature of these exports makes them suitable for diagnostic analysis but does not capture the full complexity of live incident response.

Related Datasets

  • ITBench-Trajectories: Complete agent execution traces with reasoning steps, tool usage, and evaluation metrics for 105 trajectories across 35 SRE scenarios

Citation

If you use ITBench in academic or industrial work, please cite:

@misc{jha2025itbench,
  title={ITBench: Evaluating AI Agents across Diverse Real-World IT Automation Tasks},
  author={Jha, Saurabh and Arora, Rohan and Watanabe, Yuji and others},
  year={2025},
  eprint={2502.05352},
  archivePrefix={arXiv},
  primaryClass={cs.AI},
  url={https://arxiv.org/abs/2502.05352}
}

Last Updated: January 2026
