# CyberThreat-Eval Benchmark

This repository contains the dataset for the paper *CyberThreat-Eval: Can Large Language Models Automate Real-World Threat Research?* (published in TMLR).
CyberThreat-Eval is an expert-annotated benchmark collected from the daily Cyber Threat Intelligence (CTI) workflow of a world-leading company. It assesses Large Language Models (LLMs) on practical tasks across three essential stages of threat research.
## What’s included
- Stage 1: Triage — Priority assignment for CTI articles (Text Classification).
- Stage 2: Deep Search — Quality of related URLs and additional info beyond a reference URL (Text Retrieval).
- Stage 3: TI Drafting — IOC/TTP extraction and analytical quality scoring (Text Generation).
## Directory map

```
.
├── README.md
├── stage1_triage/
│   └── priority/...
├── stage2_deep_search/
│   ├── code/...
│   ├── data/...
│   └── example/...
└── stage3_ti_drafting/
    ├── ioc/...
    ├── ttp/...
    └── score_evaluation/...
```
## Quick install

Run from the repo root:

```shell
# Stage 1 (Triage) deps
cd stage1_triage/priority
pip install numpy scikit-learn tqdm
cd ../..

# Stage 2 (Deep Search) deps + browser runtime
cd stage2_deep_search
pip install networkx openai azure-identity playwright playwright-stealth tqdm tenacity tiktoken
python -m playwright install  # installs Chromium for scraping
cd ..

# Stage 3 (TI Drafting) deps
cd stage3_ti_drafting
pip install pandas json5 openai tqdm
cd ..
```
## API keys

```shell
export OPENAI_API_KEY=<your_key>
# Optional: export OPENAI_API_BASE=https://api.openai.com/v1  # or your Azure/OpenAI endpoint
```
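Scripts that consume these variables typically resolve them from the environment at startup; a minimal sketch of that pattern (an illustration only — the repo's own scripts may resolve configuration differently):

```python
import os

def resolve_openai_config() -> tuple[str, str]:
    """Read API settings from the environment (sketch, not the repo's code)."""
    api_key = os.environ.get("OPENAI_API_KEY", "")
    # Fall back to the public endpoint when no custom base is set.
    api_base = os.environ.get("OPENAI_API_BASE", "https://api.openai.com/v1")
    if not api_key:
        raise RuntimeError("Set OPENAI_API_KEY before running the evaluators.")
    return api_key, api_base
```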
Datasets are already under each stage’s `data/` directory; no extra download needed for basic tests.
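For Stage 1, the evaluator compares your predictions file against the bundled ground truth; a minimal sketch of producing one with a trivial constant baseline (the `id` and `priority` field names are assumptions — check stage1_triage/priority/README.md for the actual schema):

```python
import json

# Hypothetical article records; the real ones live under
# stage1_triage/priority/data/ (field names here are assumptions).
articles = [
    {"id": "a1", "title": "New ransomware campaign targets healthcare"},
    {"id": "a2", "title": "Monthly patch roundup"},
]

# Trivial baseline: assign every article the same priority label.
predictions = [{"id": a["id"], "priority": "medium"} for a in articles]

with open("predictions.json", "w") as f:
    json.dump(predictions, f, indent=2)
```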
## Quick tests (Sample Usage)

### Stage 1: Triage (priority scoring)

```shell
cd stage1_triage/priority
python code/eval.py \
  --ground_truth data/0314-articles.json \
  --predictions predictions.json \
  --article_type article \
  --output results.json
```

### Stage 2: Deep Search (related URL quality)

Requires your generated result files (`*_results.json`) with related URLs per article.

```shell
cd stage2_deep_search
python code/eval.py \
  --results_dir <path_to_results_dir> \
  --output_dir similarity_analyses \
  --test_model_name gpt-4o \
  --api_key $OPENAI_API_KEY \
  --api_base https://api.openai.com/v1 \
  --workers 4
```

### Stage 3: TI Drafting

- IOC extraction

  ```shell
  cd stage3_ti_drafting/ioc
  python eval/eval_ioc.py \
    --dataset data/IoCs.csv \
    --prediction example/prediction/manual_ioc_predictions.json
  ```

- TTP mapping

  ```shell
  cd stage3_ti_drafting/ttp
  python eval/compute.py \
    --articles data/100-days-articles.json \
    --results example_predicted.json \
    --ttp-mapping data/TTP_Mapping.csv
  ```

- Score evaluation (threat actor analysis)

  ```shell
  cd stage3_ti_drafting/score_evaluation
  python eval/threat_actor.py \
    --model gpt-4o \
    --input data/0330-articles-with-rejected-score.json \
    --output-dir output/
  ```
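The Stage 2 evaluator expects `*_results.json` files pairing each article with the URLs your system retrieved; a hedged sketch of producing one (the key names here are illustrative assumptions, not a documented schema — see stage2_deep_search/README.md):

```python
import json

# Illustrative shape only: one entry per article, listing the related
# URLs a deep-search run retrieved for it (keys are assumptions).
results = {
    "article_001": {
        "reference_url": "https://example.com/original-report",
        "related_urls": [
            "https://example.com/vendor-advisory",
            "https://example.com/follow-up-analysis",
        ],
    },
}

with open("my_model_results.json", "w") as f:
    json.dump(results, f, indent=2)
```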
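For a quick sanity check of IOC predictions before running the full evaluator, set-based precision/recall/F1 is the standard shape of this metric; a self-contained sketch (not the repo's exact eval_ioc.py implementation):

```python
def ioc_prf(gold: set[str], predicted: set[str]) -> tuple[float, float, float]:
    """Set-based precision, recall, and F1 over extracted IOC strings."""
    tp = len(gold & predicted)  # IOCs found in both sets
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = {"1.2.3.4", "evil.example.com", "d41d8cd98f00b204e9800998ecf8427e"}
pred = {"1.2.3.4", "evil.example.com", "10.0.0.1"}
p, r, f1 = ioc_prf(gold, pred)
print(round(p, 3), round(r, 3), round(f1, 3))  # → 0.667 0.667 0.667
```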
## Documentation links

- Stage 1: stage1_triage/priority/README.md
- Stage 2: stage2_deep_search/README.md
- Stage 3: stage3_ti_drafting/README.md
  - IOC: stage3_ti_drafting/ioc/README.md
  - TTP: stage3_ti_drafting/ttp/README.md
  - Score evaluation: stage3_ti_drafting/score_evaluation/README.md