Improve dataset card: Add description, links, comprehensive usage, and update metadata
This PR significantly enhances the dataset card for AICrypto by providing a comprehensive overview and practical usage guidance.
Key changes include:
- **Metadata Updates**:
  - Updated `task_categories` to `['question-answering', 'text-generation']` to accurately reflect the benchmark's multiple-choice questions, CTF challenges, and proof problems.
  - Removed `size_categories: - 10M<n<100M`, as this is inaccurate for a benchmark comprising a fixed, smaller number of structured problems.
  - Added `benchmark` to the `tags` list to better categorize the dataset's nature.
- **Introduction**: Added a detailed description of the AICrypto benchmark, summarizing its purpose and components (MCQs, CTF challenges, proof problems) based on the paper's abstract.
- **Links**: Included direct links to the paper ([AICrypto: A Comprehensive Benchmark for Evaluating Cryptography Capabilities of Large Language Models](https://huggingface.co/papers/2507.09580)), project page ([https://aicryptobench.github.io/](https://aicryptobench.github.io/)), and the associated GitHub repository ([https://github.com/wangyu-ovo/aicrypto-agent](https://github.com/wangyu-ovo/aicrypto-agent)).
- **Download Instructions**: Provided clear instructions on how to download the dataset from this repository for use with the code.
- **Setup Guide**: Included the prerequisites and installation steps directly from the GitHub README to facilitate environment setup.
- **Comprehensive Sample Usage**: Incorporated all relevant code snippets from the GitHub README, demonstrating how to run single and parallel CTF challenges, multiple-choice question evaluations, and proof tasks.
- **Citation**: Added the BibTeX citation for the paper to enable proper academic referencing.
These updates aim to make the dataset card more informative, user-friendly, and discoverable for researchers interested in evaluating LLM cryptography capabilities.
```diff
@@ -1,12 +1,133 @@
 ---
+language:
+- en
 license: mit
 task_categories:
+- question-answering
 - text-generation
-language:
-- en
 tags:
 - code
 - cryptography
-size_categories:
-- 10M<n<100M
+- benchmark
 ---
```
---
language:
- en
license: mit
task_categories:
- question-answering
- text-generation
tags:
- code
- cryptography
- benchmark
---

# AICrypto: A Comprehensive Benchmark for Evaluating Cryptography Capabilities of Large Language Models

This repository contains **AICrypto**, the first comprehensive benchmark designed to evaluate the cryptography capabilities of Large Language Models (LLMs), as presented in the paper [AICrypto: A Comprehensive Benchmark for Evaluating Cryptography Capabilities of Large Language Models](https://huggingface.co/papers/2507.09580).

The benchmark comprises 135 multiple-choice questions, 150 capture-the-flag (CTF) challenges, and 18 proof problems, covering a broad range of skills from factual memorization to vulnerability exploitation and formal reasoning. All tasks are carefully reviewed or constructed by cryptography experts to ensure correctness and rigor.

- **Paper**: [AICrypto: A Comprehensive Benchmark for Evaluating Cryptography Capabilities of Large Language Models](https://huggingface.co/papers/2507.09580)
- **Project Page**: https://aicryptobench.github.io/
- **Code Repository**: https://github.com/wangyu-ovo/aicrypto-agent

## Download Dataset

To use this dataset with the associated code, download its contents from this Hugging Face repository and place them in the `./data` directory within the code repository.
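
As one option (a sketch, not taken from the repository's docs), the `huggingface-cli` tool can fetch a dataset repository into a local directory; the repo id below is a placeholder for this dataset's actual Hugging Face id:

```shell
# Sketch: download the dataset repo into the code repository's ./data directory.
# <namespace>/AICrypto is a placeholder; substitute this dataset's actual repo id.
huggingface-cli download <namespace>/AICrypto --repo-type dataset --local-dir ./data
```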

## Setup

Before using the dataset with the provided code, you need to set up your environment as detailed in the [GitHub repository](https://github.com/wangyu-ovo/aicrypto-agent#setup).

### Prerequisites

- Python 3.10.15
- SageMath 10.5
- yafu 1.34.5

### Installation

1. **Create conda environment:**
   ```shell
   conda env create -f environment.yml
   conda activate crypto
   ```

2. **Install SageMath dependencies:**
   ```shell
   sage -pip install -r sage-requirements.txt
   ```

3. **Install additional tools:**
   - Install flatter: https://github.com/keeganryan/flatter
   - Install yafu: https://github.com/bbuhrow/yafu/tree/master

4. **Configure API keys:**
   Create a `.env` file with your API keys for the models you want to use.
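
For illustration only, a hypothetical `.env` sketch; the actual variable names are whatever the model clients in `src/model` read, so confirm against the code before copying:

```shell
# Hypothetical .env entries (key names illustrative; check the code's model clients)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=...
```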

## Sample Usage

The associated code repository provides scripts to run and evaluate tasks.

### Run Single CTF Challenge

To run a single CTF challenge:

```shell
python run_single_ctf_task.py --task-path data/CTF/04-RSA/01-blue-hens-2023 --model gpt-4.1
```

Results will be automatically saved in `./outputs/CTF-0/04-RSA/01-blue-hens-2023/gpt-4.1/run`.

### Run CTF Challenges in Parallel for Evaluation

To run multiple tasks simultaneously when evaluating multiple models:

```shell
python run_ctf_parallel.py --jobs 4 --id 0
```

This command uses 4 processes to run tasks with run ID 0. Results will be automatically saved in `./outputs/CTF-0`.

The script will run all models specified in `config/model.yaml`. You can customize the models by modifying `config/model.yaml` and the corresponding model implementations in the `src/model` directory.
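
The schema of `config/model.yaml` is not documented here, so the following is only a hypothetical sketch of a model list; consult the actual file in the code repository for the real format:

```yaml
# Hypothetical sketch; the real config/model.yaml may differ
models:
  - gpt-4.1
  - o3
```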

### Run Single MCQ Evaluation

Run the MCQs with a single model:

```shell
python run_choice_question.py --model gpt-4.1
```

Results are saved to `./outputs/MultipleChoice/<model_name>/`.

### Run MCQ Evaluation in Parallel

Run the MCQs across multiple models in parallel:

```shell
python batch_run_choice_question.py --parallel --jobs 4
```

Optionally select specific models:

```shell
python batch_run_choice_question.py --parallel --jobs 4 --models gpt-4.1 o3
```

### Run Single Proof Task

Run proof generation for a specific exam and model:

```shell
python run_proof_task.py --exam 1 --model gpt-4.1
```

Outputs:
- Proofs: `./outputs/proof/exam1/proof/gpt-4.1_proof_results.tex`
- Reasoning: `./outputs/proof/exam1/reasoning/gpt-4.1_reasoning_results.tex`
- Logs: `./outputs/proof/exam1/log/`

### Run Proof Tasks in Parallel for Evaluation

Run multiple exams and models concurrently:

```shell
python batch_run_proof_tasks.py --exam-values 1 2 3 --jobs 4
```

## Citation

If you find this repository useful, please consider citing:

```bibtex
@article{wang2025aicrypto,
  title={AICrypto: A Comprehensive Benchmark for Evaluating Cryptography Capabilities of Large Language Models},
  author={Wang, Yu and Liu, Yijian and Ji, Liheng and Luo, Han and Li, Wenjie and Zhou, Xiaofei and Feng, Chiyun and Wang, Puji and Cao, Yuhan and Zhang, Geyuan and Li, Xiaojian and Xu, Rongwu and Chen, Yilei and He, Tianxing},
  journal={arXiv preprint arXiv:2507.09580},
  year={2025}
}
```