Zecyel committed on
Commit a3cd830 · verified · 1 Parent(s): 2f23ae2

Upload README.md with huggingface_hub

Files changed (1):
  1. README.md +159 -112
README.md CHANGED
@@ -1,146 +1,193 @@
  ---
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
- dataset_info:
-   features:
-   - name: homework
-     dtype: string
-   - name: exercise_number
-     dtype: string
-   - name: sub_problem
-     dtype: string
-   - name: content
-     dtype: string
-   - name: full_id
-     dtype: string
-   splits:
-   - name: train
-     num_bytes: 6972
-     num_examples: 28
-   download_size: 6910
-   dataset_size: 6972
  ---
- # LyTOC Benchmark

- This repository contains a benchmark dataset extracted from homework PDFs using SimpleTex OCR API.

- ## Features

- - Automated PDF content extraction using SimpleTex OCR API
- - Structured benchmark dataset creation
- - Multiple export formats (JSON, JSONL, HuggingFace Dataset)
- - Easy upload to HuggingFace Hub
- - Interactive pipeline runner

- ## Quick Start

- ### 1. Install Dependencies

- ```bash
- pip install -r requirements.txt
- ```

- ### 2. Set Up API Keys

- ```bash
- cp .env.example .env
- # Edit .env and add your API keys:
- # - OCR_UAT: Get from https://simpletex.cn
- # - HF_TOKEN: Get from https://huggingface.co/settings/tokens
- ```

- ### 3. Run the Pipeline

- **Option A: Interactive Pipeline** (Recommended)
- ```bash
- python run_pipeline.py
- ```

- **Option B: Manual Steps**
- ```bash
- # Extract PDFs
- python extract_pdfs.py

- # Create benchmark dataset
- python create_benchmark.py

- # Upload to HuggingFace
- python upload_to_hf.py <username/repo-name> [--private]
  ```

- ## Project Structure

- ```
- lytoc/
- ├── raw/                          # Source PDF files
- ├── parsed_data/                  # Extracted markdown content
- │   ├── hw*.md                    # Individual parsed files
- │   └── extraction_metadata.json  # Extraction status
- ├── benchmark_dataset/            # Final benchmark dataset
- │   ├── dataset.json              # JSON format
- │   ├── dataset.jsonl             # JSONL format
- │   └── huggingface_dataset/      # HuggingFace format
- ├── extract_pdfs.py               # PDF extraction script
- ├── create_benchmark.py           # Benchmark creation script
- ├── upload_to_hf.py               # HuggingFace upload script
- ├── run_pipeline.py               # Interactive pipeline runner
- └── DATASET_CARD.md               # Dataset documentation
- ```

- ## Dataset Structure

- Each problem in the dataset contains:
- - `homework`: Homework identifier (e.g., "hw1", "hw2")
- - `problem_number`: Problem number within the homework
- - `content`: Full problem text and description
- - `full_id`: Unique identifier (e.g., "hw1_problem1")

- Example:
- ```json
- {
-   "homework": "hw1",
-   "problem_number": "1",
-   "content": "Problem statement...",
-   "full_id": "hw1_problem1"
  }
  ```

- ## Scripts

- ### extract_pdfs.py
- Extracts content from PDF files in the `raw/` directory using SimpleTex OCR API.
- - Converts PDFs to markdown format via OCR
- - Saves parsed content to `parsed_data/`
- - Creates extraction metadata

- ### create_benchmark.py
- Processes parsed content into a structured benchmark dataset.
- - Parses markdown content to extract individual problems
- - Creates dataset in multiple formats
- - Generates statistics and sample output

- ### upload_to_hf.py
- Uploads the benchmark dataset to HuggingFace Hub.
- ```bash
- python upload_to_hf.py username/repo-name [--private]
- ```

- ### run_pipeline.py
- Interactive script that runs the complete pipeline with user prompts.

- ## Requirements

- - Python 3.8+
- - SimpleTex API token (OCR_UAT)
- - HuggingFace account and token (for upload)

- ## License

- MIT License - See LICENSE file for details

- ## Contributing

- Contributions are welcome! Please feel free to submit a Pull Request.
  ---
+ license: mit
+ task_categories:
+ - question-answering
+ - text-generation
+ language:
+ - en
+ tags:
+ - theory-of-computation
+ - algorithms
+ - computer-science
+ - homework
+ - exercises
+ size_categories:
+ - n<1K
+ pretty_name: LyTOC Benchmark
  ---
 
+ # LyTOC Benchmark Dataset

+ A curated collection of Theory of Computation and Algorithms homework exercises, extracted from academic PDFs using OCR and structured for machine learning evaluation.

+ ## Dataset Description

+ ### Dataset Summary

+ The LyTOC (Logic and Theory of Computation) Benchmark contains 28 carefully extracted exercises from 9 homework assignments covering fundamental topics in theoretical computer science. Each exercise is preserved with its original LaTeX mathematical notation, making it suitable for evaluating language models on formal reasoning tasks.

+ **Key Features:**
+ - 28 exercises across 9 homework assignments
+ - Topics: automata theory, complexity theory, Turing machines, formal languages, algorithm analysis
+ - LaTeX mathematical notation preserved
+ - Structured with exercise numbers and sub-problems
+ - Clean extraction with OCR post-processing

+ ### Supported Tasks

+ - **Question Answering**: Answer theoretical computer science questions
+ - **Mathematical Reasoning**: Solve problems involving formal proofs and mathematical notation
+ - **Text Generation**: Generate solutions to computational theory problems
+ - **Educational Assessment**: Evaluate understanding of CS theory concepts

+ ### Languages

+ - English (en)

+ ## Dataset Structure
+
+ ### Data Instances

+ Each instance represents a single exercise or sub-problem:

+ ```json
+ {
+   "homework": "hw1",
+   "exercise_number": "3",
+   "sub_problem": null,
+   "content": "Let $\\Sigma = \\{0, 1\\}$. Let language\n\n$$L = \\{w \\in \\{0, 1\\}^* : w \\text{ has an unequal number of 0's and 1's}\\}.$$\n\nProve $L^* = \\Sigma^*$.",
+   "full_id": "hw1_ex3"
+ }
  ```
62
 
63
+ ### Data Fields
64
 
65
+ - `homework` (string): Homework identifier (e.g., "hw1", "hw2", "hw13")
66
+ - `exercise_number` (string): Exercise number within the homework (e.g., "1", "2", "3")
67
+ - `sub_problem` (string or null): Sub-problem identifier if the exercise has multiple parts (e.g., "1", "2")
68
+ - `content` (string): Full exercise text including LaTeX mathematical notation
69
+ - `full_id` (string): Unique identifier for the exercise (e.g., "hw1_ex3", "hw2_ex3_1")
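
Given the `full_id` convention above (`hw1_ex3` for a whole exercise, `hw2_ex3_1` for a sub-problem), multi-part exercises can be regrouped under their parent exercise. A minimal sketch in plain Python; the sample rows are illustrative stand-ins following this schema, not records from the dataset:

```python
from collections import defaultdict

# Illustrative rows following the schema above (not actual dataset records).
rows = [
    {"homework": "hw1", "exercise_number": "3", "sub_problem": None,
     "content": "Prove $L^* = \\Sigma^*$.", "full_id": "hw1_ex3"},
    {"homework": "hw2", "exercise_number": "3", "sub_problem": "1",
     "content": "Part 1 ...", "full_id": "hw2_ex3_1"},
    {"homework": "hw2", "exercise_number": "3", "sub_problem": "2",
     "content": "Part 2 ...", "full_id": "hw2_ex3_2"},
]

# Group sub-problems under their parent (homework, exercise_number) pair.
grouped = defaultdict(list)
for row in rows:
    grouped[(row["homework"], row["exercise_number"])].append(row)

for (hw, ex), parts in sorted(grouped.items()):
    print(f"{hw} exercise {ex}: {len(parts)} part(s)")
```

The same grouping works on `dataset['train']` once loaded, since each row is a plain dict with these five fields.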
+ ### Data Splits

+ The dataset consists of a single split containing all 28 exercises.

 
+ ## Dataset Statistics
+
+ - **Total Exercises**: 28
+ - **Homeworks**: 9 (hw1, hw2, hw3, hw5, hw6, hw9, hw10, hw11, hw13)
+ - **Exercises with Sub-problems**: 2
+ - **Average Content Length**: ~200-500 characters per exercise
+
+ ### Topic Distribution
+
+ The exercises cover the following topics:
+
+ - **Asymptotic Analysis**: Big-O notation, growth rates
+ - **Finite Automata**: DFA, NFA, regular expressions
+ - **Formal Languages**: Regular languages, context-free languages
+ - **Turing Machines**: Decidability, computability
+ - **Complexity Theory**: P, NP, NP-completeness, reductions
+ - **Algorithm Design**: Time complexity, space complexity
+
+ ## Dataset Creation
+
+ ### Source Data
+
+ The dataset was created from homework assignments in a Theory of Computation and Algorithms course.
+
+ #### Data Collection
+
+ - **Source**: Academic homework PDFs (9 files)
+ - **Extraction Method**: SimpleTex OCR API
+ - **Processing**: Automated regex-based exercise splitting
+ - **Quality Control**: Manual verification of extraction accuracy
+
+ #### Data Processing Pipeline
+
+ 1. **PDF to Image**: Convert each PDF page to high-resolution images
+ 2. **OCR Processing**: Extract text using the SimpleTex OCR API
+ 3. **Punctuation Normalization**: Convert Chinese punctuation to English equivalents
+ 4. **Exercise Splitting**: Use regex patterns to identify exercise boundaries
+ 5. **Sub-problem Detection**: Identify and separate sub-problems within exercises
+ 6. **Metadata Generation**: Create unique identifiers and structure data
+
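Steps 3 and 4 of the pipeline can be sketched in Python. The punctuation map and the exercise-heading regex below are assumptions chosen for illustration; the patterns actually used by the extraction scripts may differ:

```python
import re

# Step 3 (sketch): map common Chinese punctuation to English equivalents.
# The exact character set handled by the real pipeline is an assumption here.
CN_TO_EN = str.maketrans({"，": ",", "。": ".", "：": ":", "；": ";",
                          "（": "(", "）": ")"})

def normalize_punctuation(text: str) -> str:
    return text.translate(CN_TO_EN)

# Step 4 (sketch): assume exercises start with headings like "Exercise 3."
# at the beginning of a line in the OCR'd markdown.
EXERCISE_RE = re.compile(r"^Exercise\s+(\d+)\.", re.MULTILINE)

def split_exercises(markdown: str) -> list[tuple[str, str]]:
    """Return (exercise_number, body) pairs from one homework's markdown."""
    matches = list(EXERCISE_RE.finditer(markdown))
    out = []
    for i, m in enumerate(matches):
        end = matches[i + 1].start() if i + 1 < len(matches) else len(markdown)
        out.append((m.group(1), markdown[m.end():end].strip()))
    return out

doc = normalize_punctuation(
    "Exercise 1. Prove L is regular。\nExercise 2. Show P ≠ NP is open，")
for num, body in split_exercises(doc):
    print(f"ex{num}: {body}")
```

Sub-problem detection (step 5) would apply a similar regex pass within each exercise body.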
+ ### Annotations
+
+ The dataset does not include solutions or annotations. It contains only problem statements as extracted from the source materials.
+
+ ## Considerations for Using the Data
+
+ ### Recommended Uses
+
+ - Evaluating language models on formal reasoning tasks
+ - Training models for mathematical problem understanding
+ - Benchmarking CS theory knowledge in AI systems
+ - Educational tool development for computer science
+
+ ### Limitations
+
+ - **No Solutions**: The dataset contains only problem statements, not solutions
+ - **OCR Artifacts**: Some mathematical notation may have minor OCR errors
+ - **Limited Scope**: Covers specific topics in theory of computation and algorithms
+ - **No Visual Content**: Diagrams and figures from the PDFs are not included
+ - **Language**: English only
+
+ ### Ethical Considerations
+
+ This dataset is intended for educational and research purposes. Users should:
+ - Respect academic integrity when using it for educational purposes
+ - Not use it for automated homework completion systems
+ - Cite it appropriately when using it in research
+
+ ## Additional Information
+
+ ### Licensing Information
+
+ This dataset is released under the MIT License.
+
+ ### Citation Information
+
+ If you use this dataset in your research, please cite:
+
+ ```bibtex
+ @misc{lytoc-benchmark-2025,
+   title={LyTOC Benchmark: Theory of Computation and Algorithms Exercise Dataset},
+   author={LyTOC Contributors},
+   year={2025},
+   howpublished={\url{https://huggingface.co/datasets/lytoc-benchmark}}
  }
  ```

+ ### Dataset Curators

+ The dataset was created using:
+ - SimpleTex OCR API for PDF extraction
+ - Custom Python scripts for data processing
+ - Claude Code for automation and quality assurance

+ ### Contact

+ For questions or issues regarding this dataset, please open an issue on the dataset repository.

+ ## Usage Example

+ ```python
+ from datasets import load_dataset

+ # Load the dataset
+ dataset = load_dataset("lytoc-benchmark")

+ # Access an exercise
+ exercise = dataset['train'][0]
+ print(f"Exercise ID: {exercise['full_id']}")
+ print(f"Content: {exercise['content']}")

+ # Filter by homework
+ hw1_exercises = [ex for ex in dataset['train'] if ex['homework'] == 'hw1']
+ print(f"Homework 1 has {len(hw1_exercises)} exercises")
+ ```

+ ## Version History

+ - **v1.0.0** (2025-12-30): Initial release with 28 exercises from 9 homework assignments