patrickfleith committed 202fb26 (verified; parent: 7a68012): Update README.md

README.md CHANGED (+130 -1)
pretty_name: astro-mcq
size_categories:
- n<1K
---

# Astro-MCQ Dataset

**Astro-MCQ** is the first dataset in the upcoming **AstroBench** collection, a suite of domain-specific benchmark datasets for evaluating small and large language models (SLMs and LLMs) in space mission engineering and astronautics.

## Overview

Astro-MCQ is a multiple-choice question dataset designed to evaluate language model performance across key topics in astronautics, including:
- Orbital mechanics
- Space propulsion
- Space environment and its effects
- Spacecraft systems and design
- Mission operations
- Human spaceflight
- Launchers, and more

## Purpose

This dataset enables comparative assessment of LLM capabilities in the space engineering domain. It helps application developers and researchers answer critical questions:

- **Model selection**: Which LLM performs best for your specific astronautics subdomain?
- **Configuration optimization**: Which model size, quantization level, and prompting strategy work best?
- **Capability assessment**: How do different small-to-medium open-weight models compare in reasoning and domain knowledge?
- **Fine-tuning evaluation**: How effective is domain adaptation or specialized fine-tuning?

## Use Cases

### Recommended Uses
- **Model evaluation and benchmarking**: Compare performance across different LLMs
- **Quantization testing**: Assess how compression affects domain-specific performance
- **Prompt engineering**: Test and optimize prompting strategies
- **Domain adaptation**: Evaluate the effectiveness of fine-tuning approaches
- **Model auditing**: Verify model capabilities before deployment

### Evaluation Methods
The dataset supports two evaluation approaches:
- **Loglikelihood-based evaluation**: Token-probability scoring of each choice
- **Generative evaluation**: Free-form response assessment (e.g., model-as-a-judge)

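The loglikelihood approach can be sketched in a few lines of plain Python. This is only an illustrative sketch: the scoring function is generic, and the log-likelihood values below are fabricated stand-ins for what a model would actually assign to each choice (e.g., summed token log-probabilities of the choice continuation).

```python
# Hypothetical sketch of loglikelihood-based scoring: pick the choice the
# model assigns the highest log-probability, then check it against `labels`.

def score_instance(choice_loglikelihoods, labels):
    """Return 1 if the highest-scoring choice is a correct one, else 0."""
    best = max(range(len(choice_loglikelihoods)),
               key=lambda i: choice_loglikelihoods[i])
    return labels[best]

# Example instance with 4 choices, where the choice at index 2 is correct.
lls = [-12.4, -9.8, -7.1, -15.0]    # made-up log-likelihoods
labels = [0, 0, 1, 0]
print(score_instance(lls, labels))  # 1 (model's top choice is correct)
```

Averaging this 0/1 score over all instances gives a simple accuracy figure for comparing models.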
### Not Recommended For
- **Training or fine-tuning**: The dataset is too small for effective model training
- **Sole training resource**: Could be combined with other datasets for meta-learning, but is not suitable as primary training data

## Quick Start

### Explore Online
Browse the dataset: https://huggingface.co/datasets/patrickfleith/astro-mcq/viewer/default/train

### Download and Use

**Manual download:**
https://huggingface.co/datasets/patrickfleith/astro-mcq

**Python:**
```python
from datasets import load_dataset

dataset = load_dataset("patrickfleith/astro-mcq")
```

## About AstroBench

Astro-MCQ is the first release in the AstroBench collection. AstroBench aims to provide comprehensive evaluation tools for assessing language models in space mission engineering contexts, covering multiple task types and difficulty levels tailored to real-world astronautics applications.

### Structure

The dataset contains **193 expert-created multiple-choice questions**, with **3 permutations per question** (randomized choice order), resulting in **579 evaluation instances**. This permutation strategy helps mitigate the sensitivity of evaluations to position bias (i.e., a model's tendency to prefer answers in certain positions).

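One way to exploit the permutations is a strict per-question score that only credits a question when all of its orderings are answered correctly, which exposes position-sensitive models. A minimal sketch, using fabricated `(uid, permutation_idx, correct)` results:

```python
from collections import defaultdict

# Fabricated per-instance results; in practice these would come from
# running your model on every evaluation instance.
results = [
    ("q1", 0, True), ("q1", 1, True), ("q1", 2, True),   # consistent
    ("q2", 0, True), ("q2", 1, False), ("q2", 2, True),  # position-sensitive
]

# Group correctness flags by the shared question uid.
by_uid = defaultdict(list)
for uid, _perm, correct in results:
    by_uid[uid].append(correct)

# Strict accuracy: a question counts only if all permutations are correct.
strict_acc = sum(all(flags) for flags in by_uid.values()) / len(by_uid)
print(strict_acc)  # 0.5: q1 is solved under all orderings, q2 is not
```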
Each instance includes the following fields:

- **uid**: Unique identifier string for the original question (UUID format), shared across permutations of the same question
- **question**: The question text as a string
- **permutation_idx**: Integer (0-2) indicating which permutation of the question this is
- **choices**: List of answer choices as strings. Questions can have multiple correct answers.
- **labels**: List of integers (0 or 1) corresponding to each choice. 0 = incorrect, 1 = correct. Multiple labels can be 1.
- **prompt**: Pre-formatted prompt string ready for LLM evaluation (includes the question and formatted choices)
- **expected_output**: The correct answer(s) formatted as expected (e.g., "A. Choice text", or "A. Choice\nB. Choice" for multiple correct answers)

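To illustrate how these fields fit together, here is a fabricated instance and a reconstruction of the letter-labelled answer format. The dataset already ships ready-made `prompt` and `expected_output` fields, so this is purely illustrative:

```python
import string

# Fabricated row using the field names described above (not a real instance).
row = {
    "question": "Which orbit has a period of one sidereal day?",
    "choices": ["Low Earth orbit", "Geosynchronous orbit",
                "Sun-synchronous orbit"],
    "labels": [0, 1, 0],
}

# Letter the choices A, B, C, ... and keep only the correct ones.
lettered = [f"{string.ascii_uppercase[i]}. {c}"
            for i, c in enumerate(row["choices"])]
expected = "\n".join(l for l, ok in zip(lettered, row["labels"]) if ok)

print(expected)  # B. Geosynchronous orbit
```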
### Languages
All instances in the dataset are in English.

### Size
- **193** unique questions, either expert-created or retrieved from master-level courses in space mission design and operations
- **579** total evaluation instances (3 permutations per question)
- Questions were filtered from the original dataset to guarantee at least **3 choices** per question and at least **1 correct answer**

### Question Types
- **Knowledge-based**: Questions testing domain knowledge and facts in space science and engineering
- **Reasoning**: Questions requiring logical reasoning, problem-solving, and an understanding of physics
- **Computational**: Questions requiring mathematical operations with numerical results (exam-style)
- **Multiple-answer**: Some questions have multiple correct choices (multi-select format)

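For grading the multi-select format generatively, it can help to compare answers as sets of letters rather than raw strings. A small sketch, assuming responses follow the newline-separated "A. Choice" format described under `expected_output`:

```python
import re

def answer_letters(formatted_answer):
    """Extract the set of leading answer letters, e.g. {'A', 'C'}."""
    return {m.group(1)
            for m in re.finditer(r"^([A-Z])\.", formatted_answer, re.M)}

# Example strings in the documented format (contents are made up).
single = answer_letters("A. Hohmann transfer")
multi = answer_letters("A. Drag\nC. Atomic oxygen erosion")
print(sorted(single), sorted(multi))  # ['A'] ['A', 'C']
```

An instance is then correct when the set parsed from the model's response equals the set parsed from `expected_output`.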
### Topics Covered
Comprehensive coverage across space engineering subdomains:
- Orbital mechanics and trajectories
- Space propulsion systems
- Mission operations and design
- Human spaceflight
- Space environment and effects
- Spacecraft systems and subsystems
- Communication and link analysis
- Space project lifecycle
- Launch systems, and more

## Usage and Guidelines

#### License
AstroMCQ © 2025 by Patrick Fleith is licensed under Creative Commons Attribution 4.0 International.

#### Restrictions
No restrictions. Please provide correct attribution following the license terms.

#### Citation
P. Fleith, "Astro-MCQ: A Multiple-Choice Question Benchmark Dataset for Evaluating LLMs in Space Mission Engineering and Astronautics," (2025).

#### Update Frequency
May be updated based on feedback. If you want to become a contributor, let me know.

#### Have feedback or spotted an error?
Use the community discussion tab directly on the Hugging Face `astro-mcq` dataset page.

#### Contact Information
Reach me here on the community tab, or on LinkedIn (Patrick Fleith) with a note.

#### Current Limitations and Future Work
- Only 193 multiple-choice questions and answers. This makes the dataset unsuitable for fine-tuning on its own, although it could be integrated into a larger pool of datasets compiled specifically for fine-tuning.
- While the size is sufficient for LLM evaluation, space engineering expert time is scarce and expensive: on average, it takes 8 minutes to create one MCQA example. More examples would improve robustness.
- The dataset may be biased due to the very small number of annotators.
- The dataset may be biased toward European space programs.
- The dataset may not cover all subsystems or subdomains of astronautics, although we did our best to cover the annotators' domains of expertise.
- No peer review. Ideally, a quality-control process would ensure the correctness of each example in the dataset. Given the limited resources, this is not yet possible. Feel free to contribute if you feel this is an issue.