---
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: uid
    dtype: string
  - name: question
    dtype: string
  - name: permutation_idx
    dtype: int64
  - name: choices
    list: string
  - name: labels
    list: int64
  - name: prompt
    dtype: string
  - name: expected_output
    dtype: string
  splits:
  - name: train
    num_bytes: 399645
    num_examples: 579
  download_size: 103527
  dataset_size: 399645
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: cc-by-4.0
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- astronautics
- space
- engineering
- multiple-choices
- astrodynamics
pretty_name: astro-mcq
size_categories:
- n<1K
---
# Astro-MCQ Dataset
Astro-MCQ is the first dataset in the upcoming AstroBench collection, a suite of domain-specific benchmark datasets for evaluating small and large language models (SLMs and LLMs) in space mission engineering and astronautics.
## Overview
Astro-MCQ is a multiple-choice question dataset designed to evaluate language model performance across key topics in astronautics, including:
- Orbital mechanics
- Space propulsion
- Space environment and its effects
- Spacecraft systems and design
- Mission operations
- Human spaceflight
- Launch systems, and more.

Astro-MCQ is a revision of an earlier dataset: https://huggingface.co/datasets/patrickfleith/Astro-mcqa/blob/main/README.md
## Purpose
This dataset enables comparative assessment of LLM capabilities in the space engineering domain. It helps application developers and researchers answer critical questions:
- Model selection: Which LLM performs best for your specific astronautics subdomain?
- Study configuration optimization: What model size, quantization level, and prompting strategy work best?
- Capability assessment: How do different small-to-medium open-weight models compare in reasoning and domain knowledge?
- Fine-tuning evaluation: How effective is domain adaptation or specialized fine-tuning?
## Use Cases

### Recommended Uses
- Model evaluation and benchmarking: Compare performance across different LLMs
- Quantization testing: Assess how compression affects domain-specific performance
- Prompt engineering: Test and optimize prompting strategies
- Domain adaptation: Evaluate effectiveness of fine-tuning approaches
- Model auditing: Verify model capabilities before deployment
### Evaluation Methods
The dataset supports two evaluation approaches:
- Loglikelihood-based evaluation: Token probability scoring
- Generative evaluation: Free-form response assessment (e.g., model-as-a-judge)
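As an illustrative sketch (not part of any official evaluation tooling for this dataset), loglikelihood-based evaluation typically scores each answer choice by the model's per-token log-probabilities, often length-normalized, and selects the highest-scoring one. The helper below assumes you have already obtained per-token log-probabilities for each choice from the model of your choice:

```python
def pick_choice_by_loglikelihood(choice_token_logprobs):
    """Select the choice with the highest length-normalized log-likelihood.

    choice_token_logprobs: one list of per-token log-probabilities per
    answer choice. Returns the index of the winning choice.
    """
    # Length normalization avoids systematically penalizing longer choices.
    scores = [sum(lps) / len(lps) for lps in choice_token_logprobs]
    return max(range(len(scores)), key=scores.__getitem__)
```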
### Not Recommended For
- Training or fine-tuning: Dataset size is too limited for effective model training
- Sole training resource: Could potentially be combined with other datasets for meta-learning, but not suitable as primary training data
## Quick Start

### Explore Online

Browse the dataset: https://huggingface.co/datasets/patrickfleith/astro-mcq/viewer/default/train

### Download and Use

Manual download: https://huggingface.co/datasets/patrickfleith/astro-mcq
Python:

```python
from datasets import load_dataset

dataset = load_dataset("patrickfleith/astro-mcq")
```
## About AstroBench
Astro-MCQ is the first release in the AstroBench collection. AstroBench aims to provide comprehensive evaluation tools for assessing language models in space mission engineering contexts, covering multiple task types and difficulty levels tailored to real-world astronautics applications.
## Structure

The dataset contains 193 expert-created multiple-choice questions, each presented in 3 permutations (randomized choice order), resulting in 579 evaluation instances. This permutation strategy helps mitigate evaluation sensitivity to position bias (i.e., the tendency of models to prefer answers in certain positions).
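Because the 3 permutations of each question share the same `uid`, one way to exploit the permutation strategy (a sketch of our own, not a prescribed protocol) is to aggregate correctness per original question across its permutations:

```python
from collections import defaultdict

def per_question_accuracy(results):
    """Average per-permutation correctness by original question.

    results: iterable of dicts with keys 'uid' (str) and 'correct' (bool),
    one entry per evaluated permutation.
    Returns {uid: fraction of that question's permutations answered correctly}.
    """
    by_uid = defaultdict(list)
    for r in results:
        by_uid[r["uid"]].append(bool(r["correct"]))
    return {uid: sum(v) / len(v) for uid, v in by_uid.items()}
```

A question answered correctly in all 3 permutations scores 1.0; a model that is fooled by choice ordering scores lower, which surfaces position-sensitive behavior.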
Each instance includes the following fields:
- id: Integer identifier for the row
- uid: Unique identifier string for the original question (UUID format), shared across permutations of the same question
- question: The question text as a string
- permutation_idx: Integer (0-2) indicating which permutation of the question this is
- choices: List of answer choices as strings. Questions can have multiple correct answers.
- labels: List of integers (0 or 1) corresponding to each choice. 0 = incorrect, 1 = correct. Multiple labels can be 1.
- prompt: Pre-formatted prompt string ready for LLM evaluation (includes question and formatted choices)
- expected_output: The correct answer(s) formatted as expected (e.g., "A. Choice text" or "A. Choice\nB. Choice" for multiple correct answers)
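To make the field semantics concrete, here is a small helper (our own illustration, not shipped with the dataset) that rebuilds the expected_output string from choices and labels, following the letter-prefixed, newline-joined format described above:

```python
import string

def build_expected_output(choices, labels):
    """Join the correct choices as 'A. text' lines, one per correct answer."""
    return "\n".join(
        f"{string.ascii_uppercase[i]}. {choice}"
        for i, (choice, label) in enumerate(zip(choices, labels))
        if label == 1
    )
```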
## Languages

All instances in the dataset are in English.
## Size

- 193 unique questions, either expert-created or retrieved from master's-level courses in space mission design and operations
- 579 total evaluation instances (3 permutations per question)
- Questions were filtered from the original dataset to guarantee at least 3 choices and at least 1 correct answer per question
## Question Types

- Knowledge-based: Questions testing domain knowledge and facts in space science and engineering
- Reasoning: Questions requiring logical reasoning, problem-solving, and an understanding of physics
- Computational: Questions requiring mathematical operations with numerical results (exam-style)
- Multiple-answer: Some questions have multiple correct choices (multi-select format)
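Because some questions are multi-select, a common scoring rule (a sketch under the assumption that predictions arrive as a set of choice letters, e.g. `{"A", "C"}`) is exact set match rather than single-letter accuracy:

```python
def multi_select_exact_match(predicted_letters, labels):
    """True only if the predicted letters exactly match the correct choices.

    labels: list of 0/1 ints, one per choice, as in the dataset's
    'labels' field; choice i maps to letter chr(ord('A') + i).
    """
    gold = {chr(ord("A") + i) for i, label in enumerate(labels) if label == 1}
    return set(predicted_letters) == gold
```

Partial credit schemes (e.g., per-choice F1) are possible, but exact match is the stricter and simpler baseline.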
## Topics Covered
Comprehensive coverage across space engineering subdomains:
- Orbital mechanics and trajectories
- Space propulsion systems
- Mission operations and design
- Human spaceflight
- Space environment and effects
- Spacecraft systems and subsystems
- Communication and link analysis
- Space project lifecycle
- Launch systems and more
## Usage and Guidelines
### License

Astro-MCQ © 2025 by Patrick Fleith is licensed under Creative Commons Attribution 4.0 International (CC BY 4.0).
### Restrictions

No restrictions. Please provide attribution in accordance with the license terms.
### Citation
P. Fleith, "Astro-MCQ: A Multiple-Choice Question Benchmark Dataset for Evaluating LLMs in Space Mission Engineering and Astronautics," (2025).
### Update Frequency

The dataset may be updated based on feedback. If you want to become a contributor, let me know.
### Have feedback or spotted an error?

Use the Community discussion tab directly on the Hugging Face astro-mcq dataset page.
### Contact Information

Reach me on the Community tab, or on LinkedIn (Patrick Fleith) with a note.
## Current Limitations and Future Work

- Only 193 multiple-choice questions and answers. This makes the dataset unsuitable for fine-tuning on its own, although it could be integrated into a larger pool of datasets compiled specifically for fine-tuning.
- While the size is adequate for LLM evaluation, space engineering expert time is scarce and expensive: on average it takes 8 minutes to create one MCQA example. More examples would improve robustness.
- The dataset may reflect the biases of its very small number of annotators.
- The dataset may be biased toward European space programs.
- The dataset may not cover all subsystems or subdomains of astronautics, although we did our best to cover the annotators' domains of expertise.
- No peer review yet. Ideally, a quality control process would ensure the correctness of each example. Given the limited resources, this is not yet possible. Feel free to contribute if you feel this is an issue.