# LLM Education Impact Simulator Dataset

Version: 1.0
Generated: 2026-02-02
Based on: Jackson, D. (2025). “LLMs are not calculators: Why educators should embrace AI (and fear it)”
## Overview
This dataset contains synthetic observational data simulating how students interact with different AI tools (search engines, explicit-context LLMs, and agentic LLMs) while completing educational tasks.
The simulation is grounded in educational research, particularly Daniel Jackson's observations about LLM usage in undergraduate software development courses.
## Key Findings from Source Research

Jackson's paper identifies several critical patterns:

- **Context selection matters more than prompting.** How students select context for LLM queries is more important than how they phrase prompts.
- **Reading documentation declines.** Students using LLMs read less background documentation.
- **Intentionality preserves agency.** Deliberate, strategic LLM use maintains student ownership of work.
- **Agentic tools risk boundary violations.** Automated context selection often breaks module boundaries.
- **Verification is critical.** Checking LLM output significantly reduces errors.
## Dataset Statistics
- Students: 120
- Tasks: 15
- Runs: 600
- Overall pass rate: 4.8%
- Average agency proxy: 0.479
## Files

- `students.csv` — Student characteristics (n=120)
- `tasks.csv` — Educational tasks (n=15)
- `runs.csv` — Complete run records (n=600)
- `events.csv` — Fine-grained event log
- `config.json` — Simulation parameters
- `summary.json` — Summary statistics
- `metadata.json` — Dataset metadata
## Tool Modes

### `search`

Traditional documentation-driven approach. Students consult search engines and documentation rather than LLMs.

### `llm_explicit`

LLM usage with explicit context selection. Students carefully choose what context to provide to the LLM, maintaining awareness of scope and boundaries.

### `llm_agentic`

LLM usage with automated context selection. Tools automatically determine context, reducing student control but potentially increasing efficiency.
## Key Variables

### Student Traits

- `skill` — Base ability (0–1)
- `effort_minimization` — Tendency to take shortcuts (0–1)
- `doc_discipline` — Propensity to read documentation (0–1)
- `context_care` — Attention to context selection (0–1)
- `intentionality` — Deliberate vs. passive tool use (0–1)

### Outcomes

- `score` — Rubric-based quality score (0–1)
- `passed` — Binary pass/fail (0/1)
- `agency_proxy` — Student ownership of work (0–1)
- `cognitive_engagement` — Deep vs. surface learning (0–1)
- `context_quality` — Quality of context selection (0–1)

### Observable Behaviors

- `reading_sessions` — Sustained documentation reading
- `doc_opens` — Quick reference lookups
- `prompts` — LLM prompt rounds
- `verification_checks` — Checking LLM output
- `edits` — Manual editing rounds
- `boundary_violations` — Module boundary breaks (coding only)
- `hallucinations` — LLM factual errors
- `omissions` — LLM omission errors
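The "verification is critical" finding can be probed directly from these behavior columns. The sketch below uses a tiny synthetic stand-in for `runs.csv` (the real file has 600 rows); it assumes `verification_checks` and `hallucinations` appear as per-run columns, which the variable list above suggests but the schema files should confirm.

```python
import pandas as pd

# Synthetic stand-in rows; the real values come from runs.csv.
runs = pd.DataFrame({
    'verification_checks': [0, 1, 2, 3, 4, 5],
    'hallucinations':      [4, 3, 3, 2, 1, 0],
})

# A negative correlation would match the source's claim that checking
# LLM output reduces errors surviving into the final work.
corr = runs['verification_checks'].corr(runs['hallucinations'])
print(corr)  # strongly negative for this toy data
```

On the actual dataset, the same two-line computation applies after `runs = pd.read_csv('runs.csv')`.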
## Usage Examples

### Load the data (Python)

```python
import pandas as pd

students = pd.read_csv('students.csv')
tasks = pd.read_csv('tasks.csv')
runs = pd.read_csv('runs.csv')
events = pd.read_csv('events.csv')

# Analyze pass rates by tool mode
pass_rates = runs.groupby('tool_mode')['passed'].mean()
print(pass_rates)

# Compare agency across tools
agency_by_tool = runs.groupby('tool_mode')['agency_proxy'].mean()
print(agency_by_tool)
```
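Beyond per-mode averages, student traits can be joined onto runs to ask, for example, whether documentation discipline predicts passing within each tool mode. The sketch below uses small synthetic frames so it runs standalone; the `student_id` join key is an assumption, so check `metadata.json` for the actual linking column before applying this to the real files.

```python
import pandas as pd

# Tiny synthetic stand-ins for students.csv and runs.csv.
# The student_id column is an assumed join key (verify in metadata.json).
students = pd.DataFrame({
    'student_id': [1, 2, 3],
    'doc_discipline': [0.2, 0.5, 0.9],
})
runs = pd.DataFrame({
    'student_id': [1, 1, 2, 2, 3, 3],
    'tool_mode': ['search', 'llm_agentic', 'search',
                  'llm_explicit', 'llm_explicit', 'llm_agentic'],
    'passed': [0, 0, 0, 1, 1, 1],
})

# Join runs to student traits, then compare pass rates for high- vs
# low-discipline students within each tool mode.
merged = runs.merge(students, on='student_id')
pass_by_discipline = (
    merged.assign(high_discipline=merged['doc_discipline'] > 0.5)
          .groupby(['tool_mode', 'high_discipline'])['passed']
          .mean()
)
print(pass_by_discipline)
```

With the real CSVs, replace the synthetic frames with `pd.read_csv(...)` calls; the merge-then-groupby pattern is unchanged.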