Demystifying the oracle: A "20 Questions" game to promote AI ethics and literacy
Abstract
Students engage with a modified "20 Questions" game to observe how large language models generate responses based on probabilistic patterns rather than fixed facts, promoting understanding of AI's stochastic nature through hands-on experimentation.
As Generative AI becomes a key component of physics education, a significant ethical challenge has emerged: students tend to anthropomorphize Large Language Models (LLMs), treating them as authoritative "oracles" that retrieve fixed facts from an internal database. In reality, LLMs operate fundamentally as probabilistic engines. This paper describes the design and implementation of a didactic activity, a reduced version of the "20 Questions" game, aimed at making this stochastic nature directly observable. Unlike a human player, who fixes a target object at the start of the game, the model generates answers based solely on local coherence with the interaction history. Using features such as re-sampling and history rewinding, students act as experimenters, observing how identical interaction histories can yield diverging narrative paths. We discuss how mapping these behaviors onto familiar physics concepts provides the epistemic scaffolding needed to promote informed skepticism, framing the verification of AI outputs not merely as a compliance rule but as a technical necessity that follows from the system's probabilistic nature.
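The core observation of the activity, that an identical interaction history can yield diverging answers under re-sampling, can be illustrated with a minimal toy sketch. The code below is not a real LLM; the "model" is just a hypothetical probability table over yes/no answers, and the answer distribution and history strings are illustrative assumptions. Like an LLM, it samples each answer from a distribution rather than retrieving a stored fact, so re-running the same history produces different outputs.

```python
import random

# Hypothetical answer distribution (an LLM would compute this from the
# interaction history; here it is fixed purely for illustration).
ANSWER_DIST = {"yes": 0.6, "no": 0.4}

def sample_answer(history, rng):
    """Sample one answer given the interaction history.

    The history is ignored beyond defining the distribution, mimicking
    generation driven by probabilities rather than a fixed target object.
    """
    r = rng.random()
    cumulative = 0.0
    for answer, p in ANSWER_DIST.items():
        cumulative += p
        if r < cumulative:
            return answer
    return answer  # fallback for floating-point edge cases

history = ["Is it alive?", "Is it bigger than a breadbox?"]

# "Re-sampling": identical history, independent random draws.
draws = [sample_answer(history, random.Random(seed)) for seed in range(10)]
print(draws)  # a mix of "yes" and "no" -- the history does not fix the answer
```

Running the loop shows both answers appearing across draws, which is the classroom point: the same question history does not commit the model to a single "true" hidden object.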