---
title: Hangman AI Demo
emoji: 🤖
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 4.44.0
app_file: app.py
pinned: false
license: mit
short_description: AI agent plays hangman using transformer
---
# Hangman AI Demo
An AI agent that plays hangman using a trained transformer model. The agent combines information theory, dictionary filtering, and pattern analysis to choose high-value letter guesses.
## Features

- **Single Game Mode**: Play one hangman game and see detailed turn-by-turn analysis
- **Multi-Game Demo**: Run multiple games and view comprehensive statistics
- **Configurable Parameters**: Set word length, number of trials, and difficulty (max wrong guesses)
- **Real-time Logging**: See every guess, response, and game state in real time
- **Performance Analytics**: Win rates, average turns, and detailed game results
## How It Works
The AI agent combines multiple intelligent strategies:
- **Transformer Model**: A character-level transformer trained on hangman games
- **Dictionary Filtering**: Filters possible words based on the current pattern and wrong letters
- **Information Gain**: Calculates which letter provides the most information about the hidden word
- **Frequency Analysis**: Uses letter frequency in the remaining candidate words
- **Context Awareness**: Considers letter position and context for better guesses
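The information-gain idea above can be sketched as follows: each candidate letter partitions the remaining candidate words by the reveal pattern it would produce, and the letter whose partition has the highest entropy is expected to reveal the most. This is an illustrative sketch, not the app's actual code; `best_guess` is a hypothetical helper.

```python
import math
from collections import Counter

def best_guess(candidates, guessed):
    """Pick the unguessed letter with the highest expected information gain
    over the remaining candidate words (illustrative sketch)."""
    best, best_gain = None, -1.0
    n = len(candidates)
    for letter in "abcdefghijklmnopqrstuvwxyz":
        if letter in guessed:
            continue
        # Partition candidates by the set of positions this letter would reveal.
        buckets = Counter(
            tuple(i for i, c in enumerate(w) if c == letter) for w in candidates
        )
        # Entropy of the partition = expected bits of information from the guess.
        gain = -sum((k / n) * math.log2(k / n) for k in buckets.values())
        if gain > best_gain:
            best, best_gain = letter, gain
    return best
```

For example, with candidates `["cat", "car"]` and `c`/`a` already guessed, `r` and `t` each split the candidates in half (1 bit of information), while every other letter reveals nothing.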
## Usage

### Single Game

1. Choose word length (or "Any" for random)
2. Set max wrong guesses (difficulty level)
3. Click "Play Single Game" to watch the AI play
4. View the detailed turn-by-turn game log
### Multi-Game Demo

1. Set the number of games to play (1-20)
2. Configure word length and difficulty
3. Click "Run Demo" to see multiple games
4. View comprehensive statistics and all game logs
## Technical Details
The agent uses a combination of:
- **Length Priors**: Uses word length to inform initial guesses
- **Pattern Matching**: Filters the dictionary based on the currently revealed pattern
- **Information Theory**: Calculates the expected information from each guess
- **Frequency Analysis**: Uses letter frequency in the remaining candidates
- **Context Awareness**: Considers letter context and position
## Model Architecture

- **Character Embeddings**: Maps characters to dense vectors
- **Positional Encoding**: Adds position information
- **Transformer Encoder**: Processes the sequence with self-attention
- **Output Head**: Produces logits for the 26 letters
- **Masking**: Prevents guessing already-tried letters
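The architecture above can be sketched in PyTorch as follows. All sizes (`d_model`, layer counts, a learned positional encoding, mean pooling) are assumptions for illustration; the actual model's hyperparameters may differ.

```python
import torch
import torch.nn as nn

class HangmanTransformer(nn.Module):
    """Minimal sketch of the described architecture (hypothetical sizes)."""

    def __init__(self, vocab_size=28, d_model=64, nhead=4, num_layers=2, max_len=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)           # character embeddings
        self.pos = nn.Parameter(torch.zeros(max_len, d_model))   # learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)  # self-attention stack
        self.head = nn.Linear(d_model, 26)                       # logits over a-z

    def forward(self, tokens, guessed_mask):
        # tokens: (B, L) character ids; guessed_mask: (B, 26) bool, True = already tried
        x = self.embed(tokens) + self.pos[: tokens.size(1)]
        h = self.encoder(x).mean(dim=1)                          # pool over positions
        logits = self.head(h)
        # Masking step: already-tried letters can never be re-guessed.
        return logits.masked_fill(guessed_mask, float("-inf"))
```

With the mask applied before the argmax/softmax, the highest-scoring letter is always one the agent has not tried yet.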
## Performance
The agent typically achieves:
- High win rates on common word lengths
- Efficient solving (low average turns)
- Robust performance across different word categories
- Graceful fallback to baseline strategies when needed
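The fallback behavior mentioned above could work like this: when dictionary filtering leaves no candidates, the agent falls back to guessing unseen letters in overall English frequency order. This is a hypothetical sketch of that baseline, not the app's actual fallback code.

```python
# English letters in rough overall frequency order.
FALLBACK_ORDER = "etaoinshrdlcumwfgypbvkjxqz"

def fallback_guess(guessed):
    """Return the most common English letter not yet tried (baseline sketch)."""
    for letter in FALLBACK_ORDER:
        if letter not in guessed:
            return letter
    return None  # every letter has been tried
```

A frequency-order baseline like this guarantees the agent always has a reasonable guess, even when the hidden word is outside its dictionary.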
## Try It Out!
Use the interface above to:
- Watch the AI play single games with detailed analysis
- Run multi-game demos to see performance statistics
- Experiment with different word lengths and difficulty levels
- Observe the agent's decision-making process in real-time
The demo applies modern machine-learning techniques to a classic word game, showing how a trained model combined with classical heuristics like dictionary filtering and information gain can play hangman at a strong level.