---
title: AI Quiz Bot
emoji: 🧠
colorFrom: blue
colorTo: purple
sdk: docker
pinned: false
---
# 🧠 AI Quiz Bot - Telegram Quiz Generator
A powerful Telegram bot that converts images, PDFs, or text into interactive multiple-choice quiz questions using Ollama Cloud API.
## Table of Contents
- Overview
- Features
- Prerequisites
- Installation
- Configuration
- Usage
- Architecture
- API Integration
- Error Handling
- Troubleshooting
## Overview
The AI Quiz Bot is a Telegram bot that automates the creation of quiz questions from various content sources:
- **Images**: Extract text from screenshots, photos, and scanned documents
- **PDFs**: Process PDF documents and extract text
- **Text**: Generate quizzes directly from typed text
The bot uses Ollama Cloud API to:
- Analyze images and extract text using vision models
- Generate intelligent, well-formatted multiple-choice questions using chat models
All questions are delivered as interactive Telegram polls with the correct answer marked.
## Features

### Core Functionality
- **Multi-Format Support**: Images (JPG, PNG, WebP), PDFs, and plain text
- **Customizable Question Count**: 1-20 questions per request
- **Intelligent Question Generation**: AI-powered, contextual questions
- **Interactive Polls**: Each question as a Telegram poll with correct answer
- **Multiple Languages**: Arabic, English, and other languages
- **Cloud-Based**: Uses Ollama Cloud API (no local installation needed)

### User Experience
- **Easy to Use**: Simple commands (`/start`, `/help`)
- **Progress Feedback**: Real-time status messages
- **Friendly Interface**: Arabic instructions and messages
- **Asynchronous Processing**: Non-blocking operations
- **Error Handling**: Graceful error messages and recovery
## Prerequisites

### System Requirements
- Python 3.9+
- Internet connection (for the Ollama Cloud API)
- Telegram bot token (from @BotFather)
- Ollama Cloud API key and host

### Software Dependencies
- `python-telegram-bot` - Telegram bot framework
- `PyPDF2` - PDF text extraction
- `httpx` - Async HTTP client for API calls
- `python-dotenv` - Environment variable management
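If `requirements.txt` is missing, the dependency list above translates into a file like the following (unpinned here for illustration; the project's actual file may pin specific versions):

```text
python-telegram-bot
PyPDF2
httpx
python-dotenv
```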
## Installation

### Step 1: Clone/Download the Project
Place the project folder anywhere and change into it:

```bash
cd path/to/aiquiz
```

### Step 2: Create a Virtual Environment (Recommended)

```bash
# Windows
python -m venv venv
venv\Scripts\activate

# Linux/Mac
python3 -m venv venv
source venv/bin/activate
```

### Step 3: Install Dependencies

```bash
pip install -r requirements.txt
```

### Step 4: Create the .env File

```bash
# Copy the template
copy .env.example .env   # Windows
cp .env.example .env     # Linux/Mac
```

### Step 5: Configure Environment Variables
Edit `.env` with your credentials (see the Configuration section).
## Configuration

### Environment Variables (.env file)

```bash
# Your Telegram bot token (from @BotFather)
TELEGRAM_BOT_TOKEN=your-telegram-bot-token

# Ollama Cloud API endpoint
OLLAMA_HOST=https://ollama.com

# Ollama API authentication key
OLLAMA_API_KEY=your-ollama-api-key

# Vision model for image analysis
VISION_MODEL=ministral-3:8b

# Chat model for quiz generation
CHAT_MODEL=ministral-3:14b
```

Use placeholders like the above in examples; keep real credentials only in your local `.env`.
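For reference, here is a minimal sketch of how these variables might be read at startup. `python-dotenv`'s `load_dotenv()` fills the process environment from `.env`, after which plain `os.environ` access works; `load_config` is a hypothetical helper name, not necessarily the code in `bot.py`:

```python
import os

def load_config() -> dict:
    """Hypothetical helper: read settings after load_dotenv() has run."""
    token = os.environ.get("TELEGRAM_BOT_TOKEN")
    if not token:
        # Mirrors the startup error described under Troubleshooting.
        raise RuntimeError("TELEGRAM_BOT_TOKEN environment variable is not set!")
    return {
        "token": token,
        "ollama_host": os.environ.get("OLLAMA_HOST", "https://ollama.com"),
        "ollama_api_key": os.environ.get("OLLAMA_API_KEY", ""),
        "vision_model": os.environ.get("VISION_MODEL", "ministral-3:8b"),
        "chat_model": os.environ.get("CHAT_MODEL", "ministral-3:14b"),
    }
```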
### Getting Your Credentials

**Telegram Bot Token:**
1. Open Telegram and search for @BotFather
2. Type `/newbot`
3. Follow the prompts
4. Copy the token and paste it into `.env`

**Ollama API Credentials:**
1. Visit https://ollama.com
2. Go to account settings → API keys
3. Create a new API key (copy the full key)
4. Note your API host URL (usually https://ollama.com)
5. Add both to `.env`
## Running the Bot

```bash
python bot.py
```

The bot will start polling Telegram for messages.
## Usage

### User Commands

| Command | Description |
|---|---|
| `/start` | Show welcome message with usage instructions |
| `/help` | Show FAQ and common questions |
### How to Use

**Method 1: From Images**
1. Send an image (screenshot, photo, etc.)
2. Add a caption with the number of questions (optional)
3. Bot extracts text and generates quiz

**Method 2: From PDFs**
1. Send a PDF file
2. Add a caption with the number of questions (optional)
3. Bot extracts text and generates quiz

**Method 3: From Text**
1. Send a text message
2. Optional: start the first line with a number to set the question count
3. Bot generates quiz from text
### Examples
- Send an image with caption `5` → generates 5 questions
- Send a PDF with caption `10` → generates 10 questions
- Send a text message whose first line is `8` → generates 8 questions
- Default (no number): 5 questions generated
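The count-parsing rules above can be sketched as follows (a simplified stand-in for the `parse_num_questions` function named in the Data Flow section; the real logic lives in `bot.py`):

```python
from typing import Optional

DEFAULT_QUESTIONS = 5   # used when no number is given
MAX_QUESTIONS = 20      # documented upper limit per request

def parse_num_questions(text: Optional[str]) -> int:
    """Read a question count from a caption or a message's first line."""
    if not text or not text.strip():
        return DEFAULT_QUESTIONS
    first_line = text.strip().splitlines()[0].strip()
    if first_line.isdigit():
        # Clamp to the documented 1-20 range.
        return min(max(int(first_line), 1), MAX_QUESTIONS)
    return DEFAULT_QUESTIONS
```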
## Architecture

### Project Structure

```
aiquiz/
├── bot.py               # Main bot code (see DEVELOPER_GUIDE.md)
├── requirements.txt     # Python dependencies
├── .env                 # Environment variables (DO NOT COMMIT)
├── .env.example         # Template for .env
├── .gitignore           # Git ignore file
├── README.md            # This file
├── DEVELOPER_GUIDE.md   # Detailed code explanation
└── ARCHITECTURE.md      # System design documentation
```
### Data Flow

```
User Input (Image/PDF/Text)
        ↓
Message Handler
├── handle_file()   [for images/PDFs]
├── handle_text()   [for text messages]
└── Commands        [/start, /help]
        ↓
File/Text Processing
├── extract_text_from_image()  [Vision API]
├── extract_text_from_pdf()    [PyPDF2]
└── parse_num_questions()      [Parse input]
        ↓
Quiz Generation
└── generate_quiz_questions()  [Chat API → JSON]
        ↓
Delivery
├── send_quiz_poll()  [Each question as a poll]
└── Completion message
```
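At the delivery step, each generated question maps onto a Telegram quiz poll. A sketch of that mapping using python-telegram-bot's `send_poll` keyword arguments and Telegram's documented poll limits follows; the `item` dict keys (`question`, `options`, `correct_index`) are an assumed internal format, not necessarily what `bot.py` uses:

```python
def build_poll_kwargs(item: dict) -> dict:
    """Turn one generated question into kwargs for bot.send_poll()."""
    question = item["question"][:300]                  # Telegram limit: 300 chars
    options = [opt[:100] for opt in item["options"]]   # 100 chars per option
    if not 2 <= len(options) <= 10:                    # Telegram allows 2-10 options
        raise ValueError("polls need between 2 and 10 options")
    return {
        "question": question,
        "options": options,
        "type": "quiz",                                # quiz mode marks one answer correct
        "correct_option_id": item["correct_index"],
        "is_anonymous": True,
    }
```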
## API Integration

### Ollama Cloud API Endpoint

- **Endpoint:** `POST {OLLAMA_HOST}/api/chat`
- **Authentication:** Bearer token in the `Authorization` header: `Authorization: Bearer {OLLAMA_API_KEY}`

### Request Format

```json
{
  "model": "model-name",
  "messages": [
    {"role": "system", "content": "system instructions"},
    {"role": "user", "content": "user message or image"}
  ],
  "stream": false,
  "format": "json"
}
```

The `format` field is optional; setting it to `"json"` asks the model to return valid JSON.
### Response Format

```json
{
  "message": {
    "role": "assistant",
    "content": "response content"
  }
}
```
For more details, see ARCHITECTURE.md
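Putting the two formats together, the request can be assembled and the reply unpacked as below. `make_chat_request` and `extract_content` are illustrative names (no network call is made here); the payload and response shapes match the formats shown above:

```python
def make_chat_request(host, api_key, model, system_prompt, user_content):
    """Build the URL, headers, and JSON payload for one /api/chat call."""
    url = f"{host}/api/chat"
    headers = {"Authorization": f"Bearer {api_key}"}
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_content},
        ],
        "stream": False,
        "format": "json",   # ask the model for JSON output
    }
    return url, headers, payload

def extract_content(response_json: dict) -> str:
    """Pull the assistant text out of the response shape shown above."""
    return response_json["message"]["content"]
```

In `bot.py`, the payload would be sent with `httpx` (the project's async HTTP client) and the JSON response handed to `extract_content`.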
## Error Handling

### Common Errors & Solutions

| Error | Cause | Solution |
|---|---|---|
| `getaddrinfo failed` | Network/DNS issue | Verify internet and API URL |
| JSON parse failed | Invalid API response | Check API key and retry |
| No text extracted | Unreadable file | Use a clearer image/PDF |
| API timeout | Slow network | Increase timeout or retry |
| Invalid file type | Wrong format | Use JPG/PNG/PDF/text |
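For the "JSON parse failed" row, one common softening strategy is shown below: models sometimes wrap their JSON in extra prose, so a fallback can try the outermost `{...}` span before giving up. This is an illustrative recovery sketch, not necessarily what `bot.py` does:

```python
import json

def parse_model_json(content: str):
    """Parse model output as JSON, salvaging an embedded object if needed."""
    try:
        return json.loads(content)
    except json.JSONDecodeError:
        start, end = content.find("{"), content.rfind("}")
        if start != -1 and end > start:
            # May still raise if the inner span is not valid JSON.
            return json.loads(content[start:end + 1])
        raise
```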
### Messages to Users
The bot's error replies are sent in Arabic; they correspond to:
- "Unsupported file type!"
- "Couldn't extract any text" (text extraction failed)
- "Couldn't generate questions" (quiz generation failed)
- "The text is a bit too short!" (minimum 20 characters)
## Troubleshooting

### Bot Won't Start
**Error:** `TELEGRAM_BOT_TOKEN environment variable is not set!`
**Solution:** Create a `.env` file containing your bot token.

### API Connection Failed
**Error:** `getaddrinfo failed`
**Solutions:**
- Check your internet connection
- Verify `OLLAMA_HOST` is correct (https://ollama.com)
- Verify `OLLAMA_API_KEY` is valid
- Test the API manually with curl

### Poor Quiz Quality
**Causes:** low-quality images, short text
**Solutions:**
- Use higher-resolution images
- Provide more detailed text (minimum 20 characters)
- Use clearer, well-formatted PDFs

### Bot Responses Are Slow
**Causes:** large files, slow internet, API processing time
**Solutions:**
- Send smaller files
- Reduce the number of questions
- Be patient (generation can take 10-30 seconds)
## For Developers
See these files for detailed information:
- DEVELOPER_GUIDE.md - Code explanation, functions, and how to modify
- ARCHITECTURE.md - System design, API flow, data structures
## Security Notes
- **Never commit `.env`** - it contains sensitive credentials
- **Keep API keys private** - don't share them in code or issues
- **Use HTTPS** - all API communications are encrypted
- **Validate input** - all file types and sizes are checked
## License
MIT License - feel free to use and modify!

---
**Last Updated:** February 2026
**Bot Version:** 1.0
**API:** Ollama Cloud API