# Code Context Reference

Quick reference guide for understanding the AI Quiz Bot codebase at a glance.
## What Does This Bot Do?

The AI Quiz Bot converts:

- Images (screenshots, photos) → text extraction → quiz questions
- PDFs (documents, papers) → text extraction → quiz questions
- Text (copied/pasted) → direct analysis → quiz questions

All questions are sent as interactive Telegram polls with the correct answers revealed.
## Key Technologies
| Technology | Purpose | Version |
|---|---|---|
| python-telegram-bot | Telegram integration | 20.0+ |
| httpx | Async HTTP requests | 0.24.0+ |
| PyPDF2 | PDF text extraction | 3.0.0+ |
| python-dotenv | Environment variables | 1.0.0+ |
| Ollama Cloud API | AI (vision + chat) | REST API |
## Project Structure

```
aiquiz/
├── bot.py               # Main code (~400 lines)
├── requirements.txt     # Dependencies
├── .env                 # Config (PRIVATE - never commit)
├── .env.example         # Config template
├── .gitignore           # Git ignore rules
├── README.md            # User documentation ← START HERE
├── DEVELOPER_GUIDE.md   # Code explanation (THIS FILE)
└── ARCHITECTURE.md      # System design & API specs
```
## Configuration (.env)

```
TELEGRAM_BOT_TOKEN=your_bot_token_from_botfather
OLLAMA_HOST=https://ollama.com
OLLAMA_API_KEY=your_api_key_from_ollama_cloud
VISION_MODEL=ministral-3:8b    # For image analysis
CHAT_MODEL=ministral-3:14b     # For quiz generation
```
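These variables are typically loaded at startup with python-dotenv. A minimal sketch, assuming the variable names from the `.env` template above (the fallback defaults are illustrative):

```python
import os

from dotenv import load_dotenv

# Read key=value pairs from .env into the process environment.
load_dotenv()

TELEGRAM_BOT_TOKEN = os.getenv("TELEGRAM_BOT_TOKEN", "")
OLLAMA_HOST = os.getenv("OLLAMA_HOST", "https://ollama.com")   # fallback is an assumption
OLLAMA_API_KEY = os.getenv("OLLAMA_API_KEY", "")
VISION_MODEL = os.getenv("VISION_MODEL", "ministral-3:8b")
CHAT_MODEL = os.getenv("CHAT_MODEL", "ministral-3:14b")
```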
## Quick Start (For Developers)

1. Clone the repo and create `.env`:
   `copy .env.example .env`
2. Install dependencies:
   `pip install -r requirements.txt`
3. Add credentials to `.env`:
   - Telegram token from @BotFather
   - Ollama API key from https://ollama.com
4. Run the bot:
   `python bot.py`
5. Test on Telegram:
   - Send `/start` to see the welcome message
   - Send an image, PDF, or text to generate a quiz
## File: bot.py Structure

### Section 1: Imports & Setup (Lines 1-30)

- Import libraries
- Load environment variables
- Configure logging

### Section 2: Configuration (Lines 31-45)

- Read TELEGRAM_BOT_TOKEN, OLLAMA_HOST, etc.
- Set defaults

### Section 3: Text Extraction (Lines 46-103)

- `extract_text_from_image()`: image → text (Vision API)
- `extract_text_from_pdf()`: PDF → text (PyPDF2)

### Section 4: Quiz Generation (Lines 104-138)

- `generate_quiz_questions()`: text → quiz JSON (Chat API)

### Section 5: Message Handlers (Lines 139-270)

- `handle_file()`: handle images/PDFs
- `handle_text()`: handle text messages
- `send_quiz_poll()`: send an individual quiz question

### Section 6: Commands (Lines 271-365)

- `start_command()`: /start command
- `help_command()`: /help command

### Section 7: Bot Setup (Lines 366-400)

- `main()`: initialize and run the bot
## Request Flow Diagram

```
┌─────────────────────────────┐
│  User sends Image/PDF/Text  │
└──────────────┬──────────────┘
               │
       ┌───────┴───────┐
       │ Message Type? │
       └──┬─────────┬──┘
Image/PDF │         │ Text
          ▼         ▼
┌──────────────────────────────┐
│       Handler Function       │
│ handle_file() / handle_text()│
└──────────────┬───────────────┘
               │
┌──────────────┴───────────────┐
│         Extract Text         │
│    (Vision API or PyPDF2)    │
└──────────────┬───────────────┘
               │
┌──────────────┴───────────────┐
│     Parse Question Count     │
│   Default: 5, Range: 1-20    │
└──────────────┬───────────────┘
               │
┌──────────────┴───────────────┐
│   Generate Quiz Questions    │
│ (Chat API with JSON format)  │
└──────────────┬───────────────┘
               │
┌──────────────┴───────────────┐
│ Send Each Question as Poll   │
│  (Telegram send_poll API)    │
└──────────────┬───────────────┘
               │
┌──────────────┴───────────────┐
│   Send Completion Message    │
└──────────────────────────────┘
```
## Core Functions Quick Reference

### Image Analysis

```python
# Input:   image bytes
# Process: convert to base64 → send to Vision API
# Output:  extracted text
async def extract_text_from_image(image_bytes: bytes) -> str
```
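The base64 step can be sketched as a pure payload builder. This is an illustration, not code from bot.py: the helper name and prompt wording are assumptions, and the resulting dict would be POSTed to `{OLLAMA_HOST}/api/chat` with httpx.

```python
import base64


def build_vision_payload(image_bytes: bytes, model: str) -> dict:
    """Build an Ollama /api/chat request body with the image inlined as base64."""
    encoded = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "stream": False,  # one complete response instead of a token stream
        "messages": [{
            "role": "user",
            "content": "Extract all text from this image.",
            # Ollama's chat endpoint accepts base64-encoded images in this field.
            "images": [encoded],
        }],
    }
```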
### PDF Analysis

```python
# Input:   PDF bytes
# Process: read pages → extract text per page → combine
# Output:  extracted text
def extract_text_from_pdf(pdf_bytes: bytes) -> str
```
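A minimal sketch of what per-page extraction looks like with the PyPDF2 3.x API (the import is deferred so the snippet reads standalone; error handling for encrypted or malformed PDFs is omitted):

```python
from io import BytesIO


def extract_text_from_pdf(pdf_bytes: bytes) -> str:
    """Read every page and join the extracted text, skipping empty pages."""
    from PyPDF2 import PdfReader  # project dependency (see requirements.txt)

    reader = PdfReader(BytesIO(pdf_bytes))
    pages = (page.extract_text() or "" for page in reader.pages)
    return "\n\n".join(p.strip() for p in pages if p.strip())
```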
### Quiz Generation

```python
# Input:   text + number of questions
# Process: send to Chat API with a JSON-format prompt
# Output:  list of question dicts
async def generate_quiz_questions(text: str, num_questions: int) -> list
```
### Poll Sending

```python
# Input:   chat ID + question dict
# Process: format and send a Telegram poll
# Output:  None (the poll is sent directly)
async def send_quiz_poll(context, chat_id: int, question_data: dict) -> None
```
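A sketch of the body, using python-telegram-bot's `Bot.send_poll` in quiz mode. The truncation guards and the `is_anonymous` choice are illustrative assumptions:

```python
async def send_quiz_poll(context, chat_id: int, question_data: dict) -> None:
    """Send one generated question as a quiz-mode Telegram poll."""
    await context.bot.send_poll(
        chat_id=chat_id,
        question=question_data["question"][:300],  # Telegram caps poll questions at 300 chars
        options=question_data["options"][:10],     # and allows at most 10 options
        type="quiz",                               # quiz mode reveals the correct answer
        correct_option_id=question_data["correct_option_id"],
        is_anonymous=True,
    )
```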
## Quiz Question Format

```json
[
  {
    "question": "What is 2+2?",
    "options": ["3", "4", "5", "6"],
    "correct_option_id": 1
  }
]
```

Requirements:

- `question`: max 300 characters
- `options`: max 10 options
- `correct_option_id`: 0-based index of the correct answer
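A validator for these constraints can reject malformed model output before anything is sent to Telegram. The helper name is illustrative; the minimum of 2 options and the 100-character per-option cap are Telegram's own poll limits:

```python
def validate_question(q: dict) -> bool:
    """Check one generated question dict against Telegram poll limits."""
    return (
        isinstance(q.get("question"), str) and 0 < len(q["question"]) <= 300
        and isinstance(q.get("options"), list) and 2 <= len(q["options"]) <= 10
        and all(isinstance(o, str) and 0 < len(o) <= 100 for o in q["options"])
        and isinstance(q.get("correct_option_id"), int)
        and 0 <= q["correct_option_id"] < len(q["options"])
    )
```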
## External APIs Used

### 1. Ollama Cloud API

```
POST https://ollama.com/api/chat
Authorization: Bearer {OLLAMA_API_KEY}
```

Used for: image analysis and quiz generation.

### 2. Telegram Bot API

```
https://api.telegram.org/bot{TOKEN}/sendPoll
```

Used for: sending messages and polls, and downloading files.
## Error Handling Pattern

```python
try:
    # API call or processing
    result = await some_function()
except SpecificError as e:
    logger.error(f"Error: {e}")
    # Handle gracefully
    await message.reply_text("User-friendly error message")
except Exception as e:
    logger.error(f"Unexpected error: {e}")
    # Generic error handling
```
## User Interface Messages (Arabic)

- `/start` → welcome message with instructions
- `/help` → FAQ and common questions
- Image → processing status messages
- PDF → processing status messages
- Text → processing status messages
- Complete → completion message
- Error → error-specific messages
## Security Best Practices

**DO:**

- Store credentials in the `.env` file
- Use environment variables for secrets
- Validate file types and sizes
- Use HTTPS for API calls
- Add rate limiting

**DON'T:**

- Hardcode API keys in code
- Commit the `.env` file
- Download files of unlimited size
- Trust unvalidated user input
- Use unencrypted connections
## How to Modify Common Things

### Change Default Question Count

```python
# In the parse_num_questions() function
return 5  # Change this to your preferred default
```
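For context, here is a hypothetical `parse_num_questions()` consistent with the documented behavior (default 5, clamped to 1-20); the regex and clamping strategy are assumptions, not the actual implementation:

```python
import re


def parse_num_questions(text: str, default: int = 5) -> int:
    """Pull a requested question count out of the user's message, clamped to 1-20."""
    match = re.search(r"\b(\d{1,3})\b", text)
    if not match:
        return default
    return max(1, min(20, int(match.group(1))))
```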
### Change Bot Language

```python
# In start_command() and help_command()
welcome_message = """Your Arabic/English message here"""
```

### Change Models

Edit `.env`:

```
VISION_MODEL=different-model
CHAT_MODEL=different-model
```

### Add New Prompt Instructions

```python
# In generate_quiz_questions()
system_prompt = """Your new instructions here"""
```

### Support New File Types

```python
# In handle_file()
elif file_name.endswith('.txt'):
    # Handle text files
```
## Debugging Tips

### Check if the API Key Is Valid

```shell
# Test with curl
curl -X POST https://ollama.com/api/chat \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"ministral-3:14b","messages":[{"role":"user","content":"test"}],"stream":false}'
```

### Check the Bot Token

```shell
# Get bot info
curl https://api.telegram.org/botYOUR_TOKEN/getMe
```

### Enable Debug Logging

```python
# At the top of bot.py
logging.basicConfig(level=logging.DEBUG)  # Change INFO to DEBUG
```

### Print Variable Values

```python
logger.info(f"Debug - extracted_text: {text[:100]}")  # Log the first 100 chars
```
## Related Documentation

- README.md: user guide and features
- DEVELOPER_GUIDE.md: detailed code explanation
- ARCHITECTURE.md: system design and API specs
## Common Questions

**Q: Why use async?**
A: Non-blocking operations let the bot handle multiple users simultaneously.

**Q: Why base64 for images?**
A: The Ollama API requires this format for images embedded in JSON requests.

**Q: Why a 1-second delay between polls?**
A: To avoid Telegram rate limiting when sending several polls in a row.

**Q: Can I increase the maximum number of questions?**
A: Yes, change the limit in parse_num_questions() from 20 to any number.

**Q: Can I use local Ollama instead of the cloud?**
A: Yes, set OLLAMA_HOST to http://localhost:11434 in .env.

**Q: How do I add user authentication?**
A: Check the user_id in handlers and maintain an approved list.
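A minimal allow-list sketch of that idea; the set contents and helper name are illustrative. Call it at the top of each handler and return early for unknown users:

```python
# Hypothetical allow-list; replace with real Telegram user IDs.
APPROVED_USERS = {123456789}


def is_authorized(user_id: int) -> bool:
    """True if the Telegram user ID is on the approved list."""
    return user_id in APPROVED_USERS


# In a handler:
#   if not is_authorized(update.effective_user.id):
#       return
```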
## Troubleshooting Checklist

- [ ] `.env` file created with the correct credentials
- [ ] Internet connection working
- [ ] Telegram bot token valid (test with getMe)
- [ ] Ollama API key valid (test with curl)
- [ ] Python 3.9+ installed
- [ ] All dependencies installed (`pip install -r requirements.txt`)
- [ ] No other bot instance running on the same token
- [ ] OLLAMA_HOST URL is correct (https://ollama.com)
## Next Steps

- First time? Read README.md
- Want to code? Read DEVELOPER_GUIDE.md
- Need to understand the design? Read ARCHITECTURE.md
- Have questions? Check `/help` in the bot or this file
Last Updated: February 2026
For Version: 1.0
Python: 3.9+