CODE_CONTEXT.md

πŸ“– Code Context Reference

Quick reference guide for understanding the AI Quiz Bot codebase at a glance.

🎯 What Does This Bot Do?

The AI Quiz Bot converts:

  • Images (screenshots, photos) β†’ Text extraction β†’ Quiz questions
  • PDFs (documents, papers) β†’ Text extraction β†’ Quiz questions
  • Text (copied/pasted) β†’ Direct analysis β†’ Quiz questions

All questions are sent as interactive Telegram polls with correct answers revealed.

πŸ”‘ Key Technologies

Technology            Purpose                  Version
─────────────────────────────────────────────────────
python-telegram-bot   Telegram integration     20.0+
httpx                 Async HTTP requests      0.24.0+
PyPDF2                PDF text extraction      3.0.0+
python-dotenv         Environment variables    1.0.0+
Ollama Cloud API      AI (vision + chat)       REST API

πŸ“‚ Project Structure

aiquiz/
β”œβ”€β”€ bot.py                  # Main code (~400 lines)
β”œβ”€β”€ requirements.txt        # Dependencies
β”œβ”€β”€ .env                    # Config (PRIVATE - never commit)
β”œβ”€β”€ .env.example            # Config template
β”œβ”€β”€ .gitignore              # Git ignore rules
β”œβ”€β”€ README.md               # User documentation ← START HERE
β”œβ”€β”€ DEVELOPER_GUIDE.md      # Code explanation (THIS FILE)
└── ARCHITECTURE.md         # System design & API specs

βš™οΈ Configuration (.env)

TELEGRAM_BOT_TOKEN=your_bot_token_from_botfather
OLLAMA_HOST=https://ollama.com
OLLAMA_API_KEY=your_api_key_from_ollama_cloud
VISION_MODEL=ministral-3:8b              # For image analysis
CHAT_MODEL=ministral-3:14b              # For quiz generation
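Section 2 of bot.py presumably reads these values with plain `os.getenv`; a minimal sketch of that pattern (the fallback defaults mirror the values above and are assumptions, not guaranteed to match bot.py exactly):

```python
import os

# In bot.py, python-dotenv's load_dotenv() runs first to populate os.environ
# from the .env file; after that, plain os.getenv reads suffice.
OLLAMA_HOST = os.getenv("OLLAMA_HOST", "https://ollama.com")
OLLAMA_API_KEY = os.getenv("OLLAMA_API_KEY", "")
VISION_MODEL = os.getenv("VISION_MODEL", "ministral-3:8b")
CHAT_MODEL = os.getenv("CHAT_MODEL", "ministral-3:14b")
TELEGRAM_BOT_TOKEN = os.getenv("TELEGRAM_BOT_TOKEN")  # no sensible default
```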

πŸš€ Quick Start (For Developers)

  1. Clone repo and create .env:

    cp .env.example .env         # macOS/Linux (use `copy` on Windows)
    
  2. Install dependencies:

    pip install -r requirements.txt
    
  3. Add credentials to .env:

  4. Run bot:

    python bot.py
    
  5. Test on Telegram:

    • Send /start to see welcome
    • Send image/PDF/text to generate quiz

πŸ“‹ File: bot.py Structure

Section 1: Imports & Setup (Lines 1-30)

  • Import libraries
  • Load environment variables
  • Configure logging

Section 2: Configuration (Lines 31-45)

  • Read TELEGRAM_BOT_TOKEN, OLLAMA_HOST, etc.
  • Set defaults

Section 3: Text Extraction (Lines 46-103)

extract_text_from_image()      # Image β†’ text (Vision API)
extract_text_from_pdf()        # PDF β†’ text (PyPDF2)

Section 4: Quiz Generation (Lines 104-138)

generate_quiz_questions()      # Text β†’ quiz JSON (Chat API)

Section 5: Message Handlers (Lines 139-270)

handle_file()                  # Handle images/PDFs
handle_text()                  # Handle text messages
send_quiz_poll()               # Send individual quiz question

Section 6: Commands (Lines 271-365)

start_command()               # /start command
help_command()                # /help command

Section 7: Bot Setup (Lines 366-400)

main()                        # Initialize and run bot
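A hedged sketch of what the main() wiring typically looks like with python-telegram-bot 20+, using the handler and config names listed above (the exact filters and ordering in bot.py may differ):

```python
def main() -> None:
    # Imported lazily here so the sketch parses without the package installed;
    # bot.py would import these at the top of the file.
    from telegram.ext import (ApplicationBuilder, CommandHandler,
                              MessageHandler, filters)

    app = ApplicationBuilder().token(TELEGRAM_BOT_TOKEN).build()
    app.add_handler(CommandHandler("start", start_command))
    app.add_handler(CommandHandler("help", help_command))
    app.add_handler(MessageHandler(filters.PHOTO | filters.Document.PDF, handle_file))
    app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, handle_text))
    app.run_polling()  # blocks until interrupted
```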

πŸ”„ Request Flow Diagram

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚      User sends Image/PDF/Text              β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                     ↓
             β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
             β”‚ Message Type? β”‚
             β””β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”˜
                 β”‚       β”‚
        Image/PDFβ”‚       β”‚Text
                 ↓       ↓
          β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
          β”‚ Handler Function       β”‚
          β”‚ handle_file()          β”‚ handle_text()
          β”‚ or handle_text()       β”‚
          β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                       ↓
          β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
          β”‚ Extract Text               β”‚
          β”‚ (Vision API or PyPDF2)     β”‚
          β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                       ↓
          β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
          β”‚ Parse Question Count       β”‚
          β”‚ Default: 5, Range: 1-20    β”‚
          β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                       ↓
          β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
          β”‚ Generate Quiz Questions    β”‚
          β”‚ (Chat API with JSON format)β”‚
          β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                       ↓
          β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
          β”‚ Send Each Question as Poll β”‚
          β”‚ (Telegram send_poll API)   β”‚
          β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                       ↓
          β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
          β”‚ Send Completion Message    β”‚
          β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

πŸ”‘ Core Functions Quick Reference

Image Analysis

# Input: Image bytes
# Process: Convert to base64 β†’ Send to Vision API
# Output: Extracted text
await extract_text_from_image(image_bytes: bytes) -> str
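The Vision API call amounts to building an Ollama /api/chat request body in which the image travels as a base64 string. A sketch of that payload construction (the helper name and prompt wording are assumptions; the httpx POST itself is omitted):

```python
import base64

def build_vision_payload(image_bytes: bytes, model: str = "ministral-3:8b") -> dict:
    """Shape raw image bytes into an Ollama /api/chat request body."""
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": "Extract all text from this image.",  # hypothetical prompt
            # Ollama expects images as a list of base64 strings on the message
            "images": [base64.b64encode(image_bytes).decode("ascii")],
        }],
        "stream": False,  # one complete reply, no streaming chunks
    }
```

bot.py would then POST this with `httpx.AsyncClient` to `{OLLAMA_HOST}/api/chat` with the `Authorization: Bearer` header and read the extracted text from the reply.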

PDF Analysis

# Input: PDF bytes
# Process: Read pages β†’ Extract text per page β†’ Combine
# Output: Extracted text
extract_text_from_pdf(pdf_bytes: bytes) -> str
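The page-by-page extraction step can be sketched with PyPDF2 3.x as follows (a minimal version; bot.py may add per-page error handling):

```python
def extract_text_from_pdf(pdf_bytes: bytes) -> str:
    """Read each page with PyPDF2 and join the extracted text."""
    import io
    from PyPDF2 import PdfReader  # lazy import so the sketch parses without PyPDF2

    reader = PdfReader(io.BytesIO(pdf_bytes))
    # extract_text() can return None for image-only pages; substitute ""
    pages = [page.extract_text() or "" for page in reader.pages]
    return "\n\n".join(pages).strip()
```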

Quiz Generation

# Input: Text + number of questions
# Process: Send to Chat API with JSON format prompt
# Output: JSON array of question objects
await generate_quiz_questions(text: str, num_questions: int) -> list
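Models often wrap their JSON in markdown fences or surrounding prose, so the Chat API reply usually needs a tolerant parse step before the question objects can be used. One plausible sketch (the helper name is hypothetical):

```python
import json
import re

def parse_quiz_json(reply: str) -> list:
    """Pull the first JSON array out of a model reply, fenced or not."""
    match = re.search(r"\[.*\]", reply, re.DOTALL)
    if not match:
        raise ValueError("no JSON array found in model reply")
    questions = json.loads(match.group(0))
    if not isinstance(questions, list):
        raise ValueError("expected a JSON array of question objects")
    return questions
```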

Poll Sending

# Input: Chat ID + question object
# Process: Format and send Telegram poll
# Output: None (sends message directly)
await send_quiz_poll(context, chat_id: int, question_data: dict) -> None
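Sending a quiz-mode poll boils down to mapping one question object onto Telegram's sendPoll parameters. A sketch of the kwargs bot.py likely passes (the truncation is defensive; the helper name is hypothetical):

```python
def build_poll_kwargs(chat_id: int, question_data: dict) -> dict:
    """Map one question object onto Telegram sendPoll parameters."""
    return {
        "chat_id": chat_id,
        "question": question_data["question"][:300],  # Telegram caps at 300 chars
        "options": question_data["options"][:10],     # at most 10 answer options
        "type": "quiz",                               # quiz mode reveals the answer
        "correct_option_id": question_data["correct_option_id"],
        "is_anonymous": True,
    }
```

The handler would then call `await context.bot.send_poll(**build_poll_kwargs(chat_id, q))` for each question, sleeping briefly between polls to stay under rate limits.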

πŸ“Š Quiz Question Format

[
  {
    "question": "What is 2+2?",
    "options": ["3", "4", "5", "6"],
    "correct_option_id": 1
  }
]

Requirements:

  • question: max 300 chars
  • options: max 10 options
  • correct_option_id: 0-based index of correct answer
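These constraints are worth checking before calling sendPoll, since the model can return malformed objects. A minimal validator sketch (the 2-option minimum and 100-char option cap are Telegram's limits; the function itself is hypothetical):

```python
def is_valid_question(q: dict) -> bool:
    """Check one question object against the Telegram poll constraints above."""
    try:
        return (
            0 < len(q["question"]) <= 300
            and 2 <= len(q["options"]) <= 10
            and all(0 < len(opt) <= 100 for opt in q["options"])  # per-option cap
            and 0 <= q["correct_option_id"] < len(q["options"])
        )
    except (KeyError, TypeError):
        return False  # missing keys or wrong types fail validation
```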

πŸ”Œ External APIs Used

1. Ollama Cloud API

POST https://ollama.com/api/chat
Authorization: Bearer {OLLAMA_API_KEY}

Used for: Image analysis + Quiz generation

2. Telegram Bot API

https://api.telegram.org/bot{TOKEN}/sendPoll

Used for: Send messages, polls, download files

πŸ›‘οΈ Error Handling Pattern

try:
    # API call or processing
    result = await some_function()
except SpecificError as e:
    logger.error(f"Error: {e}")
    # Handle gracefully
    await message.reply_text("User-friendly error message")
except Exception as e:
    logger.error(f"Unexpected error: {e}")
    # Generic error handling

🎨 User Interface Messages (Arabic)

/start      β†’ Welcome with instructions
/help       β†’ FAQ and common questions
πŸ“Έ Image    β†’ Processing status messages
πŸ“„ PDF      β†’ Processing status messages
✍️ Text     β†’ Processing status messages
βœ… Complete β†’ Completion message
❌ Error    β†’ Error-specific messages

πŸ” Security Best Practices

βœ… DO:

  • Store credentials in .env file
  • Use environment variables for secrets
  • Validate file types and sizes
  • Use HTTPS for API calls
  • Add rate limiting

❌ DON'T:

  • Hardcode API keys in code
  • Commit .env file
  • Download unlimited file sizes
  • Trust unvalidated user input
  • Use unencrypted connections

πŸ“ How to Modify Common Things

Change Default Question Count

# In parse_num_questions() function
return 5  # Change this to your preferred default

Change Bot Language

# In start_command() and help_command()
welcome_message = """Your Arabic/English message here"""

Change Models

Edit .env:

VISION_MODEL=different-model
CHAT_MODEL=different-model

Add New Prompt Instructions

# In generate_quiz_questions()
system_prompt = """Your new instructions here"""

Support New File Types

# In handle_file()
elif file_name.endswith('.txt'):
    # Handle text files

🚨 Debugging Tips

Check if API Key is Valid

# Test with curl
curl -X POST https://ollama.com/api/chat \
  -H "Authorization: Bearer YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model":"ministral-3:14b","messages":[{"role":"user","content":"test"}],"stream":false}'

Check Bot Token

# Get bot info
curl https://api.telegram.org/botYOUR_TOKEN/getMe

Enable Debug Logging

# At top of bot.py
logging.basicConfig(level=logging.DEBUG)  # DEBUG instead of the default INFO

Print Variable Values

logger.info(f"Debug - extracted_text: {text[:100]}")  # Log first 100 chars

πŸ“š Related Documentation

  • README.md - User guide and features
  • DEVELOPER_GUIDE.md - Detailed code explanation
  • ARCHITECTURE.md - System design and API specs

πŸ€” Common Questions

Q: Why use async?
A: Non-blocking operations. Bot can handle multiple users simultaneously.

Q: Why base64 for images?
A: Ollama API requires this format for images in JSON requests.

Q: Why 1-second delay between polls?
A: Prevent Telegram rate limiting. Recommended by Telegram.

Q: Can I increase max questions?
A: Yes, change the limit in parse_num_questions() from 20 to any number.

Q: Can I use local Ollama instead of cloud?
A: Yes, change OLLAMA_HOST to http://localhost:11434 in .env.

Q: How do I add user authentication?
A: Check user_id in handlers and maintain approved list.
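A minimal sketch of that approved-list check (bot.py does not currently implement this; the `ALLOWED_USER_IDS` set and its contents are assumptions):

```python
ALLOWED_USER_IDS = {123456789}  # hypothetical: fill with real Telegram user IDs

def is_allowed(user_id: int) -> bool:
    """Gate handlers on a static allow-list of Telegram user IDs."""
    return user_id in ALLOWED_USER_IDS
```

Each handler would then start with `if not is_allowed(update.effective_user.id): return`.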

πŸ“ž Troubleshooting Checklist

  • .env file created with correct credentials
  • Internet connection working
  • Telegram bot token valid (test with getMe)
  • Ollama API key valid (test with curl)
  • Python 3.9+ installed
  • All dependencies installed (pip install -r requirements.txt)
  • No other bot running on same token
  • OLLAMA_HOST URL is correct (https://ollama.com)

🎯 Next Steps

  1. First time? Read README.md
  2. Want to code? Read DEVELOPER_GUIDE.md
  3. Need to understand design? Read ARCHITECTURE.md
  4. Have questions? Check /help in bot or this file

Last Updated: February 2026
For Version: 1.0
Python: 3.9+