---
title: AI Quiz Bot
emoji: 🧠
colorFrom: blue
colorTo: purple
sdk: docker
pinned: false
---

# 🧠 AI Quiz Bot - Telegram Quiz Generator

A Telegram bot that converts images, PDFs, or text into interactive multiple-choice quiz questions using the Ollama Cloud API.

## 📋 Table of Contents

- Overview
- Features
- Prerequisites
- Installation
- Configuration
- Running the Bot
- Usage
- Architecture
- API Integration
- Error Handling
- Troubleshooting
- For Developers
- Security Notes
- License

## 🎯 Overview

The AI Quiz Bot is a Telegram bot that automates the creation of quiz questions from various content sources:

- 📸 **Images**: extract text from screenshots, photos, and scanned documents
- 📄 **PDFs**: process PDF documents and extract their text
- ✍️ **Text**: generate quizzes directly from typed text

The bot uses the Ollama Cloud API to:

1. Analyze images and extract text using vision models
2. Generate intelligent, well-formatted multiple-choice questions using chat models

All questions are delivered as interactive Telegram polls with the correct answer marked.
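
A generated question can be mapped onto Telegram's quiz-poll parameters. The helper and question-dict shape below are illustrative, not the bot's actual code; `Bot.send_poll` in python-telegram-bot does accept `type="quiz"` and `correct_option_id`:

```python
def question_to_poll_kwargs(chat_id: int, q: dict) -> dict:
    """Map one generated question onto Bot.send_poll keyword arguments.

    Expects a dict like:
      {"question": "...", "options": ["a", "b", "c", "d"], "correct_index": 1}
    (hypothetical schema for illustration).
    """
    return {
        "chat_id": chat_id,
        "question": q["question"],
        "options": q["options"],
        "type": "quiz",                        # quiz polls highlight the right answer
        "correct_option_id": q["correct_index"],
        "is_anonymous": True,                  # polls are anonymous by default
    }
```

Sending it in python-telegram-bot v20+ would then look like `await context.bot.send_poll(**question_to_poll_kwargs(chat_id, q))`.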

## ✨ Features

### Core Functionality

- ✅ **Multi-Format Support**: images (JPG, PNG, WebP), PDFs, and plain text
- ✅ **Customizable Question Count**: 1-20 questions per request
- ✅ **Intelligent Question Generation**: AI-powered, contextual questions
- ✅ **Interactive Polls**: each question is sent as a Telegram poll with the correct answer marked
- ✅ **Multiple Languages**: Arabic, English, and other languages
- ✅ **Cloud-Based**: uses the Ollama Cloud API (no local installation needed)

### User Experience

- 🚀 **Easy to Use**: simple commands (`/start`, `/help`)
- 📝 **Progress Feedback**: real-time status messages
- 🎨 **Friendly Interface**: Arabic instructions and messages
- ⏱️ **Asynchronous Processing**: non-blocking operations
- 🔄 **Error Handling**: graceful error messages and recovery

## 🛠️ Prerequisites

### System Requirements

- Python 3.9+
- Internet connection (for the Ollama Cloud API)
- Telegram bot token (from @BotFather)
- Ollama Cloud API key and host

### Software Dependencies

- `python-telegram-bot` - Telegram bot framework
- `PyPDF2` - PDF text extraction
- `httpx` - async HTTP client for API calls
- `python-dotenv` - environment variable management
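
A `requirements.txt` matching the dependencies above might look like this (the version floors are illustrative, not taken from the project):

```text
python-telegram-bot>=20.0
PyPDF2>=3.0
httpx>=0.24
python-dotenv>=1.0
```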

## 📦 Installation

### Step 1: Clone/Download the Project

```bash
cd path/to/aiquiz
```

### Step 2: Create a Virtual Environment (Recommended)

```bash
# Windows
python -m venv venv
venv\Scripts\activate

# Linux/Mac
python3 -m venv venv
source venv/bin/activate
```

### Step 3: Install Dependencies

```bash
pip install -r requirements.txt
```

### Step 4: Create the .env File

```bash
# Copy the template
copy .env.example .env   # Windows
cp .env.example .env     # Linux/Mac
```

### Step 5: Configure Environment Variables

Edit `.env` with your credentials (see the Configuration section).

## ⚙️ Configuration

### Environment Variables (.env file)

```bash
# Your Telegram bot token (from @BotFather)
TELEGRAM_BOT_TOKEN=your_telegram_bot_token_here

# Ollama Cloud API endpoint
OLLAMA_HOST=https://ollama.com

# Ollama API authentication key
OLLAMA_API_KEY=your_ollama_api_key_here

# Vision model for image analysis
VISION_MODEL=ministral-3:8b

# Chat model for quiz generation
CHAT_MODEL=ministral-3:14b
```

Never publish real tokens or API keys; the values above are placeholders.

### Getting Your Credentials

**Telegram bot token:**

1. Open Telegram and search for @BotFather
2. Send /newbot
3. Follow the prompts
4. Copy the token into `.env`

**Ollama API credentials:**

1. Visit https://ollama.com
2. Go to account settings → API keys
3. Create a new API key (copy the full key)
4. Note your API host URL (usually https://ollama.com)
5. Add both to `.env`
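
The variables above might be read in `bot.py` roughly as follows. This is a stdlib-only sketch; `load_config` and its defaults are illustrative (in the real bot, python-dotenv's `load_dotenv()` would populate `os.environ` from `.env` first):

```python
import os

def load_config() -> dict:
    """Read the bot's configuration from environment variables.

    Variable names match the .env template above; this helper is a
    hypothetical sketch, not the project's actual code.
    """
    token = os.environ.get("TELEGRAM_BOT_TOKEN")
    if not token:
        raise RuntimeError("TELEGRAM_BOT_TOKEN environment variable is not set!")
    return {
        "token": token,
        "host": os.environ.get("OLLAMA_HOST", "https://ollama.com"),
        "api_key": os.environ.get("OLLAMA_API_KEY", ""),
        "vision_model": os.environ.get("VISION_MODEL", "ministral-3:8b"),
        "chat_model": os.environ.get("CHAT_MODEL", "ministral-3:14b"),
    }
```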

## 🚀 Running the Bot

```bash
python bot.py
```

The bot will start polling Telegram for messages.

## 📱 Usage

### User Commands

| Command | Description |
| ------- | ----------- |
| `/start` | Show the welcome message with usage instructions |
| `/help` | Show the FAQ and common questions |

### How to Use

**Method 1: From Images**

1. Send an image (screenshot, photo, etc.)
2. Add a caption with the number of questions (optional)
3. The bot extracts the text and generates a quiz

**Method 2: From PDFs**

1. Send a PDF file
2. Add a caption with the number of questions (optional)
3. The bot extracts the text and generates a quiz

**Method 3: From Text**

1. Send a text message
2. Optional: start the first line with a number to set the question count
3. The bot generates a quiz from the text

### Examples

- Send an image with caption `5` → generates 5 questions
- Send a PDF with caption `10` → generates 10 questions
- Send text whose first line is `8` → generates 8 questions
- No number given → defaults to 5 questions
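
The parsing rules above can be sketched as a small pure function. This mirrors the `parse_num_questions()` step in the data-flow diagram below, but is an illustrative reimplementation, not the bot's actual code:

```python
from typing import Optional

def parse_num_questions(text: Optional[str], default: int = 5) -> int:
    """Parse the requested question count from a caption or the first
    line of a text message; fall back to the default and clamp to the
    supported 1-20 range.
    """
    if not text or not text.strip():
        return default
    first_line = text.strip().splitlines()[0].strip()
    if not first_line.isdigit():
        return default
    return max(1, min(20, int(first_line)))
```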

๐Ÿ—๏ธ Architecture

Project Structure

aiquiz/
โ”œโ”€โ”€ bot.py                # Main bot code (see DEVELOPER_GUIDE.md)
โ”œโ”€โ”€ requirements.txt      # Python dependencies
โ”œโ”€โ”€ .env                  # Environment variables (DO NOT COMMIT)
โ”œโ”€โ”€ .env.example          # Template for .env
โ”œโ”€โ”€ .gitignore            # Git ignore file
โ”œโ”€โ”€ README.md             # This file
โ”œโ”€โ”€ DEVELOPER_GUIDE.md    # Detailed code explanation
โ””โ”€โ”€ ARCHITECTURE.md       # System design documentation

### Data Flow

```
User Input (Image/PDF/Text)
        ↓
    Message Handler
        ├─→ handle_file()  [for images/PDFs]
        ├─→ handle_text()  [for text messages]
        └─→ Commands       [/start, /help]
        ↓
    File/Text Processing
        ├─→ extract_text_from_image()  [Vision API]
        ├─→ extract_text_from_pdf()    [PyPDF2]
        └─→ parse_num_questions()      [Parse input]
        ↓
    Quiz Generation
        └─→ generate_quiz_questions()  [Chat API → JSON]
        ↓
    Delivery
        ├─→ send_quiz_poll()           [Each question as poll]
        └─→ Completion message
```
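
The JSON that `generate_quiz_questions()` asks the chat model to produce might look like the following. This schema is a hypothetical illustration; the field names used by the real bot may differ:

```json
{
  "questions": [
    {
      "question": "Which library does the bot use for PDF text extraction?",
      "options": ["Pillow", "PyPDF2", "httpx", "NumPy"],
      "correct_index": 1
    }
  ]
}
```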

## 🔌 API Integration

### Ollama Cloud API Endpoint

**Endpoint:** `POST {OLLAMA_HOST}/api/chat`

**Authentication:** Bearer token in the `Authorization` header:

```
Authorization: Bearer {OLLAMA_API_KEY}
```

### Request Format

```json
{
  "model": "model-name",
  "messages": [
    {"role": "system", "content": "system instructions"},
    {"role": "user", "content": "user message or image"}
  ],
  "stream": false,
  "format": "json"
}
```

Setting `"format": "json"` is optional; it asks the model to return valid JSON only.

### Response Format

```json
{
  "message": {
    "role": "assistant",
    "content": "response content"
  }
}
```
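
Assembling the request body above in Python is straightforward. `build_chat_payload` below is an illustrative helper, not the bot's actual function; the bot would then POST it with httpx:

```python
def build_chat_payload(model: str, system: str, user: str) -> dict:
    """Assemble the /api/chat request body shown above."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
        "stream": False,     # request one complete response, not a stream
        "format": "json",    # ask Ollama to emit valid JSON only
    }

# The bot would then send it asynchronously, e.g. with httpx:
#   headers = {"Authorization": f"Bearer {OLLAMA_API_KEY}"}
#   resp = await client.post(f"{OLLAMA_HOST}/api/chat", json=payload, headers=headers)
#   content = resp.json()["message"]["content"]
```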

For more details, see ARCHITECTURE.md

## 🛡️ Error Handling

### Common Errors & Solutions

| Error | Cause | Solution |
| ----- | ----- | -------- |
| `getaddrinfo failed` | Network/DNS issue | Verify internet access and the API URL |
| JSON parse failed | Invalid API response | Check the API key and retry |
| No text extracted | Unreadable file | Use a clearer image/PDF |
| API timeout | Slow network | Increase the timeout or retry |
| Invalid file type | Wrong format | Use JPG/PNG/PDF/text |

### Messages to Users

- ❌ "نوع ملف مش مدعوم!" - unsupported file type
- ❌ "مش قادر استخرج نص كافي" - text extraction failed
- ❌ "مش قادر اعمل اسئلة" - quiz generation failed
- ❌ "النص قصير شوية!" - text too short (minimum 20 characters)

## 🔧 Troubleshooting

### Bot Won't Start

**Error:** `TELEGRAM_BOT_TOKEN environment variable is not set!`

**Solution:** create a `.env` file containing your bot token.

### API Connection Failed

**Error:** `getaddrinfo failed`

**Solutions:**

1. Check your internet connection
2. Verify OLLAMA_HOST is correct (https://ollama.com)
3. Verify OLLAMA_API_KEY is valid
4. Test the API manually with curl

### Poor Quiz Quality

**Causes:** low-quality images, short text

**Solutions:**

1. Use higher-resolution images
2. Provide more detailed text (minimum 20 characters)
3. Use clearer, well-formatted PDFs

### Bot Responses Are Slow

**Causes:** large files, slow internet, API processing time

**Solutions:**

1. Use smaller files
2. Reduce the number of questions
3. Be patient (generation can take 10-30 seconds)
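
The "retry" advice above can be implemented with a generic backoff helper. This is a sketch of the pattern, not code from `bot.py`:

```python
import time

def with_retries(fn, attempts: int = 3, base_delay: float = 1.0):
    """Call fn(), retrying with exponential backoff on any exception.

    Re-raises the last exception once all attempts are exhausted.
    """
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

Wrapping the API call, e.g. `with_retries(lambda: call_ollama(payload))`, smooths over transient timeouts without hammering the server.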

## 📚 For Developers

See these files for detailed information:

- DEVELOPER_GUIDE.md - detailed code explanation
- ARCHITECTURE.md - system design documentation

## 🔐 Security Notes

- ⚠️ **Never commit .env** - it contains sensitive credentials
- 🔑 **Keep API keys private** - don't share them in code or issues
- 🛡️ **Use HTTPS** - all API communication is encrypted
- 📝 **Validate input** - all file types and sizes are checked

## 📄 License

MIT License - feel free to use and modify!

---

**Last Updated:** February 2026
**Bot Version:** 1.0
**API:** Ollama Cloud API

