src/.streamlit/config.toml DELETED
@@ -1,11 +0,0 @@
- [global]
- developmentMode = false
-
- [browser]
- gatherUsageStats = false
-
- [theme]
- primaryColor = "#FF6B6B"
- backgroundColor = "#FFFFFF"
- secondaryBackgroundColor = "#F0F2F6"
- textColor = "#262730"

src/DEPLOYMENT.md DELETED
@@ -1,168 +0,0 @@
- # 🚀 Deployment Guide: Hugging Face Spaces
-
- ## Quick Start (5 minutes)
-
- ### Step 1: Prepare Your Repository
- 1. **Create a GitHub repository** with your project files
- 2. **Upload all files** from this directory to your GitHub repo
- 3. **Make sure you have**:
-    - `app.py` (main Streamlit app)
-    - `fine.py` (AI tutor implementation)
-    - `requirements.txt` (dependencies)
-    - `README.md` (documentation)
-
- ### Step 2: Create Hugging Face Space
- 1. **Go to** [huggingface.co/spaces](https://huggingface.co/spaces)
- 2. **Click** "Create new Space"
- 3. **Fill in the details**:
-    - **Owner**: Your HF username
-    - **Space name**: `ai-programming-tutor`
-    - **License**: Choose appropriate license
-    - **SDK**: Select **Streamlit**
-    - **Python version**: 3.10
- 4. **Click** "Create Space"
-
- ### Step 3: Connect Your Repository
- 1. **In your Space settings**, go to "Repository" tab
- 2. **Select** "GitHub repository"
- 3. **Choose** your GitHub repository
- 4. **Set the main file** to `app.py`
- 5. **Click** "Save"
-
- ### Step 4: Upload Your Fine-tuned Model
- 1. **In your Space**, go to "Files" tab
- 2. **Create a folder** called `model`
- 3. **Upload your fine-tuned model files**:
-    - `model-00001-of-00006.safetensors`
-    - `model-00002-of-00006.safetensors`
-    - `model-00003-of-00006.safetensors`
-    - `model-00004-of-00006.safetensors`
-    - `model-00005-of-00006.safetensors`
-    - `model-00006-of-00006.safetensors`
-    - `config.json`
-    - `tokenizer.json`
-    - `tokenizer.model`
-    - `tokenizer_config.json`
-    - `special_tokens_map.json`
-    - `generation_config.json`
-
- ### Step 5: Update Model Path
- 1. **Edit** `app.py` in your Space
- 2. **Change the model path** to:
-    ```python
-    model_path = "./model"  # Path to uploaded model
-    ```
- 3. **Save** the changes
-
- ### Step 6: Deploy
- 1. **Your Space will automatically build** and deploy
- 2. **Wait for the build to complete** (5-10 minutes)
- 3. **Your app will be live** at: `https://huggingface.co/spaces/YOUR_USERNAME/ai-programming-tutor`
-
- ## 🎯 Advanced Configuration
-
- ### Hardware Settings
- - **CPU**: Default (sufficient for inference)
- - **GPU**: T4 (recommended for faster inference)
- - **Memory**: 16GB+ (required for 7B model)
-
- ### Environment Variables
- Add these in your Space settings:
- ```
- TOKENIZERS_PARALLELISM=false
- DATASETS_DISABLE_MULTIPROCESSING=1
- ```
-
- ### Custom Domain (Optional)
- 1. **In Space settings**, go to "Settings" tab
- 2. **Enable** "Custom domain"
- 3. **Add your domain** (e.g., `tutor.yourdomain.com`)
-
- ## 🔧 Troubleshooting
-
- ### Common Issues
-
- **Issue**: Model not loading
- - **Solution**: Check model path and file structure
- - **Debug**: Look at Space logs in "Settings" → "Logs"
-
- **Issue**: Out of memory
- - **Solution**: Upgrade to GPU hardware
- - **Alternative**: Use demo mode
-
- **Issue**: Build fails
- - **Solution**: Check `requirements.txt` for missing dependencies
- - **Debug**: Review build logs
-
- ### Performance Optimization
-
- 1. **Enable GPU** in Space settings
- 2. **Use model quantization** for faster inference
- 3. **Implement caching** for repeated requests
- 4. **Add rate limiting** to prevent abuse
-
- ## 📊 Monitoring
-
- ### Usage Analytics
- - **View usage** in Space settings
- - **Monitor performance** with built-in metrics
- - **Track user engagement** through logs
-
- ### Cost Management
- - **Free tier**: 16 hours/month GPU time
- - **Pro tier**: $9/month for unlimited GPU
- - **Enterprise**: Custom pricing
-
- ## 🌐 Sharing Your App
-
- ### Public Access
- 1. **Set Space to public** in settings
- 2. **Share the URL** with users
- 3. **Add to HF Spaces showcase**
-
- ### Embedding
- ```html
- <iframe
-     src="https://huggingface.co/spaces/YOUR_USERNAME/ai-programming-tutor"
-     width="100%"
-     height="800px"
-     frameborder="0"
- ></iframe>
- ```
-
- ## 🔒 Security Considerations
-
- 1. **Input validation** for code submissions
- 2. **Rate limiting** to prevent abuse
- 3. **Content filtering** for inappropriate code
- 4. **User authentication** (optional)
-
- ## 📈 Scaling
-
- ### For High Traffic
- 1. **Upgrade to Pro tier** for unlimited GPU
- 2. **Implement caching** with Redis
- 3. **Use load balancing** for multiple instances
- 4. **Monitor performance** and optimize
-
- ### For Production Use
- 1. **Add user authentication**
- 2. **Implement logging** and analytics
- 3. **Set up monitoring** and alerts
- 4. **Create backup** and recovery procedures
-
- ## 🎉 Success!
-
- Your AI Programming Tutor is now live and accessible to students worldwide!
-
- **Next steps**:
- 1. **Test thoroughly** with different code examples
- 2. **Gather user feedback** and iterate
- 3. **Share with your target audience**
- 4. **Monitor usage** and improve based on data
-
- ## 📞 Support
-
- - **Hugging Face Docs**: [docs.huggingface.co](https://docs.huggingface.co)
- - **Spaces Documentation**: [huggingface.co/docs/hub/spaces](https://huggingface.co/docs/hub/spaces)
- - **Community Forum**: [discuss.huggingface.co](https://discuss.huggingface.co)

src/README.md DELETED
@@ -1,185 +0,0 @@
- # 🎓 Generative AI for Programming Education
-
- ## 🚀 Live Demo
- **Hugging Face Spaces**: [Coming Soon - Deploy using DEPLOYMENT.md guide]
-
- ## 📋 Problem Statement
- Current programming education struggles with high dropout rates, inefficient feedback loops, and a lack of personalized learning—problems exacerbated by limited instructor bandwidth. While Generative AI (e.g., Copilot, ChatGPT) can help, most tools prioritize productivity over learning, offering code solutions without explanations or tailored guidance. This risks student over-reliance without deeper comprehension.
-
- ## 🎯 Solution
- To address this gap, we fine-tuned **CodeLlama-7B** to provide structured, educational code feedback—not just correct answers. Our model analyzes student code and delivers:
-
- - **Instant, actionable reviews** (e.g., "This loop can be optimized from O(n²) to O(n) using a hashmap")
- - **Beginner-friendly explanations** (e.g., "In Python, list.append() modifies the list in-place but returns None—that's why your print() shows None")
- - **Personalized adaptation** (e.g., adjusting feedback depth based on inferred skill level)
-
- Unlike generic AI tools, our system is explicitly designed for education, balancing correctness, pedagogy, and ethical safeguards against over-reliance.
-
- ## ✨ Features
-
- ### 🧠 **Fine-tuned CodeLlama-7B Model**
- - Trained on **code review** and **code feedback** datasets
- - **7B parameters** for comprehensive understanding
- - **Educational focus** rather than productivity optimization
-
- ### 📊 **Progressive Learning Interface**
- - **5-stage educational process**:
-   1. **Code Analysis** - Strengths, weaknesses, issues
-   2. **Improvement Guide** - Step-by-step instructions
-   3. **Learning Points** - Key concepts and objectives
-   4. **Comprehension Quiz** - Test understanding
-   5. **Code Fix** - Improved solution (only after learning)
-
- ### 🎓 **Educational Features**
- - **Student Level Adaptation** (Beginner/Intermediate/Advanced)
- - **Comprehension Questions** generated by the model
- - **Learning Objectives** for each feedback
- - **Step-by-step improvement guides**
- - **Algorithm complexity explanations**
-
- ### 🛡️ **Ethical Safeguards**
- - **Progressive learning flow** prevents solution jumping
- - **Comprehension testing** before showing fixes
- - **Educational explanations** rather than quick answers
- - **Best practices promotion**
-
- ## 🚀 **Hugging Face Spaces Deployment**
-
- ### **Hardware Specifications**
- - **CPU**: 2 vCPU (virtual CPU cores)
- - **RAM**: 16 GB
- - **Plan**: FREE tier
- - **Storage**: Sufficient for model and application
-
- ### **Optimization Features**
- - ✅ **16GB RAM optimization** for fine-tuned model
- - ✅ **CPU-only inference** (no GPU required)
- - ✅ **Memory management** with gradient checkpointing
- - ✅ **Demo mode** for immediate testing
- - ✅ **Progressive loading** with fallback options
-
- ### **Performance Expectations**
- - **Demo Mode**: Instant response
- - **Fine-tuned Model**: 5-10 minutes initial loading
- - **Memory Usage**: Optimized for 16GB constraint
- - **Concurrent Users**: Limited by CPU cores
-
- ## 🛠️ Installation & Setup
-
- ### **Local Development**
- ```bash
- # Clone the repository
- git clone https://github.com/TomoriFarouk/GenAI-For-Programming-Language.git
- cd GenAI-For-Programming-Language
-
- # Install dependencies
- pip install -r requirements.txt
-
- # Run the application
- streamlit run app.py
- ```
-
- ### **Hugging Face Spaces Deployment**
- Follow the detailed guide in `DEPLOYMENT.md` for step-by-step instructions.
-
- ## 📁 Project Structure
-
- ```
- GenAI-For-Programming-Language/
- ├── app.py              # Main Streamlit interface (HF Spaces optimized)
- ├── fine.py             # Fine-tuned model integration
- ├── config.py           # Configuration settings
- ├── requirements.txt    # Dependencies
- ├── README.md           # This file
- ├── DEPLOYMENT.md       # HF Spaces deployment guide
- ├── .gitignore          # Excludes model files
- ├── .gitattributes      # File type configuration
- └── example_usage.py    # Usage examples
- ```
-
- ## 🧠 Model Architecture
-
- ### **Base Model**
- - **CodeLlama-7B-Instruct-hf**
- - **7 billion parameters**
- - **Code-specific training**
-
- ### **Fine-tuning Datasets**
- 1. **Code Review Dataset**: Structured feedback on code quality
- 2. **Code Feedback Dataset**: Educational explanations and improvements
-
- ### **Training Process**
- - **LoRA fine-tuning** for efficiency
- - **Educational prompt engineering**
- - **Multi-stage feedback generation**
-
- ## 🎯 Usage Examples
-
- ### **Input Code**
- ```python
- def find_duplicates(numbers):
-     x = []
-     for i in range(len(numbers)):
-         for j in range(i+1, len(numbers)):
-             if numbers[i] == numbers[j]:
-                 x.append(numbers[i])
-     return x
- ```
-
- ### **Generated Feedback**
- 1. **Analysis**: Identifies O(n²) complexity, poor variable naming
- 2. **Improvement Guide**: Step-by-step optimization instructions
- 3. **Learning Points**: Algorithm complexity, naming conventions
- 4. **Quiz**: "What is the time complexity and how to improve it?"
- 5. **Code Fix**: Optimized O(n) solution with better naming
-
- ## 🔧 Configuration
-
- ### **Model Settings**
- - **Path**: `./model` (for HF Spaces)
- - **Device**: CPU-optimized for 16GB RAM
- - **Memory**: Gradient checkpointing enabled
-
- ### **Educational Settings**
- - **Student Levels**: Beginner, Intermediate, Advanced
- - **Feedback Types**: Syntax, Logic, Optimization, Style
- - **Learning Objectives**: Comprehensive programming concepts
-
- ## 🚀 Performance
-
- ### **Local Environment**
- - **GPU**: Recommended for faster inference
- - **RAM**: 16GB+ recommended
- - **Storage**: 30GB+ for model files
-
- ### **Hugging Face Spaces**
- - **CPU**: 2 vCPU (sufficient for inference)
- - **RAM**: 16GB (optimized for this constraint)
- - **Loading Time**: 5-10 minutes for fine-tuned model
- - **Demo Mode**: Instant response
-
- ## 🤝 Contributing
-
- 1. Fork the repository
- 2. Create a feature branch
- 3. Make your changes
- 4. Test thoroughly
- 5. Submit a pull request
-
- ## 📄 License
-
- This project is licensed under the MIT License - see the LICENSE file for details.
-
- ## 🙏 Acknowledgments
-
- - **CodeLlama team** for the base model
- - **Hugging Face** for the Spaces platform
- - **Streamlit** for the web interface framework
-
- ## 📞 Contact
-
- For questions or support, please open an issue on GitHub.
-
- ---
-
- **🎓 Empowering programming education through AI-driven, structured learning experiences.**

src/config.py DELETED
@@ -1,126 +0,0 @@
- """
- Configuration file for the Generative AI Programming Education project
- """
-
- import os
- from pathlib import Path
-
- # Model Configuration
- MODEL_CONFIG = {
-     # Path to your fine-tuned CodeLlama-7B model
-     "model_path": "./model",  # For Hugging Face Spaces deployment
-
-     # Model generation parameters
-     "max_new_tokens": 512,
-     "temperature": 0.7,
-     "do_sample": True,
-     "top_p": 0.9,
-     "top_k": 50,
-
-     # Input processing
-     "max_input_length": 2048,
-     "truncation": True,
-
-     # Device configuration
-     "device_map": "auto",
-     "torch_dtype": "float16",
-     "trust_remote_code": True
- }
-
- # Dataset Configuration (for reference)
- DATASET_CONFIG = {
-     "code_review_dataset": "path/to/your/code_review_dataset",
-     "code_feedback_dataset": "path/to/your/code_feedback_dataset",
-     "training_data_format": "json",  # or "csv", "txt"
- }
-
- # Educational Levels
- STUDENT_LEVELS = {
-     "beginner": {
-         "description": "Students new to programming",
-         "feedback_style": "explanatory",
-         "include_basics": True,
-         "complexity_threshold": "low"
-     },
-     "intermediate": {
-         "description": "Students with basic programming knowledge",
-         "feedback_style": "balanced",
-         "include_basics": False,
-         "complexity_threshold": "medium"
-     },
-     "advanced": {
-         "description": "Students with strong programming background",
-         "feedback_style": "technical",
-         "include_basics": False,
-         "complexity_threshold": "high"
-     }
- }
-
- # Feedback Types
- FEEDBACK_TYPES = [
-     "syntax",
-     "logic",
-     "optimization",
-     "style",
-     "explanation",
-     "comprehensive_review",
-     "educational_guidance"
- ]
-
- # Learning Objectives
- LEARNING_OBJECTIVES = [
-     "syntax",
-     "basic_python",
-     "control_flow",
-     "loops",
-     "variables",
-     "code_cleanliness",
-     "algorithms",
-     "complexity",
-     "optimization",
-     "naming_conventions",
-     "readability",
-     "code_analysis",
-     "best_practices",
-     "learning",
-     "improvement"
- ]
-
- # Logging Configuration
- LOGGING_CONFIG = {
-     "level": "INFO",
-     "format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s",
-     "file": "programming_education_ai.log"
- }
-
- # Ethical Safeguards
- ETHICAL_CONFIG = {
-     "prevent_over_reliance": True,
-     "encourage_learning": True,
-     "provide_explanations": True,
-     "suggest_alternatives": True,
-     "promote_best_practices": True
- }
-
-
- def get_model_path():
-     """Get the model path from environment variable or config"""
-     return os.getenv("FINETUNED_MODEL_PATH", MODEL_CONFIG["model_path"])
-
-
- def validate_config():
-     """Validate the configuration settings"""
-     model_path = get_model_path()
-     if not os.path.exists(model_path):
-         print(f"Warning: Model path does not exist: {model_path}")
-         print("Please update the model_path in config.py or set FINETUNED_MODEL_PATH environment variable")
-         return False
-     return True
-
-
- if __name__ == "__main__":
-     print("Configuration loaded successfully!")
-     print(f"Model path: {get_model_path()}")
-     print(f"Student levels: {list(STUDENT_LEVELS.keys())}")
-     print(f"Feedback types: {FEEDBACK_TYPES}")
-     validate_config()

src/example_usage.py DELETED
@@ -1,186 +0,0 @@
- """
- Example Usage of the Comprehensive Educational Feedback System
- """
-
- from fine import ProgrammingEducationAI
- import json
-
-
- def main():
-     print("🎓 Comprehensive Educational Feedback System")
-     print("=" * 60)
-
-     # Initialize the system
-     # Update this path to your actual fine-tuned model
-     model_path = r"C:\Users\farou\OneDrive - Aston University\finetunning"
-     ai_tutor = ProgrammingEducationAI(model_path)
-
-     try:
-         # Load the model
-         print("Loading fine-tuned model...")
-         ai_tutor.load_model()
-         print("✅ Model loaded successfully!")
-
-         # Example 1: Beginner student code
-         print("\n" + "="*60)
-         print("EXAMPLE 1: BEGINNER STUDENT")
-         print("="*60)
-
-         beginner_code = """
- def find_duplicates(numbers):
-     x = []
-     for i in range(len(numbers)):
-         for j in range(i+1, len(numbers)):
-             if numbers[i] == numbers[j]:
-                 x.append(numbers[i])
-     return x
-
- result = find_duplicates([1, 2, 3, 2, 4, 5, 3])
- print(result)
- """
-
-         print("Student Code:")
-         print(beginner_code)
-
-         feedback = ai_tutor.generate_comprehensive_feedback(
-             beginner_code, "beginner")
-         display_comprehensive_feedback(feedback)
-
-         # Example 2: Intermediate student code
-         print("\n" + "="*60)
-         print("EXAMPLE 2: INTERMEDIATE STUDENT")
-         print("="*60)
-
-         intermediate_code = """
- def fibonacci(n):
-     if n <= 1:
-         return n
-     return fibonacci(n-1) + fibonacci(n-2)
-
- # Calculate first 10 Fibonacci numbers
- for i in range(10):
-     print(fibonacci(i))
- """
-
-         print("Student Code:")
-         print(intermediate_code)
-
-         feedback = ai_tutor.generate_comprehensive_feedback(
-             intermediate_code, "intermediate")
-         display_comprehensive_feedback(feedback)
-
-         # Example 3: Advanced student code
-         print("\n" + "="*60)
-         print("EXAMPLE 3: ADVANCED STUDENT")
-         print("="*60)
-
-         advanced_code = """
- class DataProcessor:
-     def __init__(self, data):
-         self.data = data
-
-     def process(self):
-         result = []
-         for item in self.data:
-             if item > 0:
-                 result.append(item * 2)
-         return result
-
- processor = DataProcessor([1, -2, 3, -4, 5])
- output = processor.process()
- print(output)
- """
-
-         print("Student Code:")
-         print(advanced_code)
-
-         feedback = ai_tutor.generate_comprehensive_feedback(
-             advanced_code, "advanced")
-         display_comprehensive_feedback(feedback)
-
-     except Exception as e:
-         print(f"❌ Error: {e}")
-         print(
-             "💡 Make sure to update the model_path to point to your actual fine-tuned model.")
-
-
- def display_comprehensive_feedback(feedback):
-     """Display comprehensive feedback in a formatted way"""
-
-     print("\n📊 COMPREHENSIVE FEEDBACK")
-     print("-" * 40)
-
-     # Analysis
-     print("\n✅ STRENGTHS:")
-     for i, strength in enumerate(feedback.strengths, 1):
-         print(f"  {i}. {strength}")
-
-     print("\n❌ WEAKNESSES:")
-     for i, weakness in enumerate(feedback.weaknesses, 1):
-         print(f"  {i}. {weakness}")
-
-     print("\n⚠️ ISSUES:")
-     for i, issue in enumerate(feedback.issues, 1):
-         print(f"  {i}. {issue}")
-
-     # Educational content
-     print("\n📝 STEP-BY-STEP IMPROVEMENT:")
-     for i, step in enumerate(feedback.step_by_step_improvement, 1):
-         print(f"  Step {i}: {step}")
-
-     print("\n🎓 LEARNING POINTS:")
-     for i, point in enumerate(feedback.learning_points, 1):
-         print(f"  {i}. {point}")
-
-     print(f"\n📋 REVIEW SUMMARY:")
-     print(f"  {feedback.review_summary}")
-
-     # Interactive elements
-     print(f"\n❓ COMPREHENSION QUESTION:")
-     print(f"  Q: {feedback.comprehension_question}")
-     print(f"  A: {feedback.comprehension_answer}")
-     print(f"  Explanation: {feedback.explanation}")
-
-     # Code fixes
-     print(f"\n🔧 IMPROVED CODE:")
-     print(feedback.improved_code)
-
-     print(f"\n💡 FIX EXPLANATION:")
-     print(f"  {feedback.fix_explanation}")
-
-     # Metadata
-     print(f"\n📊 METADATA:")
-     print(f"  Student Level: {feedback.student_level}")
-     print(f"  Learning Objectives: {', '.join(feedback.learning_objectives)}")
-     print(
-         f"  Estimated Time to Improve: {feedback.estimated_time_to_improve}")
-
-
- def save_feedback_to_json(feedback, filename):
-     """Save feedback to JSON file for later analysis"""
-     feedback_dict = {
-         "code_snippet": feedback.code_snippet,
-         "student_level": feedback.student_level,
-         "strengths": feedback.strengths,
-         "weaknesses": feedback.weaknesses,
-         "issues": feedback.issues,
-         "step_by_step_improvement": feedback.step_by_step_improvement,
-         "learning_points": feedback.learning_points,
-         "review_summary": feedback.review_summary,
-         "comprehension_question": feedback.comprehension_question,
-         "comprehension_answer": feedback.comprehension_answer,
-         "explanation": feedback.explanation,
-         "improved_code": feedback.improved_code,
-         "fix_explanation": feedback.fix_explanation,
-         "learning_objectives": feedback.learning_objectives,
-         "estimated_time_to_improve": feedback.estimated_time_to_improve
-     }
-
-     with open(filename, 'w') as f:
-         json.dump(feedback_dict, f, indent=2)
-
-     print(f"💾 Feedback saved to {filename}")
-
-
- if __name__ == "__main__":
-     main()

src/fine.py DELETED
@@ -1,949 +0,0 @@
1
- """
2
- Generative AI for Enhancing Programming Education
3
- ================================================
4
-
5
- This project implements a fine-tuned CodeLlama-7B model to provide structured,
6
- educational code feedback for programming students.
7
-
8
- Problem Statement:
9
- - High dropout rates in programming education
10
- - Inefficient feedback loops
11
- - Lack of personalized learning
12
- - Limited instructor bandwidth
13
- - Current AI tools prioritize productivity over learning
14
-
15
- Solution:
16
- - Fine-tuned CodeLlama-7B for educational feedback
17
- - Structured, actionable code reviews
18
- - Beginner-friendly explanations
19
- - Personalized adaptation based on skill level
20
- - Educational focus with ethical safeguards
21
-
22
- Author: [Your Name]
23
- Date: [Current Date]
24
- """
25
-
26
- import re
27
- from dataclasses import dataclass
28
- from typing import Dict, List, Optional, Tuple
29
- import logging
30
- import json
31
- from transformers import AutoTokenizer, AutoModelForCausalLM
32
- import os
33
- import gc
34
- import torch
35
- import warnings
36
- warnings.filterwarnings("ignore", category=UserWarning)
37
-
38
- # --- Critical Environment Setup (Must be before imports) ---
39
- os.environ["TOKENIZERS_PARALLELISM"] = "false"
40
- os.environ["DATASETS_DISABLE_MULTIPROCESSING"] = "1"
41
-
42
- # Clear any existing CUDA cache (only if CUDA is available)
43
- if torch.cuda.is_available():
44
- os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128,garbage_collection_threshold:0.6"
45
- torch.cuda.empty_cache()
46
- gc.collect()
47
-
48
-
49
- # Configure logging
50
- logging.basicConfig(level=logging.INFO)
51
- logger = logging.getLogger(__name__)
52
-
53
-
54
- def clear_cuda_cache():
55
- """Clear CUDA cache and run garbage collection"""
56
- if torch.cuda.is_available():
57
- torch.cuda.empty_cache()
58
- torch.cuda.synchronize()
59
- gc.collect()
60
-
61
-
62
- def get_system_memory():
63
- """Get system memory information"""
64
- try:
65
- import psutil
66
- memory = psutil.virtual_memory()
67
- print(
68
- f"System RAM: {memory.used / (1024**3):.1f}GB / {memory.total / (1024**3):.1f}GB used ({memory.percent:.1f}%)")
69
- except Exception as e:
70
- print(f"Could not get system memory info: {e}")
71
-
72
-
73
- def get_gpu_memory():
74
- """Get GPU memory information (if available)"""
75
- if torch.cuda.is_available():
76
- try:
77
- import subprocess
78
- result = subprocess.run(['nvidia-smi', '--query-gpu=memory.used,memory.total', '--format=csv,nounits,noheader'],
79
- capture_output=True, text=True)
80
- lines = result.stdout.strip().split('\n')
81
- for i, line in enumerate(lines):
82
- used, total = map(int, line.split(', '))
83
- print(
84
- f"GPU {i}: {used}MB / {total}MB used ({used/total*100:.1f}%)")
85
- except Exception as e:
86
- print(f"Could not get GPU memory info: {e}")
87
- else:
88
- print("No GPU available - using CPU only")
89
-
90
-
91
- @dataclass
92
- class CodeFeedback:
93
- """Data structure for storing code feedback"""
94
- code_snippet: str
95
- feedback_type: str # 'syntax', 'logic', 'optimization', 'style', 'explanation'
96
- feedback_message: str
97
- suggested_improvement: Optional[str] = None
98
- difficulty_level: str = 'beginner' # 'beginner', 'intermediate', 'advanced'
99
- learning_objectives: List[str] = None
100
-
101
-
102
- @dataclass
103
- class ComprehensiveFeedback:
104
- """Comprehensive feedback structure with all educational components"""
105
- code_snippet: str
106
- student_level: str
107
-
108
- # Analysis
109
- strengths: List[str]
110
- weaknesses: List[str]
111
- issues: List[str]
112
-
113
- # Educational content
114
- step_by_step_improvement: List[str]
115
- learning_points: List[str]
116
- review_summary: str
117
-
118
- # Interactive elements
119
- comprehension_question: str
120
- comprehension_answer: str
121
- explanation: str
122
-
123
- # Code fixes
124
- improved_code: str
125
- fix_explanation: str
126
-
127
- # Metadata
128
- difficulty_level: str
129
- learning_objectives: List[str]
130
- estimated_time_to_improve: str
131
-
132
-
133
- class ProgrammingEducationAI:
134
- """
135
- Main class for the fine-tuned CodeLlama model for programming education
136
- """
137
-
138
- def __init__(self, model_path: str = "TomoriFarouk/codellama-7b-programming-education"):
139
- """
140
- Initialize the fine-tuned model and tokenizer
141
-
142
- Args:
143
- model_path: Path to your fine-tuned CodeLlama-7B model
144
- """
145
- self.model_path = model_path
146
- self.tokenizer = None
147
- self.model = None
148
- self.feedback_templates = self._load_feedback_templates()
149
- self.code_review_prompt_template = self._load_code_review_prompt()
150
- self.code_feedback_prompt_template = self._load_code_feedback_prompt()
151
- self.comprehensive_feedback_prompt = self._load_comprehensive_feedback_prompt()
152
- self.comprehension_question_prompt = self._load_comprehension_question_prompt()
153
- self.code_fix_prompt = self._load_code_fix_prompt()
154
-
155
- def _load_code_review_prompt(self) -> str:
156
- """Load the code review prompt template used during fine-tuning"""
157
- return """You are an expert programming tutor. Review the following student code and provide educational feedback.
158
-
159
- Student Code:
160
- {code}
161
-
162
- Student Level: {level}
163
-
164
- Please provide:
165
- 1. Syntax errors (if any)
166
- 2. Logic errors (if any)
167
- 3. Style improvements
168
- 4. Optimization suggestions
169
- 5. Educational explanations
170
-
171
- Feedback:"""
172
-
173
- def _load_code_feedback_prompt(self) -> str:
174
- """Load the code feedback prompt template used during fine-tuning"""
175
- return """You are a helpful programming tutor. The student has written this code:
176
-
177
- {code}
178
-
179
- Student Level: {level}
180
-
181
- Provide constructive, educational feedback that helps the student learn. Focus on:
182
- - What they did well
183
- - What can be improved
184
- - Why the improvement matters
185
- - How to implement the improvement
186
-
187
- Feedback:"""
188
-
189
- def _load_feedback_templates(self) -> Dict[str, str]:
190
- """Load predefined feedback templates for different scenarios"""
191
- return {
192
- "syntax_error": "I notice there's a syntax issue in your code. {error_description}. "
193
- "Here's what's happening: {explanation}. "
194
- "Try this correction: {suggestion}",
195
-
196
- "logic_error": "Your code has a logical issue. {problem_description}. "
197
- "The problem is: {explanation}. "
198
- "Consider this approach: {suggestion}",
199
-
200
- "optimization": "Your code works, but we can make it more efficient! "
201
- "Current complexity: {current_complexity}. "
202
- "Optimized version: {optimized_complexity}. "
203
- "Here's how: {explanation}",
204
-
205
- "style_improvement": "Great work! Here's a style tip: {tip}. "
206
- "This makes your code more readable and maintainable.",
207
-
208
- "concept_explanation": "Let me explain this concept: {concept}. "
209
- "In simple terms: {simple_explanation}. "
210
- "Example: {example}"
211
- }
212
-
213
- def load_model(self):
214
- """
215
- Load the fine-tuned model and tokenizer optimized for HF Spaces
216
- """
217
- try:
218
- # Get HF token for private model access
219
- hf_token = os.getenv("HF_TOKEN", None)
220
-
221
- logger.info("Loading tokenizer...")
222
- self.tokenizer = AutoTokenizer.from_pretrained(
223
- self.model_path,
224
- trust_remote_code=True,
225
- token=hf_token # Use token for private models
226
- )
227
-
228
- # Set padding token
229
- if self.tokenizer.pad_token is None:
230
- self.tokenizer.pad_token = self.tokenizer.eos_token
231
- self.tokenizer.pad_token_id = self.tokenizer.eos_token_id
232
-
233
- logger.info(
234
- f"Tokenizer loaded - Vocab size: {len(self.tokenizer)}")
235
-
236
- # Load model optimized for HF Spaces (16GB RAM, 2 vCPU)
237
- print("Loading model optimized for HF Spaces (16GB RAM, 2 vCPU)...")
238
- self.model = AutoModelForCausalLM.from_pretrained(
239
- self.model_path,
240
- torch_dtype=torch.float32,
241
- device_map=None, # Force CPU for HF Spaces
242
- low_cpu_mem_usage=True,
243
- trust_remote_code=True,
244
- offload_folder="offload", # Offload to disk if needed
245
- token=hf_token # Use token for private models
246
- )
247
- # Enable gradient checkpointing for memory savings
248
- self.model.gradient_checkpointing_enable()
249
-
250
- logger.info("Fine-tuned model loaded successfully")
251
- logger.info(f"Model loaded on devices: {self.model.hf_device_map}")
252
-
253
- except Exception as e:
254
- logger.error(f"Error loading fine-tuned model: {e}")
255
- raise
256
-
257
- def generate_code_review(self, code: str, student_level: str = "beginner") -> str:
258
- """
259
- Generate code review using the fine-tuned model
260
-
261
- Args:
262
- code: Student's code to review
263
- student_level: Student's skill level
264
-
265
- Returns:
266
- Generated code review feedback
267
- """
268
- if not self.model or not self.tokenizer:
269
- raise ValueError("Model not loaded. Call load_model() first.")
270
-
271
- # Format the prompt using the template from fine-tuning
272
- prompt = self.code_review_prompt_template.format(
273
- code=code,
274
- level=student_level
275
- )
276
-
277
- # Tokenize input
278
- inputs = self.tokenizer(
279
- prompt, return_tensors="pt", truncation=True, max_length=2048)
280
-
281
- # Generate response
282
- with torch.no_grad():
283
- outputs = self.model.generate(
284
- inputs.input_ids, attention_mask=inputs.attention_mask,
285
- max_new_tokens=512,
286
- temperature=0.7,
287
- do_sample=True,
288
- pad_token_id=self.tokenizer.eos_token_id
289
- )
290
-
291
- # Decode response
292
- response = self.tokenizer.decode(outputs[0], skip_special_tokens=True)
293
-
294
- # Extract only the generated part (after the prompt)
295
- generated_text = response[len(prompt):].strip()
296
-
297
- return generated_text
298
-
299
- def generate_educational_feedback(self, code: str, student_level: str = "beginner") -> str:
300
- """
301
- Generate educational feedback using the fine-tuned model
302
-
303
- Args:
304
- code: Student's code to provide feedback on
305
- student_level: Student's skill level
306
-
307
- Returns:
308
- Generated educational feedback
309
- """
310
- if not self.model or not self.tokenizer:
311
- raise ValueError("Model not loaded. Call load_model() first.")
312
-
313
- # Format the prompt using the template from fine-tuning
314
- prompt = self.code_feedback_prompt_template.format(
315
- code=code,
316
- level=student_level
317
- )
318
-
319
- # Tokenize input
320
- inputs = self.tokenizer(
321
- prompt, return_tensors="pt", truncation=True, max_length=2048)
322
-
323
- # Generate response
324
- with torch.no_grad():
325
- outputs = self.model.generate(
326
- inputs.input_ids, attention_mask=inputs.attention_mask,
327
- max_new_tokens=512,
328
- temperature=0.7,
329
- do_sample=True,
330
- pad_token_id=self.tokenizer.eos_token_id
331
- )
332
-
333
- # Decode response
334
- response = self.tokenizer.decode(outputs[0], skip_special_tokens=True)
335
-
336
- # Extract only the generated part (after the prompt)
337
- generated_text = response[len(prompt):].strip()
338
-
339
- return generated_text
340
-
341
- def analyze_student_code(self, code: str, student_level: str = "beginner") -> List[CodeFeedback]:
342
- """
343
- Analyze student code and provide educational feedback using the fine-tuned model
344
-
345
- Args:
346
- code: The student's code to analyze
347
- student_level: Student's skill level ('beginner', 'intermediate', 'advanced')
348
-
349
- Returns:
350
- List of CodeFeedback objects
351
- """
352
- feedback_list = []
353
-
354
- # Use fine-tuned model for comprehensive code review
355
- try:
356
- code_review = self.generate_code_review(code, student_level)
357
- educational_feedback = self.generate_educational_feedback(
358
- code, student_level)
359
-
360
- # Create structured feedback from model output
361
- feedback_list.append(CodeFeedback(
362
- code_snippet=code,
363
- feedback_type="comprehensive_review",
364
- feedback_message=code_review,
365
- difficulty_level=student_level,
366
- learning_objectives=["code_analysis", "best_practices"]
367
- ))
368
-
369
- feedback_list.append(CodeFeedback(
370
- code_snippet=code,
371
- feedback_type="educational_guidance",
372
- feedback_message=educational_feedback,
373
- difficulty_level=student_level,
374
- learning_objectives=["learning", "improvement"]
375
- ))
376
-
377
- except Exception as e:
378
- logger.warning(
379
- f"Fine-tuned model failed, falling back to rule-based analysis: {e}")
380
- # Fallback to rule-based analysis if model fails
381
- feedback_list = self._fallback_analysis(code, student_level)
382
-
383
- return feedback_list
384
-
385
- def _fallback_analysis(self, code: str, student_level: str) -> List[CodeFeedback]:
386
- """Fallback analysis using rule-based methods if fine-tuned model fails"""
387
- feedback_list = []
388
-
389
- # Analyze syntax
390
- syntax_feedback = self._check_syntax(code, student_level)
391
- if syntax_feedback:
392
- feedback_list.append(syntax_feedback)
393
-
394
- # Analyze logic and structure
395
- logic_feedback = self._check_logic(code, student_level)
396
- if logic_feedback:
397
- feedback_list.extend(logic_feedback)
398
-
399
- # Check for optimization opportunities
400
- optimization_feedback = self._check_optimization(code, student_level)
401
- if optimization_feedback:
402
- feedback_list.append(optimization_feedback)
403
-
404
- # Provide style suggestions
405
- style_feedback = self._check_style(code, student_level)
406
- if style_feedback:
407
- feedback_list.append(style_feedback)
408
-
409
- return feedback_list
410
-
411
- def _check_syntax(self, code: str, student_level: str) -> Optional[CodeFeedback]:
412
- """Check for syntax errors and provide educational feedback"""
413
- # This would integrate with the fine-tuned model
414
- # For now, using basic pattern matching as placeholder
415
-
416
- common_syntax_errors = {
417
- r"print\s*\([^)]*\)\s*$": "Remember to add a colon after print statements in some contexts",
418
- r"if\s+[^:]+$": "Don't forget the colon after your if condition",
419
- r"for\s+[^:]+$": "Don't forget the colon after your for loop",
420
- }
421
-
422
- for pattern, message in common_syntax_errors.items():
423
- if re.search(pattern, code):
424
- return CodeFeedback(
425
- code_snippet=code,
426
- feedback_type="syntax",
427
- feedback_message=message,
428
- difficulty_level=student_level,
429
- learning_objectives=["syntax", "basic_python"]
430
- )
431
-
432
- return None
433
-
434
- def _check_logic(self, code: str, student_level: str) -> List[CodeFeedback]:
435
- """Check for logical errors and provide educational feedback"""
436
- feedback_list = []
437
-
438
- # Check for infinite loops
439
- if "while True:" in code and "break" not in code:
440
- feedback_list.append(CodeFeedback(
441
- code_snippet=code,
442
- feedback_type="logic",
443
- feedback_message="This while loop will run forever! Make sure to include a break statement or condition to exit the loop.",
444
- difficulty_level=student_level,
445
- learning_objectives=["control_flow", "loops"]
446
- ))
447
-
448
- # Check for unused variables
449
- # This is a simplified check - the actual model would be more sophisticated
450
- if "x = " in code and "x" not in code.replace("x = ", ""):
451
- feedback_list.append(CodeFeedback(
452
- code_snippet=code,
453
- feedback_type="logic",
454
- feedback_message="You created variable 'x' but didn't use it. Consider removing unused variables to keep your code clean.",
455
- difficulty_level=student_level,
456
- learning_objectives=["variables", "code_cleanliness"]
457
- ))
458
-
459
- return feedback_list
460
-
461
- def _check_optimization(self, code: str, student_level: str) -> Optional[CodeFeedback]:
462
- """Check for optimization opportunities"""
463
- # Check for nested loops that could be optimized
464
- if code.count("for") > 1 and code.count("in range") > 1:
465
- return CodeFeedback(
466
- code_snippet=code,
467
- feedback_type="optimization",
468
- feedback_message="You have nested loops here. Consider if you can optimize this to O(n) instead of O(n²).",
469
- suggested_improvement="Use a hashmap or set to reduce complexity",
470
- difficulty_level=student_level,
471
- learning_objectives=["algorithms",
472
- "complexity", "optimization"]
473
- )
474
-
475
- return None
476
-
477
- def _check_style(self, code: str, student_level: str) -> Optional[CodeFeedback]:
478
- """Check for style improvements"""
479
- # Check for meaningful variable names
480
- if "x" in code or "y" in code or "z" in code:
481
- return CodeFeedback(
482
- code_snippet=code,
483
- feedback_type="style",
484
- feedback_message="Consider using more descriptive variable names instead of x, y, z. This makes your code easier to understand.",
485
- difficulty_level=student_level,
486
- learning_objectives=["naming_conventions", "readability"]
487
- )
488
-
489
- return None
490
-
491
- def generate_explanation(self, concept: str, student_level: str) -> str:
492
- """
493
- Generate explanations for programming concepts based on student level
494
-
495
- Args:
496
- concept: The concept to explain
497
- student_level: Student's skill level
498
-
499
- Returns:
500
- Explanation tailored to the student's level
501
- """
502
- explanations = {
503
- "variables": {
504
- "beginner": "Variables are like labeled boxes where you store information. Think of 'name = \"John\"' as putting \"John\" in a box labeled 'name'.",
505
- "intermediate": "Variables are memory locations that store data. They have a name, type, and value. Python is dynamically typed, so the type is inferred.",
506
- "advanced": "Variables in Python are references to objects in memory. They're dynamically typed and use reference counting for memory management."
507
- },
508
- "loops": {
509
- "beginner": "Loops repeat code multiple times. 'for' loops are great when you know how many times to repeat, 'while' loops when you don't.",
510
- "intermediate": "Loops control program flow. 'for' iterates over sequences, 'while' continues until a condition is False. Consider time complexity.",
511
- "advanced": "Loops are fundamental control structures. Python's 'for' is actually a foreach loop. Consider iterator patterns and generator expressions."
512
- }
513
- }
514
-
515
- return explanations.get(concept, {}).get(student_level, f"Explanation for {concept} at {student_level} level")
516
-
517
- def _load_comprehensive_feedback_prompt(self) -> str:
518
- """Load the comprehensive feedback prompt template"""
519
- return """You are an expert programming tutor. Provide comprehensive educational feedback for the following student code.
520
-
521
- Student Code:
522
- {code}
523
-
524
- Student Level: {level}
525
-
526
- Please provide a detailed analysis in the following JSON format:
527
-
528
- {{
529
- "strengths": ["strength1", "strength2", "strength3"],
530
- "weaknesses": ["weakness1", "weakness2", "weakness3"],
531
- "issues": ["issue1", "issue2", "issue3"],
532
- "step_by_step_improvement": [
533
- "Step 1: Description of first improvement",
534
- "Step 2: Description of second improvement",
535
- "Step 3: Description of third improvement"
536
- ],
537
- "learning_points": [
538
- "Learning point 1: What the student should understand",
539
- "Learning point 2: Key concept to grasp",
540
- "Learning point 3: Best practice to follow"
541
- ],
542
- "review_summary": "A comprehensive review of the code highlighting key areas for improvement",
543
- "learning_objectives": ["objective1", "objective2", "objective3"],
544
- "estimated_time_to_improve": "5-10 minutes"
545
- }}
546
-
547
- Focus on educational value and constructive feedback that helps the student learn and improve."""
548
-
549
- def _load_comprehension_question_prompt(self) -> str:
550
- """Load the comprehension question generation prompt"""
551
- return """Based on the learning points and improvements discussed, generate a comprehension question to test the student's understanding.
552
-
553
- Learning Points: {learning_points}
554
- Code Issues: {issues}
555
- Student Level: {level}
556
-
557
- Generate a question that tests understanding of the key concepts discussed. The question should be appropriate for the student's level.
558
-
559
- Format your response as JSON:
560
- {{
561
- "question": "Your comprehension question here",
562
- "answer": "The correct answer",
563
- "explanation": "Detailed explanation of why this answer is correct"
564
- }}
565
-
566
- Make the question challenging but fair for the student's level."""
567
-
568
- def _load_code_fix_prompt(self) -> str:
569
- """Load the code fix generation prompt"""
570
- return """You are an expert programming tutor. Based on the analysis and learning points, provide an improved version of the student's code.
571
-
572
- Original Code:
573
- {code}
574
-
575
- Issues Identified: {issues}
576
- Learning Points: {learning_points}
577
- Student Level: {level}
578
-
579
- Provide an improved version of the code that addresses the issues while maintaining educational value. Include comments to explain the improvements.
580
-
581
- Format your response as JSON:
582
- {{
583
- "improved_code": "The improved code with comments",
584
- "fix_explanation": "Detailed explanation of what was changed and why"
585
- }}
586
-
587
- Focus on educational improvements that help the student understand better practices."""
588
-
589
- def adapt_feedback_complexity(self, feedback: CodeFeedback, student_level: str) -> CodeFeedback:
590
- """
591
- Adapt feedback complexity based on student level
592
-
593
- Args:
594
- feedback: Original feedback
595
- student_level: Student's skill level
596
-
597
- Returns:
598
- Adapted feedback
599
- """
600
- if student_level == "beginner":
601
- # Simplify language and add more examples
602
- feedback.feedback_message = feedback.feedback_message.replace(
603
- "O(n²)", "quadratic time (slower)"
604
- ).replace(
605
- "O(n)", "linear time (faster)"
606
- )
607
- elif student_level == "advanced":
608
- # Add more technical details
609
- if "optimization" in feedback.feedback_type:
610
- feedback.feedback_message += " Consider the space-time tradeoff and cache locality."
611
-
612
- return feedback
613
-
614
- def generate_comprehensive_feedback(self, code: str, student_level: str = "beginner") -> ComprehensiveFeedback:
615
- """
616
- Generate comprehensive educational feedback with all components
617
-
618
- Args:
619
- code: Student's code to analyze
620
- student_level: Student's skill level
621
-
622
- Returns:
623
- ComprehensiveFeedback object with all educational components
624
- """
625
- if not self.model or not self.tokenizer:
626
- raise ValueError("Model not loaded. Call load_model() first.")
627
-
628
- try:
629
- # Step 1: Generate comprehensive analysis
630
- comprehensive_analysis = self._generate_comprehensive_analysis(
631
- code, student_level)
632
-
633
- # Step 2: Generate comprehension question
634
- comprehension_data = self._generate_comprehension_question(
635
- comprehensive_analysis["learning_points"],
636
- comprehensive_analysis["issues"],
637
- student_level
638
- )
639
-
640
- # Step 3: Generate improved code
641
- code_fix_data = self._generate_code_fix(
642
- code,
643
- comprehensive_analysis["issues"],
644
- comprehensive_analysis["learning_points"],
645
- student_level
646
- )
647
-
648
- # Create comprehensive feedback object
649
- return ComprehensiveFeedback(
650
- code_snippet=code,
651
- student_level=student_level,
652
- strengths=comprehensive_analysis["strengths"],
653
- weaknesses=comprehensive_analysis["weaknesses"],
654
- issues=comprehensive_analysis["issues"],
655
- step_by_step_improvement=comprehensive_analysis["step_by_step_improvement"],
656
- learning_points=comprehensive_analysis["learning_points"],
657
- review_summary=comprehensive_analysis["review_summary"],
658
- comprehension_question=comprehension_data["question"],
659
- comprehension_answer=comprehension_data["answer"],
660
- explanation=comprehension_data["explanation"],
661
- improved_code=code_fix_data["improved_code"],
662
- fix_explanation=code_fix_data["fix_explanation"],
663
- difficulty_level=student_level,
664
- learning_objectives=comprehensive_analysis["learning_objectives"],
665
- estimated_time_to_improve=comprehensive_analysis["estimated_time_to_improve"]
666
- )
667
-
668
- except Exception as e:
669
- logger.error(f"Error generating comprehensive feedback: {e}")
670
- # Return a basic comprehensive feedback if model fails
671
- return self._create_fallback_comprehensive_feedback(code, student_level)
672
-
673
- def _generate_comprehensive_analysis(self, code: str, student_level: str) -> Dict:
674
- """Generate comprehensive analysis using the fine-tuned model"""
675
- prompt = self.comprehensive_feedback_prompt.format(
676
- code=code,
677
- level=student_level
678
- )
679
-
680
- response = self._generate_model_response(prompt)
681
-
682
- try:
683
- # Try to parse JSON response
684
- import json
685
- return json.loads(response)
686
- except json.JSONDecodeError:
687
- logger.warning("Failed to parse JSON response, using fallback")
688
- return self._create_fallback_analysis(code, student_level)
689
-
690
- def _generate_comprehension_question(self, learning_points: List[str], issues: List[str], student_level: str) -> Dict:
691
- """Generate comprehension question using the fine-tuned model"""
692
- prompt = self.comprehension_question_prompt.format(
693
- learning_points=", ".join(learning_points),
694
- issues=", ".join(issues),
695
- level=student_level
696
- )
697
-
698
- response = self._generate_model_response(prompt)
699
-
700
- try:
701
- import json
702
- return json.loads(response)
703
- except json.JSONDecodeError:
704
- logger.warning(
705
- "Failed to parse comprehension question JSON, using fallback")
706
- return {
707
- "question": "What is the main concept you learned from this code review?",
708
- "answer": "The main concept is understanding code structure and best practices.",
709
- "explanation": "This question tests your understanding of the key learning points discussed."
710
- }
711
-
712
- def _generate_code_fix(self, code: str, issues: List[str], learning_points: List[str], student_level: str) -> Dict:
713
- """Generate improved code using the fine-tuned model"""
714
- prompt = self.code_fix_prompt.format(
715
- code=code,
716
- issues=", ".join(issues),
717
- learning_points=", ".join(learning_points),
718
- level=student_level
719
- )
720
-
721
- response = self._generate_model_response(prompt)
722
-
723
- try:
724
- import json
725
- return json.loads(response)
726
- except json.JSONDecodeError:
727
- logger.warning("Failed to parse code fix JSON, using fallback")
728
- return {
729
- "improved_code": "# Improved version of your code\n# Add comments and improvements here",
730
- "fix_explanation": "This is a fallback improved version. The model should provide specific improvements."
731
- }
732
-
733
- def _generate_model_response(self, prompt: str) -> str:
734
- """Generate response from the fine-tuned model"""
735
- inputs = self.tokenizer(
736
- prompt, return_tensors="pt", truncation=True, max_length=2048)
737
-
738
- # Move to CPU if no GPU available
739
- if not torch.cuda.is_available():
740
- inputs = inputs.to("cpu")  # keep the BatchEncoding so inputs.input_ids still works below
741
-
742
- with torch.no_grad():
743
- outputs = self.model.generate(
744
- inputs.input_ids, attention_mask=inputs.attention_mask,
745
- max_new_tokens=512,
746
- temperature=0.7,
747
- do_sample=True,
748
- pad_token_id=self.tokenizer.eos_token_id
749
- )
750
-
751
- response = self.tokenizer.decode(outputs[0], skip_special_tokens=True)
752
- return response[len(prompt):].strip()
753
-
754
- def _create_fallback_analysis(self, code: str, student_level: str) -> Dict:
755
- """Create fallback analysis when model fails"""
756
- return {
757
- "strengths": ["Your code has a clear structure", "You're using appropriate data types"],
758
- "weaknesses": ["Could improve variable naming", "Consider adding comments"],
759
- "issues": ["Basic syntax and style issues"],
760
- "step_by_step_improvement": [
761
- "Step 1: Add descriptive variable names",
762
- "Step 2: Include comments explaining your logic",
763
- "Step 3: Consider code optimization"
764
- ],
765
- "learning_points": [
766
- "Good variable naming improves code readability",
767
- "Comments help others understand your code",
768
- "Always consider efficiency in your solutions"
769
- ],
770
- "review_summary": "Your code works but could be improved with better practices.",
771
- "learning_objectives": ["code_quality", "best_practices", "readability"],
772
- "estimated_time_to_improve": "10-15 minutes"
773
- }
774
-
775
- def _create_fallback_comprehensive_feedback(self, code: str, student_level: str) -> ComprehensiveFeedback:
776
- """Create fallback comprehensive feedback when model fails"""
777
- fallback_analysis = self._create_fallback_analysis(code, student_level)
778
-
779
- return ComprehensiveFeedback(
780
- code_snippet=code,
781
- student_level=student_level,
782
- strengths=fallback_analysis["strengths"],
783
- weaknesses=fallback_analysis["weaknesses"],
784
- issues=fallback_analysis["issues"],
785
- step_by_step_improvement=fallback_analysis["step_by_step_improvement"],
786
- learning_points=fallback_analysis["learning_points"],
787
- review_summary=fallback_analysis["review_summary"],
788
- comprehension_question="What is the importance of good variable naming in programming?",
789
- comprehension_answer="Good variable naming makes code more readable and maintainable.",
790
- explanation="Descriptive variable names help other developers (and yourself) understand what the code does.",
791
- improved_code="# Improved version\n# Add your improvements here",
792
- fix_explanation="This is a fallback version. The model should provide specific improvements.",
793
- difficulty_level=student_level,
794
- learning_objectives=fallback_analysis["learning_objectives"],
795
- estimated_time_to_improve=fallback_analysis["estimated_time_to_improve"]
796
- )
797
-
798
-
799
- def main():
800
- """Main function to demonstrate the system with fine-tuned model"""
801
- print("Generative AI for Programming Education")
802
- print("Using Fine-tuned CodeLlama-7B Model")
803
- print("=" * 50)
804
-
805
- # System information
806
- print(f"Available GPUs: {torch.cuda.device_count()}")
807
- if torch.cuda.is_available():
808
- print("GPU Memory before loading:")
809
- get_gpu_memory()
810
- else:
811
- print("System Memory before loading:")
812
- get_system_memory()
813
-
814
- # Initialize the system with your fine-tuned model path
815
- # Update this path to point to your actual fine-tuned model
816
- model_path = r"C:\Users\farou\OneDrive - Aston University\finetunning"
817
- ai_tutor = ProgrammingEducationAI(model_path)
818
-
819
- try:
820
- # Load the fine-tuned model
821
- print("Loading fine-tuned model...")
822
- ai_tutor.load_model()
823
- print("✓ Model loaded successfully!")
824
-
825
- # Clear cache after loading
826
- clear_cuda_cache()
827
- if torch.cuda.is_available():
828
- print("GPU Memory after loading:")
829
- get_gpu_memory()
830
- else:
831
- print("System Memory after loading:")
832
- get_system_memory()
833
-
834
- # Example student code for testing
835
- student_code = """
836
- def find_duplicates(numbers):
837
- x = []
838
- for i in range(len(numbers)):
839
- for j in range(i+1, len(numbers)):
840
- if numbers[i] == numbers[j]:
841
- x.append(numbers[i])
842
- return x
843
-
844
- # Test the function
845
- result = find_duplicates([1, 2, 3, 2, 4, 5, 3])
846
- print(result)
847
- """
848
-
849
- print(f"\nAnalyzing student code:\n{student_code}")
850
-
851
- # Get feedback using fine-tuned model
852
- feedback_list = ai_tutor.analyze_student_code(student_code, "beginner")
853
-
854
- print("\n" + "="*50)
855
- print("FINE-TUNED MODEL FEEDBACK:")
856
- print("="*50)
857
-
858
- for i, feedback in enumerate(feedback_list, 1):
859
- print(f"\n{i}. {feedback.feedback_type.upper()}:")
860
- print(f" {feedback.feedback_message}")
861
- if feedback.suggested_improvement:
862
- print(f" Suggestion: {feedback.suggested_improvement}")
863
- print(
864
- f" Learning objectives: {', '.join(feedback.learning_objectives)}")
865
-
866
- # Demonstrate direct model calls
867
- print("\n" + "="*50)
868
- print("DIRECT MODEL GENERATION:")
869
- print("="*50)
870
-
871
- # Code review
872
- print("\n1. CODE REVIEW:")
873
- code_review = ai_tutor.generate_code_review(student_code, "beginner")
874
- print(code_review)
875
-
876
- # Educational feedback
877
- print("\n2. EDUCATIONAL FEEDBACK:")
878
- educational_feedback = ai_tutor.generate_educational_feedback(
879
- student_code, "beginner")
880
- print(educational_feedback)
881
-
882
- # Demonstrate comprehensive feedback system
883
- print("\n" + "="*50)
884
- print("COMPREHENSIVE EDUCATIONAL FEEDBACK SYSTEM:")
885
- print("="*50)
886
-
887
- comprehensive_feedback = ai_tutor.generate_comprehensive_feedback(
888
- student_code, "beginner")
889
-
890
- # Display comprehensive feedback
891
- print("\n📊 CODE ANALYSIS:")
892
- print("="*30)
893
-
894
- print("\n✅ STRENGTHS:")
895
- for i, strength in enumerate(comprehensive_feedback.strengths, 1):
896
- print(f" {i}. {strength}")
897
-
898
- print("\n❌ WEAKNESSES:")
899
- for i, weakness in enumerate(comprehensive_feedback.weaknesses, 1):
900
- print(f" {i}. {weakness}")
901
-
902
- print("\n⚠️ ISSUES:")
903
- for i, issue in enumerate(comprehensive_feedback.issues, 1):
904
- print(f" {i}. {issue}")
905
-
906
- print("\n📝 STEP-BY-STEP IMPROVEMENT GUIDE:")
907
- print("="*40)
908
- for i, step in enumerate(comprehensive_feedback.step_by_step_improvement, 1):
909
- print(f" Step {i}: {step}")
910
-
911
- print("\n🎓 LEARNING POINTS:")
912
- print("="*25)
913
- for i, point in enumerate(comprehensive_feedback.learning_points, 1):
914
- print(f" {i}. {point}")
915
-
916
- print("\n📋 REVIEW SUMMARY:")
917
- print("="*20)
918
- print(f" {comprehensive_feedback.review_summary}")
919
-
920
- print("\n❓ COMPREHENSION QUESTION:")
921
- print("="*30)
922
- print(f" Question: {comprehensive_feedback.comprehension_question}")
923
- print(f" Answer: {comprehensive_feedback.comprehension_answer}")
924
- print(f" Explanation: {comprehensive_feedback.explanation}")
925
-
926
- print("\n🔧 IMPROVED CODE:")
927
- print("="*20)
928
- print(comprehensive_feedback.improved_code)
929
-
930
- print("\n💡 FIX EXPLANATION:")
931
- print("="*20)
932
- print(f" {comprehensive_feedback.fix_explanation}")
933
-
934
- print("\n📊 METADATA:")
935
- print("="*15)
936
- print(f" Student Level: {comprehensive_feedback.student_level}")
937
- print(
938
- f" Learning Objectives: {', '.join(comprehensive_feedback.learning_objectives)}")
939
- print(
940
- f" Estimated Time to Improve: {comprehensive_feedback.estimated_time_to_improve}")
941
-
942
- except Exception as e:
943
- print(f"Error: {e}")
944
- print(
945
- "Make sure to update the model_path variable to point to your fine-tuned model.")
946
-
947
-
948
- if __name__ == "__main__":
949
- main()
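A recurring pattern in the deleted module above is calling `json.loads` directly on raw model output and falling back to canned feedback whenever parsing fails; in practice models often wrap JSON in explanatory prose. As a hedged sketch (the `extract_json` helper and its names are hypothetical, not part of the original code), a slightly more forgiving extractor can rescue many such responses:

```python
import json
import re

def extract_json(response: str, fallback: dict) -> dict:
    """Parse the first JSON object found in a model response.

    Try a raw parse first, then the outermost {...} span,
    then return the caller-supplied fallback.
    """
    try:
        return json.loads(response)
    except json.JSONDecodeError:
        pass
    match = re.search(r"\{.*\}", response, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            pass
    return fallback

# Example: prose-wrapped JSON still parses
reply = 'Sure! Here is the analysis:\n{"strengths": ["clear structure"]}'
print(extract_json(reply, {"strengths": []}))  # {'strengths': ['clear structure']}
```

The same helper could replace each of the three `json.loads(response)` call sites in `_generate_comprehensive_analysis`, `_generate_comprehension_question`, and `_generate_code_fix`, keeping their existing fallback dicts as the `fallback` argument.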
 
 
src/requirements.txt DELETED
@@ -1 +0,0 @@
1
- streamlit>=1.28.0
 
 
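Note that the deleted `requirements.txt` pinned only Streamlit, while `fine.py` also imports `torch` and `transformers` (and `low_cpu_mem_usage` loading typically relies on `accelerate`). A fuller dependency list for the Space might plausibly look like the following; the version pins are assumptions, not taken from the source:

```text
streamlit>=1.28.0
torch>=2.0.0
transformers>=4.35.0
accelerate>=0.24.0
```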
src/streamlit_app.py CHANGED
@@ -1,229 +1,40 @@
1
- """
2
- AI Programming Tutor - Full Version with Fine-tuned Model Support
3
- Runs on Hugging Face Spaces; surfaces load errors instead of silently falling back
4
- Version: 2.0 - No Demo Fallback, Shows Detailed Errors
5
- """
6
-
7
  import streamlit as st
8
- import os
9
-
10
- # Configure page
11
- st.set_page_config(
12
- page_title="AI Programming Tutor",
13
- page_icon="🤖",
14
- layout="wide"
15
- )
16
-
17
- # Try to import the fine-tuned model components
18
- try:
19
- from fine import ProgrammingEducationAI, ComprehensiveFeedback
20
- MODEL_AVAILABLE = True
21
- except Exception as e:
22
- MODEL_AVAILABLE = False
23
-
24
- # Note: Using public model - no HF_TOKEN required
25
- HF_TOKEN = None # Set to None for public model
26
-
27
-
28
- # Demo feedback function removed - app now shows actual errors instead of falling back to demo
29
-
30
-
31
- def main():
32
- st.title("🤖 AI Programming Tutor")
33
- st.markdown("### Enhancing Programming Education with Generative AI")
34
-
35
- # Sidebar for model selection
36
- with st.sidebar:
37
- st.header("⚙️ Settings")
38
-
39
- if MODEL_AVAILABLE:
40
- model_option = st.selectbox(
41
- "Choose Model:",
42
- ["Use Demo Mode", "Use Fine-tuned Model"],
43
-                 help="Demo mode works immediately. Fine-tuned model requires loading."
-             )
-         else:
-             model_option = "Use Demo Mode"
-             st.warning("⚠️ Fine-tuned model not available - using demo mode")
-             st.info(
-                 "💡 To enable AI model: Make sure your model is uploaded to HF Model Hub as public")
-
-         student_level = st.selectbox(
-             "Student Level:",
-             ["beginner", "intermediate", "advanced"],
-             help="Adjusts feedback complexity"
-         )
-
-         st.markdown("---")
-         st.markdown("### 📚 About")
-         st.markdown("""
-         This AI tutor provides structured feedback on programming code:
-
-         - **Strengths**: What you did well
-         - **Weaknesses**: Areas for improvement
-         - **Issues**: Problems to fix
-         - **Improvements**: Step-by-step guidance
-         - **Learning Points**: Key concepts to understand
-         - **Questions**: Test your comprehension
-         - **Code Fix**: Improved version
-         """)
-
-         # Show model status
-         if MODEL_AVAILABLE:
-             st.success("✅ Fine-tuned model available")
-             st.success("🌐 Using public model - no authentication required")
-
-             # Show current model path
-             st.info(f"📁 Model path: FaroukTomori/codellama-7b-programming-education")
-
-             # Show if model is loaded in session
-             if 'ai_tutor' in st.session_state:
-                 st.success("✅ Model loaded in session")
-             else:
-                 st.info("⏳ Model not loaded yet - will load when you analyze code")
-         else:
-             st.error("❌ Fine-tuned model not available")
-             st.error("🔍 Check the import error above to fix the issue")
-
-     # Main content
-     st.markdown("---")
-
-     # Code input
-     code_input = st.text_area(
-         "📝 Enter your code here:",
-         height=200,
-         placeholder="def hello_world():\n    print('Hello, World!')\n    return 'success'",
-         help="Paste your Python code here for analysis"
-     )
-
-     if st.button("🚀 Analyze Code", type="primary"):
-         if not code_input.strip():
-             st.warning("⚠️ Please enter some code to analyze")
-             return
-
-         with st.spinner("🤖 Analyzing your code..."):
-             try:
-                 if model_option == "Use Fine-tuned Model" and MODEL_AVAILABLE:
-                     # Check if model is already loaded
-                     if 'ai_tutor' not in st.session_state:
-                         with st.spinner("🚀 Loading fine-tuned model (this may take 5-10 minutes on HF Spaces)..."):
-                             try:
-                                 # Use Hugging Face Model Hub
-                                 # Replace with your actual model name
-                                 model_path = "FaroukTomori/codellama-7b-programming-education"
-
-                                 # Using public model - no authentication required
-                                 st.info(
-                                     "🌐 Using public model - no authentication required")
-
-                                 st.info(
-                                     f"🔍 Attempting to load model from: {model_path}")
-
-                                 ai_tutor = ProgrammingEducationAI(model_path)
-                                 st.success(
-                                     "✅ Model class instantiated successfully")
-
-                                 ai_tutor.load_model()
-                                 st.session_state['ai_tutor'] = ai_tutor
-                                 st.success(
-                                     "✅ Fine-tuned model loaded successfully!")
-                             except Exception as e:
-                                 st.error(f"❌ Error loading model: {e}")
-                                 st.error("🔍 Full error details:")
-                                 st.code(str(e), language="text")
-                                 st.info(
-                                     "💡 Check the error above to fix the model loading issue")
-                                 return  # Stop here and show the error
-
-                     if 'ai_tutor' in st.session_state:
-                         # Use fine-tuned model
-                         try:
-                             feedback = st.session_state['ai_tutor'].generate_comprehensive_feedback(
-                                 code_input, student_level)
-                             st.success(
-                                 "✅ Feedback generated using fine-tuned model!")
-                         except Exception as e:
-                             st.error(f"❌ Error generating feedback: {e}")
-                             st.error("🔍 Full error details:")
-                             st.code(str(e), language="text")
-                             st.info(
-                                 "💡 Check the error above to fix the feedback generation issue")
-                             return
-                     else:
-                         # Model failed to load - show error instead of falling back
-                         st.error(
-                             "❌ Model failed to load - cannot generate feedback")
-                         st.info("💡 Fix the model loading error above first")
-                         return
-                 else:
-                     # Model not available or not selected - show error
-                     if not MODEL_AVAILABLE:
-                         st.error("❌ Fine-tuned model components not available")
-                         st.error("🔍 Check the import error in the sidebar")
-                         return
-                     else:
-                         st.error(
-                             "❌ Please select 'Use Fine-tuned Model' to analyze with AI")
-                         st.info("💡 The model is available but not selected")
-                         return
-
-                 # Display AI feedback in tabs
-                 tab1, tab2, tab3, tab4, tab5, tab6, tab7 = st.tabs([
-                     "✅ Strengths", "❌ Weaknesses", "🚨 Issues",
-                     "📈 Improvements", "🎓 Learning", "❓ Questions", "🔧 Code Fix"
-                 ])
-
-                 with tab1:
-                     st.subheader("✅ Code Strengths")
-                     for strength in feedback.strengths:
-                         st.markdown(f"• {strength}")
-
-                 with tab2:
-                     st.subheader("❌ Areas for Improvement")
-                     for weakness in feedback.weaknesses:
-                         st.markdown(f"• {weakness}")
-
-                 with tab3:
-                     st.subheader("🚨 Issues to Address")
-                     for issue in feedback.issues:
-                         st.markdown(f"• {issue}")
-
-                 with tab4:
-                     st.subheader("📈 Step-by-Step Improvements")
-                     for i, step in enumerate(feedback.step_by_step_improvement, 1):
-                         st.markdown(f"**Step {i}:** {step}")
-
-                 with tab5:
-                     st.subheader("🎓 Key Learning Points")
-                     for point in feedback.learning_points:
-                         st.markdown(f"• {point}")
-
-                 with tab6:
-                     st.subheader("❓ Comprehension Questions")
-                     st.markdown(
-                         f"**Question:** {feedback.comprehension_question}")
-                     st.markdown(f"**Answer:** {feedback.comprehension_answer}")
-                     st.markdown(f"**Explanation:** {feedback.explanation}")
-
-                 with tab7:
-                     st.subheader("🔧 Improved Code")
-                     st.code(feedback.improved_code, language="python")
-                     st.markdown("**What Changed:**")
-                     st.info(feedback.fix_explanation)
-
-                 st.success(
-                     "✅ Analysis complete! Review each tab for comprehensive feedback.")
-
-             except Exception as e:
-                 st.error(f"❌ Error during analysis: {e}")
-                 st.error("🔍 Full error details:")
-                 st.code(str(e), language="text")
-                 st.info("💡 Check the error above to understand what went wrong")
-
-
- if __name__ == "__main__":
-     try:
-         main()
-     except Exception as e:
-         st.error(f"❌ Application error: {e}")
-         st.info("💡 Please refresh the page and try again")
+ import altair as alt
+ import numpy as np
+ import pandas as pd
  import streamlit as st

+ """
+ # Welcome to Streamlit!
+
+ Edit `/streamlit_app.py` to customize this app to your heart's desire :heart:.
+ If you have any questions, check out our [documentation](https://docs.streamlit.io) and [community
+ forums](https://discuss.streamlit.io).
+
+ In the meantime, below is an example of what you can do with just a few lines of code:
+ """
+
+ num_points = st.slider("Number of points in spiral", 1, 10000, 1100)
+ num_turns = st.slider("Number of turns in spiral", 1, 300, 31)
+
+ indices = np.linspace(0, 1, num_points)
+ theta = 2 * np.pi * num_turns * indices
+ radius = indices
+
+ x = radius * np.cos(theta)
+ y = radius * np.sin(theta)
+
+ df = pd.DataFrame({
+     "x": x,
+     "y": y,
+     "idx": indices,
+     "rand": np.random.randn(num_points),
+ })
+
+ st.altair_chart(alt.Chart(df, height=700, width=700)
+     .mark_point(filled=True)
+     .encode(
+         x=alt.X("x", axis=None),
+         y=alt.Y("y", axis=None),
+         color=alt.Color("idx", legend=None, scale=alt.Scale()),
+         size=alt.Size("rand", legend=None, scale=alt.Scale(range=[1, 150])),
+     ))