---
title: AICELS
emoji: 🔥
colorFrom: yellow
colorTo: indigo
sdk: gradio
app_file: app.py
pinned: false
license: mit
short_description: AI Code Evaluation and Learning System - Correctness First
---

# AICELS: AI-Powered Code Evaluation and Learning System

**Correctness First, Quality Second**

AICELS evaluates Python code in two phases, then makes recommendations:

1. **Priority 0 - Correctness**: Does your code run without errors?
2. **Quality Assessment**: How well does it follow best practices?
3. **How can it be improved?**: AICELS suggests a learning path

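The two-phase idea above can be sketched in a few lines of Python. This is a minimal illustration of "correctness first" (check that the code parses and runs before judging style); `check_correctness` is a hypothetical helper written for this README, not the actual AICELS implementation:

```python
def check_correctness(code: str):
    """Phase 1: does the submission parse and run without errors?"""
    try:
        compile(code, "<submission>", "exec")  # syntax check
    except SyntaxError as e:
        return False, f"Syntax error on line {e.lineno}: {e.msg}"
    try:
        exec(code, {"__name__": "__submission__"})  # runtime check in a fresh namespace
    except Exception as e:
        return False, f"Runtime error: {type(e).__name__}: {e}"
    return True, "Code runs without errors"

ok, message = check_correctness("def add(a, b):\n    return a + b\n\nresult = add(2, 3)")
```

Only if this phase passes does a quality assessment become meaningful.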
|
## Features

- ✅ Syntax and runtime error detection
- PEP 8 compliance checking
- Documentation quality assessment
- Naming convention validation
- Personalized learning paths
- Quality grading (A/B/C/D)

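As a toy example of how checks like these can feed a letter grade, the sketch below counts missing docstrings and non-snake_case function names and maps the issue count to A-D. It is a simplified illustration written for this README; `assess_quality` and its scoring rule are assumptions, not the grading logic AICELS actually uses:

```python
import ast
import re


def assess_quality(code: str) -> str:
    """Toy quality grade (A/B/C/D): one issue per missing docstring
    or non-snake_case function name, capped at grade D."""
    tree = ast.parse(code)
    issues = 0
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.ClassDef)):
            if ast.get_docstring(node) is None:  # documentation check
                issues += 1
            if isinstance(node, ast.FunctionDef) and not re.fullmatch(
                r"[a-z_][a-z0-9_]*", node.name
            ):  # naming convention check
                issues += 1
    return "ABCD"[min(issues, 3)]


grade = assess_quality("def add(a, b):\n    '''Return a + b.'''\n    return a + b\n")
```

A real grader would also weigh PEP 8 violations and error severity rather than a flat issue count.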
|
## How to Use

1. Paste your Python code in the input box
2. Click "Evaluate Code"
3. Review your correctness status and quality grade
4. Follow the personalized learning path to improve

|
## Philosophy

Unlike traditional code quality tools, AICELS prioritizes functional correctness before style. No amount of beautiful formatting can fix code that doesn't work! The app was developed following extensive consultations and discussions with a community of teachers and educators.

|
## Development

Developed collaboratively by Paola Di Maio (Ronin Institute, W3C AI Knowledge Representation Community Group), working closely with AI coding agents.

Currently in the testing phase: the Colab version works well, and the Hugging Face deployment is being refined.

---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference