---
language:
- en
license: apache-2.0
tags:
- code
- code-review
- lora
- tinyllama
- python
- fine-tuned
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
---

# AI Code Review Assistant — TinyLlama 1.1B (LoRA)

Fine-tuned version of TinyLlama 1.1B for automated Python code review, trained on CodeSearchNet using LoRA adapters.

## Model Details

- **Base model:** TinyLlama/TinyLlama-1.1B-Chat-v1.0
- **Fine-tuning method:** LoRA (r=16, alpha=32)
- **Trainable parameters:** ~0.58% of the base model's parameters
- **Training data:** 10,000 Python functions from CodeSearchNet
- **Training time:** 32 minutes on a Kaggle T4 GPU

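The trainable-parameter fraction is easy to sanity-check by hand: a LoRA adapter of rank `r` on a weight of shape `(d_out, d_in)` adds `r * (d_in + d_out)` parameters. The sketch below is illustrative only (the card does not state which modules were targeted); it assumes adapters on the four attention projections and uses TinyLlama 1.1B's published dimensions (hidden size 2048, 22 layers, grouped-query attention with key/value projection dim 256):

```python
def lora_params(d_in: int, d_out: int, r: int = 16) -> int:
    """Parameters added by one LoRA adapter: A is (r, d_in), B is (d_out, r)."""
    return r * (d_in + d_out)

hidden, kv_dim, layers, r = 2048, 256, 22, 16

per_layer = (
    lora_params(hidden, hidden, r)    # q_proj
    + lora_params(hidden, kv_dim, r)  # k_proj
    + lora_params(hidden, kv_dim, r)  # v_proj
    + lora_params(hidden, hidden, r)  # o_proj
)
total = per_layer * layers
print(f"LoRA params (attention projections only): {total:,}")  # 4,505,600
```

The exact total depends on which modules LoRA targets; also adapting the MLP projections roughly triples this count.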
## Evaluation Results

| Metric | Base Model | Fine-Tuned |
|--------|-----------|------------|
| ROUGE-L | 0.1541 | 0.5573 (+261%) |
| BERTScore F1 | 0.8226 | 0.9265 (+12.6%) |

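ROUGE-L, reported above, scores the longest common subsequence (LCS) of tokens shared between the generated review and the reference. Real evaluations typically use a library such as `rouge_score` or `evaluate`; the pure-Python sketch below only illustrates the computation:

```python
def lcs_len(a: list, b: list) -> int:
    """Length of the longest common subsequence of two token lists."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l(candidate: str, reference: str) -> float:
    """ROUGE-L F1 on whitespace-tokenized strings."""
    c, r = candidate.split(), reference.split()
    lcs = lcs_len(c, r)
    if lcs == 0:
        return 0.0
    prec, rec = lcs / len(c), lcs / len(r)
    return 2 * prec * rec / (prec + rec)

print(rouge_l("add two numbers", "function to add two numbers"))  # 0.75
```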
## Usage
````python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

# Load the base model, then attach the LoRA adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    torch_dtype=torch.float32,  # use torch.float16 on a GPU to halve memory
)
model = PeftModel.from_pretrained(
    base_model,
    "Swarnimm22HF/ai-code-review-tinyllama",
)
model.eval()

tokenizer = AutoTokenizer.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
)

# The prompt must follow the instruction template used during fine-tuning.
prompt = """### Instruction:
Review the following Python function.

### Code:
```python
def divide(a, b):
    return a / b
```

### Response:"""

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
````

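Because the adapter was trained on prompts in the exact format shown above, generation quality degrades if the template drifts. A small hypothetical helper (not part of the repository) that builds a conforming prompt:

````python
def build_review_prompt(code: str) -> str:
    """Build a review prompt matching the fine-tuning template shown above."""
    return (
        "### Instruction:\n"
        "Review the following Python function.\n\n"
        "### Code:\n"
        f"```python\n{code}\n```\n\n"
        "### Response:"
    )

print(build_review_prompt("def divide(a, b):\n    return a / b"))
````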
## Full Project

GitHub: [AI Code Review Assistant](https://github.com/Swarnimm22/AI_Code_Review_Assistant)