How to use from llama.cpp

Install from WinGet (Windows)

winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf iq2i/ai-code-review:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf iq2i/ai-code-review:Q4_K_M

Use a pre-built binary

# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf iq2i/ai-code-review:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf iq2i/ai-code-review:Q4_K_M

Build from source code

git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf iq2i/ai-code-review:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf iq2i/ai-code-review:Q4_K_M

Use Docker

docker model run hf.co/iq2i/ai-code-review:Q4_K_M
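Once llama-server is running, it exposes an OpenAI-compatible HTTP API, by default on port 8080. Below is a minimal stdlib-only client sketch; the endpoint path and port are llama-server defaults and may differ in your setup, and the sample code snippet is purely illustrative:

```python
import json
import urllib.request

# Default llama-server address; adjust host/port if you started it differently.
SERVER_URL = "http://localhost:8080/v1/chat/completions"

def build_review_request(code: str) -> dict:
    """Build an OpenAI-style chat payload asking the model for a review."""
    return {
        "messages": [
            {"role": "user", "content": f"Review this code: {code}"}
        ],
        "max_tokens": 512,
    }

def request_review(code: str) -> str:
    """POST the payload to the local server and return the review text."""
    payload = json.dumps(build_review_request(code)).encode("utf-8")
    req = urllib.request.Request(
        SERVER_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(request_review("$db->query(\"SELECT * FROM users WHERE id = $id\");"))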
AI Code Review Model
Multi-language code review model optimized for automated code review in CI/CD pipelines.
Model Details
- Base Model: Qwen/Qwen2.5-Coder-1.5B-Instruct
- Training Method: LoRA fine-tuning with MLX
- Format: GGUF (Q4_K_M quantization)
- Purpose: Automated code review for CI/CD pipelines
Usage
Docker (Recommended)
docker pull ghcr.io/iq2i/ai-code-review:latest
# Review your codebase
docker run --rm -v $(pwd):/workspace ghcr.io/iq2i/ai-code-review:latest /workspace/src
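In a CI pipeline you may want to drive the container from a script. The sketch below shells out to the `docker run` command above; it assumes the image accepts a path argument and writes the review to stdout, mirroring the usage shown, but check the project README for the image's exact interface:

```python
import subprocess
from pathlib import Path

IMAGE = "ghcr.io/iq2i/ai-code-review:latest"

def build_docker_cmd(src: Path) -> list:
    """Assemble the docker invocation shown above for a local source dir."""
    return [
        "docker", "run", "--rm",
        "-v", f"{src}:/workspace",  # mount the code to be reviewed
        IMAGE,
        "/workspace",               # path inside the container to review
    ]

def review_directory(src_dir: str) -> str:
    """Run the review container and return whatever it prints to stdout."""
    src = Path(src_dir).resolve()
    result = subprocess.run(
        build_docker_cmd(src), capture_output=True, text=True, check=True
    )
    return result.stdout
```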
llama.cpp
# Download the model
wget https://huggingface.co/iq2i/ai-code-review/resolve/main/model-Q4_K_M.gguf
# Run inference
./llama-cli -m model-Q4_K_M.gguf -p "Review this code: ..."
Python (llama-cpp-python)
from llama_cpp import Llama

# Load the quantized model (downloaded as shown above)
llm = Llama(model_path="model-Q4_K_M.gguf")

# Completion-style call; the response is an OpenAI-style dict
output = llm("Review this code: ...", max_tokens=512)
print(output["choices"][0]["text"])
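Since the training data used the ChatML conversation format, llama-cpp-python's chat-completion API may match the model's expected input better than a raw completion call. A sketch follows; the system prompt wording is an illustrative assumption, not the prompt used during fine-tuning:

```python
def build_messages(code: str) -> list:
    """Build an OpenAI-style message list for a review request.

    The system prompt below is illustrative; the card does not document
    the exact prompt used during fine-tuning.
    """
    return [
        {"role": "system", "content": "You are a concise code reviewer."},
        {"role": "user", "content": f"Review this code:\n{code}"},
    ]

if __name__ == "__main__":
    from llama_cpp import Llama  # requires the llama-cpp-python package

    llm = Llama(model_path="model-Q4_K_M.gguf", n_ctx=4096)
    out = llm.create_chat_completion(
        messages=build_messages("$db->query(\"SELECT * FROM users WHERE id = $id\");"),
        max_tokens=512,
    )
    print(out["choices"][0]["message"]["content"])
```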
Output Format
The model outputs concise text-based code reviews:
**SQL injection vulnerability**
User input is concatenated directly into a raw SQL query without parameterization or escaping.
Impact: An attacker can execute arbitrary SQL commands, potentially dumping the entire database, deleting data, or escalating privileges. For example: keyword=' OR '1'='1' -- would return all products.
Suggestion:
Use parameter binding: DB::select("SELECT * FROM products WHERE name LIKE ?", ['%' . $keyword . '%']) or better, use Eloquent: Product::where('name', 'like', '%' . $keyword . '%')->get()
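If you gate CI on these reviews, the text format above is regular enough to post-process. Below is a hypothetical parser, assuming reviews keep the bold-title / Impact / Suggestion shape shown; real model output can deviate from this structure:

```python
import re

def parse_review(text: str) -> dict:
    """Split a review into title, impact, and suggestion fields.

    Assumes the structure shown above (a **bold** title, an 'Impact:' line,
    and a 'Suggestion:' block); any field missing comes back as None.
    """
    title_m = re.search(r"\*\*(.+?)\*\*", text)
    impact_m = re.search(r"Impact:\s*(.+)", text)
    suggestion_m = re.search(r"Suggestion:\s*\n?(.+)", text, re.DOTALL)
    return {
        "title": title_m.group(1) if title_m else None,
        "impact": impact_m.group(1).strip() if impact_m else None,
        "suggestion": suggestion_m.group(1).strip() if suggestion_m else None,
    }
```

A CI step could then, for example, fail the build whenever `title` is non-None.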
Training
- Training examples: 100+ real-world code issues
- Format: ChatML conversation format with concise reviews
- Framework: MLX for Apple Silicon acceleration
- Method: LoRA adapters (r=4, alpha=8)
- Iterations: 625
For training details, see the GitHub repository.
Limitations
- Should be used as a supplementary tool, not a replacement for human review
- May not catch all edge cases or security vulnerabilities
- Best results on common programming patterns and frameworks
License
Apache 2.0
Citation
@software{ai_code_review,
  title  = {AI Code Review Model},
  author = {IQ2i Team},
  year   = {2025},
  url    = {https://github.com/iq2i/ai-code-review}
}
Install from brew (macOS/Linux)

brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf iq2i/ai-code-review:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf iq2i/ai-code-review:Q4_K_M