# TOEFL Speaking Evaluation Model

A fine-tuned Llama 3.2 model for automated TOEFL speaking assessment using LoRA adapters.
## Model Description

This repository contains a fine-tuned LoRA adapter for evaluating TOEFL speaking responses. The model has been trained to assess speaking quality and assign scores based on TOEFL evaluation criteria.

- **Base Model:** Llama 3.2
- **Training Method:** LoRA (Low-Rank Adaptation)
- **Task:** TOEFL Speaking Assessment
## Repository Contents

### Model Weights

- `toefl_judge_adapter/` - LoRA adapter weights in safetensors format
  - Multiple checkpoint files (every 100 steps, from step 100 to 1000)
  - Final adapter: `adapters.safetensors`
  - Configuration: `adapter_config.json`
### Training Data

- `train.jsonl` (1.9 MB) - Training dataset
- `valid.jsonl` (642 KB) - Validation dataset
- `toefl_speaking_data_test.jsonl` (650 KB) - Test dataset
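Each dataset file stores one JSON object per line (JSON Lines). A minimal sketch of reading such a record, using illustrative field names (the actual schema in `train.jsonl` may differ):

```python
import json

# One line of a hypothetical JSONL record; the real field names in
# train.jsonl may differ from "question"/"response"/"score".
sample_line = '{"question": "Describe a memorable trip.", "response": "Last summer I ...", "score": 3}'

record = json.loads(sample_line)
print(record["score"])  # each line parses to an ordinary dict
```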
### Application Code

- `toefl_judge_app.py` - Web application for TOEFL evaluation
- `cli.py` - Command-line interface
- `fine_tune.py` - Fine-tuning script
- `data_formatter.py` - Data preprocessing utilities
- `evaluator.py` - Model evaluation tools
### Results & Evaluation

- `toefl_predictions.csv` - Model predictions on the test set
- `toefl_evaluations.csv` - Evaluation metrics
- `toefl_evaluation_results.png` - Visualization of results
### Configuration

- `lora_config.json` - LoRA training configuration
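The exact hyperparameters live in `lora_config.json`. As an illustration of the kind of values such a config holds (the numbers below are assumptions, not this repo's actual settings):

```python
# Illustrative LoRA hyperparameters -- check lora_config.json for the
# values actually used to train this adapter.
lora_config = {
    "rank": 8,          # low-rank dimension r of the adapter matrices
    "alpha": 16,        # scaling factor applied to the adapter update
    "dropout": 0.0,     # dropout on the adapter path
    "lora_layers": 16,  # number of transformer layers adapted
}

# LoRA scales the low-rank update BA by alpha / r.
scaling = lora_config["alpha"] / lora_config["rank"]
print(scaling)  # 2.0
```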
## Requirements

- **Platform:** Apple Silicon Mac (M1/M2/M3) - the application uses the MLX framework
- **Python:** 3.9 or higher
### Dependencies

Install the required packages:

```bash
pip install -r requirements.txt
```

Or install them manually:

```bash
pip install streamlit mlx mlx-lm pandas transformers peft torch
```
## Installation & Setup

1. Clone or download this repository.
2. Install dependencies:
   ```bash
   pip install -r requirements.txt
   ```
3. Download the base Llama 3.2 model. The application requires it, and it is available on Hugging Face:
   - `meta-llama/Llama-3.2-3B` or `meta-llama/Llama-3.2-1B`
   - You'll need to request access from Meta if you haven't already.
4. Update the model paths in the application code. Edit `toefl_judge_app.py` and set:
   - `model_path`: path to your base Llama 3.2 model
   - `adapter_path`: path to the downloaded adapter (from this repo)
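As a sketch, the two path settings in `toefl_judge_app.py` might look like the following (the variable names and locations are assumptions; check the top of the file for the real ones):

```python
# Hypothetical path settings -- adjust to your local layout.
model_path = "/path/to/Llama-3.2-3B"    # base model downloaded from Hugging Face
adapter_path = "./toefl_judge_adapter"  # adapter shipped in this repository
```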
## Usage

### Loading the Model
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-3B")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.2-3B")

# Load the LoRA adapter (it lives in the toefl_judge_adapter/ subfolder)
model = PeftModel.from_pretrained(
    base_model,
    "Ylemnox/llama-toefl-checkbot",
    subfolder="toefl_judge_adapter",
)
```
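Once the model and tokenizer are loaded, a response is scored by formatting a prompt and generating. The template below is a hypothetical sketch; the actual prompt format used during fine-tuning is defined in `data_formatter.py` and should be matched exactly:

```python
# Hypothetical prompt template -- match the format used in training
# (see data_formatter.py); a mismatched template degrades scores.
def build_prompt(question: str, response: str) -> str:
    return (
        "You are a TOEFL speaking rater. Score the response from 0 to 4.\n"
        f"Question: {question}\n"
        f"Response: {response}\n"
        "Score:"
    )

prompt = build_prompt(
    "Describe your hometown.",
    "I come from a small coastal town known for its fishing harbor ...",
)
print(prompt)
```

With the `model` and `tokenizer` loaded above, the score would then come from a call along the lines of `model.generate(**tokenizer(prompt, return_tensors="pt"), max_new_tokens=5)`.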
### Running the Web Application

```bash
streamlit run toefl_judge_app.py
```
The web interface will open in your browser, where you can:
- Enter TOEFL speaking questions
- Input student responses
- Get automated evaluations with scores (0-4 scale)
### Using the CLI

```bash
python cli.py
```
### Example: Download and Use the Model

```bash
# 1. Download this repository
git clone https://huggingface.co/Ylemnox/llama-toefl-checkbot
cd llama-toefl-checkbot

# 2. Install dependencies
pip install -r requirements.txt

# 3. Run the web app (set the model paths first)
streamlit run toefl_judge_app.py
```
## Training Details

The model was trained using LoRA with the following setup:

- Configuration available in `lora_config.json`
- Training performed on TOEFL speaking response data
- Multiple checkpoints saved during training
## Evaluation Results

Evaluation metrics and visualizations are available in:

- `toefl_evaluations.csv` - Detailed metrics
- `toefl_evaluation_results.png` - Performance visualization
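The specific metrics in `toefl_evaluations.csv` are not described here. As an illustration, score-prediction quality is commonly summarized with mean absolute error and exact-match rate, which can be computed like this (the scores below are made up):

```python
# Made-up true vs. predicted scores on the 0-4 scale, for illustration only.
true_scores = [3, 2, 4, 3, 1]
pred_scores = [3, 3, 4, 2, 1]

n = len(true_scores)
mae = sum(abs(t - p) for t, p in zip(true_scores, pred_scores)) / n
exact = sum(t == p for t, p in zip(true_scores, pred_scores)) / n
print(mae, exact)  # 0.4 0.6
```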
## License
Please ensure compliance with Meta's Llama license when using this model.
## Citation
If you use this model, please cite:
```bibtex
@misc{llama-toefl-checkbot,
  author = {Davis Kwak},
  title = {TOEFL Speaking Evaluation Model},
  year = {2025},
  publisher = {Hugging Face},
  howpublished = {\url{https://huggingface.co/Ylemnox/llama-toefl-checkbot}}
}
```
---

Base model: `meta-llama/Llama-3.2-3B`