# Fine-Tuned Gemma-7B CEFR Model
This is a fine-tuned version of `unsloth/gemma-7b-bnb-4bit` for CEFR-level sentence generation, evaluated with the fine-tuned CEFR classifier `Mr-FineTuner/Skripsi_validator_best_model`.
- **Base Model**: unsloth/gemma-7b-bnb-4bit
- **Fine-Tuning**: LoRA with SMOTE-balanced dataset
- **Training Details** (configuration sketched after this list):
- Dataset: CEFR-level sentences with SMOTE and undersampling for balance
- LoRA Parameters: r=32, lora_alpha=32, lora_dropout=0.5
- Training Args: learning_rate=2e-5, batch_size=8, epochs=0.01, cosine scheduler
- Optimizer: adamw_8bit
- Early Stopping: Patience=3, threshold=0.01
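For reference, the hyperparameters above translate into roughly the following PEFT/Transformers configuration. This is a minimal sketch, assuming the standard `peft` and `transformers` APIs; the `target_modules` list and `output_dir` are illustrative assumptions, not values taken from the actual training script.

```python
# Sketch of the LoRA / training configuration described above.
from peft import LoraConfig
from transformers import TrainingArguments, EarlyStoppingCallback

lora_config = LoraConfig(
    r=32,                      # LoRA rank
    lora_alpha=32,             # scaling factor
    lora_dropout=0.5,          # dropout on LoRA layers
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumption
)

training_args = TrainingArguments(
    output_dir="outputs",      # assumption
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    num_train_epochs=0.01,     # fractional epoch, as listed above
    lr_scheduler_type="cosine",
    optim="adamw_8bit",        # requires bitsandbytes
    eval_strategy="steps",     # older transformers: evaluation_strategy
    load_best_model_at_end=True,  # required by the early-stopping callback
)

early_stopping = EarlyStoppingCallback(
    early_stopping_patience=3,
    early_stopping_threshold=0.01,
)
```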
- **Evaluation Metrics (Exact Matches)**:
- CEFR Classifier Accuracy: 0.167
- Precision (Macro): 0.028
- Recall (Macro): 0.167
- F1-Score (Macro): 0.048
- **Evaluation Metrics (Within ±1 Level)** (scoring sketched after this list):
- CEFR Classifier Accuracy: 0.500
- Precision (Macro): 0.375
- Recall (Macro): 0.500
- F1-Score (Macro): 0.400
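Under the relaxed scheme, a prediction counts as correct when it lands at most one CEFR level away from the reference. One plausible way to compute these relaxed macro metrics (a hypothetical reconstruction, not necessarily the exact evaluation script) is to snap near-miss predictions onto the true label before scoring:

```python
# Hypothetical reconstruction of the within-±1 scoring, assuming the six
# CEFR labels are mapped to indices 0..5.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]
TO_IDX = {lvl: i for i, lvl in enumerate(LEVELS)}

def within_one_metrics(y_true, y_pred):
    t = [TO_IDX[y] for y in y_true]
    p = [TO_IDX[y] for y in y_pred]
    # Snap predictions that are one level off to the true label,
    # so they count as correct for every metric.
    snapped = [ti if abs(ti - pi) <= 1 else pi for ti, pi in zip(t, p)]
    acc = accuracy_score(t, snapped)
    prec, rec, f1, _ = precision_recall_fscore_support(
        t, snapped, average="macro", zero_division=0
    )
    return acc, prec, rec, f1
```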
- **Other Metrics** (computation sketched after this list):
- Perplexity: 5.344
- Diversity (Unique Sentences): 0.100
- Inference Time (ms): 5802.883
- Model Size (GB): 4.8
- Robustness (F1): 0.045
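The auxiliary metrics can be reproduced in spirit as follows. This is a hedged sketch: the evaluation corpus, tokenizer settings, and timing harness behind the numbers above are not specified in this card, so the helpers below are assumptions about the general recipe (perplexity as the exponential of the mean negative log-likelihood, diversity as the unique-sentence ratio, inference time as mean wall-clock per generation).

```python
# Assumed general recipe for the auxiliary metrics; the exact corpus,
# tokenization, and timing setup used for the card are not specified.
import time
import torch

def perplexity(model, tokenizer, texts):
    # exp of the mean per-text NLL (a common approximation).
    nlls = []
    for text in texts:
        enc = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            out = model(**enc, labels=enc["input_ids"])
        nlls.append(out.loss)
    return torch.exp(torch.stack(nlls).mean()).item()

def diversity(sentences):
    # Fraction of generations that are unique strings.
    return len(set(sentences)) / len(sentences)

def mean_inference_ms(generate_fn, prompts):
    # Average wall-clock time per generation, in milliseconds.
    start = time.perf_counter()
    for prompt in prompts:
        generate_fn(prompt)
    return (time.perf_counter() - start) / len(prompts) * 1000.0
```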
- **Confusion Matrix (Exact Matches)** (regeneration sketched below):
- CSV: [confusion_matrix_exact.csv](confusion_matrix_exact.csv)
- Image: [confusion_matrix_exact.png](confusion_matrix_exact.png)
- **Confusion Matrix (Within ±1 Level)**:
- CSV: [confusion_matrix_within1.csv](confusion_matrix_within1.csv)
- Image: [confusion_matrix_within1.png](confusion_matrix_within1.png)
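The linked artifacts can be regenerated along these lines; a minimal sketch assuming scikit-learn, pandas, and matplotlib, where `y_true`/`y_pred` are dummy placeholders for the real label lists (the file names match the links above, everything else is illustrative):

```python
# Sketch for regenerating the confusion-matrix artifacts; y_true / y_pred
# below are placeholders for the real CEFR label lists.
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, ConfusionMatrixDisplay

LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]
y_true = ["A1", "A2", "B1"]   # placeholder
y_pred = ["A2", "A2", "B1"]   # placeholder

cm = confusion_matrix(y_true, y_pred, labels=LEVELS)

# CSV with rows = true labels, columns = predicted labels.
pd.DataFrame(cm, index=LEVELS, columns=LEVELS).to_csv("confusion_matrix_exact.csv")

# Heatmap image matching the linked PNG.
ConfusionMatrixDisplay(cm, display_labels=LEVELS).plot()
plt.savefig("confusion_matrix_exact.png")
```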
- **Per-Class Confusion Metrics (Exact Matches)** (derivation sketched after these lists):
- A1: TP=0, FP=0, FN=10, TN=50
- A2: TP=10, FP=50, FN=0, TN=0
- B1: TP=0, FP=0, FN=10, TN=50
- B2: TP=0, FP=0, FN=10, TN=50
- C1: TP=0, FP=0, FN=10, TN=50
- C2: TP=0, FP=0, FN=10, TN=50
- **Per-Class Confusion Metrics (Within ±1 Level)**:
- A1: TP=10, FP=0, FN=0, TN=50
- A2: TP=10, FP=30, FN=0, TN=20
- B1: TP=10, FP=0, FN=0, TN=50
- B2: TP=0, FP=0, FN=10, TN=50
- C1: TP=0, FP=0, FN=10, TN=50
- C2: TP=0, FP=0, FN=10, TN=50
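These counts follow mechanically from the confusion matrices: for each class, TP is its diagonal entry, FP the rest of its predicted column, FN the rest of its true row, and TN everything else. (The exact-match rows, for instance, are consistent with the classifier predicting A2 for all 60 test sentences.) A small sketch of the derivation:

```python
# Derive per-class TP/FP/FN/TN from a confusion matrix
# (rows = true labels, columns = predicted labels).
import numpy as np

def per_class_counts(cm, labels):
    total = cm.sum()
    counts = {}
    for i, label in enumerate(labels):
        tp = cm[i, i]
        fp = cm[:, i].sum() - tp   # predicted as this class, but wrong
        fn = cm[i, :].sum() - tp   # this class, but missed
        tn = total - tp - fp - fn  # everything else
        counts[label] = dict(TP=int(tp), FP=int(fp), FN=int(fn), TN=int(tn))
    return counts
```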
- **Usage**:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hub.
model = AutoModelForCausalLM.from_pretrained("Mr-FineTuner/Test_02_llama_trainPercen_myValidator_2ndTry")
tokenizer = AutoTokenizer.from_pretrained("Mr-FineTuner/Test_02_llama_trainPercen_myValidator_2ndTry")

# Example inference: request a sentence at a target CEFR level.
prompt = "<|user|>Generate a CEFR B1 level sentence.<|end|>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)  # cap new tokens, not total length
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
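Note: because the base checkpoint is a bitsandbytes 4-bit model, loading it will typically also require the `bitsandbytes` and `accelerate` packages to be installed.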
Uploaded using `huggingface_hub`.