# Fine-Tuned LLaMA-3-8B CEFR Model

This is a fine-tuned version of `unsloth/llama-3-8b-instruct-bnb-4bit` for generating sentences at specified CEFR (Common European Framework of Reference for Languages) levels.
- **Base Model**: `unsloth/llama-3-8b-instruct-bnb-4bit`
- **Fine-Tuning**: LoRA on a SMOTE-balanced dataset
- **Training Details**:
  - Dataset: CEFR-labelled sentences, balanced with SMOTE oversampling and undersampling
  - LoRA parameters: r=32, lora_alpha=32, lora_dropout=0.5
  - Training arguments: learning_rate=2e-5, batch_size=8, epochs=0.1, cosine scheduler
  - Optimizer: adamw_8bit
  - Early stopping: patience=3, threshold=0.01
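
The hyperparameters above map naturally onto a `peft` + `transformers` setup. The sketch below is a rough reconstruction, not the exact training script: the `target_modules` list, the eval/save strategy, and the `train_dataset`/`eval_dataset` objects are assumptions not stated in this card.

```python
# Minimal sketch of the reported fine-tuning configuration (not the original script).
from transformers import (AutoModelForCausalLM, EarlyStoppingCallback,
                          Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

base = "unsloth/llama-3-8b-instruct-bnb-4bit"
model = AutoModelForCausalLM.from_pretrained(base, device_map="auto")

# LoRA parameters as reported above; target_modules is a common choice
# for LLaMA-style models and an assumption here.
lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.5,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

# Training arguments as reported above. Periodic evaluation/saving and
# load_best_model_at_end are required for early stopping to fire.
args = TrainingArguments(
    output_dir="cefr-lora",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    num_train_epochs=0.1,
    lr_scheduler_type="cosine",
    optim="adamw_8bit",
    eval_strategy="steps",   # "evaluation_strategy" on older transformers versions
    save_strategy="steps",
    load_best_model_at_end=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,  # placeholder: tokenized, SMOTE-balanced CEFR sentences
    eval_dataset=eval_dataset,    # placeholder: tokenized held-out split
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3,
                                     early_stopping_threshold=0.01)],
)
trainer.train()
```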
- **Evaluation Metrics**:
  - CEFR classifier accuracy: 0.250
  - Precision (macro): 0.130
  - Recall (macro): 0.250
  - F1-score (macro): 0.153
  - Perplexity: 14.218
  - Diversity (proportion of unique sentences): 0.933
  - Inference time: 2242.946 ms
  - Model size: 4.8 GB
  - Robustness (F1): 0.145
- **Confusion Matrix**:
  - CSV: [confusion_matrix.csv](confusion_matrix.csv)
  - Image: [confusion_matrix.png](confusion_matrix.png)
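
To inspect the shipped matrix programmatically, something like the following works, assuming the CSV stores true classes as rows and predicted classes as columns (the layout is not documented here):

```python
# Hypothetical loader for the confusion matrix CSV; the row/column
# convention (rows = true class, columns = predicted) is an assumption.
import pandas as pd

cm = pd.read_csv("confusion_matrix.csv", index_col=0)
print(cm)
print("overall accuracy:", cm.values.trace() / cm.values.sum())
```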
- **Per-Class Confusion Metrics** (10 evaluation sentences per class, 60 total):

| Class | TP | FP | FN | TN |
|-------|----|----|----|----|
| A1    | 0  | 2  | 10 | 48 |
| A2    | 0  | 0  | 10 | 50 |
| B1    | 10 | 29 | 0  | 21 |
| B2    | 2  | 7  | 8  | 43 |
| C1    | 3  | 7  | 7  | 43 |
| C2    | 0  | 0  | 10 | 50 |
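
As a sanity check, the macro-averaged metrics reported above can be recomputed directly from these per-class counts (treating zero-division as 0, as scikit-learn does by default):

```python
# Recompute accuracy and macro precision/recall/F1 from the per-class counts.
counts = {  # class: (TP, FP, FN)
    "A1": (0, 2, 10), "A2": (0, 0, 10), "B1": (10, 29, 0),
    "B2": (2, 7, 8),  "C1": (3, 7, 7),  "C2": (0, 0, 10),
}
precisions, recalls, f1s = [], [], []
for tp, fp, fn in counts.values():
    p = tp / (tp + fp) if tp + fp else 0.0  # precision; 0 when the class is never predicted
    r = tp / (tp + fn) if tp + fn else 0.0  # recall
    f = 2 * p * r / (p + r) if p + r else 0.0
    precisions.append(p); recalls.append(r); f1s.append(f)

n = len(counts)
total = sum(tp + fn for tp, _, fn in counts.values())  # 60 evaluation sentences
print(f"accuracy:        {sum(tp for tp, _, _ in counts.values()) / total:.3f}")  # 0.250
print(f"macro precision: {sum(precisions) / n:.3f}")  # 0.130
print(f"macro recall:    {sum(recalls) / n:.3f}")     # 0.250
print(f"macro F1:        {sum(f1s) / n:.3f}")         # 0.153
```

The B1 row explains the gap between recall and precision: the model over-produces sentences classified as B1 (29 false positives), while A1, A2, and C2 are never predicted correctly.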
- **Usage**:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Mr-FineTuner/Test___01_withNewEval")
tokenizer = AutoTokenizer.from_pretrained("Mr-FineTuner/Test___01_withNewEval")

# Example inference: prompt the model for a sentence at a target CEFR level.
prompt = "<|user|>Generate a CEFR B1 level sentence.<|end|>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)  # cap new tokens rather than total length
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
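
Note that the `<|user|>` / `<|end|>` markers above follow this card's example prompt. If the fine-tuning data instead used LLaMA-3's native chat template, building the prompt with `tokenizer.apply_chat_template` may work better; whether that matches the training format is an assumption:

```python
# Alternative prompt construction via the tokenizer's chat template
# (assumes the fine-tune kept LLaMA-3's native template).
messages = [{"role": "user", "content": "Generate a CEFR B1 level sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(input_ids, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```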

Uploaded using `huggingface_hub`.