Mr-FineTuner committed on
Commit ce683fa · verified · 1 Parent(s): b391c35

Add model card with exact and within-1 confusion matrices and per-class metrics for non-fine-tuned LLaMA evaluation

Files changed (1): README.md (+63, -0)
README.md ADDED

# Non-Fine-Tuned LLaMA-3-8B CEFR Evaluation

This repository contains the evaluation results of the base `unsloth/llama-3-8b-instruct-bnb-4bit` model for CEFR-level sentence generation, without fine-tuning, as part of an ablation study. The model is evaluated with a fine-tuned CEFR classifier, `Mr-FineTuner/Skripsi_validator_best_model`.

- **Base Model**: unsloth/llama-3-8b-instruct-bnb-4bit
- **Evaluation Details**:
  - Dataset: Rebalanced test dataset (`test_merged_output.txt`), which was also used to train and evaluate the classifier, potentially introducing bias.
  - No fine-tuning performed; base model used directly.
  - Classifier: MLP classifier trained on `train_merged_output.txt`, `dev_merged_output.txt`, and `test_merged_output.txt` for CEFR level prediction.
- **Evaluation Metrics (Exact Matches)**:
  - CEFR Classifier Accuracy: 0.150
  - Precision (Macro): 0.194
  - Recall (Macro): 0.150
  - F1-Score (Macro): 0.140
- **Evaluation Metrics (Within ±1 Level)** (a prediction counts as correct when it lands at most one CEFR level from the target; see the sketch below):
  - CEFR Classifier Accuracy: 0.750
  - Precision (Macro): 0.826
  - Recall (Macro): 0.750
  - F1-Score (Macro): 0.741
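
A minimal sketch of the within-±1 scoring rule, applied to hypothetical (target, predicted) level pairs rather than the actual evaluation outputs:

```python
# Exact vs. within-±1 scoring of (target, predicted) CEFR levels.
# The pairs below are illustrative placeholders, not the real evaluation data.
LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]
IDX = {level: i for i, level in enumerate(LEVELS)}

pairs = [("B1", "B2"), ("A1", "B1"), ("C2", "C2")]  # hypothetical

exact = sum(t == p for t, p in pairs) / len(pairs)
within1 = sum(abs(IDX[t] - IDX[p]) <= 1 for t, p in pairs) / len(pairs)
print(f"exact={exact:.3f}, within±1={within1:.3f}")
```
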
- **Other Metrics**:
  - Perplexity: 86.022 (one common way to compute this is sketched below)
  - Diversity (Unique Sentences): 0.967
  - Inference Time (ms): 4952.351
  - Model Size (GB): 8.0
  - Robustness (F1): 0.133
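
The card does not state how the perplexity figure was obtained. A common recipe, assumed in this sketch, is the exponential of the mean token-level negative log-likelihood the generator assigns to a set of sentences (`model` and `tokenizer` as loaded in the Usage section below):

```python
import torch

def perplexity(sentences, model, tokenizer):
    """exp(mean negative log-likelihood per token) over `sentences`."""
    total_nll, total_tokens = 0.0, 0
    for text in sentences:
        ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
        with torch.no_grad():
            # `loss` is the mean cross-entropy over the shifted targets.
            loss = model(ids, labels=ids).loss
        n = ids.numel() - 1  # number of predicted tokens
        total_nll += loss.item() * n
        total_tokens += n
    return float(torch.exp(torch.tensor(total_nll / total_tokens)))
```
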
- **Confusion Matrix (Exact Matches)** (the snippet below shows how per-class counts can be derived from the CSV):
  - CSV: [confusion_matrix_exact.csv](confusion_matrix_exact.csv)
  - Image: [confusion_matrix_exact.png](confusion_matrix_exact.png)
- **Confusion Matrix (Within ±1 Level)**:
  - CSV: [confusion_matrix_within1.csv](confusion_matrix_within1.csv)
  - Image: [confusion_matrix_within1.png](confusion_matrix_within1.png)
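
The per-class counts listed further below can be recovered from the exact-match matrix. A sketch, assuming the CSV has a header row of predicted levels and an index column of true levels (the orientation is not documented in this card):

```python
import pandas as pd

# Assumed layout: rows = true CEFR level, columns = predicted level.
cm = pd.read_csv("confusion_matrix_exact.csv", index_col=0)
total = int(cm.values.sum())
for level in cm.index:
    tp = int(cm.at[level, level])
    fp = int(cm[level].sum()) - tp      # predicted `level`, true label differs
    fn = int(cm.loc[level].sum()) - tp  # true `level`, prediction differs
    tn = total - tp - fp - fn
    print(f"{level}: TP={tp}, FP={fp}, FN={fn}, TN={tn}")
```
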
- **Per-Class Confusion Metrics (Exact Matches)** (the macro scores above follow from these counts; see the sketch after the next list):
  - A1: TP=0, FP=0, FN=10, TN=50
  - A2: TP=1, FP=11, FN=9, TN=39
  - B1: TP=3, FP=19, FN=7, TN=31
  - B2: TP=2, FP=16, FN=8, TN=34
  - C1: TP=2, FP=4, FN=8, TN=46
  - C2: TP=1, FP=1, FN=9, TN=49
- **Per-Class Confusion Metrics (Within ±1 Level)**:
  - A1: TP=4, FP=0, FN=6, TN=50
  - A2: TP=8, FP=2, FN=2, TN=48
  - B1: TP=10, FP=6, FN=0, TN=44
  - B2: TP=8, FP=7, FN=2, TN=43
  - C1: TP=10, FP=0, FN=0, TN=50
  - C2: TP=5, FP=0, FN=5, TN=50
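
As a sanity check, the exact-match macro scores reported above follow directly from these counts: averaging per-class precision TP/(TP+FP) and recall TP/(TP+FN) over the six levels, treating 0/0 as 0, reproduces 0.194 and 0.150, with a macro F1 of about 0.140:

```python
# Exact-match per-class counts from the list above: (TP, FP, FN) per level.
counts = {
    "A1": (0, 0, 10), "A2": (1, 11, 9), "B1": (3, 19, 7),
    "B2": (2, 16, 8), "C1": (2, 4, 8), "C2": (1, 1, 9),
}

def safe_div(a, b):
    return a / b if b else 0.0

precisions, recalls, f1s = [], [], []
for tp, fp, fn in counts.values():
    p, r = safe_div(tp, tp + fp), safe_div(tp, tp + fn)
    precisions.append(p)
    recalls.append(r)
    f1s.append(safe_div(2 * p * r, p + r))

n = len(counts)
# Expected output: macro P≈0.194, R≈0.150, F1≈0.140
print(f"P={sum(precisions)/n:.3f}, R={sum(recalls)/n:.3f}, F1={sum(f1s)/n:.3f}")
```
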
- **Note on Bias**:
  - The test dataset used for evaluation (`test_merged_output.txt`) was part of the training and evaluation data for the classifier (`Mr-FineTuner/Skripsi_validator_best_model`). Because the classifier has already seen these sentences, the reported metrics may be inflated. For a more robust evaluation, use a dataset that was not involved in classifier training.
- **Usage**:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Pre-quantized 4-bit checkpoint: loading requires the `bitsandbytes`
# package and a CUDA device.
model = AutoModelForCausalLM.from_pretrained(
    "unsloth/llama-3-8b-instruct-bnb-4bit", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("unsloth/llama-3-8b-instruct-bnb-4bit")

# Example inference. Llama-3 Instruct expects its chat template rather than
# Llama-2-style [INST] tags, so build the prompt with apply_chat_template.
messages = [{"role": "user", "content": "Generate a CEFR B1 level sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

Uploaded using `huggingface_hub`.