# Non-Fine-Tuned Gemma-7B CEFR Evaluation

This repository contains evaluation results for the base `unsloth/gemma-7b-bnb-4bit` model on CEFR-level sentence generation, without any fine-tuning, as an ablation baseline. Generated sentences are graded by the fine-tuned classifier `Mr-FineTuner/Skripsi_validator_best_model`.

- **Base Model**: unsloth/gemma-7b-bnb-4bit
- **Evaluation Details**:
  - Dataset: Rebalanced test dataset (`test_merged_output.txt`), which was also used to train and evaluate the classifier, potentially introducing bias.
  - No fine-tuning performed; base model used directly.
  - Classifier: an MLP trained on `train_merged_output.txt`, `dev_merged_output.txt`, and `test_merged_output.txt` to predict CEFR levels.
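  A minimal sketch of this generate-then-classify loop (the decoding settings and the assumption that the validator loads as a standard `text-classification` pipeline are illustrative; the actual evaluation scripts may differ):

  ```python
  # Illustrative evaluation loop: generate one sentence per CEFR level with the
  # base model, then grade it with the fine-tuned validator classifier.
  import torch
  from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

  LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]

  tok = AutoTokenizer.from_pretrained("unsloth/gemma-7b-bnb-4bit")
  gen = AutoModelForCausalLM.from_pretrained(
      "unsloth/gemma-7b-bnb-4bit", device_map="auto"
  )
  # Assumption: the validator is loadable as a text-classification pipeline.
  validator = pipeline(
      "text-classification", model="Mr-FineTuner/Skripsi_validator_best_model"
  )

  for level in LEVELS:
      prompt = f"<|user|>Generate a CEFR {level} level sentence.<|end|>"
      inputs = tok(prompt, return_tensors="pt").to(gen.device)
      with torch.no_grad():
          out = gen.generate(**inputs, max_new_tokens=40)
      # Strip the prompt tokens, keep only the newly generated text.
      sentence = tok.decode(out[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)
      predicted = validator(sentence)[0]["label"]  # e.g. "B1"
      print(level, predicted, sentence)
  ```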
- **Evaluation Metrics (Exact Matches)**:
  - CEFR Classifier Accuracy: 0.167
  - Precision (Macro): 0.028
  - Recall (Macro): 0.167
  - F1-Score (Macro): 0.048
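  These macro scores can be reproduced with scikit-learn. The label arrays below are reconstructed from the per-class confusion counts reported further down (10 test sentences per level, every generation classified as B1):

  ```python
  from sklearn.metrics import accuracy_score, precision_recall_fscore_support

  LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]
  y_true = [lvl for lvl in LEVELS for _ in range(10)]  # 10 sentences per level
  y_pred = ["B1"] * 60  # per the confusion counts, everything was rated B1

  acc = accuracy_score(y_true, y_pred)  # 10/60 = 0.167
  prec, rec, f1, _ = precision_recall_fscore_support(
      y_true, y_pred, labels=LEVELS, average="macro", zero_division=0
  )
  # prec = 0.028, rec = 0.167, f1 = 0.048, matching the figures above
  ```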
- **Evaluation Metrics (Within ±1 Level)**:
  - CEFR Classifier Accuracy: 0.500
  - Precision (Macro): 0.375
  - Recall (Macro): 0.500
  - F1-Score (Macro): 0.400
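  The ±1 relaxation is not spelled out here, but collapsing any prediction within one CEFR step onto the true label before re-scoring reproduces the reported numbers exactly (a sketch under that assumption):

  ```python
  # Treat a prediction as correct when it is at most one CEFR step away from
  # the true level, then reuse the exact-match metrics on the relaxed labels.
  LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]
  IDX = {lvl: i for i, lvl in enumerate(LEVELS)}

  def relax_within_one(y_true, y_pred):
      return [t if abs(IDX[p] - IDX[t]) <= 1 else p
              for t, p in zip(y_true, y_pred)]

  y_true = [lvl for lvl in LEVELS for _ in range(10)]
  y_pred = ["B1"] * 60
  y_relaxed = relax_within_one(y_true, y_pred)
  # accuracy_score(y_true, y_relaxed) == 0.500; macro precision/recall/F1
  # come out to 0.375 / 0.500 / 0.400, matching the figures above.
  ```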
- **Other Metrics**:
  - Perplexity: 55.377
  - Diversity (unique-sentence ratio): 0.100
  - Inference Time (ms): 5461.263
  - Model Size (GB): 4.2
  - Robustness (F1): 0.045
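  Perplexity is presumably the exponentiated mean token cross-entropy of the generated text under the model, and diversity the fraction of distinct sentences among all generations; a sketch under those assumptions:

  ```python
  import torch

  def perplexity(model, tokenizer, text):
      # exp(mean cross-entropy) of `text` under the causal LM
      enc = tokenizer(text, return_tensors="pt").to(model.device)
      with torch.no_grad():
          loss = model(**enc, labels=enc["input_ids"]).loss
      return torch.exp(loss).item()

  def diversity(sentences):
      # 0.100 means only 1 in 10 generated sentences was unique
      return len(set(sentences)) / len(sentences)
  ```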
- **Confusion Matrix (Exact Matches)**:
  - CSV: [confusion_matrix_exact.csv](confusion_matrix_exact.csv)
  - Image: [confusion_matrix_exact.png](confusion_matrix_exact.png)
- **Confusion Matrix (Within ±1 Level)**:
  - CSV: [confusion_matrix_within1.csv](confusion_matrix_within1.csv)
  - Image: [confusion_matrix_within1.png](confusion_matrix_within1.png)
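  The CSVs can be loaded directly for further analysis (assuming the first column holds the true-level labels):

  ```python
  import pandas as pd

  cm = pd.read_csv("confusion_matrix_exact.csv", index_col=0)
  print(cm)  # rows: true CEFR level, columns: predicted level
  ```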
- **Per-Class Confusion Metrics (Exact Matches)**:

  | Level | TP | FP | FN | TN |
  |-------|---:|---:|---:|---:|
  | A1    |  0 |  0 | 10 | 50 |
  | A2    |  0 |  0 | 10 | 50 |
  | B1    | 10 | 50 |  0 |  0 |
  | B2    |  0 |  0 | 10 | 50 |
  | C1    |  0 |  0 | 10 | 50 |
  | C2    |  0 |  0 | 10 | 50 |

- **Per-Class Confusion Metrics (Within ±1 Level)**:

  | Level | TP | FP | FN | TN |
  |-------|---:|---:|---:|---:|
  | A1    |  0 |  0 | 10 | 50 |
  | A2    | 10 |  0 |  0 | 50 |
  | B1    | 10 | 30 |  0 | 20 |
  | B2    | 10 |  0 |  0 | 50 |
  | C1    |  0 |  0 | 10 | 50 |
  | C2    |  0 |  0 | 10 | 50 |
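  Counts like these follow mechanically from a 6×6 confusion matrix (standard derivation, sketched with numpy):

  ```python
  import numpy as np

  def per_class_counts(cm):
      # cm[i, j] = sentences with true level i classified as level j
      tp = np.diag(cm)
      fp = cm.sum(axis=0) - tp      # predicted as the level, but wrong
      fn = cm.sum(axis=1) - tp      # of that level, but missed
      tn = cm.sum() - tp - fp - fn  # everything else
      return tp, fp, fn, tn
  ```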
- **Note on Bias**:
  - The evaluation set (`test_merged_output.txt`) was part of the training and evaluation data for the classifier (`Mr-FineTuner/Skripsi_validator_best_model`), so the reported metrics may be inflated by the classifier's familiarity with it. For a more robust evaluation, use a held-out dataset that played no part in classifier training.
- **Usage**:
  ```python
  # Loading this 4-bit checkpoint requires `bitsandbytes` (and a CUDA GPU).
  from transformers import AutoModelForCausalLM, AutoTokenizer

  model = AutoModelForCausalLM.from_pretrained(
      "unsloth/gemma-7b-bnb-4bit", device_map="auto"
  )
  tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-7b-bnb-4bit")

  # Example inference with the prompt format used for this evaluation
  prompt = "<|user|>Generate a CEFR B1 level sentence.<|end|>"
  inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
  outputs = model.generate(**inputs, max_new_tokens=50)
  print(tokenizer.decode(outputs[0], skip_special_tokens=True))
  ```

Uploaded using `huggingface_hub`.