Mr-FineTuner committed on
Commit cb413b2 · verified · 1 Parent(s): faf87f7

Add model card with evaluation matrix

Files changed (1): README.md (+38, −0)

README.md (ADDED):
# Fine-Tuned LLaMA-3-8B CEFR Model

This is a fine-tuned version of `unsloth/llama-3-8b-instruct-bnb-4bit` for CEFR-level sentence generation.

- **Base Model**: unsloth/llama-3-8b-instruct-bnb-4bit
- **Fine-Tuning**: LoRA on a SMOTE-balanced dataset
- **Training Details** (a configuration sketch is given after the usage example):
  - Dataset: CEFR-level sentences, balanced with SMOTE and undersampling
  - LoRA parameters: r=32, lora_alpha=32, lora_dropout=0.5
  - Training arguments: learning_rate=2e-5, batch_size=8, epochs=0.1, cosine scheduler
  - Optimizer: adamw_8bit
  - Early stopping: patience=3, threshold=0.01
- **Evaluation Metrics** (a computation sketch appears at the end of this card):
  - CEFR classifier accuracy: 0.167
  - Precision (macro): 0.042
  - Recall (macro): 0.167
  - F1-score (macro): 0.067
  - Perplexity: 14.218
  - Diversity (unique sentences): 1.000
  - Inference time (ms): 2216.789
  - Model size (GB): 4.8
  - Robustness (F1): 0.063
- **Usage**:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Mr-FineTuner/Test___01")
tokenizer = AutoTokenizer.from_pretrained("Mr-FineTuner/Test___01")

# Example inference; apply_chat_template wraps the request in the Llama-3
# instruct chat format that the base model expects
messages = [{"role": "user", "content": "Generate a CEFR B1 level sentence."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt")
outputs = model.generate(inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
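
For orientation, the training recipe above might be wired together roughly as follows. This is a minimal sketch, not the released training script: the embedding step for SMOTE, the `train_dataset`/`eval_dataset` variables, and the output path are placeholders, and the actual run may have used unsloth's own training utilities rather than plain `peft`/`transformers`.

```python
# Sketch of the balancing + LoRA recipe summarized in "Training Details".
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.under_sampling import RandomUnderSampler

# SMOTE needs numeric features, so sentences are assumed to be embedded first;
# these arrays are stand-ins for sentence embeddings and CEFR labels.
X = np.random.rand(120, 384)
y = np.array(["A1"] * 60 + ["B1"] * 40 + ["C1"] * 20)
X_os, y_os = SMOTE(random_state=0).fit_resample(X, y)  # oversample minority levels
X_bal, y_bal = RandomUnderSampler(random_state=0).fit_resample(X_os, y_os)

from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, EarlyStoppingCallback,
                          Trainer, TrainingArguments)

base = AutoModelForCausalLM.from_pretrained("unsloth/llama-3-8b-instruct-bnb-4bit")
lora = LoraConfig(r=32, lora_alpha=32, lora_dropout=0.5, task_type="CAUSAL_LM")
model = get_peft_model(base, lora)

args = TrainingArguments(
    output_dir="cefr-lora",          # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    num_train_epochs=0.1,
    lr_scheduler_type="cosine",
    optim="adamw_8bit",
    eval_strategy="steps",           # assumption: step-based eval/save so early
    save_strategy="steps",           # stopping can restore the best checkpoint
    load_best_model_at_end=True,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,     # placeholder tokenized datasets
    eval_dataset=eval_dataset,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3,
                                     early_stopping_threshold=0.01)],
)
trainer.train()
```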

Uploaded using `huggingface_hub`.
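
For reference, the macro metrics and perplexity above can be computed along the following lines. This is a sketch under stated assumptions: the gold and predicted CEFR labels are placeholders standing in for the output of an external CEFR classifier, and `model`/`tokenizer` are the objects loaded in the usage example.

```python
import torch
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Placeholder gold labels and classifier predictions for generated sentences;
# the real evaluation scores each generated sentence with a CEFR classifier.
true_levels = ["A1", "A2", "B1", "B2", "C1", "C2"]
predicted_levels = ["A1", "B1", "B1", "B2", "C2", "C2"]

accuracy = accuracy_score(true_levels, predicted_levels)
precision, recall, f1, _ = precision_recall_fscore_support(
    true_levels, predicted_levels, average="macro", zero_division=0
)
print(f"accuracy={accuracy:.3f} precision={precision:.3f} "
      f"recall={recall:.3f} f1={f1:.3f}")

# Perplexity as exp of the model's mean cross-entropy on a sample sentence,
# reusing `model` and `tokenizer` from the usage example.
enc = tokenizer("This is a sample sentence.", return_tensors="pt")
with torch.no_grad():
    loss = model(**enc, labels=enc["input_ids"]).loss
print(f"perplexity={torch.exp(loss).item():.3f}")
```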