# Fine-Tuned LLaMA-3-8B CEFR Model

This is a fine-tuned version of `unsloth/llama-3-8b-instruct-bnb-4bit` for CEFR-level sentence generation.

- **Base Model**: `unsloth/llama-3-8b-instruct-bnb-4bit`
- **Fine-Tuning**: LoRA on a SMOTE-balanced dataset
- **Training Details** (an approximate configuration sketch appears below the usage example):
  - Dataset: CEFR-level sentences with SMOTE and undersampling for balance
  - LoRA Parameters: r=32, lora_alpha=32, lora_dropout=0.5
  - Training Args: learning_rate=2e-5, batch_size=8, epochs=0.1, cosine scheduler
  - Optimizer: adamw_8bit
  - Early Stopping: Patience=3, threshold=0.01
- **Evaluation Metrics** (an illustrative metric computation appears below the usage example):
  - CEFR Classifier Accuracy: 0.167
  - Precision (Macro): 0.042
  - Recall (Macro): 0.167
  - F1-Score (Macro): 0.067
  - Perplexity: 14.218
  - Diversity (Unique Sentences): 1.000
  - Inference Time (ms): 2216.789
  - Model Size (GB): 4.8
  - Robustness (F1): 0.063
- **Usage**:
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer

  # Load the fine-tuned model and its tokenizer from the Hub.
  model = AutoModelForCausalLM.from_pretrained("Mr-FineTuner/Test___01")
  tokenizer = AutoTokenizer.from_pretrained("Mr-FineTuner/Test___01")

  # Example inference: ask for a sentence at a target CEFR level.
  prompt = "<|user|>Generate a CEFR B1 level sentence.<|end|>"
  inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
  outputs = model.generate(**inputs, max_new_tokens=50)  # generate up to 50 new tokens
  print(tokenizer.decode(outputs[0], skip_special_tokens=True))
  ```
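
For reference, the training setup described above can be approximated with `peft` and `transformers`. This is a minimal sketch under stated assumptions, not the actual training script: the dataset ID, text column, tokenization length, output directory, and evaluation split are placeholders; only the hyperparameters listed under Training Details come from this card.

```python
# Hedged reconstruction of the fine-tuning setup described on this card.
# Dataset ID, column names, and paths are placeholders, not the real values.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    EarlyStoppingCallback,
    Trainer,
    TrainingArguments,
)

base_id = "unsloth/llama-3-8b-instruct-bnb-4bit"
tokenizer = AutoTokenizer.from_pretrained(base_id)
tokenizer.pad_token = tokenizer.pad_token or tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")
model = prepare_model_for_kbit_training(model)

# LoRA parameters from the card: r=32, lora_alpha=32, lora_dropout=0.5.
model = get_peft_model(
    model,
    LoraConfig(r=32, lora_alpha=32, lora_dropout=0.5, task_type="CAUSAL_LM"),
)

# Placeholder dataset with a "text" column of CEFR-labelled sentences,
# already balanced with SMOTE and undersampling.
dataset = load_dataset("your-username/cefr-sentences-balanced")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = dataset.map(
    tokenize, batched=True, remove_columns=dataset["train"].column_names
)

# Training arguments from the card; everything else is left at defaults.
args = TrainingArguments(
    output_dir="llama3-cefr-lora",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    num_train_epochs=0.1,
    lr_scheduler_type="cosine",
    optim="adamw_8bit",               # "adamw_bnb_8bit" on older transformers
    eval_strategy="steps",            # "evaluation_strategy" on older transformers
    load_best_model_at_end=True,      # required by EarlyStoppingCallback
    metric_for_best_model="eval_loss",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3,
                                     early_stopping_threshold=0.01)],
)
trainer.train()
```

Note that the SMOTE oversampling and undersampling mentioned above would be applied when building the dataset, before this step.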
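
The macro metrics above presumably come from scoring generated sentences with a CEFR classifier against the requested levels; since the classifier and protocol are not documented here, the snippet below only illustrates how such macro scores are defined, using placeholder labels.

```python
# Illustration only: macro accuracy/precision/recall/F1 as reported above,
# assuming gold target levels and the levels assigned by a CEFR classifier.
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

target_levels = ["A1", "A2", "B1", "B2", "C1", "C2"]     # requested levels (placeholder)
predicted_levels = ["A2", "A2", "B1", "B1", "B2", "C1"]  # classifier output (placeholder)

accuracy = accuracy_score(target_levels, predicted_levels)
precision, recall, f1, _ = precision_recall_fscore_support(
    target_levels, predicted_levels, average="macro", zero_division=0
)
print(f"accuracy={accuracy:.3f}  precision={precision:.3f}  "
      f"recall={recall:.3f}  f1={f1:.3f}")
```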

Uploaded using `huggingface_hub`.