
Fine-Tuned LLaMA-3-8B CEFR Model

This is a fine-tuned version of unsloth/llama-3-8b-instruct-bnb-4bit for CEFR-level sentence generation. Generated sentences are evaluated with a fine-tuned CEFR classifier, Mr-FineTuner/Skripsi_validator_best_model.
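The reported metrics come from a generate-then-classify loop: the fine-tuned model produces sentences for a target CEFR level and the external classifier predicts the level of each output. Below is a minimal sketch of that loop, assuming standard transformers loading for both repositories and that the classifier returns labels such as "B1" (the exact label format and generation settings are assumptions, not the original evaluation script):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline

GEN_ID = "Mr-FineTuner/With_synthetic_Dataset_llama-3epoch-02dropout"
CLS_ID = "Mr-FineTuner/Skripsi_validator_best_model"

tokenizer = AutoTokenizer.from_pretrained(GEN_ID)
model = AutoModelForCausalLM.from_pretrained(GEN_ID)
classifier = pipeline("text-classification", model=CLS_ID)

def generate_sentence(level: str) -> str:
    # Prompt format mirrors the usage example further down this card.
    prompt = f"<|user|>Generate a CEFR {level} level sentence.<|end|>"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True)
    # The decoded text still contains the prompt; trimming is omitted here.
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

for target in ["A1", "A2", "B1", "B2", "C1", "C2"]:
    sentence = generate_sentence(target)
    predicted = classifier(sentence)[0]["label"]  # label format assumed, e.g. "B1"
    print(target, predicted, sentence)
```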

  • Base Model: unsloth/llama-3-8b-instruct-bnb-4bit
  • Fine-Tuning: LoRA with balanced dataset
  • Training Details (a configuration sketch follows this list):
    • Dataset: CEFR-level sentences
    • LoRA Parameters: r=32, lora_alpha=32, lora_dropout=0.5
    • Training Args: learning_rate=1e-5, batch_size=8, epochs=0.01, cosine scheduler
    • Optimizer: adamw_8bit
    • Early Stopping: Patience=2, threshold=0.01
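    The hyperparameters above map onto a standard LoRA fine-tuning setup. A minimal configuration sketch using peft and transformers; the original run used an Unsloth 4-bit base, and the target modules, best-model metric, and trainer wiring shown here are assumptions rather than the exact training script:

    ```python
    from peft import LoraConfig, get_peft_model
    from transformers import Trainer, TrainingArguments, EarlyStoppingCallback

    # LoRA parameters as listed above; target_modules are an assumed, typical choice.
    lora_config = LoraConfig(
        r=32,
        lora_alpha=32,
        lora_dropout=0.5,
        target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
        task_type="CAUSAL_LM",
    )

    # Training arguments as listed above.
    training_args = TrainingArguments(
        output_dir="cefr-llama3-lora",
        learning_rate=1e-5,
        per_device_train_batch_size=8,
        num_train_epochs=0.01,
        lr_scheduler_type="cosine",
        optim="adamw_8bit",              # "adamw_bnb_8bit" in some transformers versions
        evaluation_strategy="steps",
        save_strategy="steps",
        load_best_model_at_end=True,     # required by EarlyStoppingCallback
        metric_for_best_model="eval_loss",
    )

    early_stopping = EarlyStoppingCallback(
        early_stopping_patience=2,
        early_stopping_threshold=0.01,
    )

    # peft_model = get_peft_model(base_model, lora_config)  # base_model: the 4-bit Llama-3 checkpoint
    # Trainer(model=peft_model, args=training_args, callbacks=[early_stopping], ...).train()
    ```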
  • Evaluation Metrics (Exact Matches):
    • CEFR Classifier Accuracy: 0.367
    • Precision (Macro): 0.340
    • Recall (Macro): 0.367
    • F1-Score (Macro): 0.327
  • Evaluation Metrics (Within ±1 Level; see the scoring sketch after this list):
    • CEFR Classifier Accuracy: 0.800
    • Precision (Macro): 0.833
    • Recall (Macro): 0.800
    • F1-Score (Macro): 0.801
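    Both blocks of scores can be reproduced from the classifier's predictions. A small scikit-learn sketch, assuming the within-±1 figures count any prediction at most one CEFR level from the target as correct (snapping it to the target before scoring); this interpretation is an assumption, not the exact evaluation script:

    ```python
    from sklearn.metrics import accuracy_score, precision_recall_fscore_support

    LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]
    IDX = {level: i for i, level in enumerate(LEVELS)}

    def macro_scores(y_true, y_pred):
        acc = accuracy_score(y_true, y_pred)
        p, r, f1, _ = precision_recall_fscore_support(
            y_true, y_pred, average="macro", zero_division=0
        )
        return acc, p, r, f1

    # Illustrative labels: target levels vs classifier predictions.
    y_true = ["A1", "A2", "B1", "B2", "C1", "C2"]
    y_pred = ["A2", "A2", "B1", "C1", "C1", "B2"]

    exact = macro_scores(y_true, y_pred)

    # Within ±1: a prediction one level away is counted as the target label.
    relaxed = [t if abs(IDX[p] - IDX[t]) <= 1 else p for t, p in zip(y_true, y_pred)]
    within_one = macro_scores(y_true, relaxed)

    print("exact:", exact)
    print("within ±1:", within_one)
    ```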
  • Other Metrics (perplexity computation sketched after this list):
    • Perplexity: 2.733
    • Diversity (Unique Sentences): 1.000
    • Inference Time (ms): 6839.595
    • Model Size (GB): 8.0 (PyTorch checkpoint format)
    • Robustness (F1): 0.310
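    The perplexity figure is the exponential of the average token-level cross-entropy loss on evaluation sentences. A minimal sketch of that computation, reusing the model and tokenizer from the usage example below (the evaluation texts and per-sentence averaging are assumptions):

    ```python
    import torch

    def perplexity(texts, model, tokenizer):
        model.eval()
        losses = []
        with torch.no_grad():
            for text in texts:
                enc = tokenizer(text, return_tensors="pt")
                # Passing input_ids as labels yields the mean per-token NLL for this sentence.
                out = model(**enc, labels=enc["input_ids"])
                losses.append(out.loss)
        return torch.exp(torch.stack(losses).mean()).item()

    # Illustrative held-out sentences.
    print(perplexity(["The cat sits on the mat.", "She enjoys reading short stories."], model, tokenizer))
    ```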
  • Confusion Matrices (Exact and Within ±1 Level): summarized per class below.
  • Per-Class Confusion Metrics (Exact Matches; derivation from the confusion matrix is sketched after these lists):
    • A1: TP=4, FP=1, FN=6, TN=49
    • A2: TP=8, FP=14, FN=2, TN=36
    • B1: TP=2, FP=6, FN=8, TN=44
    • B2: TP=2, FP=10, FN=8, TN=40
    • C1: TP=6, FP=7, FN=4, TN=43
    • C2: TP=0, FP=0, FN=10, TN=50
  • Per-Class Confusion Metrics (Within ±1 Level):
    • A1: TP=7, FP=0, FN=3, TN=50
    • A2: TP=10, FP=5, FN=0, TN=45
    • B1: TP=10, FP=2, FN=0, TN=48
    • B2: TP=5, FP=5, FN=5, TN=45
    • C1: TP=9, FP=0, FN=1, TN=50
    • C2: TP=7, FP=0, FN=3, TN=50
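    The per-class counts above are straightforward bookkeeping on a 6x6 confusion matrix: for class i, TP is the diagonal entry, FP the rest of column i, FN the rest of row i, and TN everything else. A small scikit-learn sketch (labels here are illustrative, not the card's evaluation data):

    ```python
    from sklearn.metrics import confusion_matrix

    LEVELS = ["A1", "A2", "B1", "B2", "C1", "C2"]

    # Illustrative labels only.
    y_true = ["A1", "A2", "B1", "B2", "C1", "C2"]
    y_pred = ["A1", "A2", "B2", "B2", "C1", "C1"]

    cm = confusion_matrix(y_true, y_pred, labels=LEVELS)
    total = cm.sum()
    for i, level in enumerate(LEVELS):
        tp = cm[i, i]
        fp = cm[:, i].sum() - tp   # predicted as `level`, true label differs
        fn = cm[i, :].sum() - tp   # true label is `level`, predicted differently
        tn = total - tp - fp - fn
        print(f"{level}: TP={tp}, FP={fp}, FN={fn}, TN={tn}")
    ```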
  • Usage:

    ```python
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model = AutoModelForCausalLM.from_pretrained("Mr-FineTuner/With_synthetic_Dataset_llama-3epoch-02dropout")
    tokenizer = AutoTokenizer.from_pretrained("Mr-FineTuner/With_synthetic_Dataset_llama-3epoch-02dropout")

    # Example inference
    prompt = "<|user|>Generate a CEFR B1 level sentence.<|end|>"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_length=50)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
    ```

  • Uploaded using huggingface_hub.
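    For lower memory use, the checkpoint (about 8 GB in PyTorch format) can also be loaded with 4-bit quantization via bitsandbytes; the configuration below is an assumption, and whether it matches the quality of a full-precision load is untested here:

    ```python
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    # Requires the bitsandbytes and accelerate packages.
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.bfloat16,
    )

    model = AutoModelForCausalLM.from_pretrained(
        "Mr-FineTuner/With_synthetic_Dataset_llama-3epoch-02dropout",
        quantization_config=bnb_config,
        device_map="auto",
    )
    tokenizer = AutoTokenizer.from_pretrained(
        "Mr-FineTuner/With_synthetic_Dataset_llama-3epoch-02dropout"
    )
    ```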
    