Llama-3 8B Multi-Dataset SFT (LoRA)

This model is a LoRA fine-tune of Llama-3-8B, trained with Supervised Fine-Tuning (SFT) on a merged instruction dataset combining Alpaca, Dolly 15k, and OpenAssistant (OASST).
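
The exact merging recipe is not published on this card; the following is a minimal sketch of how such a merge could be assembled with the 🤗 datasets library, assuming the public dataset IDs tatsu-lab/alpaca and databricks/databricks-dolly-15k (OASST conversations require an extra tree-flattening step and are omitted here):

from datasets import load_dataset, concatenate_datasets

# Hypothetical merge script -- the card does not publish the actual recipe.
alpaca = load_dataset("tatsu-lab/alpaca", split="train")
dolly = load_dataset("databricks/databricks-dolly-15k", split="train")

# Map Dolly's column names onto the Alpaca schema.
dolly = dolly.rename_columns({"context": "input", "response": "output"})

# Keep only the shared columns, in the same order, so features line up.
cols = ["instruction", "input", "output"]
merged = concatenate_datasets([alpaca.select_columns(cols),
                               dolly.select_columns(cols)]).shuffle(seed=42)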

πŸš€ Key Features

  • Training Framework: Unsloth (LoRA)
  • Datasets: merged instruction-tuning data (Alpaca, Dolly 15k, OASST)
  • Quantization: 4-bit (bitsandbytes)
  • Monitoring: Tracked via Weights & Biases

πŸ› οΈ Usage

You can load this model with the unsloth library for fast 4-bit inference:

from unsloth import FastLanguageModel
import torch

# Load the 4-bit quantized model and its tokenizer.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "Karan6124/llama3-8b-multi-dataset-sft",
    max_seq_length = 2048,
    dtype = None,          # auto-detect (bfloat16 on Ampere+, else float16)
    load_in_4bit = True,
)
FastLanguageModel.for_inference(model)  # switch to Unsloth's optimized inference path

# Alpaca-style prompt template used during SFT
instruction = "Write a clear Python function to check if a string is a palindrome."
prompt = f"### Instruction:\n{instruction}\n\n### Response:\n"

inputs = tokenizer([prompt], return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0])
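
If you are not using Unsloth, the adapter can likely also be loaded with plain 🤗 Transformers and PEFT; a minimal sketch, assuming this repo is a standard PEFT LoRA adapter:

import torch
from transformers import AutoTokenizer, BitsAndBytesConfig
from peft import AutoPeftModelForCausalLM

bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

# AutoPeftModelForCausalLM resolves the base model from the adapter config.
model = AutoPeftModelForCausalLM.from_pretrained(
    "Karan6124/llama3-8b-multi-dataset-sft",
    quantization_config=bnb,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Karan6124/llama3-8b-multi-dataset-sft")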

πŸ“Š Training Details

  • LoRA rank (r): 16
  • LoRA alpha: 32
  • Learning rate: 2e-4
  • Optimizer: adamw_8bit
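
These hyperparameters map onto Unsloth's LoRA setup roughly as follows; a minimal training-side sketch, where the base checkpoint, target modules, and dropout are assumptions rather than values taken from the original run:

from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-3-8b-bnb-4bit",  # assumed base checkpoint
    max_seq_length = 2048,
    load_in_4bit = True,
)

# Attach a LoRA adapter matching the reported hyperparameters (r=16, alpha=32).
model = FastLanguageModel.get_peft_model(
    model,
    r = 16,
    lora_alpha = 32,
    lora_dropout = 0,  # assumption: Unsloth's default
    target_modules = ["q_proj", "k_proj", "v_proj", "o_proj",
                      "gate_proj", "up_proj", "down_proj"],  # assumed
)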