# LLaMA LoRA Instruction Fine-Tuning (FP16)

This repository contains LoRA adapter weights for a LLaMA language model fine-tuned on instruction-style question–answer data using FP16 precision. The goal is to improve instruction following and reasoning while keeping training efficient via parameter-efficient fine-tuning (PEFT).
## Model Details

- **Base Model:** LLaMA (meta-llama/Llama-2-7b-hf)
- **Fine-Tuning Method:** LoRA (Low-Rank Adaptation)
- **Precision:** FP16
- **Task:** Instruction Following / Question Answering
- **Frameworks:** Hugging Face Transformers, PEFT, PyTorch
- **Checkpoint Format:** `.safetensors`
> 🔹 This repository contains **LoRA adapter weights only**. The base LLaMA model must be loaded separately.
## Dataset

The model was fine-tuned on the FineTome-100k dataset, a curated collection of high-quality instruction–response pairs designed for supervised fine-tuning (SFT) of large language models.

- **Dataset Name:** FineTome-100k
- **Type:** Instruction / Q&A pairs
- **Size:** ~100K samples
- 🔗 **Dataset Link:** https://huggingface.co/datasets/mlabonne/FineTome-100k
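FineTome-100k stores each sample as a ShareGPT-style conversation rather than a single prompt/response string, so records are typically flattened before tokenization. The sketch below is illustrative only: the `ROLE_TAGS` markers and `format_example` helper are assumptions for demonstration, not the template actually used during training (verify field names against the dataset card).

```python
# Sketch: flatten one ShareGPT-style record into a single training string.
# The "conversations"/"from"/"value" field names follow the ShareGPT
# convention; ROLE_TAGS is an illustrative prompt template, not the
# template used for this checkpoint.
ROLE_TAGS = {"human": "### Instruction:", "gpt": "### Response:"}

def format_example(record: dict) -> str:
    """Join a multi-turn conversation into one prompt/response string."""
    parts = []
    for turn in record["conversations"]:
        parts.append(f"{ROLE_TAGS[turn['from']]}\n{turn['value']}")
    return "\n\n".join(parts)

example = {
    "conversations": [
        {"from": "human", "value": "What is LoRA?"},
        {"from": "gpt", "value": "A parameter-efficient fine-tuning method."},
    ]
}
print(format_example(example))
```

For best results at inference time, prompts should follow whatever template was used during training.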
## Training Procedure

- **Objective:** Causal Language Modeling
- **Trainable Parameters:** LoRA adapters only
- **Frozen Parameters:** All base model weights
- **Optimizer:** AdamW
- **Precision:** FP16
- **Approach:** Supervised Fine-Tuning (SFT)
This setup allows efficient adaptation of a large model without updating all parameters.
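Concretely, LoRA keeps each base weight matrix `W` frozen and trains only a low-rank update `ΔW = (α/r)·B·A`. The NumPy sketch below shows why this is parameter-efficient; the dimensions, rank, and scaling factor are illustrative stand-ins, since the actual hyperparameters are not recorded in this card.

```python
import numpy as np

# Minimal sketch of a LoRA update. Shapes, rank r, and alpha are
# illustrative only -- the real training hyperparameters are not
# listed in this model card.
d_out, d_in, r, alpha = 4096, 4096, 8, 16

W = np.zeros((d_out, d_in))            # frozen base weight (stand-in values)
A = np.random.randn(r, d_in) * 0.01    # trainable low-rank factor A
B = np.zeros((d_out, r))               # trainable factor B, zero-initialized

# Effective weight used in the forward pass: W + (alpha / r) * B @ A
W_eff = W + (alpha / r) * (B @ A)

full = W.size                          # parameters a full fine-tune would update
lora = A.size + B.size                 # parameters LoRA actually trains
print(f"trainable fraction: {lora / full:.4%}")
```

Because `B` is zero-initialized, `ΔW` starts at zero and the model's behavior is unchanged at the beginning of fine-tuning; only the small `A` and `B` matrices receive gradient updates.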
## How to Use

### Requirements

```bash
pip install transformers peft accelerate torch
```
### Load Model with LoRA Adapters

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_model_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "Vinay-11/Llama-lora-instruct-finetuning"

tokenizer = AutoTokenizer.from_pretrained(base_model_id)

base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Attach the LoRA adapters on top of the frozen base model
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
```
### Example Inference

```python
prompt = "What is LoRA fine-tuning?"

# Use model.device rather than hard-coding "cuda", since
# device_map="auto" decides where the model is placed.
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=120,
    temperature=0.7,
    do_sample=True,
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
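With `do_sample=True`, the `temperature` parameter rescales the logits before the softmax: values below 1 sharpen the distribution toward the most likely token, while values above 1 flatten it for more diverse output. A minimal NumPy sketch of that scaling (the logit values here are made up for illustration):

```python
import numpy as np

def sample_probs(logits: np.ndarray, temperature: float) -> np.ndarray:
    """Softmax over temperature-scaled logits, as used when do_sample=True."""
    scaled = logits / temperature
    scaled -= scaled.max()          # subtract max for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

logits = np.array([2.0, 1.0, 0.5])            # made-up token logits
cool = sample_probs(logits, temperature=0.7)  # sharper: favors the top token
hot = sample_probs(logits, temperature=1.5)   # flatter: more diverse sampling
print(cool, hot)
```

This is why `temperature=0.7` in the example above biases generation toward high-probability continuations while still allowing some variety.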
## Model Tree

Model tree for Vinay-11/Llama-lora-instruct-finetuning:

- **Base model:** [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf)