tinyllama-dora-model

Model Description

This model is a parameter-efficient fine-tune of TinyLlama/TinyLlama-1.1B-Chat-v1.0, trained with DoRA (Weight-Decomposed Low-Rank Adaptation) on top of a 4-bit quantized base model.


Key Features

  • Base Model: TinyLlama-1.1B-Chat
  • Fine-tuning Method: DoRA
  • Quantization: 4-bit (see the configuration sketch after this list)
  • Framework: Transformers + PEFT
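
The exact adapter configuration is not included in this card. The sketch below shows how a DoRA adapter on a 4-bit base is typically set up with PEFT and bitsandbytes; the rank, alpha, dropout, target modules, and NF4 settings are illustrative assumptions, not values taken from the card.

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Load the base model in 4-bit (NF4 is the common QLoRA-style default; assumed here)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=bnb_config,
    device_map="auto",
)
base = prepare_model_for_kbit_training(base)

# use_dora=True decomposes each low-rank update into magnitude and direction (DoRA)
dora_config = LoraConfig(
    r=16,                                  # illustrative rank, not taken from the card
    lora_alpha=32,                         # illustrative scaling, not taken from the card
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # assumed LLaMA-style attention projections
    task_type="CAUSAL_LM",
    use_dora=True,
)

model = get_peft_model(base, dora_config)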

Intended Use

  • Instruction-based text generation
  • Conversational AI
  • Research and experimentation

Limitations

  • Small dataset (1k samples)
  • May produce incorrect outputs

Dataset

mlabonne/guanaco-llama2-1k


Training Details

  • Learning Rate: 5e-5
  • Batch Size: 2
  • Epochs: 1
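
The training script itself is not part of this card. As a rough sketch, the hyperparameters above could be plugged into trl's SFTTrainer roughly as follows, reusing the DoRA-wrapped model from the Key Features sketch; argument names may differ slightly between trl versions, and the output directory is a placeholder.

from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Dataset named in this card; it exposes a plain "text" field
dataset = load_dataset("mlabonne/guanaco-llama2-1k", split="train")

# Learning rate, batch size, and epochs taken from this card; the rest are defaults
training_args = SFTConfig(
    output_dir="tinyllama-dora-model",
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    num_train_epochs=1,
)

trainer = SFTTrainer(
    model=model,           # the DoRA-wrapped 4-bit model from the Key Features sketch
    train_dataset=dataset,
    args=training_args,
)
trainer.train()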

Results

Validation Loss: 1.5644 (Perplexity = exp(1.5644) ≈ 4.78)
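
The reported perplexity follows directly from the loss:

import math

validation_loss = 1.5644
perplexity = math.exp(validation_loss)
print(f"Perplexity: {perplexity:.2f}")  # prints 4.78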


Usage

from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_model = "Sujith2121/tinyllama-dora-model"

tokenizer = AutoTokenizer.from_pretrained(adapter_model)

# device_map="auto" places the model on GPU if available, otherwise CPU
model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")

# Attach the DoRA adapter weights on top of the base model
model = PeftModel.from_pretrained(model, adapter_model)
model.eval()

prompt = "Explain Docker simply"

# Move inputs to the same device as the model instead of assuming CUDA
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
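
Since the adapter was trained on a 4-bit quantized base, you can optionally load the base model in 4-bit at inference time as well to reduce memory. A minimal sketch, assuming bitsandbytes is installed; the NF4 settings are an assumption rather than values taken from the card:

import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",             # assumed NF4, the usual QLoRA-style default
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=bnb_config,
    device_map="auto",
)
model = PeftModel.from_pretrained(model, "Sujith2121/tinyllama-dora-model")

The generation code above works unchanged with this model object.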

License

Apache 2.0
