---
library_name: peft
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
  - base_model:adapter:TinyLlama/TinyLlama-1.1B-Chat-v1.0
  - dora
  - qlora
  - transformers
  - text-generation
pipeline_tag: text-generation
model-index:
  - name: tinyllama-dora-model
    results:
      - task:
          type: text-generation
        dataset:
          name: mlabonne/guanaco-llama2-1k
          type: instruction-tuning
        metrics:
          - type: loss
            value: 1.5644
            name: validation_loss
---

# tinyllama-dora-model

## Model Description

This model is a parameter-efficient fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 using DoRA (Weight-Decomposed Low-Rank Adaptation) combined with 4-bit quantization, i.e. a QLoRA-style setup.


## Key Features

- Base Model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- Fine-tuning Method: DoRA
- Quantization: 4-bit
- Framework: Transformers + PEFT
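The combination above can be sketched with PEFT and bitsandbytes. Note that the rank, alpha, and target modules below are illustrative assumptions, not values reported in this card:

```python
# Sketch of a DoRA + 4-bit (QLoRA-style) setup.
# r, lora_alpha, and target_modules are assumed, not the card's actual values.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=bnb_config,
    device_map="auto",
)

# use_dora=True switches plain LoRA to DoRA in PEFT
lora_config = LoraConfig(
    r=16,                                  # assumed rank
    lora_alpha=32,                         # assumed scaling
    target_modules=["q_proj", "v_proj"],   # assumed target modules
    use_dora=True,
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
```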

## Intended Use

- Instruction-based text generation
- Conversational AI
- Research and experimentation

## Limitations

- Trained on a small dataset (1,000 samples), so coverage is limited
- May produce incorrect or ungrounded outputs

## Dataset

[mlabonne/guanaco-llama2-1k](https://huggingface.co/datasets/mlabonne/guanaco-llama2-1k), a 1,000-sample subset of the Guanaco instruction dataset.


## Training Details

- Learning Rate: 5e-5
- Batch Size: 2
- Epochs: 1
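These hyperparameters map directly onto a standard `transformers.TrainingArguments` configuration. Anything not listed in the card (optimizer, scheduler, warmup, etc.) is left at library defaults here:

```python
from transformers import TrainingArguments

# Only the hyperparameters reported in the card are set explicitly;
# all other values fall back to the library defaults.
training_args = TrainingArguments(
    output_dir="tinyllama-dora-model",
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    num_train_epochs=1,
)
```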

## Results

- Validation Loss: 1.5644
- Perplexity: exp(1.5644) ≈ 4.78
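The perplexity figure follows directly from the loss, since perplexity is the exponential of the average cross-entropy loss:

```python
import math

# Perplexity = exp(cross-entropy loss)
val_loss = 1.5644
perplexity = math.exp(val_loss)
print(round(perplexity, 2))  # → 4.78
```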


## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_model = "Sujith2121/tinyllama-dora-model"

tokenizer = AutoTokenizer.from_pretrained(adapter_model)

# Load the base model, then attach the DoRA adapter on top
model = AutoModelForCausalLM.from_pretrained(base_model, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_model)

prompt = "Explain Docker simply"

# Move inputs to the same device as the model (works on CPU or GPU)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## License

Apache 2.0