
WikiLlama


WikiLlama is a LoRA fine-tuned version of TinyLlama-1.1B, trained on the WikiText-103 dataset to improve general NLP performance.


Model Details

  • Base Model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
  • Training Dataset: WikiText-103
  • Training Method: LoRA (Low-Rank Adaptation) with base weights frozen.
  • Author: Rudransh Joshi
  • License: Apache 2.0 (same as TinyLlama)
  • Format: Safetensors, F16 weights

Evaluation & Performance

The model was evaluated on a 100-example sample of the HellaSwag benchmark (multiple-choice sentence completion). On this sample, WikiLlama improves accuracy by 6 percentage points over the base model; note that the sample size is small.

Model                 Accuracy (HellaSwag)
Original TinyLlama    24%
WikiLlama (LoRA)      30%
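HellaSwag-style multiple-choice evaluation is typically done by scoring each candidate ending by its log-likelihood under the model and picking the highest-scoring one. The sketch below is a minimal illustration of that idea, not the exact harness used for the numbers above:

```python
# Minimal sketch of log-likelihood scoring for a multiple-choice
# (HellaSwag-style) item. Assumes the context's tokens are a prefix of
# the tokenization of context + ending, which holds for typical inputs.
import torch


def score_ending(model, tokenizer, context, ending):
    """Return the total log-likelihood of `ending` given `context`."""
    ctx_ids = tokenizer(context, return_tensors="pt").input_ids
    full_ids = tokenizer(context + ending, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Position i predicts token i+1, so shift logits/targets by one.
    log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
    targets = full_ids[:, 1:]
    token_lp = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    # Sum log-probs over the ending tokens only.
    return token_lp[:, ctx_ids.shape[1] - 1:].sum().item()


def predict(model, tokenizer, context, endings):
    """Return the index of the highest-likelihood ending."""
    scores = [score_ending(model, tokenizer, context, e) for e in endings]
    return max(range(len(endings)), key=scores.__getitem__)
```

Accuracy is then the fraction of items where the predicted index matches the labeled ending.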

Example Usage

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

# Load the model and tokenizer
model_id = "rudranshjoshi/WikiLlama"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto"
)

# Prepare input
messages = [
    {"role": "user", "content": "What is the capital of France?"}
]

# Apply chat template (if available) or format prompt
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate response
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    temperature=0.7,
    do_sample=True
)

response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)

Note: Fine-tuning on WikiText-103 yielded a 6 percentage-point absolute accuracy improvement on the 100-example HellaSwag sample compared to the vanilla TinyLlama-1.1B checkpoint.
