Munna Bhai Bot - A Fun Conversational AI
This model is a fine-tuned version of TinyLlama/TinyLlama-1.1B-Chat-v1.0 that speaks in the style of the iconic Bollywood character, Munna Bhai.
This project was created as a learning exercise to demonstrate the power of QLoRA for efficient fine-tuning.
Model Details
- Base Model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- Fine-tuning Method: QLoRA (4-bit quantization)
- Dataset: A custom, synthetically generated dataset of 1000 question-answer pairs in the Munna Bhai style.
- Trained by: sharad waje
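The exact training configuration is not published with this card, but a typical QLoRA setup for this base model looks roughly like the sketch below. The quantization settings, LoRA rank, alpha, dropout, and target modules are illustrative assumptions, not the values actually used for this adapter.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization of the frozen base weights (typical QLoRA choices;
# the actual values used for this adapter are not documented here).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    quantization_config=bnb_config,
    device_map="auto",
)

# Small trainable LoRA adapter on the attention projections
# (rank, alpha, dropout, and target modules are assumptions).
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()
```

The key idea is that the 4-bit quantized base model stays frozen while only the low-rank adapter matrices are trained, which is what keeps the fine-tuning cheap enough to run on a single consumer GPU.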
How to Use
You can use this model (which is a PEFT adapter) with the following code:
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# 1. Load the base model
base_model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
base_model = AutoModelForCausalLM.from_pretrained(base_model_name)

# 2. Load the PEFT adapter and tokenizer from the Hub
adapter_repo_id = "sharad8855/MunnaBhai-Bot-v1"
final_model = PeftModel.from_pretrained(base_model, adapter_repo_id)
tokenizer = AutoTokenizer.from_pretrained(adapter_repo_id)

# 3. Generate text using the TinyLlama chat prompt format
prompt = "<|user|>\nGive me some life advice.</s>\n<|assistant|>\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = final_model.generate(**inputs, max_new_tokens=60)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
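Two optional follow-ups, sketched below under the assumption that the adapter/tokenizer repo ships the base model's chat template: build prompts with `tokenizer.apply_chat_template` instead of hand-writing the special tokens, and merge the adapter into the base weights with `merge_and_unload()` for standalone use (the output directory name is just an example).

```python
# Optional: build the prompt from the tokenizer's chat template instead of
# hand-writing the <|user|>/<|assistant|> tags (assumes the template is
# available in the loaded tokenizer).
messages = [{"role": "user", "content": "Give me some life advice."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Optional: fold the LoRA weights into the base model so it can be saved
# and served as a single standalone model without the PEFT dependency.
merged_model = final_model.merge_and_unload()
merged_model.save_pretrained("munnabhai-bot-merged")  # illustrative output path
```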