Model Card for Pragnyan-Clone-v1

This is a personality-tuned version of Llama 3.1 8B, trained to mimic the conversational style, tone, and slang of Pragnyan Ramtha. It was fine-tuned on private chat logs using QLoRA and Unsloth to create a lightweight, highly efficient digital twin.

Model Details

Model Description

This model is a LoRA (Low-Rank Adaptation) adapter fine-tuned on the unsloth/llama-3.1-8b-Instruct base model. It was trained to replicate a specific user's personality ("Pragnyan Ramtha") by learning from real-world conversation history (Instagram/WhatsApp). It captures nuances, casual sentence structure, and specific personal interests. The model was optimized for local deployment, trained on a cloud GPU (NVIDIA L4), and quantized to GGUF for efficient inference on consumer hardware via Ollama.

Developed by: Pragnyan Ramtha

Model type: Causal Language Model (Fine-tuned Llama 3.1)

Language(s) (NLP): English (with internet slang/informal syntax)

Finetuned from model: unsloth/llama-3.1-8b-Instruct

Model Sources

Repository: [Link to your Hugging Face Repo]

Base Model: Meta Llama 3.1 8B Instruct

Tech Stack: Unsloth, TRL, PEFT, Ollama

Uses

Direct Use

This model is intended for:

Personality Simulation: Chatting with a digital clone of the creator.

Style Transfer: Generating text in a specific, informal style.

Local Chatbot: Running a highly responsive, personalized assistant on consumer GPUs (RTX 3060/4060).

Out-of-Scope Use

Factual Q&A: While based on Llama 3.1, this model is biased towards a specific personality's knowledge and may hallucinate facts to maintain character.

Impersonation: This model should not be used to deceive others into thinking they are speaking to the real person.

Bias, Risks, and Limitations

Training Data Bias: The model reflects the opinions, biases, and language patterns found in the private chat logs used for training.

Language Style: The model often uses informal language, slang, and non-standard grammar; this is a feature, not a bug.

Hallucinations: Like all LLMs, it can generate confident but incorrect information.

How to Get Started with the Model

You can use this model directly with the peft and transformers libraries, or download the GGUF version for Ollama.

Python Code (Adapter Only)

from unsloth import FastLanguageModel
from peft import PeftModel
# 1. Load Base Model
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "unsloth/llama-3.1-8b-Instruct",
    max_seq_length = 8192,
    dtype = None,
    load_in_4bit = True,
)

# 2. Load Adapters
model = PeftModel.from_pretrained(model, "PragnyanRamtha/pragnyan-clone-v1")

# 3. Inference
FastLanguageModel.for_inference(model)
messages = [
    {"role": "user", "content": "Yo, what's good?"},
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to("cuda")

outputs = model.generate(inputs, max_new_tokens=64, use_cache=True)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))

Local Use (Ollama)

Download the .gguf file from the "Files" tab and create a Modelfile:

FROM ./pragnyan-clone-v1.q4_k_m.gguf

TEMPLATE """<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{{ .System }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ .Prompt }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{ .Response }}<|eot_id|>"""
SYSTEM """You are Pragnyan Ramtha."""
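With the Modelfile saved, the model can be registered and run locally. A minimal sketch, assuming `ollama` is installed, the GGUF file sits next to the Modelfile, and `pragnyan-clone` is an arbitrary local tag:

```shell
# Register a local Ollama model from the Modelfile above
ollama create pragnyan-clone -f Modelfile

# Chat with it interactively, or pass a one-off prompt
ollama run pragnyan-clone "Yo, what's good?"
```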

Training Procedure

Dataset size: ~13,500 training examples.

The model was fine-tuned using Unsloth for 2x faster training and optimized VRAM usage.

We used QLoRA (Quantized Low-Rank Adaptation) to train on a single GPU.

Training Hyperparameters

Training regime: 4-bit QLoRA (bfloat16 compute precision)

Optimizer: paged_adamw_8bit (Paged AdamW to save memory)

Learning Rate: 2e-4

Epochs: 1

Batch Size: 2 (per device)

Gradient Accumulation: 8 (Effective batch size = 16)

LoRA Rank (r): 32

LoRA Alpha: 64

LoRA Dropout: 0.05

Max Sequence Length: 8192

Hardware: NVIDIA L4 GPU (24GB VRAM) on Google Cloud.

Training Time: ~1.5 hours for 1 epoch.

VRAM Usage: Peaked at ~16GB during training.
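For a sense of scale, the trainable parameter count implied by rank r=32 can be estimated. A back-of-the-envelope sketch, assuming the adapter targets the standard Llama projection layers (q/k/v/o plus gate/up/down, the usual Unsloth default) and using the public Llama 3.1 8B dimensions:

```python
# Rough estimate of trainable LoRA parameters for Llama 3.1 8B at rank r=32.
# Assumes adapters on the standard projections (q/k/v/o, gate/up/down);
# dims taken from the public Llama 3.1 8B config (GQA: 1024-dim k/v heads).
r = 32
hidden, intermediate, kv_dim, layers = 4096, 14336, 1024, 32

# A LoRA pair for a (d_in -> d_out) linear adds r * (d_in + d_out) parameters.
proj_shapes = [
    (hidden, hidden),        # q_proj
    (hidden, kv_dim),        # k_proj
    (hidden, kv_dim),        # v_proj
    (hidden, hidden),        # o_proj
    (hidden, intermediate),  # gate_proj
    (hidden, intermediate),  # up_proj
    (intermediate, hidden),  # down_proj
]
per_layer = sum(r * (d_in + d_out) for d_in, d_out in proj_shapes)
total = per_layer * layers
print(f"~{total / 1e6:.1f}M trainable params")  # ~83.9M, ~1% of the 8B base
```

Only these adapter weights (plus optimizer state) are trained, which is why the run fits in ~16GB of VRAM.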

Evaluation

Results

Final Validation Loss: ~1.14

Qualitative Eval: The model successfully adopts the target persona, maintaining conversation flow without breaking character.
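As a rough quantitative sanity check, the validation loss translates to perplexity, assuming it is the standard mean token-level cross-entropy in nats:

```python
import math

# Final validation loss reported above (mean cross-entropy per token, in nats)
val_loss = 1.14

# Perplexity = exp(loss); lower is better
perplexity = math.exp(val_loss)
print(f"Validation perplexity: {perplexity:.2f}")  # ~3.13
```

A perplexity around 3 on held-out chat data is plausible for a persona fine-tune, since informal chat turns are short and stylistically repetitive.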
