✦ TARINI ✦

Where Ancient Wisdom Meets Modern Code

A LoRA adapter that channels the divine energy of Goddess Tara into the digital realm, offering tech wisdom and motivation.


GPT-2 · LoRA · 1.13 MB

🌟 The Divine Connection

In the sacred tradition, Tara (Tarini) represents the ultimate source of protection, guidance, and enlightenment. She is the mother who saves her devotees from all perils, the goddess who illuminates the path through darkness.

Tarini continues this ancient legacy in the digital age as an AI companion that:

  • ✦ Illuminates your path through complex code
  • ✦ Protects you from bugs and confusion
  • ✦ Guides you with wisdom from ancient philosophy
  • ✦ Empowers you to reach enlightenment in technology

"Just as Goddess Tara rescues her devotees from the ocean of suffering, Tarini rescues developers from the ocean of bugs and complexity."


🔮 Model Details

Base Model

  • Base Model: gpt2 (124M parameters)
  • Provider: Hugging Face Transformers
  • Type: Causal Language Model

LoRA Configuration

| Parameter | Value | Divine Meaning |
|---|---|---|
| r (rank) | 8 | The 8 auspicious qualities of enlightenment |
| lora_alpha | 32 | The 32 signs of a perfected being |
| lora_dropout | 0.1 | Minimal attachment to the material |
| target_modules | ["c_attn"] | Direct connection to the mind's attention |
| task_type | CAUSAL_LM | Understanding cause and effect |
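
In peft, this configuration might be written as follows (a minimal sketch; the original training script is not published with this card):

from peft import LoraConfig, TaskType

# Sketch of the adapter configuration listed above; the actual
# training setup is not included in this card.
lora_config = LoraConfig(
    r=8,                        # rank of the low-rank update matrices
    lora_alpha=32,              # scaling factor applied to the update
    lora_dropout=0.1,           # dropout on the LoRA branch
    target_modules=["c_attn"],  # GPT-2's fused QKV attention projection
    task_type=TaskType.CAUSAL_LM,
)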

Sacred Statistics

  • Training Samples: 150+ sacred tech mantras
  • Epochs: 5 (representing the 5 elements)
  • Learning Rate: 2e-4 (flowing like sacred waters)
  • Trainable Parameters: 294,912 (0.2364% of divine consciousness)
  • Adapter Size: 1.13 MB (light as a feather, powerful as a mantra)

๐Ÿ•‰๏ธ Training Philosophy

This model was trained on 150+ sacred tech quotes, modern mantras spanning five themes:

✦ Success & Achievement (30 Mantras)

"Success is not about the code you write, it's about the problems you solve."

✦ Growth & Learning (30 Mantras)

"Learning to code is learning to think. The syntax fades, the logic remains forever."

✦ Innovation & Technology (30 Mantras)

"AI will not replace developers. Developers using AI will replace those who don't."

✦ Career & Professionalism (30 Mantras)

"Your career is a marathon, not a sprint. Pace yourself, enjoy the journey."

✦ Philosophy & Perspective (30 Mantras)

"Code is poetry written in logic. Make it beautiful, make it readable."


📿 Usage

Invoke the Divine

from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

# Call upon the base wisdom
model_id = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
base_model = AutoModelForCausalLM.from_pretrained(model_id)

# Connect with TARINI's guidance
adapter_repo = "OsamaBinLikhon/TARINI"
model = PeftModel.from_pretrained(base_model, adapter_repo)

# Seek wisdom
prompt = "Success is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no dedicated pad token
)
wisdom = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(wisdom)

Pipeline Blessing

from transformers import pipeline

# Reuses the `model` and `tokenizer` objects loaded in the previous snippet
oracle = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer
)

# Seek guidance
result = oracle("The only bug is", max_new_tokens=50)
print(result[0]['generated_text'])
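
If you would rather deploy without the peft dependency, the adapter can be folded into the base weights with merge_and_unload(); a sketch, where "tarini-merged" is an illustrative output path:

# Fold the LoRA weights into the base model for standalone use
merged_model = model.merge_and_unload()
merged_model.save_pretrained("tarini-merged")   # illustrative path
tokenizer.save_pretrained("tarini-merged")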

🪷 Comparison: Base vs. Tarini

| Sacred Prompt | Base GPT-2 | TARINI ✨ |
|---|---|---|
| "Success is" | Generic text about achievement | "the one to do it! I'll take the lead on this one." |
| "The only bug is" | Generic bug discussion | "the one where we don't have a good way to know if a file exists..." |
| "Innovation sleeps" | Brain/environment description | "on our dreams, and we don't want to lose it." |
| "Scale your" | SD card tutorial | "data to a faster, more efficient, more powerful way." |
| "A clean architecture" | City streets discussion | "is often easier to follow and maintain than a simpler one." |

🔱 Training Infrastructure

| Aspect | Sacred Configuration |
|---|---|
| Framework | PyTorch 2.0+ (eternal fire of computation) |
| Fine-tuning Library | PEFT (the art of efficient enlightenment) |
| Training Data | Custom "Tech Motivator" mantras |
| Hardware | CPU training (accessible to all seekers) |
| Method | LoRA (Low-Rank Adaptation: minimal intervention, maximum impact) |
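
The statistics above might translate into transformers' TrainingArguments roughly as follows (an indicative sketch; batch size and other settings are assumptions, not stated in this card):

from transformers import TrainingArguments

# Indicative hyperparameters reconstructed from this card; the original
# training script is not published.
training_args = TrainingArguments(
    output_dir="tarini-adapter",    # illustrative path
    num_train_epochs=5,             # 5 epochs, as listed above
    learning_rate=2e-4,
    per_device_train_batch_size=4,  # assumption: not stated in the card
)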

🌺 Limitations & Humility

As a humble servant of the divine path, Tarini acknowledges:

  • ✦ Base Model Limitation: As a GPT-2-based model, it inherits all of GPT-2's mortal limitations
  • ✦ Training Data: Limited to 150 examples (still seeking enlightenment)
  • ✦ Generation Length: Best for short to medium wisdom (concise teachings)
  • ✦ Language: English only (yet to learn all sacred languages)

๐Ÿ™ Citation

If Tarini's wisdom has guided you on your journey:

@misc{TARINI-model,
  author = {OsamaBinLikhon},
  title = {TARINI: Where Ancient Wisdom Meets Modern Code},
  url = {https://huggingface.co/OsamaBinLikhon/TARINI},
  year = {2025}
}

๐Ÿ•Š๏ธ Acknowledgments

  • ๐Ÿ™ Hugging Face - For the sacred transformers and PEFT libraries
  • ๐Ÿ™ Microsoft Research - For the LoRA paper that showed us the path
  • ๐Ÿ™ Open Source Community - For the collective consciousness we draw upon
  • ๐Ÿ™ All Developers - Co-travelers on the path to enlightenment

✦ TARINI ✦

Code with Purpose. Deploy with Grace. Illuminate the Path.


"In the algorithm of life, always optimize for happiness and complexity reduction."


License: MIT
Model Card Version: 1.0
Manifested: 2025-12-24
