Krishna Wisdom Assistant

Model Details

Model Description

Krishna Wisdom Assistant is a conversational language model fine-tuned to respond in a calm, reflective, spiritually inspired tone influenced by Krishna philosophy and dharmic wisdom.

The model is designed to answer everyday emotional, philosophical, and personal-reflection questions in a gentle and uplifting style. It is intended for supportive, inspirational, and wisdom-oriented dialogue rather than factual authority or professional advice.

This model was fine-tuned using supervised fine-tuning (SFT) with LoRA on a custom prompt-response dataset containing philosophical and analogy-based conversational examples.
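The idea behind LoRA can be illustrated in a few lines of NumPy: instead of updating a full weight matrix, training learns a small low-rank delta. The hidden size and rank below are assumed for illustration only and are not the actual training configuration.

```python
import numpy as np

# Toy illustration of a LoRA update: rather than learning a full d x d
# weight delta, learn two low-rank factors B (d x r) and A (r x d).
d, r = 1024, 8  # hidden size and LoRA rank (assumed values)

rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))          # frozen base weight
A = rng.standard_normal((r, d)) * 0.01   # trainable down-projection
B = np.zeros((d, r))                     # trainable up-projection, init to zero

# Effective weight during fine-tuning: base plus low-rank delta.
# Because B starts at zero, the model initially behaves like the base model.
W_eff = W + B @ A

full_params = d * d          # parameters a full weight update would train
lora_params = d * r + r * d  # parameters LoRA actually trains
print(f"full: {full_params}, LoRA: {lora_params}")
```

With these dimensions, LoRA trains under 2% of the parameters a full-weight update would require, which is why it is practical for fine-tuning on modest hardware.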

  • Developed by: aaryanpethkar48
  • Funded by: Personal project
  • Shared by: Aaryan Pethkar
  • Model type: Causal Language Model
  • Language(s): English
  • License: Apache-2.0
  • Finetuned from model: TinyLlama/TinyLlama-1.1B-Chat-v1.0

Uses

Direct Use

This model is intended for:

  • spiritual-style conversation
  • motivational and reflective chat
  • calm, wisdom-oriented responses
  • journaling prompts
  • gentle emotional support language
  • quote-style or dharmic-inspired assistant use cases

Example prompts:

  • Why do people change?
  • How do I stay calm when life feels unfair?
  • What does detachment really mean?
  • How can I stop overthinking?

Downstream Use

This model may be used in:

  • spiritual chatbot apps
  • journaling assistants
  • mindfulness or reflection tools
  • content generation for short inspirational responses
  • devotional or wisdom-themed conversational interfaces

Out-of-Scope Use

This model is not intended for:

  • medical advice
  • mental health diagnosis or crisis intervention
  • legal advice
  • financial advice
  • authoritative religious guidance or scholarly scripture interpretation
  • high-stakes decision-making
  • factual Q&A requiring high reliability

This model should not be used as a substitute for professional, clinical, legal, or emergency support.


Bias, Risks, and Limitations

This model reflects the style and tone of its fine-tuning data and may:

  • produce repetitive spiritual phrasing
  • overgeneralize life advice
  • respond poetically instead of analytically
  • provide emotionally comforting but incomplete answers
  • hallucinate if asked factual or technical questions outside its domain
  • reflect bias present in the source dataset

Because the fine-tuning dataset is relatively large but domain-specific, the model may also:

  • overfit to certain response patterns
  • produce stylistically similar outputs
  • struggle with highly technical or factual queries
  • underperform on general-purpose assistant tasks

Recommendations

Users should:

  • use this model primarily for inspiration, reflection, and gentle conversation
  • avoid relying on it for factual, legal, financial, or medical decisions
  • test outputs carefully before deployment in public-facing products
  • add moderation and safety layers for production use

How to Get Started with the Model

Transformers Inference Example

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "aaryanpethkar48/mindful-ai"

# Load the tokenizer and model; use half precision on GPU, full precision on CPU.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32,
    device_map="auto"
)

# A system prompt sets the reflective persona; the user turn carries the question.
messages = [
    {"role": "system", "content": "You are a wise AI inspired by Krishna philosophy."},
    {"role": "user", "content": "Why do people change?"},
]

# Render the conversation with the model's chat template and append the
# generation prompt so the model responds as the assistant.
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)

inputs = tokenizer(text, return_tensors="pt").to(model.device)

# Sample a response: temperature and top_p keep it varied but coherent,
# and repetition_penalty curbs repetitive spiritual phrasing.
outputs = model.generate(
    **inputs,
    max_new_tokens=150,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    repetition_penalty=1.1
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))