Kavi
Kavi is a compact, instruction-tuned language model focused on delivering clear, simple, and practical life advice. It is designed to be approachable, consistent, and easy to deploy, making it suitable for educational and personal guidance use cases.
Model Details
- Model name: Kavi
- Version: v0.5
- Author: Abhinivesh (0xAbhi)
- Base model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
- Model type: Causal Language Model
- Language: English (Tamil support planned)
- License: Apache 2.0
What’s New in v0.5
- Improved reasoning and response clarity
- Fine-tuned on curated life-advice conversations
- More consistent and structured answers
- Better alignment for supportive, guidance-oriented dialogue
Training
- Fine-tuning method: QLoRA (Quantized Low-Rank Adaptation)
- Hardware: NVIDIA Tesla T4 (Google Colab)
- Precision: 4-bit quantized base model with LoRA adapters (merged after training)
- Dataset: Curated English life-advice and guidance conversations
The model was fine-tuned to improve conversational quality, tone, and practical reasoning without significantly increasing model size or inference cost.
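The training setup above relies on LoRA's low-rank factorization to keep the number of trainable parameters small. As a rough back-of-the-envelope illustration (the rank and layer size below are illustrative examples, not the actual training configuration):

```python
def lora_trainable_params(d: int, k: int, r: int) -> int:
    # A rank-r LoRA adapter factors the d x k weight update as
    # (d x r) @ (r x k), so only r * (d + k) parameters are trained
    # instead of the full d * k.
    return r * (d + k)

# Example: one 2048 x 2048 projection (TinyLlama's hidden size is 2048),
# with an illustrative adapter rank of 16.
full_params = 2048 * 2048                            # 4,194,304 frozen
adapter_params = lora_trainable_params(2048, 2048, r=16)  # 65,536 trainable
print(full_params, adapter_params)
```

This is why the card can claim improved conversational quality "without significantly increasing model size or inference cost": the adapters are a tiny fraction of the base weights, and merging them back in adds no extra parameters at inference time.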
Intended Use
Kavi is intended for:
- General life guidance and self-reflection prompts
- Educational and learning-oriented conversations
- Supportive, non-clinical advice interactions
- Chatbots focused on clarity, simplicity, and encouragement
Out-of-Scope Use
Kavi is not intended for:
- Medical, legal, or financial advice
- Crisis counseling or mental health diagnosis
- Professional or authoritative decision-making systems
Limitations
- Not a substitute for professional advice
- English-first; Tamil reasoning and responses are planned for future releases
- As a small (1.1B-parameter) model, it may struggle with complex multi-step reasoning
Ethical Considerations
- Responses are generated based on patterns learned from training data and may not always be accurate or complete
- Users should apply human judgment when interpreting outputs
- The model does not possess awareness, intent, or personal understanding
Usage Example
```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="0xAbhi/kavi",
    device_map="auto",
)

result = pipe(
    "I feel stuck in life and don’t know what to do next.",
    max_new_tokens=256,
)
print(result[0]["generated_text"])
```
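Because the base model is TinyLlama-1.1B-Chat, prompts are normally formatted with its Zephyr-style chat template (in practice `tokenizer.apply_chat_template` handles this). A minimal sketch of what that formatting produces, assuming the fine-tune inherits the base model's template:

```python
def build_prompt(system: str, user: str) -> str:
    # Zephyr-style chat format used by TinyLlama-1.1B-Chat-v1.0
    # (assumed to carry over to this fine-tune).
    return (
        f"<|system|>\n{system}</s>\n"
        f"<|user|>\n{user}</s>\n"
        f"<|assistant|>\n"
    )

prompt = build_prompt(
    "You are Kavi, a supportive life-advice assistant.",
    "I feel stuck in life and don’t know what to do next.",
)
print(prompt)
```

The trailing `<|assistant|>` tag cues the model to generate its reply; passing raw, untemplated text instead may degrade response quality.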
Future Work
- Incremental Tamil language fine-tuning
- Multilingual reasoning improvements
- Additional alignment for emotional nuance and long-form guidance
Citation
@misc{tinyllama,
  title={TinyLlama: An Open-Source Small Language Model},
  author={TinyLlama Team},
  year={2023},
  url={https://huggingface.co/TinyLlama}
}
Acknowledgements
- TinyLlama team for the base model
- Hugging Face ecosystem (Transformers, PEFT, TRL)
- Unsloth for efficient fine-tuning on low-resource hardware