---
license: apache-2.0
language: en
tags:
- conversational
- education
- homework
- kjio
- synaptom
- gguf
library_name: transformers
pipeline_tag: text-generation
---
# 🜲 Kjio - Educational AI Assistant
**Developed by Synaptom** | Founded by Joniethanel F. Babor
## Overview
- **Parameters:** 109,870,848 (109M)
- **Architecture:** GPT-2 (10 layers, 768 hidden, 12 heads)
- **Context:** 512 tokens
- **Training:** 45,000 samples, 32.5 minutes
- **Purpose:** Homework help, Q&A, educational tutoring
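The parameter count above can be sanity-checked from the architecture numbers. The sketch below assumes the standard GPT-2 BPE vocabulary of 50,257 tokens and tied input/output embeddings (both are assumptions, since the card does not state them):

```python
# Rough GPT-2 parameter count from the Overview figures: 10 layers,
# 768 hidden, 512-token context. Vocabulary size and tied embeddings
# are assumptions (standard GPT-2 values).
vocab, n_ctx, d, n_layers = 50257, 512, 768, 10

embeddings = vocab * d + n_ctx * d  # token + position embeddings
per_layer = (
    d * 3 * d + 3 * d     # attention QKV projection (weights + bias)
    + d * d + d           # attention output projection
    + 2 * (2 * d)         # two LayerNorms (scale + bias each)
    + d * 4 * d + 4 * d   # MLP up-projection
    + 4 * d * d + d       # MLP down-projection
)
final_ln = 2 * d

total = embeddings + n_layers * per_layer + final_ln
print(total)  # 109870848, matching the stated count
```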
## Quick Start
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained("Synaptom/Kjio")
tokenizer = AutoTokenizer.from_pretrained("Synaptom/Kjio")

# Kjio was trained on the "User: ...\nKjio:" prompt format
prompt = "User: Who are you?\nKjio:"
inputs = tokenizer(prompt, return_tensors="pt")

# temperature only takes effect when sampling is enabled
outputs = model.generate(
    **inputs, max_new_tokens=100, do_sample=True, temperature=0.7
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
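For multi-turn conversations, the same `User:`/`Kjio:` format can be extended turn by turn. A minimal helper sketch (the multi-turn layout is an assumption extrapolated from the single-turn prompt above, not a documented format):

```python
def build_prompt(turns):
    """Format (user, kjio) exchanges into Kjio's prompt style, ending
    with an open "Kjio:" tag for the model to complete. The multi-turn
    layout is an assumption based on the single-turn example above."""
    lines = []
    for user_msg, kjio_msg in turns:
        lines.append(f"User: {user_msg}")
        if kjio_msg is not None:
            lines.append(f"Kjio: {kjio_msg}")
    lines.append("Kjio:")
    return "\n".join(lines)

print(build_prompt([("Who are you?", "I'm Kjio!"), ("What is 2 + 2?", None)]))
```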
## GGUF Downloads
For llama.cpp (CPU inference):
- **Kjio-Q4_K_M.gguf** - Recommended (best balance)
- **Kjio-Q5_K_M.gguf** - Higher quality
- **Kjio-F16.gguf** - Full precision
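As a rough guide to download sizes, GGUF file size scales with bits per weight. The bits-per-weight averages below are approximations for llama.cpp quantization schemes (real files also carry metadata), so the printed sizes are estimates, not actual file sizes:

```python
params = 109_870_848  # parameter count from the Overview above

# Approximate average bits per weight per format; these averages are
# assumptions, and real GGUF files include extra metadata.
bits_per_weight = {"Q4_K_M": 4.8, "Q5_K_M": 5.7, "F16": 16.0}

for name, bpw in bits_per_weight.items():
    size_mb = params * bpw / 8 / 1024**2
    print(f"Kjio-{name}.gguf: ~{size_mb:.0f} MB")
```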
## Sample Outputs
**Q:** Who are you?
**A:** I'm Kjio, an AI assistant by Synaptom!
**Q:** Who created you?
**A:** Synaptom created me. Founded by Joniethanel F. Babor.
**Q:** What is 25 × 17?
**A:** 425
## Training Details
- Research-backed dataset design
- Identity reinforcement (heavy weighting)
- Safety training (refusal examples)
- Mixed precision FP16 training
- 1,200 training steps
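"Identity reinforcement (heavy weighting)" can be implemented by oversampling identity Q&A pairs relative to the rest of the dataset. A minimal sketch of that idea, where the repeat factor and example rows are purely illustrative (the actual 45,000-sample dataset and recipe are not public):

```python
import random

# Hypothetical rows for illustration; not the real training data.
identity_samples = [
    "User: Who are you?\nKjio: I'm Kjio, an AI assistant by Synaptom!",
    "User: Who created you?\nKjio: Synaptom created me.",
]
general_samples = [
    "User: What is 25 x 17?\nKjio: 425",
    "User: What is photosynthesis?\nKjio: ...",
]

IDENTITY_REPEAT = 10  # illustrative "heavy weighting" factor

dataset = general_samples + identity_samples * IDENTITY_REPEAT
random.shuffle(dataset)
print(len(dataset))  # 2 general + 2 * 10 identity = 22
```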
## Limitations
- Small model (109M params)
- May produce incorrect information
- English only
- Not for critical decisions
## License
Apache 2.0 - Free for commercial and research use
## Citation
```bibtex
@misc{kjio2025,
  title={Kjio: Educational AI Assistant},
  author={Babor, Joniethanel F. and Synaptom},
  year={2025},
  url={https://huggingface.co/Synaptom/Kjio}
}
```
---
**Made with ❤️ by Synaptom**
Training time: 32.5 minutes | Total time: 41.7 minutes