# Keural-Alpha UA-SFT (6K)
This repository contains a Supervised Fine-Tuned (SFT) checkpoint of Keural-Alpha, trained to improve instruction-following and chat-style responses.
## 🔹 Model Overview
- Base model: keural-alpha-base
- Fine-tuning type: Supervised Fine-Tuning (Instruction / UA-style)
- Training steps: 6,000
- Context length: 2048 tokens
- Precision: bfloat16
- Framework: PyTorch + Hugging Face Transformers
- Checkpoint format: `model.safetensors`
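A minimal loading sketch with Transformers is shown below. The repo id is taken from this page's model tree and is an assumption about where the checkpoint is hosted; substitute a local directory containing `model.safetensors` for offline use.

```python
# Loading sketch. The repo id below comes from this page's model tree
# and may differ from where you host the checkpoint; a local directory
# containing model.safetensors also works.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mkd-ai/keural-alpha-chat-v0.2"  # or a local checkpoint path

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # the checkpoint was trained in bfloat16
)
```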
## 🔹 Training Data
The model was fine-tuned on cleaned instruction–response data focused on:
- Question answering
- Short explanations
- Basic reasoning tasks
- General conversational instructions
This checkpoint represents an intermediate training stage and is intended for further experimentation or continued fine-tuning.
## 🔹 Intended Use
- Instruction-following research
- Chat-style text generation
- Continued fine-tuning (SFT / chat datasets)
- Local and offline inference
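For chat-style generation, a rough sketch continuing from the loading snippet above is given here. The plain-text prompt format is an assumption, since the card does not specify a chat template; check the tokenizer for one before relying on this.

```python
# Generation sketch (continues from the loading snippet above).
# The plain-text prompt format is an assumption; the card does not
# document a chat template.
prompt = "Explain what supervised fine-tuning is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output_ids = model.generate(
    **inputs,
    max_new_tokens=128,  # stay well inside the 2048-token context
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```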
⚠️ This is not a final model. Output quality should improve with additional training steps and more diverse chat data.