
Keural-Alpha UA-SFT (6K)

This repository contains a Supervised Fine-Tuned (SFT) checkpoint of Keural-Alpha, trained to improve instruction-following and chat-style responses.


🔹 Model Overview

  • Base model: keural-alpha-base
  • Model repo: mkd-ai/keural-alpha-chat-v0.2
  • Parameters: ~2B
  • Fine-tuning type: Supervised Fine-Tuning (Instruction / UA-style)
  • Training steps: 6,000
  • Context length: 2048 tokens
  • Precision: bfloat16
  • Framework: PyTorch + Hugging Face Transformers
  • Checkpoint format: model.safetensors
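A minimal local-inference sketch with PyTorch and Transformers (the repo id `mkd-ai/keural-alpha-chat-v0.2` is taken from this page; the `clamp_new_tokens` helper is illustrative, not part of any library):

```python
def clamp_new_tokens(prompt_len: int, max_new: int, context_len: int = 2048) -> int:
    """Keep prompt + generated tokens within the model's 2048-token context window."""
    return max(0, min(max_new, context_len - prompt_len))


def generate_reply(prompt: str, max_new_tokens: int = 128) -> str:
    """Load the checkpoint in bfloat16 and generate a chat-style reply."""
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    repo_id = "mkd-ai/keural-alpha-chat-v0.2"
    tokenizer = AutoTokenizer.from_pretrained(repo_id)
    model = AutoModelForCausalLM.from_pretrained(repo_id, torch_dtype=torch.bfloat16)
    model.eval()

    inputs = tokenizer(prompt, return_tensors="pt")
    # Respect the 2048-token window when choosing a generation budget.
    budget = clamp_new_tokens(inputs["input_ids"].shape[1], max_new_tokens)
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=budget)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```

For example, `generate_reply("Explain SFT in one sentence.")` downloads the checkpoint on first use and runs fully offline afterwards.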

🔹 Training Data

The model was fine-tuned on cleaned instruction–response data focused on:

  • Question answering
  • Short explanations
  • Basic reasoning tasks
  • General conversational instructions

This checkpoint represents an intermediate training stage and is intended for further experimentation or continued fine-tuning.
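For continued fine-tuning, instruction–response pairs need a consistent prompt template and label masking so loss is computed only on the response. A sketch under stated assumptions: the Alpaca-style template below is illustrative, since the template used for the original 6,000 steps is not documented here.

```python
def format_example(instruction: str, response: str) -> tuple[str, str]:
    """Render one instruction-response pair with an assumed Alpaca-style template.

    Returns (prompt, full_text) so the caller knows where the response begins.
    """
    prompt = f"### Instruction:\n{instruction}\n\n### Response:\n"
    return prompt, prompt + response


def build_labels(input_ids: list[int], prompt_len: int, ignore_index: int = -100) -> list[int]:
    """Mask prompt positions with -100 so the SFT loss covers only response tokens."""
    return [ignore_index] * prompt_len + input_ids[prompt_len:]
```

Masking the prompt with `-100` (the default `ignore_index` of PyTorch's cross-entropy loss) keeps the model from being trained to reproduce its own inputs.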


🔹 Intended Use

  • Instruction-following research
  • Chat-style text generation
  • Continued fine-tuning (SFT / chat datasets)
  • Local and offline inference

โš ๏ธ This is not a final model. Output quality will improve with additional training steps and more diverse chat data.


