mkiwi

mkiwi is a model trained on outputs from Kimi K2.5 and Opus 4.7. It is intended in part to evaluate the impact of conversational filler.

Training

It was trained on a tokenized version of mkiwi-data for 12,500 steps with a batch size of 4, a learning rate of 1.5e-4, and the pika 3 tokenizer.
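For reference, the hyperparameters above can be collected into a minimal configuration sketch like the one below. Only the step count, batch size, learning rate, tokenizer name, and dataset name come from this card; the structure, field names, and anything else are assumptions, not the actual training code.

```python
# Minimal sketch of the training configuration described above.
# Only max_steps, batch_size, learning_rate, tokenizer, and dataset are
# taken from this card; the class itself is an illustrative placeholder.
from dataclasses import dataclass

@dataclass
class TrainConfig:
    dataset: str = "mkiwi-data"      # pre-tokenized dataset (see above)
    tokenizer: str = "pika 3"        # tokenizer used to build the dataset
    max_steps: int = 12_500          # total optimizer steps
    batch_size: int = 4              # sequences per step
    learning_rate: float = 1.5e-4    # learning rate

config = TrainConfig()
print(config)
```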

Limitations

At roughly 19M parameters, the model is too small to be suitable for production workloads.

Model size: 19.3M parameters · Tensor type: F32 · Format: safetensors
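Since the checkpoint is published as F32 safetensors, one quick sanity check is to read the file directly and count parameters. This is a minimal sketch assuming the weights have been downloaded locally as model.safetensors; the filename is an assumption, and tensor names are whatever the checkpoint contains.

```python
# Minimal sketch: inspect a downloaded safetensors checkpoint.
# Assumes the file is saved locally as "model.safetensors" (filename is an
# assumption, not documented on this card).
from safetensors import safe_open

total = 0
with safe_open("model.safetensors", framework="pt") as f:
    for name in f.keys():
        t = f.get_tensor(name)
        total += t.numel()
        print(name, tuple(t.shape), t.dtype)  # tensors should be float32

print(f"total parameters: {total / 1e6:.1f}M")  # expected to be about 19.3M
```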

Dataset used to train qikp/mkiwi-20m: mkiwi-data