mkiwi
mkiwi is a model trained on outputs from Kimi K2.5 and Opus 4.7. It is intended in part to evaluate the impact of conversational filler.
Training
It was trained on a tokenized version of mkiwi-data for 12,500 steps with a batch size of 4 and a learning rate of 1.5e-4, using the pika 3 tokenizer.
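
The sketch below shows one way such a run could be configured with the Hugging Face Trainer. Only the step count, batch size, and learning rate come from this card; the repo ids ("mkiwi-data", "pika-3", "mkiwi-base") are illustrative placeholders, not published identifiers, and the actual training setup may differ.

```python
# Minimal training sketch, assuming hypothetical repo ids for the dataset,
# tokenizer, and base checkpoint. Hyperparameters match the card above.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("pika-3")        # assumed tokenizer repo id
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

dataset = load_dataset("mkiwi-data", split="train")        # assumed dataset repo id

def tokenize(batch):
    # Convert raw text into token ids for causal LM training.
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

args = TrainingArguments(
    output_dir="mkiwi",
    max_steps=12_500,                # 12,500 training steps
    per_device_train_batch_size=4,   # batch size 4
    learning_rate=1.5e-4,            # learning rate 1.5e-4
)

model = AutoModelForCausalLM.from_pretrained("mkiwi-base")  # assumed base checkpoint
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(model=model, args=args, train_dataset=tokenized, data_collator=collator)
trainer.train()
```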
Limitations
Due to its size, the model is not suitable for production workloads.