Kite

🎉 You are looking at Kite 4.2, which was trained on a more optimized dataset!

Kite is a small language model with 14 million parameters.

Training

It was trained for 1 epoch on a tokenized version of qikp/small-data-2, a mixture of several datasets, using a batch size of 32, a learning rate of 1.5e-4, and the pika 4 tokenizer.

During training, the model was evaluated on a tokenized and truncated version of byunggill/gpt-2-output.
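The training setup above can be summarized as a configuration sketch. This is illustrative only: the actual training script is not published, and all key names here are hypothetical; the values are the ones stated in this card.

```python
# Hypothetical training configuration for Kite 4.2, reconstructed from
# the values stated in this model card (key names are illustrative).
config = {
    "train_dataset": "qikp/small-data-2",       # tokenized mixture of datasets
    "eval_dataset": "byunggill/gpt-2-output",   # tokenized and truncated, used for eval
    "tokenizer": "pika 4",
    "epochs": 1,
    "batch_size": 32,
    "learning_rate": 1.5e-4,
    "n_params": 14_000_000,                     # ~14M-parameter model
}

print(config["learning_rate"])  # 0.00015
```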

Limitations

Due to its size, the model is not suitable for production workloads.

Model size: 14.2M parameters (F32, Safetensors)

Dataset used to train qikp/kite-4.2-14m: qikp/small-data-2