kakaocorp/kanana-1.5-2.1b-instruct-2505 · Text Generation · 2B · 7.4k downloads · 35 likes
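Since the first item in the collection is a plain text-generation model, here is a minimal usage sketch with the Hugging Face transformers library. It assumes the model loads through the standard AutoModelForCausalLM path and ships a chat template; the prompt text is just a placeholder.

```python
# Minimal sketch: load the listed 2B instruct model and run one chat turn.
# Assumes a standard causal-LM checkpoint with a chat template on the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kakaocorp/kanana-1.5-2.1b-instruct-2505"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto")

# Build the prompt via the model's chat template (placeholder user message).
messages = [{"role": "user", "content": "What are small instruct models good for?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a short reply and strip the prompt tokens before decoding.
output_ids = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```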
4B is the new 7B. I meant to make separate sub-3B and sub-4B collections, but in the end I decided to combine them into a single collection of text-to-text LLMs.
Note was one of the best "small" language models out there
Note enterprise-focused text-to-text language model
Note a traditional transformer model
Note the "h" stands for hybrid: it uses both transformer and Mamba layers
Note biggest glow-up compared to SmolLM2
Note just an updated version of phi-4-mini-reasoning. phi-4.5-mini when???
Note pretty strong reasoning model.
Note literally the end-game 4B model.
Note literally the end-game 4B model. Now with reasoning!
Note wow! a tiny MoE!
Note a pruned Llama 3.1? cool.