GGUF Files for Kai-0.35B-Instruct

These are the GGUF files for NoesisLab/Kai-0.35B-Instruct.

Downloads

| Link | Quantization | Description |
|------|--------------|-------------|
| Download | Q2_K | Lowest quality |
| Download | Q3_K_S | |
| Download | IQ3_S | I-quant; preferable over Q3_K_S |
| Download | IQ3_M | I-quant |
| Download | Q3_K_M | |
| Download | Q3_K_L | |
| Download | IQ4_XS | I-quant |
| Download | Q4_K_S | Fast with good performance |
| Download | Q4_K_M | Recommended: good balance of speed and quality |
| Download | Q5_K_S | |
| Download | Q5_K_M | |
| Download | Q6_K | Very good quality |
| Download | Q8_0 | Best quality |
| Download | f16 | Full precision; not recommended, use a quant instead |
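These quants can be run directly with llama.cpp. A minimal sketch, assuming the files follow the usual `<model>-<quant>.gguf` naming convention (check the repo's file list for the exact names) and that `llama-cli` is on your PATH:

```shell
# Download one quant from this repo, then run it with llama.cpp.
# The exact .gguf filename is an assumption; verify it in the repo's file list.
huggingface-cli download Flexan/NoesisLab-Kai-0.35B-Instruct-GGUF \
  Kai-0.35B-Instruct-Q4_K_M.gguf --local-dir .

llama-cli -m Kai-0.35B-Instruct-Q4_K_M.gguf \
  -p "What is 25 * 4?" -n 256
```

Q4_K_M is used here because it is the recommended quant in the table above; any other file from the table can be substituted.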

Note from Flexan

I provide GGUFs and quantizations of publicly available models that do not have a GGUF equivalent available yet, usually for models I deem interesting and wish to try out.

If you'd like additional quants, or want another public model converted, you can request either in the community tab. For questions about the model itself, please refer to the original model repo.

You can find more info about me and what I do here.

Kai-0.35B-Instruct

A compact 0.35B-parameter instruction-tuned language model optimized for reasoning, math, and code generation tasks.

Model Details

| Model | Kai-0.35B-Instruct |
|-------|--------------------|
| Architecture | LlamaForCausalLM |
| Parameters | 360M |
| Hidden size | 960 |
| Layers | 32 |
| Attention heads | 15 (5 KV heads, GQA) |
| Context length | 8192 |
| Precision | bfloat16 |
| Vocab size | 49,152 |
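The stated 360M parameter count can be sanity-checked from the table. A back-of-the-envelope sketch, assuming an MLP intermediate size of 2560 and tied input/output embeddings (neither is stated in the card; both are typical for models of this class):

```python
# Rough parameter count from the model details above.
# ASSUMPTIONS (not in the card): intermediate size 2560, tied embeddings.
hidden, layers, heads, kv_heads, vocab = 960, 32, 15, 5, 49_152
intermediate = 2_560  # assumed

head_dim = hidden // heads  # 64
attn = 2 * hidden * hidden + 2 * hidden * (kv_heads * head_dim)  # Q,O + K,V (GQA)
mlp = 3 * hidden * intermediate   # gate, up, down projections
norms = 2 * hidden                # two RMSNorms per layer
per_layer = attn + mlp + norms

total = layers * per_layer + vocab * hidden + hidden  # + tied embedding + final norm
print(f"head_dim={head_dim}, total~{total / 1e6:.0f}M")
```

This lands at roughly 362M, consistent with the 360M figure in the table.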

Benchmark Results (5-shot, log-likelihood)

| Benchmark | Kai-0.35B-Instruct | Mamba (370M) | TinyLlama (1.1B) | Llama-3.2 (1B) |
|-----------|--------------------|--------------|------------------|----------------|
| ARC-Challenge (science reasoning) | 37.80% | ~29.1% | ~30.1% | ~44.5% |
| HellaSwag (sentence completion) | 55.88% | ~53.8% | ~59.2% | ~61.1% |
| PIQA (physical commonsense) | 71.82% | ~69.6% | ~73.0% | ~74.5% |

Code Generation: MBPP (3-shot, pass@1)

| Model | Params | MBPP pass@1 |
|-------|--------|-------------|
| Mamba / Mamba-2 | 370M | <10.0% |
| TinyLlama | 1.1B | ~19.91% |
| Kai-0.35B-Instruct | 360M | 22.20% |
| Llama-3.2-1B (Base) | 1.0B | ~25-30% |
| Llama-3.2-1B-Instruct | 1.0B | ~49.0% |

Key Observations

  1. ARC-Challenge: Kai-0.35B scores 37.80% (5-shot), significantly outperforming both Mamba-370M (+8.7pp) and TinyLlama-1.1B (+7.7pp), a model 3x its size.

  2. PIQA: At 71.82%, Kai-0.35B nearly matches TinyLlama-1.1B (73.0%) with only 1/3 the parameters, and trails the 1B-class Llama-3.2 by less than 3pp.

  3. MBPP: At 22.20% pass@1, Kai-0.35B surpasses TinyLlama-1.1B (~19.91%) in code generation despite being 3x smaller.
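The margins quoted in these observations follow directly from the benchmark tables; a quick arithmetic check:

```python
# Verify the deltas stated in the key observations,
# using the numbers from the benchmark tables above.
kai_arc, mamba_arc, tiny_arc = 37.80, 29.1, 30.1
kai_piqa, llama_piqa = 71.82, 74.5
kai_mbpp, tiny_mbpp = 22.20, 19.91

print(f"ARC vs Mamba-370M:       +{kai_arc - mamba_arc:.1f}pp")   # +8.7pp
print(f"ARC vs TinyLlama-1.1B:   +{kai_arc - tiny_arc:.1f}pp")    # +7.7pp
print(f"PIQA gap to Llama-3.2-1B: {llama_piqa - kai_piqa:.2f}pp") # 2.68pp
print(f"MBPP lead over TinyLlama: {kai_mbpp - tiny_mbpp:.2f}pp")  # 2.29pp
```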

Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the model in bfloat16 (the precision it was trained in).
model = AutoModelForCausalLM.from_pretrained(
    "NoesisLab/Kai-0.35B-Instruct",
    torch_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained("NoesisLab/Kai-0.35B-Instruct")

# Build the prompt with the model's chat template, appending the
# assistant turn marker so generation starts in the right place.
messages = [{"role": "user", "content": "What is 25 * 4?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Citation

```bibtex
@misc{noesislab2026nkai,
  title={Kai-0.35B-Instruct},
  author={NoesisLab},
  year={2026},
  url={https://huggingface.co/NoesisLab/Kai-0.35B-Instruct}
}
```

License

Apache 2.0
