This finetuned model is specialized for STEM reasoning benchmarks such as LCB (LiveCodeBench), CodeForces, AIME24, AIME25, AMC23, and MATH500.

This model is finetuned from unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit. It was trained for 500 steps on the OpenThoughts reasoning dataset.
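The exact training script was not published; the following is a minimal sketch of what a 500-step Unsloth + TRL finetune of this base checkpoint could look like. The dataset ID, LoRA settings, batch size, and learning rate are assumptions, and the field name in the formatting step is hypothetical (check the dataset schema before running).

# Minimal sketch of a 500-step Unsloth + TRL finetune -- NOT the published recipe.
from unsloth import FastLanguageModel
from trl import SFTConfig, SFTTrainer
from datasets import load_dataset

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit",
    max_seq_length=4096,
    load_in_4bit=True,
)
# Attach LoRA adapters so only a small fraction of the weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,            # assumed rank
    lora_alpha=16,   # assumed
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("open-thoughts/OpenThoughts-114k", split="train")  # assumed variant

def to_text(example):
    # Render each conversation to one training string with the chat template.
    # The "conversations" field name is an assumption -- verify the schema first.
    return {"text": tokenizer.apply_chat_template(example["conversations"], tokenize=False)}

trainer = SFTTrainer(
    model=model,
    processing_class=tokenizer,
    train_dataset=dataset.map(to_text),
    args=SFTConfig(
        output_dir="hercules1-sft",
        max_steps=500,                  # the 500 steps stated above
        per_device_train_batch_size=2,  # assumed
        gradient_accumulation_steps=4,  # assumed
        learning_rate=2e-4,             # assumed
        dataset_text_field="text",
    ),
)
trainer.train()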

How to use this model:

# Load the model directly from the Hugging Face Hub
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("EpistemeAI/Hercules1-Gemma-3n-E4B-it")
model = AutoModelForCausalLM.from_pretrained("EpistemeAI/Hercules1-Gemma-3n-E4B-it")

messages = [
    {"role": "user", "content": "Write me a Python function to calculate the nth fibonacci number.<think></think>"},
]
# Render the chat with Gemma's chat template and tokenize in one step.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=1024)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
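The base checkpoint is a 4-bit Unsloth quantization, so you may prefer to load this model in 4-bit as well to reduce VRAM use. Below is a minimal sketch using transformers' BitsAndBytesConfig; it requires the bitsandbytes package and a CUDA GPU, and is not an official recommendation for this model.

# Optional: load in 4-bit with bitsandbytes to cut VRAM use.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # compute in bfloat16
)
model = AutoModelForCausalLM.from_pretrained(
    "EpistemeAI/Hercules1-Gemma-3n-E4B-it",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("EpistemeAI/Hercules1-Gemma-3n-E4B-it")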

LLM benchmarks

| Tasks | Version | Filter | n-shot | Metric | Hercules1 Gemma | Claude 3.5 Haiku | Command R+ |
|---|---|---|---|---|---|---|---|
| winogrande | 1 | none | 1 | acc ↑ | 0.6771 | | |
| gpqa_diamond_zeroshot | 1 | none | 0 | acc/acc_norm ↑ | 0.4031 | 0.394 | 0.290 |
| mmlu_pro | 0.1 | none | 1 | acc ↑ | 0.2451 | | |
| global_mmlu_full_en | 0 | none | | acc ↑ | 0.6092 | | |
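The task names and columns above follow EleutherAI's lm-evaluation-harness output format, so a run along these lines should reproduce the Hercules1 column. This is a sketch: task availability and few-shot defaults depend on the harness version you have installed.

# Sketch: score the model with lm-evaluation-harness (pip install lm-eval).
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EpistemeAI/Hercules1-Gemma-3n-E4B-it",
    tasks=["winogrande", "gpqa_diamond_zeroshot", "mmlu_pro", "global_mmlu_full_en"],
)
# Print the per-task metric dictionaries.
for task, metrics in results["results"].items():
    print(task, metrics)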

Math500

| Task | Version | Metric | Value |
|---|---|---|---|
| lighteval:math_500:0 | | math_pass@1:4_samples | 0.584 |

See the full results file for the global_mmlu_full_en evaluation.

Note:

  • Currently only text is supported.
  • Recommended sampling settings: temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0 (applied in the sketch after this list).
  • Gemma 3n max tokens (context length): 32K.
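Applied to the generate() call from the usage example above, the recommended settings look like this (min_p requires a reasonably recent transformers release):

# Recommended Gemma 3n sampling settings from the notes above.
outputs = model.generate(
    **inputs,
    max_new_tokens=1024,
    do_sample=True,   # enable sampling so temperature/top_k/top_p/min_p apply
    temperature=1.0,
    top_k=64,
    top_p=0.95,
    min_p=0.0,
)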

Uploaded finetuned model

  • Developed by: EpistemeAI
  • License: apache-2.0
  • Finetuned from model: unsloth/gemma-3n-e4b-it-unsloth-bnb-4bit

This Gemma 3n model was trained 2x faster with Unsloth and Hugging Face's TRL library.

Citation

@misc{guha2025openthoughtsdatarecipesreasoning,
  title={OpenThoughts: Data Recipes for Reasoning Models}, 
  author={Etash Guha and Ryan Marten and Sedrick Keh and Negin Raoof and Georgios Smyrnis and Hritik Bansal and Marianna Nezhurina and Jean Mercat and Trung Vu and Zayne Sprague and Ashima Suvarna and Benjamin Feuer and Liangyu Chen and Zaid Khan and Eric Frankel and Sachin Grover and Caroline Choi and Niklas Muennighoff and Shiye Su and Wanjia Zhao and John Yang and Shreyas Pimpalgaonkar and Kartik Sharma and Charlie Cheng-Jie Ji and Yichuan Deng and Sarah Pratt and Vivek Ramanujan and Jon Saad-Falcon and Jeffrey Li and Achal Dave and Alon Albalak and Kushal Arora and Blake Wulfe and Chinmay Hegde and Greg Durrett and Sewoong Oh and Mohit Bansal and Saadia Gabriel and Aditya Grover and Kai-Wei Chang and Vaishaal Shankar and Aaron Gokaslan and Mike A. Merrill and Tatsunori Hashimoto and Yejin Choi and Jenia Jitsev and Reinhard Heckel and Maheswaran Sathiamoorthy and Alexandros G. Dimakis and Ludwig Schmidt},
  year={2025},
  eprint={2506.04178},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2506.04178}, 
}