---
license: mit
library_name: transformers
datasets:
- FractalAIResearch/Fathom-V0.4-SFT-Shortest-Chains
- FractalAIResearch/Fathom-V0.6-Iterative-Curriculum-Learning
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
---

# Fathom-R1-14B GGUF Models

## Model Generation Details

This model was generated using [llama.cpp](https://github.com/ggerganov/llama.cpp) at commit [`ea1431b0`](https://github.com/ggerganov/llama.cpp/commit/ea1431b0fa3a8108aac1e0a94a13ccc4a749963e).

## **Choosing the Right Model Format**

Selecting the correct model format depends on your **hardware capabilities** and **memory constraints**.

### **BF16 (Brain Float 16) – Use if BF16 acceleration is available**

- A 16-bit floating-point format designed for **faster computation** while retaining good precision.
- Provides a **similar dynamic range** to FP32 but with **lower memory usage**.
- Recommended if your hardware supports **BF16 acceleration** (check your device's specs).
- Ideal for **high-performance inference** with a **reduced memory footprint** compared to FP32.

📌 **Use BF16 if:**
✔ Your hardware has native **BF16 support** (e.g., newer GPUs, TPUs).
✔ You want **higher precision** while saving memory.
✔ You plan to **requantize** the model into another format.

📌 **Avoid BF16 if:**
❌ Your hardware does **not** support BF16 (it may fall back to FP32 and run slower).
❌ You need compatibility with older devices that lack BF16 optimization.

---

### **F16 (Float 16) – More widely supported than BF16**

- A 16-bit floating-point format with **high precision** but a smaller range of representable values than BF16.
- Works on most devices with **FP16 acceleration support** (including many GPUs and some CPUs).
- Slightly lower numerical precision than BF16 but generally sufficient for inference.

📌 **Use F16 if:**
✔ Your hardware supports **FP16** but **not BF16**.
✔ You need a **balance between speed, memory usage, and accuracy**.
✔ You are running on a **GPU** or another device optimized for FP16 computation.

📌 **Avoid F16 if:**
❌ Your device lacks **native FP16 support** (it may run slower than expected).
❌ You have memory limitations.

---

### **Quantized Models (Q4_K, Q6_K, Q8, etc.) – For CPU & Low-VRAM Inference**

Quantization reduces model size and memory usage while maintaining as much accuracy as possible.

- **Lower-bit models (Q4_K)** → **Best for minimal memory usage**, may have lower precision.
- **Higher-bit models (Q6_K, Q8_0)** → **Better accuracy**, require more memory.

📌 **Use Quantized Models if:**
✔ You are running inference on a **CPU** and need an optimized model.
✔ Your device has **low VRAM** and cannot load full-precision models.
✔ You want to reduce the **memory footprint** while keeping reasonable accuracy.

📌 **Avoid Quantized Models if:**
❌ You need **maximum accuracy** (full-precision models are better for this).
❌ Your hardware has enough VRAM for higher-precision formats (BF16/F16).

---

### **Very Low-Bit Quantization (IQ3_XS, IQ3_S, IQ3_M, Q4_K, Q4_0)**

These models are optimized for **extreme memory efficiency**, making them ideal for **low-power devices** or **large-scale deployments** where memory is a critical constraint.

- **IQ3_XS**: Ultra-low-bit quantization (3-bit) with **extreme memory efficiency**.
  - **Use case**: Best for **ultra-low-memory devices** where even Q4_K is too large.
  - **Trade-off**: Lower accuracy compared to higher-bit quantizations.

- **IQ3_S**: Small block size for **maximum memory efficiency**.
  - **Use case**: Best for **low-memory devices** where **IQ3_XS** is too aggressive.
- **IQ3_M**: Medium block size for better accuracy than **IQ3_S**.
  - **Use case**: Suitable for **low-memory devices** where **IQ3_S** is too limiting.

- **Q4_K**: 4-bit quantization with **block-wise optimization** for better accuracy.
  - **Use case**: Best for **low-memory devices** where **Q6_K** is too large.

- **Q4_0**: Pure 4-bit quantization, optimized for **ARM devices**.
  - **Use case**: Best for **ARM-based devices** or **low-memory environments**.

---

### **Summary Table: Model Format Selection**

| Model Format | Precision | Memory Usage | Device Requirements | Best Use Case |
|--------------|------------|---------------|----------------------|---------------|
| **BF16** | Highest | High | BF16-supported GPU/CPUs | High-speed inference with reduced memory |
| **F16** | High | High | FP16-supported devices | GPU inference when BF16 isn't available |
| **Q4_K** | Medium-Low | Low | CPU or low-VRAM devices | Best for memory-constrained environments |
| **Q6_K** | Medium | Moderate | CPU with more memory | Better accuracy while still being quantized |
| **Q8_0** | High | Moderate | CPU or GPU with enough VRAM | Best accuracy among quantized models |
| **IQ3_XS** | Very Low | Very Low | Ultra-low-memory devices | Extreme memory efficiency, low accuracy |
| **Q4_0** | Low | Low | ARM or low-memory devices | llama.cpp can optimize for ARM devices |

---

## **Included Files & Details**

### `Fathom-R1-14B-bf16.gguf`
- Model weights preserved in **BF16**.
- Use this if you want to **requantize** the model into a different format.
- Best if your device supports **BF16 acceleration**.

### `Fathom-R1-14B-f16.gguf`
- Model weights stored in **F16**.
- Use if your device supports **FP16**, especially if BF16 is not available.

### `Fathom-R1-14B-bf16-q8_0.gguf`
- **Output & embeddings** remain in **BF16**.
- All other layers quantized to **Q8_0**.
- Use if your device supports **BF16** and you want a quantized version.

### `Fathom-R1-14B-f16-q8_0.gguf`
- **Output & embeddings** remain in **F16**.
- All other layers quantized to **Q8_0**.

### `Fathom-R1-14B-q4_k.gguf`
- **Output & embeddings** quantized to **Q8_0**.
- All other layers quantized to **Q4_K**.
- Good for **CPU inference** with limited memory.

### `Fathom-R1-14B-q4_k_s.gguf`
- Smallest **Q4_K** variant, using less memory at the cost of accuracy.
- Best for **very low-memory setups**.

### `Fathom-R1-14B-q6_k.gguf`
- **Output & embeddings** quantized to **Q8_0**.
- All other layers quantized to **Q6_K**.

### `Fathom-R1-14B-q8_0.gguf`
- Fully **Q8_0** quantized model for better accuracy.
- Requires **more memory** but offers higher precision.

### `Fathom-R1-14B-iq3_xs.gguf`
- **IQ3_XS** quantization, optimized for **extreme memory efficiency**.
- Best for **ultra-low-memory devices**.

### `Fathom-R1-14B-iq3_m.gguf`
- **IQ3_M** quantization, offering a **medium block size** for better accuracy.
- Suitable for **low-memory devices**.

### `Fathom-R1-14B-q4_0.gguf`
- Pure **Q4_0** quantization, optimized for **ARM devices**.
- Best for **low-memory environments**.
- Prefer **IQ4_NL** for better accuracy.
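If you want a quick way to try one of the files above locally, the sketch below uses the `llama-cpp-python` bindings; any llama.cpp build at or after the pinned commit should work equally well via `llama-cli`. The file name, prompt, and GPU-offload setting are illustrative assumptions.

```python
# Minimal local-inference sketch using the llama-cpp-python bindings.
# Assumes one of the GGUF files listed above has already been downloaded,
# e.g. Fathom-R1-14B-q4_k.gguf, into the current directory.
from llama_cpp import Llama

llm = Llama(
    model_path="Fathom-R1-14B-q4_k.gguf",  # pick the quant that fits your RAM/VRAM
    n_ctx=16384,                           # the model was post-trained for a 16K context budget
    n_gpu_layers=-1,                       # offload all layers to GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Prove that the sum of two odd integers is even."}],
    temperature=0.6,
    top_p=0.95,
    max_tokens=4096,
)
print(out["choices"][0]["message"]["content"])
```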
# 🚀 If you find these models useful

❤ **Please click "Like" if you find this useful!**

Help me test my **AI-Powered Network Monitor Assistant** with **quantum-ready security checks**:
👉 [Quantum Network Monitor](https://readyforquantum.com/dashboard/?assistant=open&utm_source=huggingface&utm_medium=referral&utm_campaign=huggingface_repo_readme)

💬 **How to test**:
Choose an **AI assistant type**:
- `TurboLLM` (GPT-4o-mini)
- `HugLLM` (Hugging Face open-source)
- `TestLLM` (Experimental CPU-only)

### **What I'm Testing**

I'm pushing the limits of **small open-source models for AI network monitoring**, specifically:
- **Function calling** against live network services
- **How small can a model go** while still handling:
  - Automated **Nmap scans**
  - **Quantum-readiness checks**
  - **Network monitoring tasks**

🟡 **TestLLM** – Current experimental model (llama.cpp on 2 CPU threads):
- ✅ **Zero-configuration setup**
- ⏳ 30s load time (slow inference but **no API costs**)
- 🔧 **Help wanted!** If you're into **edge-device AI**, let's collaborate!

### **Other Assistants**

🟢 **TurboLLM** – Uses **gpt-4o-mini** for:
- **Creating custom cmd processors to run .NET code on Quantum Network Monitor Agents**
- **Real-time network diagnostics and monitoring**
- **Security audits**
- **Penetration testing** (Nmap/Metasploit)

🔵 **HugLLM** – Latest open-source models:
- 🌐 Runs on the Hugging Face Inference API

### 💡 **Example commands you could test**:

1. `"Give me info on my website's SSL certificate"`
2. `"Check if my server is using quantum safe encryption for communication"`
3. `"Run a comprehensive security audit on my server"`
4. `"Create a cmd processor to .. (whatever you want)"`

Note: you need to install a Quantum Network Monitor Agent to run the .NET code. This is a very flexible and powerful feature. Use with caution!

### Final Word

I fund the servers used to create these model files, run the Quantum Network Monitor service, and pay for inference from Novita and OpenAI—all out of my own pocket. All the code behind the model creation and the Quantum Network Monitor project is [open source](https://github.com/Mungert69). Feel free to use whatever you find helpful.

If you appreciate the work, please consider [buying me a coffee](https://www.buymeacoffee.com/mahadeva) ☕. Your support helps cover service costs and allows me to raise token limits for everyone.

I'm also open to job opportunities or sponsorship.

Thank you! 😊

# 🧮 Fathom-R1-14B: $499 Training Recipe for Unlocking Math Reasoning at o4-mini level using R1-distilled-14B model under 16K context
- **Second Stage (Leveraging SFT to improve reasoning efficiently at higher sequence length):** We build upon the RL checkpoint and perform SFT under a **16K context window** to encourage the more detailed reasoning required for solving more complex problems. For this stage, we strategically curate a dataset of hard problems — specifically, questions with low solve rates (0 < pass_rate <= 0.4). We then obtain the shortest possible reasoning chains for these questions, forming the **SFT Shortest Chains dataset** of 9.5K examples. Through supervised fine-tuning on this dataset, the model is able to stabilize its reasoning at sequence lengths up to 16K. The resulting model is named **Fathom-R1-14B-v0.4-RS**, optimized for concise yet accurate mathematical reasoning.
Total H100 GPU Hours: 293
Cost: $831
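For reference, a minimal sketch of loading and inspecting the released SFT Shortest Chains dataset with the 🤗 `datasets` library is shown below. The `train` split name and the token-length check are assumptions; the actual fine-tuning was run with the training stack listed in the acknowledgments.

```python
# Sketch: load the SFT Shortest Chains dataset and sanity-check how long its
# reasoning chains are relative to the 16K-token training budget.
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("FractalAIResearch/Fathom-V0.4-SFT-Shortest-Chains", split="train")
tok = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-14B")

print(ds)  # number of rows and column names

# Concatenate the text fields of one example and count tokens to confirm it
# fits within the 16K context window used during SFT.
example = ds[0]
text = " ".join(str(v) for v in example.values())
print(len(tok(text)["input_ids"]))
```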
### Training Recipe for Fathom-R1-14B-v0.4
Given the performance improvement we observed during the second fine-tuning stage while developing Fathom-R1-14B-v0.4-RS, and in an attempt to further reduce cost, we experiment with eliminating RL and directly performing the second-stage SFT on the DeepSeek-R1-Distill-Qwen-14B base model.
Total H100 GPU Hours: 128
Cost: $363
## Model Merging
Given that the v0.6 and v0.4 models were developed with different training methodologies, we perform linear merging to combine their strengths and obtain the final two checkpoints.
- **Fathom-R1-14B**: Obtained via merging Fathom-R1-14B-V0.6 (Iterative Curriculum SFT) and Fathom-R1-14B-V0.4 (SFT-Shortest-Chains)
- **Fathom-R1-14B-RS**: Obtained via merging Fathom-R1-14B-V0.6 (Iterative Curriculum SFT) and Fathom-R1-14B-V0.4 (RL-compression + SFT-Shortest-Chains)
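For illustration, a minimal sketch of linear (weight-space) merging of two checkpoints is shown below. The equal 0.5/0.5 mixing coefficients are an assumption for illustration, not the coefficients actually used for the released checkpoints.

```python
# Sketch: linear (weight-space) merge of two fine-tuned checkpoints.
# The 0.5/0.5 coefficients are illustrative only. Requires enough CPU RAM to
# hold both 14B models at once.
import torch
from transformers import AutoModelForCausalLM

alpha = 0.5  # weight on the V0.6 checkpoint (assumption)

m_a = AutoModelForCausalLM.from_pretrained(
    "FractalAIResearch/Fathom-R1-14B-V0.6", torch_dtype=torch.bfloat16)
m_b = AutoModelForCausalLM.from_pretrained(
    "FractalAIResearch/Fathom-R1-14B-V0.4", torch_dtype=torch.bfloat16)

merged = m_a.state_dict()
sd_b = m_b.state_dict()
for name in merged:
    # Interpolate every parameter tensor between the two checkpoints.
    merged[name] = alpha * merged[name] + (1.0 - alpha) * sd_b[name]

m_a.load_state_dict(merged)
m_a.save_pretrained("Fathom-R1-14B-merged")
```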
## 💰 Post-Training Cost
We developed the **Fathom-R1-14B** models using a focused, resource-efficient strategy that balances performance with compute budget. Below are the GPU time utilized and the cost incurred:
| Model Weights | GPU Hours (H100) | Cost (USD) |
|----------------------------|------------------|------|
| Fathom-R1-14B-V0.4-RS | 293 | 831 |
| Fathom-R1-14B-V0.4 | 128 | 363 |
| Fathom-R1-14B-V0.6 | 48 | 136 |
| Fathom-R1-14B-RS | 341 | 967 |
| **Fathom-R1-14B** | **176** | **499** |
So, the final Fathom-R1-14B cost just **$499** to train overall — the 176 GPU hours are simply the sum of the V0.4 (128 hours, $363) and V0.6 (48 hours, $136) runs that were merged. This low training cost highlights the efficiency of our method, enabling mathematical reasoning comparable to **o4-mini** for **$499**, all within a **16K sequence-length budget**.
---
## 📊 Evaluation
We evaluate Fathom‑R1-14B using the same metrics and sampling configuration introduced in the DeepSeek‑R1 paper, namely **pass@1** and **cons@64**. However, our evaluation is conducted under a reduced output budget of 16,384 tokens, compared to DeepSeek‑R1’s 32,768 tokens, to better reflect practical deployment constraints.
- **pass@1**: Computed as the average correctness over k sampled solution chains per problem (in our experiments, k = 64).
- **cons@64**: Assesses consistency by sampling 64 reasoning chains per question and taking the majority-vote accuracy. A small computation sketch for both metrics is shown below.
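The snippet below illustrates how the two metrics can be computed from the sampled chains of each problem; the extracted final answers and reference answers are assumed to be available (answer extraction is handled by the evaluation harness).

```python
# Illustrative computation of pass@1 and cons@64.
# samples[i] holds the extracted final answers of the k chains for problem i,
# references[i] the ground-truth answer; both are assumptions for this sketch.
from collections import Counter

def pass_at_1(samples, references):
    # Average correctness over all sampled chains of each problem.
    scores = [
        sum(ans == ref for ans in answers) / len(answers)
        for answers, ref in zip(samples, references)
    ]
    return sum(scores) / len(scores)

def cons_at_k(samples, references):
    # Majority vote over the k sampled answers of each problem.
    votes = [Counter(answers).most_common(1)[0][0] for answers in samples]
    return sum(vote == ref for vote, ref in zip(votes, references)) / len(votes)

samples = [["4", "4", "5", "4"], ["10", "12", "12", "12"]]  # toy example with k = 4
references = ["4", "12"]
print(pass_at_1(samples, references), cons_at_k(samples, references))
```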
**Evaluation Configuration**:
- Temperature: 0.6
- top_p: 0.95
- Number of sampled chains: 64
- Context: 16,384 tokens
This setup allows us to benchmark Fathom-R1-14B’s reasoning performance and stability under realistic memory and inference budgets, while maintaining compatibility with the DeepSeek‑R1 evaluation protocol.
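As a concrete illustration, the sampling configuration above maps onto a vLLM call roughly as follows. Using vLLM directly here is an assumption for illustration; the LIMO harness wraps inference and answer extraction itself.

```python
# Sketch of the evaluation sampling configuration with vLLM (assumption).
from vllm import LLM, SamplingParams

llm = LLM(model="FractalAIResearch/Fathom-R1-14B", max_model_len=16384)

params = SamplingParams(
    n=64,              # 64 sampled chains per question
    temperature=0.6,
    top_p=0.95,
    max_tokens=15360,  # output budget, leaving headroom for the prompt
                       # within the 16,384-token window
)

outputs = llm.generate(["<your benchmark question here>"], params)
for completion in outputs[0].outputs:
    print(completion.text[:200])
```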
We utilize the evaluation framework provided by the [LIMO](https://github.com/GAIR-NLP/LIMO) repository to run inference and compute metrics.
For detailed instructions and implementation details, please refer to [`eval/README.md`](https://github.com/FractalAIResearchLabs/Fathom-R1/blob/main/eval/readme.md).
---
## Results
We evaluate and compare **Fathom‑R1-14B** with several baseline models across 3 challenging benchmarks: **AIME25**, **HMMT25**, and **GPQA**. For each, we report `pass@1` and `cons@64`, following the same evaluation configuration.
| Model | AIME25 pass@1 | AIME25 cons@64 | HMMT25 pass@1 | HMMT25 cons@64 |
|------------------|----------------|---------------|----------------|---------------|
| **Closed-Source Models** | | | | |
| o1‑mini | 50.71 | 63.33 | 35.15 | 46.67 |
| o3‑mini‑low | 42.60 | 53.33 | 26.61 | 33.33 |
| o3‑mini‑medium | 72.24 | 83.33 | 49.21 | 60.00 |
| o4-mini-low | 60.20 | 76.67 | 39.11 | 53.33 |
| o1‑preview | 33.33 | 36.67 | 17.78 | 20.00 |
| gpt‑4.5‑preview | 34.44 | 40.00 | 16.67 | 20.00 |
| **Open-Source Models** | | | | |
| DeepSeek-R1-Distill-Qwen-14B | 45.50 | 63.33 | 30.00 | 50.00 |
| DeepSeek-R1-Distill-Qwen-32B | 49.64 | 73.33 | 33.02 | 53.33 |
| DeepSeekR1‑670B | 61.25 | 83.33 | 42.19 | 56.67 |
| LightR1‑14B | 51.15 | 76.67 | 33.75 | 50.00 |
| Fathom‑R1-14B-V0.4-RS | 50.94 | 73.33 | 33.70 | 40.00 |
| Fathom‑R1-14B-V0.4 | 50.94 | 70.00 | 34.53 | 56.67 |
| Fathom‑R1-14B-V0.6 | 50.63 | 76.67 | 32.19 | 50.00 |
| Fathom‑R1-14B-RS | 52.03 | 76.67 | 35.00 | 53.33 |
| **Fathom‑R1-14B** | **52.71** | **76.67** | **35.26** | **56.67** |
**Fathom‑R1-14B** demonstrates highly competitive performance across all datasets, improving over the original R1-distilled models while closely matching or surpassing other strong baselines in several settings.
On both AIME 25 and HMMT 25, our model shows the highest pass@1 as well as cons@64 scores among all the open-source models (including the bigger R1-Distilled-32B model), with R1-670B being the only exception.
In fact, we observe that Fathom-R1-14B is superior to the first two generations of OpenAI's mini reasoning models, **o1-mini** and **o3-mini-low**, and its performance closely matches that of the newly released **o4-mini-low** (under self-consistency decoding).
---
## 🌍 Generalization Beyond Math: GPQA-Diamond
Notably, we also observe out-of-domain improvement on **GPQA-Diamond**, even though there was not a single non-math question in our training data.
This indicates that our training methodology and training on math questions facilitate generalization across diverse domains, a finding similar to what LightR1-14B and LIMO observed.
#### ✅ GPQA Benchmark Comparison (16k)
| **Model** | **pass@1** | **cons@64** |
|-------------------|------------|-------------|
| DeepSeek-R1-Distill-Qwen-14B | 54.19 | 64.14 |
| LightR1‑14B | 56.94 | 65.15 |
| Fathom‑R1-14B-RS | 59.13 | 66.16 |
| **Fathom‑R1-14B** | **59.46** | **66.16** |
---
## ✂️ Ablation Study on Token Efficiency
To assess reasoning-token efficiency, we compare the **average response token count** under a 16K context length on AIME25 and HMMT25. On AIME25, Fathom‑R1-14B-RS uses 10% fewer response tokens than LightR1-14B despite achieving higher pass@1. HMMT25 questions are relatively tougher than AIME25 questions, and tougher questions usually require more thinking tokens; on HMMT25, Fathom‑R1-14B-RS uses 4.5% fewer response tokens than LightR1-14B, again with higher pass@1.
#### Average Response Length (Tokens)
| Model | AIME25 | HMMT25 |
|------------------|--------|--------|
| LightR1-14B | 11330 | 12680 |
| DeepSeek-R1-Distill-Qwen-14B | 10878 | 12263 |
| Fathom‑R1-14B-V0.4 | 10570 | 11950 |
| Fathom‑R1-14B | 10956 | 12125 |
| **Fathom‑R1-14B-RS** | **10083** | **12100** |
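For clarity, the response lengths above are token counts under the base model's tokenizer; a minimal sketch of that computation is shown below, where `responses` is assumed to be the list of generated solutions for one benchmark.

```python
# Sketch: average response length in tokens, measured with the base model's tokenizer.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("deepseek-ai/DeepSeek-R1-Distill-Qwen-14B")

def avg_response_tokens(responses):
    lengths = [len(tok(r)["input_ids"]) for r in responses]
    return sum(lengths) / len(lengths)

print(avg_response_tokens(["The answer is 42.", "We compute 6 * 7 = 42."]))
```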
---
## Data Decontamination
Both benchmarks used (AIME 25 and HMMT 25) were released a few weeks after the release of the base model, ensuring no contamination occurred during the model's pre-training. The dataset corpora (Numina-Math 1.5 & OpenR1-Math) were released around the same time as these exams, with a cutoff date no later than 2024. Additionally, we conduct checks to verify there is no contamination in the training data.
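As an illustration of the kind of check involved, a simple n-gram overlap test between training questions and benchmark questions might look like the sketch below; this is a generic example, not the exact decontamination procedure we ran.

```python
# Illustrative n-gram overlap check between a training question and a
# benchmark question (generic sketch, not the exact procedure used).
import re

def ngrams(text, n=13):
    tokens = re.findall(r"\w+", text.lower())
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def is_contaminated(train_question, bench_question, n=13):
    # Flag the pair if they share any long (n-token) span of text.
    return bool(ngrams(train_question, n) & ngrams(bench_question, n))

print(is_contaminated(
    "Find all positive integers n such that n^2 + 1 divides n + 2023.",
    "Compute the remainder when 2023^2023 is divided by 1000.",
))
```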
---
## Release Assets
- Training Recipe Blog: [🤗 $499 training recipe for creating Fathom-R1-14B](https://huggingface.co/FractalAIResearch/Fathom-R1-14B)
- Final Merged Models: [🤗 Fathom-R1-14B](https://huggingface.co/FractalAIResearch/Fathom-R1-14B), [🤗 Fathom-R1-14B-RS](https://huggingface.co/FractalAIResearch/Fathom-R1-14B-RS)
- Intermediate Models: [🤗 Fathom-R1-14B-V0.6](https://huggingface.co/FractalAIResearch/Fathom-R1-14B-V0.6), [🤗 Fathom-R1-14B-V0.4](https://huggingface.co/FractalAIResearch/Fathom-R1-14B-V0.4), [🤗 Fathom-R1-14B-V0.4-RS](https://huggingface.co/FractalAIResearch/Fathom-R1-14B-V0.4-RS)
- Fathom-R1-14B Datasets: [🤗 V0.6-Iterative-Curriculum-Learning](https://huggingface.co/datasets/FractalAIResearch/Fathom-V0.6-Iterative-Curriculum-Learning), [🤗 V0.4-SFT-Shortest-Chains](https://huggingface.co/datasets/FractalAIResearch/Fathom-V0.4-SFT-Shortest-Chains), [🤗 V0.4-RL-Compression](https://huggingface.co/datasets/FractalAIResearch/Fathom-V0.4-RL-Compression)
---
## 📜 License
This repository and all the release assets are available under the MIT License, underscoring our dedication to open and inclusive AI innovation. By freely sharing our work, we aim to democratize AI technology, empowering researchers, developers, and enthusiasts everywhere to use, adapt, and expand upon it without limitation. This open and permissive approach promotes global collaboration, accelerates innovation, and enriches the AI community as a whole.
## Acknowledgments
We would like to acknowledge the following works for enabling our project:
- [Deepseek-R1-Distill-Qwen-14B](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B)
- [NuminaMath-1.5](https://huggingface.co/datasets/AI-MO/NuminaMath-1.5)
- [OpenR1-Math](https://huggingface.co/datasets/open-r1/OpenR1-Math-220k)
- [360-LLAMA-Factory](https://github.com/Qihoo360/360-LLaMA-Factory)
- [verl](https://github.com/volcengine/verl)
- [LIMO](https://github.com/GAIR-NLP/LIMO)
- [FuseAI](https://github.com/fanqiwan/FuseAI)
---
## 📖 Citation
```bibtex
@misc{fathom14b2025,
title={Fathom-R1: $499 Training Recipe for Unlocking Math Reasoning at o4-mini level with just 14B parameters under 16K context},
author={Kunal Singh and Pradeep Moturi and Ankan Biswas and Siva Gollapalli and Sayandeep Bhowmick},
howpublished={\url{https://huggingface.co/FractalAIResearch/Fathom-R1-14B}},
note={Hugging Face},
year={2025}
}
```
## About Project Ramanujan
We initiated Project Ramanujan approximately one year ago, aiming to unlock intelligence and enhance AI agents by pushing the boundaries of advanced reasoning. Our key accomplishments include:
- ICLR'25 & NeurIPS'24-MATH-AI: [SBSC: Step-By-Step Coding for Improving Mathematical Olympiad Performance](https://arxiv.org/abs/2502.16666)
- Winners of HackerCupAI@NeurIPS'24 & ICLR'25-VerifAI: [Stress Testing Based Self-Consistency For Olympiad Programming](https://openreview.net/forum?id=7SlCSjhBsq)
- CVPR'25-MULA: [TRISHUL: Towards Region Identification and Screen Hierarchy Understanding for Large VLM based GUI Agents](https://arxiv.org/abs/2502.08226)
- Silver Medal in AIMO'24