---
license: apache-2.0
base_model:
- Qwen/Qwen3-30B-A3B-Thinking-2507
---

# Nomos 1

[![](https://img.shields.io/badge/X-NousResearch-000000?logo=x&logoColor=white)](https://x.com/NousResearch)
[![repo](https://img.shields.io/badge/github-repo-blue?logo=github)](https://github.com/NousResearch/nomos)
[![apache 2.0](https://img.shields.io/badge/License-Apache%202.0-orange?logoColor=white&logoUrl=https://pbs.twimg.com/profile_images/1816254738234761216/TX7TW-Mp_400x400.jpg)](https://www.apache.org/licenses/LICENSE-2.0)

We release *Nomos 1*, a specialization of [Qwen/Qwen3-30B-A3B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507) for mathematical problem-solving and proof-writing in natural language. Nomos 1 was trained in collaboration with [Hillclimb AI](https://www.hillclimb.ing/).

Nomos 1 is designed to be used with the [Nomos Reasoning Harness](https://github.com/NousResearch/nomos), which we open-source concurrently.

On Putnam 2025, Nomos 1 scores **87/120** when wrapped in the Nomos Reasoning Harness. Under the same conditions, [Qwen/Qwen3-30B-A3B-Thinking-2507](https://huggingface.co/Qwen/Qwen3-30B-A3B-Thinking-2507) scores **24/120**.

## Usage

We recommend using Nomos 1 *without* a system prompt.

### Hugging Face

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "NousResearch/nomos-1"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True,
)

messages = [
    {
        "role": "user",
        "content": (
            "Solve the following problem and show your reasoning:\n\n"
            "Let a, b, c be positive real numbers such that abc = 1. "
            "Prove that\n"
            "\\[\n"
            "    \\frac{1}{1+a} + \\frac{1}{1+b} + \\frac{1}{1+c} \\ge 1.\n"
            "\\]"
        ),
    },
]

# return_dict=True is required so the result can be unpacked into generate()
# as input_ids and attention_mask.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=32768,  # the model emits long thinking traces before the final answer
        temperature=0.6,
        top_p=0.95,
        top_k=20,
        do_sample=True,
    )

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

### SGLang

```
python -m sglang.launch_server \
  --model-path NousResearch/nomos-1 \
  --tp-size 8
```

### vLLM

```
vllm serve NousResearch/nomos-1 \
  --tensor-parallel-size 8
```
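Both launch commands above expose an OpenAI-compatible endpoint. The snippet below is a minimal sketch of querying it with the official `openai` client; the base URL is an assumption (vLLM serves on port `8000` by default, SGLang on `30000`), and the `api_key` value is a placeholder for local servers. As recommended above, no system prompt is used.

```python
# Minimal sketch: query the OpenAI-compatible endpoint started by either
# launch command above. The port is an assumption (vLLM defaults to 8000,
# SGLang to 30000); adjust base_url to match your server.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="NousResearch/nomos-1",
    # No system prompt, per the recommendation above.
    messages=[
        {"role": "user", "content": "Prove that the sum of two odd integers is even."},
    ],
    temperature=0.6,
    top_p=0.95,
    max_tokens=32768,  # leave ample room for the thinking trace
)
print(response.choices[0].message.content)
```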
## Acknowledgements

We would like to thank the following contributors:

### Nous Research

Roger Jin, Jeffrey Quesnelle, Dakota Mahan, Chen Guang, Teknium

### Hillclimb AI

Jun Park, Ibrakhim Ustelbay

### Math experts

Samuel Kim, Miron Yurkevich, Adilet Zauytkhan, Rinat Amankos, Alexander Andreyev, Damir Nurlanov, Abuzer Abuov

### Other

massiveaxe

## Citation

```
@misc{nomos_model2025,
  title = {Nomos Model},
  author = {Jin, Roger and Quesnelle, Jeffrey and Mahan, Dakota and Guang, Chen and Teknium, Ryan and Park, Jun and Ustelbay, Ibrakhim and Kim, Samuel and Yurkevich, Miron and Zauytkhan, Adilet and Amankos, Rinat and Andreyev, Alex and Nurlanov, Damir and Abuov, Abuzer and massiveaxe, Askar},
  year = {2025},
  howpublished = {\url{https://huggingface.co/NousResearch/nomos-1}},
  note = {Model release},
}
```