---
base_model:
- allenai/Olmo-3-7B-Think
---

Static quant of https://huggingface.co/allenai/Olmo-3-7B-Instruct

### Model Description

- **Developed by:** Allen Institute for AI (Ai2)
- **Model type:** A Transformer-style autoregressive language model.
- **Language(s) (NLP):** English
- **License:** This model is licensed under Apache 2.0. It is intended for research and educational use in accordance with Ai2's [Responsible Use Guidelines](https://allenai.org/responsible-use).
- **Contact:** Technical inquiries: `olmo@allenai.org`. Press: `press@allenai.org`
- **Date cutoff:** Dec. 2024

### Model Sources

- **Project Page:** https://allenai.org/olmo
- **Repositories:**
  - Open-Instruct for DPO and RLVR: https://github.com/allenai/open-instruct
  - OLMo-Core for pre-training and SFT: https://github.com/allenai/OLMo-core
  - OLMo-Eval for evaluation: https://github.com/allenai/OLMo-Eval
- **Paper:** [TBD]
<!-- - **Technical blog post:** (URL) -->
<!-- - **W&B Logs:** [SFT](()), [DPO](()), [RLVR](()) -->

## Evaluation

| **Skill** | **Benchmark** | **Olmo 3 Instruct 7B SFT** | **Olmo 3 Instruct 7B DPO** | **Olmo 3 Instruct 7B** | **Qwen 3 8B (no reasoning)** | **Qwen 3 VL 8B Instruct** | **Qwen 2.5 7B** | **Olmo 2 7B Instruct** | **Apertus 8B Instruct** | **Granite 3.3 8B Instruct** |
|-----------|--------------|---------------------------|---------------------------|------------------------|------------------------------|----------------------------|-------------------|--------------------------|----------------------------|-------------------------------|
| **Math** | MATH | 65.1 | 79.6 | 87.3 | 82.3 | 91.6 | 71.0 | 30.1 | 21.9 | 67.3 |
| | AIME 2024 | 6.7 | 23.5 | 44.3 | 26.2 | 55.1 | 11.3 | 1.3 | 0.5 | 7.3 |
| | AIME 2025 | 7.2 | 20.4 | 32.5 | 21.7 | 43.3 | 6.3 | 0.4 | 0.2 | 6.3 |
| | OMEGA | 14.4 | 22.8 | 28.9 | 20.5 | 32.3 | 13.7 | 5.2 | 5.0 | 10.7 |
| **Reasoning** | BigBenchHard | 51.0 | 69.3 | 71.2 | 73.7 | 85.6 | 68.8 | 43.8 | 42.2 | 61.2 |
| | ZebraLogic | 18.0 | 28.4 | 32.9 | 25.4 | 64.3 | 10.7 | 5.3 | 5.3 | 17.6 |
| | AGI Eval English | 59.2 | 64.0 | 64.4 | 76.0 | 84.5 | 69.8 | 56.1 | 50.8 | 64.0 |
| **Coding** | HumanEvalPlus | 69.8 | 72.9 | 77.2 | 79.8 | 82.9 | 74.9 | 25.8 | 34.4 | 64.0 |
| | MBPP+ | 56.5 | 55.9 | 60.2 | 64.4 | 66.3 | 62.6 | 40.7 | 42.1 | 54.0 |
| | LiveCodeBench v3 | 20.0 | 18.8 | 29.5 | 53.2 | 55.9 | 34.5 | 7.2 | 7.8 | 11.5 |
| **IF** | IFEval | 81.7 | 82.0 | 85.6 | 86.3 | 87.8 | 73.4 | 72.2 | 71.4 | 77.5 |
| | IFBench | 27.4 | 29.3 | 32.3 | 29.3 | 34.0 | 28.4 | 26.7 | 22.1 | 22.3 |
| **Knowledge** | MMLU | 67.1 | 69.1 | 69.1 | 80.4 | 83.6 | 77.2 | 61.6 | 62.7 | 63.5 |
| **QA** | PopQA | 16.5 | 20.7 | 14.1 | 20.4 | 26.5 | 21.5 | 25.5 | 25.5 | 28.9 |
| | GPQA | 30.0 | 37.9 | 40.4 | 44.6 | 51.1 | 35.6 | 31.3 | 28.8 | 33.0 |
| **Chat** | AlpacaEval 2 LC | 21.8 | 43.3 | 40.9 | 49.8 | 73.5 | 23.0 | 18.3 | 8.1 | 28.6 |
| **Tool Use** | SimpleQA | 74.2 | 79.8 | 79.3 | 79.0 | 90.3 | 78.0 | – | – | – |
| | LitQA2 | 38.0 | 43.3 | 38.2 | 39.6 | 30.7 | 29.8 | – | – | – |
| | BFCL | 48.9 | 49.6 | 49.8 | 60.2 | 66.2 | 55.8 | – | – | – |
| **Safety** | Safety | 89.2 | 90.2 | 87.3 | 78.0 | 80.2 | 73.4 | 93.1 | 72.2 | 73.7 |

## Model Details

#### Stage 1: SFT
- Supervised fine-tuning on the Dolci-Think-SFT-7B dataset, which consists of math, code, chat, and general knowledge queries.
- Datasets: [Dolci-Think-SFT-7B](https://huggingface.co/datasets/allenai/dolci-thinking-sft), [Dolci-Instruct-SFT-7B](https://huggingface.co/datasets/allenai/dolci-instruct-sft)

#### Stage 2: DPO
- Direct preference optimization on the Dolci-Think-DPO-7B dataset, which consists of math, code, chat, and general knowledge queries.
- Datasets: [Dolci-Think-DPO-7B](https://huggingface.co/datasets/allenai/dolci-thinking-dpo), [Dolci-Instruct-DPO-7B](https://huggingface.co/datasets/allenai/dolci-3-instruct-dpo-with-metadata)

#### Stage 3: RLVR
- Reinforcement learning from verifiable rewards on the Dolci-Think-RL-7B dataset, which consists of math, code, instruction-following, and general chat queries.
- Datasets: [Dolci-Think-RL-7B](https://huggingface.co/datasets/allenai/Dolci-Think-RL-7B), [Dolci-Instruct-RL-7B](https://huggingface.co/datasets/allenai/Dolci-Instruct-RL-7B)

## Inference & Recommended Settings
We evaluated our models with the following settings, and we recommend them for generation as well:
- **temperature:** `0.6`
- **top_p:** `0.95`
- **max_tokens:** `32768`
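
A minimal sketch of wiring these settings into Hugging Face `transformers` (assumptions: the base `allenai/Olmo-3-7B-Instruct` checkpoint, and that `max_tokens` maps to `max_new_tokens` in `model.generate`; the model-loading portion is commented out so the sketch runs without downloading a 7B checkpoint):

```python
# Recommended sampling settings from the list above, collected in one place.
recommended = {"temperature": 0.6, "top_p": 0.95, "max_tokens": 32768}

# Hypothetical usage with transformers. Note the naming difference:
# transformers calls the output token budget `max_new_tokens`, while
# OpenAI-compatible servers (e.g. vLLM) call it `max_tokens`.
#
# from transformers import AutoModelForCausalLM, AutoTokenizer
# tok = AutoTokenizer.from_pretrained("allenai/Olmo-3-7B-Instruct")
# model = AutoModelForCausalLM.from_pretrained("allenai/Olmo-3-7B-Instruct")
# messages = [{"role": "user", "content": "What is RLVR?"}]
# inputs = tok.apply_chat_template(messages, add_generation_prompt=True,
#                                  return_tensors="pt")
# out = model.generate(
#     inputs,
#     do_sample=True,  # required for temperature/top_p to take effect
#     temperature=recommended["temperature"],
#     top_p=recommended["top_p"],
#     max_new_tokens=recommended["max_tokens"],
# )
# print(tok.decode(out[0], skip_special_tokens=True))

print(recommended)
```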