---
license: apache-2.0
base_model:
- allenai/Olmo-3-1025-7B
language:
- en
datasets:
- allenai/Dolci-RL-Zero-Math-7B
library_name: transformers
---

## Model Details
<img alt="Logo for Olmo 3 7B RL-Zero model" src="olmo-rl-zero.png" width="268px" style="margin-left:auto; margin-right:auto; display:block">

# Model Card for Olmo 3.1 7B RL-Zero Math

We introduce Olmo 3, a new family of 7B and 32B models in both Instruct and Think variants. The Think variants use long chain-of-thought reasoning to improve performance on tasks like math and coding.

Olmo is a series of **O**pen **l**anguage **mo**dels designed to enable the science of language models.
These models are pre-trained on the Dolma 3 dataset and post-trained on the Dolci datasets. We are releasing all code, checkpoints, logs (coming soon), and associated training details.

The RL-Zero family of models is an experimental set of models for the scientific exploration of RLVR (reinforcement learning with verifiable rewards) training.

For the other Olmo 3 RL-Zero models, see:

| **Domain** | **Model** | **RLVR Dataset** |
|------------|-----------|------------------|
| **Base Model** | [Olmo-3-7B](https://huggingface.co/allenai/Olmo-3-1025-7B) | |
| **Math** | [Olmo-3-7B-RL-Zero-Math](https://huggingface.co/allenai/Olmo-3-7B-RL-Zero-Math/), [Olmo-3.1-7B-RL-Zero-Math](https://huggingface.co/allenai/Olmo-3.1-7B-RL-Zero-Math/) | [Dolci-RL-Zero-Math-7B](https://huggingface.co/datasets/allenai/Dolci-RL-Zero-Math-7B) |
| **Code** | [Olmo-3-7B-RL-Zero-Code](https://huggingface.co/allenai/Olmo-3-7B-RL-Zero-Code/), [Olmo-3.1-7B-RL-Zero-Code](https://huggingface.co/allenai/Olmo-3.1-7B-RL-Zero-Code/) | [Dolci-RL-Zero-Code-7B](https://huggingface.co/datasets/allenai/Dolci-RL-Zero-Code-7B) |
| **IF** | [Olmo-3-7B-RL-Zero-IF](https://huggingface.co/allenai/Olmo-3-7B-RL-Zero-IF/) | [Dolci-RL-Zero-IF-7B](https://huggingface.co/datasets/allenai/Dolci-RL-Zero-IF-7B) |
| **General** | [Olmo-3-7B-RL-Zero-General](https://huggingface.co/allenai/Olmo-3-7B-RL-Zero-General/) | [Dolci-RL-Zero-General-7B](https://huggingface.co/datasets/allenai/Dolci-RL-Zero-General-7B) |
| **Mix** | [Olmo-3-7B-RL-Zero-Mix](https://huggingface.co/allenai/Olmo-3-7B-RL-Zero-Mix/) | [Dolci-RL-Zero-Mix-7B](https://huggingface.co/datasets/allenai/Dolci-RL-Zero-Mix-7B) |

For the core Olmo 3 models, see:

| **Stage** | **Olmo 3 7B Think** | **Olmo (3/3.1) 32B Think** | **Olmo 3 7B Instruct** | **Olmo 3.1 32B Instruct** |
|-----------|---------------------|----------------------------|------------------------|---------------------------|
| **Base Model** | [Olmo-3-7B](https://huggingface.co/allenai/Olmo-3-1025-7B) | [Olmo-3-32B](https://huggingface.co/allenai/Olmo-3-1125-32B) | [Olmo-3-7B](https://huggingface.co/allenai/Olmo-3-1025-7B) | [Olmo-3-32B](https://huggingface.co/allenai/Olmo-3-1125-32B) |
| **SFT** | [Olmo-3-7B-Think-SFT](https://huggingface.co/allenai/Olmo-3-7B-Think-SFT) | [Olmo-3-32B-Think-SFT](https://huggingface.co/allenai/Olmo-3-32B-Think-SFT) | [Olmo-3-7B-Instruct-SFT](https://huggingface.co/allenai/Olmo-3-7B-Instruct-SFT) | [Olmo-3.1-32B-Instruct-SFT](https://huggingface.co/allenai/Olmo-3.1-32B-Instruct-SFT) |
| **DPO** | [Olmo-3-7B-Think-DPO](https://huggingface.co/allenai/Olmo-3-7B-Think-DPO) | [Olmo-3-32B-Think-DPO](https://huggingface.co/allenai/Olmo-3-32B-Think-DPO) | [Olmo-3-7B-Instruct-DPO](https://huggingface.co/allenai/Olmo-3-7B-Instruct-DPO) | [Olmo-3.1-32B-Instruct-DPO](https://huggingface.co/allenai/Olmo-3.1-32B-Instruct-DPO) |
| **Final Models (RLVR)** | [Olmo-3-7B-Think](https://huggingface.co/allenai/Olmo-3-7B-Think) | [Olmo-3-32B-Think](https://huggingface.co/allenai/Olmo-3-32B-Think)<br>[Olmo-3.1-32B-Think](https://huggingface.co/allenai/Olmo-3.1-32B-Think) | [Olmo-3-7B-Instruct](https://huggingface.co/allenai/Olmo-3-7B-Instruct) | [Olmo-3.1-32B-Instruct](https://huggingface.co/allenai/Olmo-3.1-32B-Instruct) |

## Installation

Olmo 3 is supported in transformers 4.57.0 or higher:
```bash
pip install 'transformers>=4.57.0'
```
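
To confirm the installed version at runtime, a quick check (a minimal sketch; `packaging` ships as a transformers dependency):
```python
from packaging.version import Version
import transformers

# Olmo 3 support requires transformers >= 4.57.0
assert Version(transformers.__version__) >= Version("4.57.0"), transformers.__version__
```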

## Inference

You can use OLMo with the standard Hugging Face transformers library:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

olmo = AutoModelForCausalLM.from_pretrained("allenai/Olmo-3.1-7B-RL-Zero-Math")
tokenizer = AutoTokenizer.from_pretrained("allenai/Olmo-3.1-7B-RL-Zero-Math")
message = ["Language modeling is "]
inputs = tokenizer(message, return_tensors='pt', return_token_type_ids=False)
# optional: move the model and inputs to GPU
# inputs = {k: v.to('cuda') for k, v in inputs.items()}
# olmo = olmo.to('cuda')
response = olmo.generate(**inputs, max_new_tokens=100, do_sample=True, top_k=50, top_p=0.95)
print(tokenizer.batch_decode(response, skip_special_tokens=True)[0])
>> 'Language modeling is a key component of any text-based application, but its effectiveness...'
```
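
For higher-throughput batch inference, you may prefer serving the model with vLLM. A minimal sketch, assuming your installed vLLM version supports the Olmo 3 architecture:
```python
from vllm import LLM, SamplingParams

# Load the model once; vLLM handles batching and KV-cache management
llm = LLM(model="allenai/Olmo-3.1-7B-RL-Zero-Math")
sampling = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=100)
outputs = llm.generate(["Language modeling is "], sampling)
print(outputs[0].outputs[0].text)
```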

For faster performance, you can quantize the model using the following method:
```python
import torch

olmo = AutoModelForCausalLM.from_pretrained("allenai/Olmo-3.1-7B-RL-Zero-Math",
                                            torch_dtype=torch.float16,
                                            load_in_8bit=True)  # requires bitsandbytes
```
The quantized model is more sensitive to data types and CUDA operations. To avoid potential issues, it's recommended to pass the inputs directly to CUDA using:
```python
inputs.input_ids.to('cuda')
```
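
On recent transformers releases, passing `load_in_8bit=True` directly to `from_pretrained` is deprecated in favor of an explicit quantization config. An equivalent sketch (still requires bitsandbytes; `device_map="auto"` is an assumption about your GPU setup):
```python
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Explicit 8-bit quantization config (preferred over the load_in_8bit kwarg)
quant_config = BitsAndBytesConfig(load_in_8bit=True)
olmo = AutoModelForCausalLM.from_pretrained(
    "allenai/Olmo-3.1-7B-RL-Zero-Math",
    quantization_config=quant_config,
    device_map="auto",  # place layers on the available GPU(s)
)
```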

### Fine-tuning

Model fine-tuning can be done from the final checkpoint (the `main` revision of this model) or the base model.

We recommend fine-tuning with the open-instruct repository:
```bash
bash ./scripts/train/olmo3/rlvr_script.sh
```
You can override most configuration options from the command line. For example, to override the learning rate you could launch the script like this:

```bash
bash ./scripts/train/olmo3/rlvr_script.sh --learning_rate=1e-3
```
For more documentation, see the [GitHub readme](https://github.com/allenai/open-instruct).

### Model Description

- **Developed by:** Allen Institute for AI (Ai2)
- **Model type:** a Transformer-style autoregressive language model.
- **Language(s) (NLP):** English
- **License:** This model is licensed under Apache 2.0. It is intended for research and educational use in accordance with Ai2's [Responsible Use Guidelines](https://allenai.org/responsible-use).
- **Contact:** Technical inquiries: `olmo@allenai.org`. Press: `press@allenai.org`
- **Date cutoff:** Dec. 2023.

### Model Sources

- **Project Page:** https://allenai.org/olmo
- **Repositories:**
  - Open-Instruct for DPO and RLVR: https://github.com/allenai/open-instruct
  - OLMo-Core for pre-training and SFT: https://github.com/allenai/OLMo-core
  - OLMo-Eval for evaluation: https://github.com/allenai/OLMo-Eval
- **Paper:** [TBD]
<!-- - **Technical blog post:** (URL) -->
<!-- - **W&B Logs:** [SFT](()), [DPO](()), [RLVR](()) -->

## Model Details

### RLVR
- Reinforcement learning with verifiable rewards (RLVR) on the Dolci-RL-Zero-Math-7B dataset, which consists of math queries (see the toy reward sketch after this list).
- Datasets: [Dolci-RL-Zero-Math-7B](https://huggingface.co/datasets/allenai/Dolci-RL-Zero-Math-7B)
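
To make the reward signal concrete, here is a toy verifiable reward for math completions. This is an illustrative sketch only, not the actual verifier; the open-instruct verifiers use more robust answer extraction and math-equivalence checking:
```python
import re

def math_reward(completion: str, ground_truth: str) -> float:
    """Toy verifiable reward: 1.0 if the final \\boxed{...} answer in the
    completion exactly matches the reference answer, else 0.0."""
    answers = re.findall(r"\\boxed\{([^{}]*)\}", completion)
    if not answers:
        return 0.0
    return 1.0 if answers[-1].strip() == ground_truth.strip() else 0.0

# A correct final answer earns reward 1.0
print(math_reward(r"... so the answer is \boxed{42}.", "42"))  # 1.0
```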

## Bias, Risks, and Limitations
Like any base language model or fine-tuned model without safety filtering, these models can easily be prompted by users to generate harmful and sensitive content. Such content may also be produced unintentionally, especially in cases involving bias, so we recommend that users consider the risks when applying this technology. Additionally, statements from OLMo, as from any LLM, are often inaccurate, so facts should be verified.

## Citation
A technical manuscript is forthcoming!

## Model Card Contact
For errors in this model card, contact `olmo@allenai.org`.