---
license: other
license_name: fair-noncommercial-research
extra_gated_prompt: >
  FAIR Noncommercial Research License v1

  Last Updated: August 18, 2025

  “Acceptable Use Policy” means the FAIR Acceptable Use Policy, applicable to
  Research Materials, that is incorporated into this Agreement.

  “Agreement” means the terms and conditions for use, reproduction, distribution
  and modification of the Research Materials set forth herein.


  “Documentation” means the specifications, manuals and documentation
  accompanying  Research Materials distributed by Meta.


  “Licensee” or “you” means you, or your employer or any other person or entity
  (if you are entering into this Agreement on such person or entity’s behalf),
  of the age required under applicable laws, rules or regulations to provide
  legal consent and that has legal authority to bind your employer or such other
  person or entity if you are entering in this Agreement on their behalf.


  “Meta” or “we” means Meta Platforms Ireland Limited (if you are located in or,
  if you are an entity, your principal place of business is in the EEA or
  Switzerland) and Meta Platforms, Inc. (if you are located outside of the EEA
  or Switzerland).

  “Noncommercial Research Uses” means noncommercial research use cases related
  to research, development, education, processing, or analysis and in each case,
  is not primarily intended for commercial advantage or monetary compensation to
  you or others.

  “Research Materials” means, collectively, Documentation and the models,
  software and algorithms, including machine-learning model code, trained model
  weights, inference-enabling code, training-enabling code, fine-tuning enabling
  code, demonstration materials and other elements of the foregoing distributed
  by Meta and made available under this Agreement.

  By clicking “I Accept” below or by using or distributing any portion or
  element of the Research Materials, you agree to be bound by this Agreement.


  1. License Rights and Redistribution.


  a. Grant of Rights. You are granted a non-exclusive, worldwide,
  non-transferable and royalty-free limited license under Meta’s intellectual
  property or other rights owned by Meta embodied in the Research Materials to
  use, reproduce, distribute, copy, create derivative works of, and make
  modifications to the Research Materials.  

  b. Redistribution and Use.

  i. You will not use the Research Materials or any outputs or results of the
  Research Materials in connection with any commercial uses or for any uses
  other than Noncommercial Research Uses;


  ii. Distribution of Research Materials, and any derivative works thereof, are
  subject to the terms of this Agreement. If you distribute or make the Research
  Materials, or any derivative works thereof, available to a third party, you
  may only do so under the terms of this Agreement. You shall also provide a
  copy of this Agreement to such third party.


  iii.  If you submit for publication the results of research you perform on,
  using, or otherwise in connection with Research Materials, you must
  acknowledge the use of Research Materials in your publication.


  iv. Your use of the Research Materials must comply with applicable laws and
  regulations (including Trade Control Laws) and adhere to the FAIR Acceptable
  Use Policy, which is hereby incorporated by reference into this Agreement.

  2. User Support. Your Noncommercial Research Use of the Research Materials is
  done at your own discretion; Meta does not process any information nor provide
  any service in relation to such use. Meta is under no obligation to provide
  any support services for the Research Materials. Any support provided is “as
  is”, “with all faults”, and without warranty of any kind.


  3. Disclaimer of Warranty. UNLESS REQUIRED BY APPLICABLE LAW, THE RESEARCH
  MATERIALS AND ANY OUTPUT AND RESULTS THEREFROM ARE PROVIDED ON AN “AS IS”
  BASIS, WITHOUT WARRANTIES OF ANY KIND, AND META DISCLAIMS ALL WARRANTIES OF
  ANY KIND, BOTH EXPRESS AND IMPLIED, INCLUDING, WITHOUT LIMITATION, ANY
  WARRANTIES OF TITLE, NON-INFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A
  PARTICULAR PURPOSE. YOU ARE SOLELY RESPONSIBLE FOR DETERMINING THE
  APPROPRIATENESS OF USING OR REDISTRIBUTING THE RESEARCH MATERIALS AND ASSUME
  ANY RISKS ASSOCIATED WITH YOUR USE OF THE RESEARCH MATERIALS AND ANY OUTPUT
  AND RESULTS.

  4. Limitation of Liability. IN NO EVENT WILL META OR ITS AFFILIATES BE LIABLE
  UNDER ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, TORT, NEGLIGENCE, PRODUCTS
  LIABILITY, OR OTHERWISE, ARISING OUT OF THIS AGREEMENT, FOR ANY LOST PROFITS
  OR ANY DIRECT OR INDIRECT, SPECIAL, CONSEQUENTIAL, INCIDENTAL, EXEMPLARY OR
  PUNITIVE DAMAGES, EVEN IF META OR ITS AFFILIATES HAVE BEEN ADVISED OF THE
  POSSIBILITY OF ANY OF THE FOREGOING.

  5. Intellectual Property.


  a. Subject to Meta’s ownership of Research Materials and derivatives made by
  or for Meta, with respect to any derivative works and modifications of the
  Research Materials that are made by you, as between you and Meta, you are and
  will be the owner of such derivative works and modifications.

  b. If you institute litigation or other proceedings against Meta or any entity
  (including a cross-claim or counterclaim in a lawsuit) alleging that the
  Research Materials, outputs or results, or any portion of any of the
  foregoing, constitutes infringement of intellectual property or other rights
  owned or licensable by you, then any licenses granted to you under this
  Agreement shall terminate as of the date such litigation or claim is filed or
  instituted. You will indemnify and hold harmless Meta from and against any
  claim by any third party arising out of or related to your use or distribution
  of the Research Materials.

  6. Term and Termination. The term of this Agreement will commence upon your
  acceptance of this Agreement or access to the Research Materials and will
  continue in full force and effect until terminated in accordance with the
  terms and conditions herein. Meta may terminate this Agreement if you are in
  breach of any term or condition of this Agreement. Upon termination of this
  Agreement, you shall delete and cease use of the Research Materials. Sections
  3, 4 and 7 shall survive the termination of this Agreement. 

  7. Governing Law and Jurisdiction. This Agreement will be governed and
  construed under the laws of the State of California without regard to choice
  of law principles, and the UN Convention on Contracts for the International
  Sale of Goods does not apply to this Agreement. The courts of California shall
  have exclusive jurisdiction of any dispute arising out of this Agreement. 


  8. Modifications and Amendments. Meta may modify this Agreement from time to
  time; provided that they are similar in spirit to the current version of the
  Agreement, but may differ in detail to address new problems or concerns. All
  such changes will be effective immediately. Your continued use of the Research
  Materials after any modification to this Agreement constitutes your agreement
  to such modification. Except as provided in this Agreement, no modification or
  addition to any provision of this Agreement will be binding unless it is in
  writing and signed by an authorized representative of both you and Meta.


  FAIR Acceptable Use Policy 

  The Fundamental AI Research (FAIR) team at Meta seeks to further understanding
  of new and existing research domains with the mission of advancing the
  state-of-the-art in artificial intelligence through open research for the
  benefit of all. 

  As part of this mission, Meta makes certain research materials available for
  noncommercial research use. Meta is committed to promoting the safe and
  responsible use of such research materials.  

  Prohibited Uses

  You agree you will not use, or allow others to use, Research Materials to:

  1. Violate the law or others’ rights, including to: Engage in, promote,
  generate, contribute to, encourage, plan, incite, or further illegal or
  unlawful activity or content, such as:

    - Violence or terrorism
    - Exploitation or harm to children, including the solicitation, creation, acquisition, or dissemination of child exploitative content or failure to report Child Sexual Abuse Material
    - Human trafficking, exploitation, and sexual violence
    - The illegal distribution of information or materials to minors, including obscene materials, or failure to employ legally required age-gating in connection with such information or materials
    - Sexual solicitation
    - Any other criminal activity

  Engage in, promote, incite, or facilitate the harassment, abuse, threatening,
  or bullying of individuals or groups of individuals

  Engage in, promote, incite, or facilitate discrimination or other unlawful or
  harmful conduct in the provision of employment, employment benefits, credit,
  housing, other economic benefits, or other essential goods and services

  Engage in the unauthorized or unlicensed practice of any profession including,
  but not limited to, financial, legal, medical/health, or related professional
  practices

  Collect, process, disclose, generate, or infer health, demographic, or other
  sensitive personal or private information about individuals without rights and
  consents required by applicable laws

  Engage in or facilitate any action or generate any content that infringes,
  misappropriates, or otherwise violates any third-party rights, including the
  outputs or results of any technology using FAIR research materials

  Create, generate, or facilitate the creation of malicious code, malware,
  computer viruses or do anything else that could disable, overburden, interfere
  with or impair the proper working, integrity, operation or appearance of a
  website or computer system

  2. Engage in, promote, incite, facilitate, or assist in the planning or
  development of activities that present a risk of death or bodily harm to
  individuals, including use of research artifacts related to the following:

  Military, warfare, nuclear industries or applications, espionage, use for
  materials or activities that are subject to the International Traffic Arms
  Regulations (ITAR) maintained by the United States Department of State

  Guns and illegal weapons (including weapon development)

  Illegal drugs and regulated/controlled substances

  Operation of critical infrastructure, transportation technologies, or heavy
  machinery

  Self-harm or harm to others, including suicide, cutting, and eating disorders

  Any content intended to incite or promote violence, abuse, or any infliction
  of bodily harm to an individual

  3. Intentionally deceive or mislead others, including use of FAIR Research
  Materials related to the following:

  Generating, promoting, or furthering fraud or the creation or promotion of
  disinformation

  Generating, promoting, or furthering defamatory content, including the
  creation of defamatory statements, images, or other content

  Generating, promoting, or further distributing spam

  Impersonating another individual without consent, authorization, or legal
  right

  Representing that outputs of FAIR research materials or outputs from
  technology using FAIR research materials are human-generated

  Generating or facilitating false online engagement, including fake reviews and
  other means of fake online engagement

  4. Fail to appropriately disclose to end users any known dangers of your
  Research Materials.

  Please report any violation of this Policy or other problems that could lead
  to a violation of this Policy by submitting a report here
  [https://docs.google.com/forms/d/e/1FAIpQLSeb11cryAopJ7LNrC4nxEUXrHY26hfkXQMf_uH-oFgA3WlYZQ/viewform]. 
extra_gated_fields:
  First Name: text
  Last Name: text
  Date of birth: date_picker
  Country: country
  Affiliation: text
  Job title:
    type: select
    options:
    - Student
    - Research Graduate
    - AI researcher
    - AI developer/engineer
    - Reporter
    - Other
  geo: ip_location
  By clicking Submit below I accept the terms of the license and acknowledge that the information I provide will be collected stored processed and shared in accordance with the Meta Privacy Policy: checkbox
extra_gated_description: >-
  The information you provide will be collected, stored, processed and shared in
  accordance with the [Meta Privacy
  Policy](https://www.facebook.com/privacy/policy/).
extra_gated_button_content: Submit
extra_gated_heading: >-
  Please be sure to provide your full legal name, date of birth, and full
  organization name with all corporate identifiers. Avoid the use of acronyms
  and special characters. Failure to follow these instructions may prevent you
  from accessing this model and others on Hugging Face. You will not have the
  ability to edit this form after submission, so please ensure all information
  is accurate.
language:
- en
library_name: mlx
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- mobilellm
- mlx
- apple-mlx
- runtime
base_model:
- facebook/MobileLLM-R1-950M
---

# MLX Runtime (Apple silicon) — Added Files & Usage

This fork adds a lightweight MLX runtime so you can run the original MobileLLM‑R1‑950M weights with Apple’s MLX on Apple silicon. It keeps the original weights (`model.safetensors`) and tokenizer; only the runtime is added. Additional code is provided to reproduce the included mlx-lm conversion and 4-bit quantization, though mlx-lm currently requires some manual modifications to run them.

## Technical Documentation

For detailed technical information about this port, see:
- [**MLX Technical Summary**](mlx_technical_summary.md) - Challenges and solutions for porting MobileLLM-R1 to MLX in this PoC conversion.
- [**Conversion Log**](conversion.log) - Details of the model conversion process
- [**Quantization Log**](quantization.log) - Information about quantization procedures and results


What’s included (added files)
- `model.py` — Minimal MLX implementation of the architecture with GQA, optional Q/K norm, RoPE, and output weight tying.
- `inference.py` — Simple text generation CLI with temperature, top‑p, greedy mode, optional chat template, EOS handling, plus boxed‑answer controls for math.
- `test_model.py` — Diagnostics to verify model structure/parameter shapes and key weight presence.
- `check_shape.py` — Heuristic check to inspect the MLP variant from `model.safetensors` and `config.json`.
- `main.py` — Convenience entry for quick manual tests.

Notes
- This is an MLX runtime; it does not change or fine‑tune the weights. The README front‑matter marks this repo as a derivative of `facebook/MobileLLM-R1-950M` via `base_model` so it appears correctly on Hugging Face.
- Tested via `uv` on macOS with Python 3.13; deps are pinned in `uv.lock`/`pyproject.toml`.

Quick start (MLX, local safetensors)
- Install and run with uv: `uv run python inference.py --prompt "What is 2+2?" --temperature 0.0 --max-tokens 64`
- Use chat template (default if `chat_template.jinja` present): `uv run python inference.py --prompt "Explain quicksort in 1–2 sentences." --temperature 0.7 --top-p 0.9`
- Disable chat template: `uv run python inference.py --prompt "Explain quicksort in 1–2 sentences." --disable-chat-template --temperature 0.7 --top-p 0.9`
- Math mode, final answer only: `uv run python inference.py --prompt "Compute 17 * 23. Put your final answer in \\boxed{.}" --temperature 0.0 --final-only --stop-at-boxed --extract-boxed --max-tokens 128`

Tips
- If a sampled response stops mid‑sentence, increase `--max-tokens` (e.g., 192–256) or use a lower `--temperature`/`--top-p`.
- For concise answers with the chat template, pass a system prompt: `--system "Be concise. Answer in 1–2 sentences."`.

Diagnostics
- Structure/weights check: `uv run python test_model.py`
- MLP variant heuristic: `uv run python check_shape.py .`

Details
- The loader maps HF weight names to MLX module names and detects the MLP variant from weight keys to ensure correct layer wiring.
- Attention uses standard `1/sqrt(d)` scaling for best generation quality.

## Installation

This project uses `uv` for dependency management.

### Using uv (recommended)
```bash
# 1. Clone the repo
git clone <your-repo>
cd <your-repo>

# 2. Sync all dependencies (includes the default set)
uv sync

# 3. (Optional) Add the torch group if you plan to customize/train models
uv sync --extra torch
```


### Without uv
If you prefer pip/venv, a `requirements.txt` is provided:
```bash
python -m venv .venv
source .venv/bin/activate  # Windows: .venv\Scripts\activate
pip install -r requirements.txt
```

> The `torch` extra is only required if you intend to fine-tune or swap model back-ends; the default installation already supports inference.

## MLX Inference Examples (safetensors)

- Basic greedy generation:
  - `uv run python inference.py --prompt "MobileLLM-R1 runs on MLX." --temperature 0 --max-tokens 64`
- Chat-style with template:
  - `uv run python inference.py --prompt "Briefly summarize quicksort." --temperature 0.7 --top-p 0.9`
- Disable the chat template:
  - `uv run python inference.py --prompt "Briefly summarize quicksort." --disable-chat-template --temperature 0.7 --top-p 0.9`
- Math/coding “final answer only”:
  - `uv run python inference.py --prompt "Solve: 128 / 8. Put final answer in \\boxed{.}" --temperature 0 --final-only --stop-at-boxed --extract-boxed`

## Design Choices (why not a trivial block)

This runtime mirrors the functional details of the released weights so they load 1:1 and generate well in MLX. A minimal “one size fits all” block hides critical differences and leads to poor output quality. Key choices:

- Attention layout and features
  - Grouped-Query Attention (GQA): separate `num_attention_heads` vs `num_key_value_heads`, with `head_dim` taken from the config. We implement a custom `Attention` so K/V can be repeated across groups and still match the HF weight layout (see the sketch after this list).
  - Q/K normalization: optional RMSNorm applied to per-head Q and K, controlled by `use_qk_norm`.
  - RoPE: MLX `nn.RoPE` with the model’s `rope_theta` (8e6 here), and a per-layer toggle via `no_rope_layers`. We gate RoPE per block, with a safe fallback if the list disables all layers.
  - Scaling: we use standard `1/sqrt(d)` for SDPA. Some configs expose an `attn_scale` used for training tricks; applying it at inference severely degraded outputs, so it’s not multiplied into SDPA.

- MLP variant detection
  - MobileLLM variants use either standard SwiGLU (gate_proj/up_proj/down_proj) or a dual-branch dense MLP. We detect the variant from weight keys in `model.safetensors` and instantiate the correct module so shapes and semantics match.

- Weight tying and mapping
  - Tie output logits to the token embedding matrix when `tie_word_embeddings` is true, matching HF behavior and saving memory.
  - Map HF names to MLX names during load: `model.embed_tokens`→`tok_embeddings`, layer/attn/norm renames, `mlp.`→`feed_forward.`, `model.norm`→`norm`.

- Template and decoding
  - The provided Jinja chat template is supported for parity with HF chat usage, and `--disable-chat-template` is available for raw prompting. Multiple EOS IDs are supported.
  - Sampling: temperature, top‑p, and greedy; optional repetition/frequency penalties; math helpers `--final-only/--stop-at-boxed/--extract-boxed` to keep answers concise.
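
To make the attention path concrete, here is a minimal, hypothetical sketch of the GQA wiring described above (values match the 950M config; the real `Attention` in `model.py` additionally handles masks, KV caching, and the per-layer RoPE toggle):

```python
import mlx.core as mx
import mlx.nn as nn

n_heads, n_kv_heads, head_dim = 24, 6, 64    # 950M: dim 1536 / 24 heads
rope = nn.RoPE(head_dim, base=8e6)           # rope_theta = 8e6
q_norm, k_norm = nn.RMSNorm(head_dim), nn.RMSNorm(head_dim)  # when use_qk_norm

def attend(q, k, v):
    # q: [B, n_heads, L, D]; k, v: [B, n_kv_heads, L, D]
    q, k = q_norm(q), k_norm(k)              # optional per-head Q/K norm
    q, k = rope(q), rope(k)
    rep = n_heads // n_kv_heads              # repeat K/V across query groups
    k, v = mx.repeat(k, rep, axis=1), mx.repeat(v, rep, axis=1)
    scale = head_dim ** -0.5                 # plain 1/sqrt(d); no attn_scale
    return mx.fast.scaled_dot_product_attention(q, k, v, scale=scale, mask=None)

B, L = 1, 8
q = mx.random.normal((B, n_heads, L, head_dim))
k = mx.random.normal((B, n_kv_heads, L, head_dim))
v = mx.random.normal((B, n_kv_heads, L, head_dim))
print(attend(q, k, v).shape)                 # (1, 24, 8, 64)
```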

# Model Details

We present MobileLLM-R1, a new series of efficient reasoning models in the MobileLLM family. The release includes two categories of models:

Base models: 
- [MobileLLM-R1-140M-base](https://huggingface.co/facebook/MobileLLM-R1-140M-base/) 
- [MobileLLM-R1-360M-base](https://huggingface.co/facebook/MobileLLM-R1-360M-base/) 
- [MobileLLM-R1-950M-base](https://huggingface.co/facebook/MobileLLM-R1-950M-base/)

Final models: 
- [MobileLLM-R1-140M](https://huggingface.co/facebook/MobileLLM-R1-140M/) 
- [MobileLLM-R1-360M](https://huggingface.co/facebook/MobileLLM-R1-360M/) 
- [MobileLLM-R1-950M](https://huggingface.co/facebook/MobileLLM-R1-950M/)

> **Note**: These models are not general-purpose chat models. They are Supervised Fine-Tuned (SFT) models, specifically trained to address mathematical, programming (Python, C++), and scientific problems.

In addition to the models, we release the complete training recipes and data sources to ensure reproducibility and support further research.

Remarkably, the MobileLLM-R1 950M, pre-trained on only **~2T high-quality tokens** and with fewer than 5T total training tokens, achieves comparable or superior performance to Qwen3 0.6B, which was trained on 36T tokens, across MATH, GSM8K, MMLU, and LiveCodeBench benchmarks. 

Compared to existing fully open-source models, the MobileLLM-R1 950M model achieves **~5× higher accuracy on MATH** than the Olmo 1.24B model and **~2× higher accuracy** than the SmolLM2 1.7B model, despite being substantially smaller in parameter scale. In addition, MobileLLM-R1 950M outperforms both Olmo 1.24B and SmolLM2 1.7B **by a wide margin on coding benchmarks**, establishing a new state-of-the-art among fully open-source models.

# Highlights


### Pretrained Model
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/660f893bae89429c07a32cdb/b9rg8yZTxeWhRWus_tJR_.jpeg)

### Token efficiency comparison across pretrained models
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/660f893bae89429c07a32cdb/dJtdh5dmVTdowP1gMR5qQ.jpeg)

### Post-trained Model
![image/png](https://cdn-uploads.huggingface.co/production/uploads/660f893bae89429c07a32cdb/0MxKBLDfb8xRwg-uVi1WQ.png)



**Model Architecture**:

|  | # Layers | # Attention Heads | # KV Heads | Dim | Hidden Dim | Params | 
| --- | --- | --- | --- | --- | --- | --- |
| MobileLLM-R1-140M | 15 | 9 | 3 | 576 | 2048 | 140M | 
| MobileLLM-R1-360M | 15 | 16 | 4 | 1024 | 4096 | 359M | 
| MobileLLM-R1-950M | 22 | 24 | 6 | 1536 | 6144 | 949M | 

|  | Input modalities | Output modalities | Context Length | Vocabulary Size | Shared Embeddings | 
| --- | --- | --- | --- | --- | --- |
| [MobileLLM-R1-140M-base](https://huggingface.co/facebook/MobileLLM-R1-140M-base) | Text | Text | 4k | 128k | Yes | 
| [MobileLLM-R1-360M-base](https://huggingface.co/facebook/MobileLLM-R1-360M-base) | Text | Text | 4k | 128k | Yes | 
| [MobileLLM-R1-950M-base](https://huggingface.co/facebook/MobileLLM-R1-950M-base) | Text | Text | 4k | 128k | Yes | 
| [MobileLLM-R1-140M](https://huggingface.co/facebook/MobileLLM-R1-140M) | Text | Text | 32k | 128k | Yes | 
| [MobileLLM-R1-360M](https://huggingface.co/facebook/MobileLLM-R1-360M) | Text | Text | 32k | 128k | Yes | 
| [MobileLLM-R1-950M](https://huggingface.co/facebook/MobileLLM-R1-950M) | Text | Text | 32k | 128k | Yes | 

# How to use

To load the pretrained model for further finetuning or evaluation:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("facebook/MobileLLM-R1-950M")
model = AutoModelForCausalLM.from_pretrained("facebook/MobileLLM-R1-950M")
```

# Inference examples

## Inference (MLX)

Use the MLX runtime provided in this repo to run the local `model.safetensors` on Apple silicon.

- Basic: `uv run python inference.py --prompt "Hello MLX" --temperature 0.7 --top-p 0.9`
- Deterministic: `uv run python inference.py --prompt "Hello MLX" --temperature 0 --max-tokens 64`

Flags in `inference.py`
- `--model-path`: path to model directory (default: `.`)
- `--prompt`: input text
- `--max-tokens`: number of tokens to generate
- `--temperature`: 0 for greedy, >0 for sampling
- `--top-p`: nucleus sampling cutoff
- `--system`: optional system message when using chat template
- `--final-only`: instructs model to output only a final boxed answer
- `--stop-at-boxed`: stop generation after closing `}` following `\boxed{`
- `--extract-boxed`: print the last `\boxed{...}` content
- `--disable-chat-template`: bypass `chat_template.jinja` and send raw prompt (with BOS)
- `--repetition-penalty`: discourage previously generated tokens (>1.0)
- `--frequency-penalty`: subtract alpha * token frequency from logits
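
For reference, the `--extract-boxed` behavior amounts to a brace-aware scan for the last `\boxed{...}` span; the following is an illustrative, self-contained sketch, not the exact code in `inference.py`:

```python
def extract_last_boxed(text: str) -> str | None:
    """Return the contents of the last \\boxed{...}, matching nested braces."""
    start = text.rfind("\\boxed{")
    if start == -1:
        return None
    i = start + len("\\boxed{")
    depth = 1
    for j in range(i, len(text)):
        if text[j] == "{":
            depth += 1
        elif text[j] == "}":
            depth -= 1
            if depth == 0:
                return text[i:j]
    return None  # unbalanced braces: generation was likely cut off

print(extract_last_boxed(r"So the answer is \boxed{391}."))  # -> 391
```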

See also: the “MLX Runtime (Apple silicon) — Added Files & Usage” section above for more examples and notes.

## Inference (MLX-LM)

Two mlx-lm models are also provided: a straight conversion and a dynamic 4-bit quantization. Code to reproduce them, plus a handy inference runtime, is provided in `custom_mlx_lm/`. After installation the following examples should work (you may first need to copy the model file into `mlx_lm/` as `llama4_text.py`):

```bash
mobilellm-infer --model-path MobileLLM-R1-950M-mixed-4bit-mlx --prompt "What is the nearest prime to 9^2?"

mobilellm-infer --model-path MobileLLM-R1-950M-mlx/ --prompt "What is the nearest prime to 9^2?"
```

## Transformers

```py
from transformers import pipeline
import torch

model_id = "facebook/MobileLLM-R1-950M"

pipe = pipeline(
    "text-generation",
    model=model_id,
    torch_dtype="auto",
    device_map="auto",
)

# Math problem / default scenario
messages = [
    {
        "role": "system",
        "content": "Please reason step by step, and put your final answer within \\boxed{}."
    },
    {"role": "user", "content": "Compute: $1-2+3-4+5- \\dots +99-100$."},
]

# C++ coding scenario
messages = [
    {
        "role": "system",
        "content": (
            "\nYou are a helpful and harmless assistant. You should think step-by-step before responding to the instruction below.\n\n"
            "Please use c++ programming language only.\n"
            "You must use ```cpp for just the final solution code block with the following format:\n"
            "```cpp\n# Your code here\n```\n"
        )
    },
    {"role": "user", "content": "Write a C++ program that prints 'Hello, World!'."},
]

# Python coding scenario
messages = [
    {
        "role": "system",
        "content": (
            "\nYou are a helpful and harmless assistant. You should think step-by-step before responding to the instruction below.\n\n"
            "Please use python programming language only.\n"
            "You must use ```python for just the final solution code block with the following format:\n"
            "```python\n# Your code here\n```\n"
        )
    },
    {"role": "user", "content": "Write a Python function that returns the square of a number."},
]

outputs = pipe(
    messages,
    max_new_tokens=8192,
)
print(outputs[0]["generated_text"][-1])
```

You can also run inference with vLLM. You only need to register the model architecture `Llama4ForCausalLM` with the vLLM `ModelRegistry`:
```py
from vllm.model_executor.models.llama4 import Llama4ForCausalLM
from vllm.model_executor.models.registry import ModelRegistry
ModelRegistry.register_model("Llama4ForCausalLM", Llama4ForCausalLM)
```


# Evaluation

## MobileLLM-R1 base model
| Model | Size | MATH500 | GSM8K | MBPP | HumanEval | CommonSense Avg. | MMLU |
| --- | --- | --- | --- | --- | --- | --- | --- |
|  |  | 4-shot <br> em | 8-shot <br> em | 3-shot <br> pass@1 | 0-shot <br> pass@1 | 0-shot <br> accuracy | 5-shot <br> accuracy |
|  | 
| *<150M* |  |  |  |  |  |  |  | 
| SmolLM2-135M-base | 135M | 0.4 | 1.8 | 3.8 | 0.0 | **50.7** | -- |
| **MobileLLM-R1-140M-base** | 140M | **4.6** | **16.3** | **5.4** | **15.9** | 44.3 | -- |
|  | 
| *150M - 400M* |  |  |  |  |  |  |  | 
| Gemma-3-270M-pt | 268M | 0.6 | 1.1 | 2.0 | 3.1 | 48.4 | 26.5 |
| SmolLM2-360M-base | 362M | 1.8 | 5.0 | **19.4** | 0.0 | **56.6** | 24.7 |
| **MobileLLM-R1-360M-base** | 359M | **13.4** | **39.4** | **20.8** | **32.9** | 51.0 | **26.8** |
|  | 
| *400M - 1B* |  |  |  |  |  |  |  | 
| Qwen2.5-0.5B-base | 494M | 14.8 | 41.8 | 29.6 | 28.1 | 52.3 | 47.5 |
| Qwen3-0.6B-base | 596M | **29.8** | 60.9 | **39.0** | 30.5 | 55.3 | **52.4** |
| **MobileLLM-R1-950M-base** | 949M | 26.8 | **61.6** | **39.2** | **46.3** | **58.6** | 47.4 |
|  | 
| *> 1B* |  |  |  |  |  |  |  | 
| Gemma-3-1B-pt | 1.0B | 0.6 | 2.4 | 9.4 | 6.1 | 57.3 | 26.1 |
| LLaMA3.2-1B-base | 1.24B | 1.6 | 6.8 | 26.6 | 17.1 | 58.4 | 32.0 |
| OLMo-2-0425-1B-base | 1.48B | 5.2 | 39.8 | 7.8 | 6.7 | 61.0 | 42.4 |
| Qwen2.5-1.5B-base | 1.54B | 31.0 | 68.4 | 44.6 | 36.6 | 58.7 | 61.2 |
| SmolLM2-1.7B-base | 1.71B | 11.6 | 31.8 | 35.4 | 0.6 | 62.9 | 50.0 |
| Qwen3-1.7B-base | 2.03B | 38.5 | 76.2 | 56.4 | 47.6 | 60.9 | 62.1 |


Here, CommonSense Avg. denotes the average over 8 commonsense reasoning tasks: ARC-Easy, ARC-Challenge, BoolQ, PIQA, SIQA, HellaSwag, OBQA, and WinoGrande. Models with fewer than 150M parameters do not yield reliable MMLU scores and are therefore denoted as '--'. 
 
## MobileLLM-R1 post-trained model

 | Model | Size | MATH500 | GSM8K | AIME'24 | AIME'25 | LiveCodeBench-v6 | 
 | --- | --- | --- | --- | --- | --- | --- |
 |  |  | 0-shot <br> pass@1 | 0-shot <br> pass@1 | 0-shot <br> pass@1, n=64 | 0-shot <br> pass@1, n=64 | 0-shot <br> pass@1, n=16 |
 |  |
 | *<150M* |  |  |  |  |  |  | 
 | SmolLM2-135M-Instruct | 135M | 3.0 | 2.4 | -- | -- | 0.0 | 
 | **MobileLLM-R1-140M** | 140M | **7.4** | **3.0** | -- | -- | **1.0** | 
 |  |
 | *150M - 400M* |  |  |  |  |  |  | 
 | Gemma-3-270m-it | 268M | 6.8 | 8.4 | -- | -- | 0.0 | 
 | SmolLM2-360M-Instruct | 362M | 3.4 | 8.1 | -- | -- | 0.7 | 
 | **MobileLLM-R1-360M** | 359M | **26.6** | **22.7** | -- | -- | **4.8** | 
 |  |
 | *400M - 1B* |  |  |  |  |  |  | 
 | Qwen2.5-0.5B-Instruct | 494M | 31.2 | 48.1 | 0.1 | 0.3 | 3.6 | 
 | Qwen3-0.6B | 596M | 73.0 | **79.2** | 11.3 | **17.0** | 14.9 | 
 | **MobileLLM-R1-950M** | 949M | **74.0** | 67.5 | **15.5** | 16.3 | **19.9** | 
 |  |
 | *> 1B* |  |  |  |  |  |  | 
 | Gemma-3-1B-it | 1.0B | 45.4 | 62.9 | 0.9 | 0.0 | 2.0 | 
 | LLaMA3.2-1B-Instruct | 1.24B | 24.8 | 38.8 | 1.1 | 0.2 | 4.1 | 
 | OLMo-2-0425-1B-Instruct | 1.48B | 19.2 | 69.7 | 0.6 | 0.1 | 0.0 | 
 | OpenReasoning-Nemotron-1.5B | 1.54B | 83.4 | 76.7 | 49.7 | 40.4 | 28.3 | 
 | DeepSeek-R1-Distill-Qwen-1.5B | 1.54B | 83.2 | 77.3 | 29.1 | 23.4 | 19.9 | 
 | Qwen2.5-1.5B-Instruct | 1.54B | 54.0 | 70.0 | 2.5 | 0.9 | 7.9 | 
 | SmolLM2-1.7B-Instruct | 1.71B | 19.2 | 41.8 | 0.3 | 0.1 | 4.4 | 
 | Qwen3-1.7B | 2.03B | 89.4 | 90.3 | 47.0 | 37.0 | 29.8 | 

For AIME, we evaluate models across 64 runs and report the average accuracy. For LiveCodeBench, results are reported as the average accuracy across 16 runs. Models with fewer than 400M parameters do not produce reliable AIME scores and are therefore denoted as '--'.


# Training

## Training Process
![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/660f893bae89429c07a32cdb/ThVFzsaaGa4gQ3iha5CKM.jpeg)

### Training stages and hyperparameter details

In the pretraining phase, MobileLLM-R1 models are randomly initialized and optimized using the Adam optimizer with hyperparameters (β_1, β_2, ε) = (0.9, 0.95, 1e-8), coupled with a weight decay coefficient of 0.1. The learning rate follows a 2k-step warmup schedule and then decays linearly from its peak to 10% of the maximum.
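
Read concretely, the schedule looks like the sketch below (an illustration using the Phase 1 numbers from the table further down; the exact implementation is not published):

```python
def pretrain_lr(step: int, peak: float = 4e-3,
                warmup: int = 2_000, total: int = 500_000) -> float:
    """Linear warmup for 2k steps, then linear decay to 10% of peak."""
    if step < warmup:
        return peak * step / warmup
    frac = (step - warmup) / (total - warmup)  # 0 -> 1 over the decay phase
    return peak * (1.0 - 0.9 * frac)           # ends at 0.1 * peak

for s in (0, 2_000, 251_000, 500_000):
    print(s, f"{pretrain_lr(s):.2e}")
```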

In the mid-training phase, we use the Adam optimizer with a learning rate that decays linearly from its maximum value to zero. We employ knowledge distillation with the Llama-3.1-8B-Instruct model as the teacher: the student is trained by minimizing the KL divergence between its output logits and the teacher's logits.
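
A minimal PyTorch sketch of that objective (the `batchmean` reduction and the absence of a softmax temperature are assumptions for illustration, not published training details):

```python
import torch
import torch.nn.functional as F

def distill_loss(student_logits: torch.Tensor,
                 teacher_logits: torch.Tensor) -> torch.Tensor:
    """KL(teacher || student) over the vocabulary, averaged across tokens."""
    log_p_student = F.log_softmax(student_logits, dim=-1)  # student log-probs
    p_teacher = F.softmax(teacher_logits, dim=-1)          # teacher probs
    # F.kl_div expects log-probs as input and probs as target.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean")
```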

In the post-training phase, we use the Adam optimizer with zero weight decay. The learning rate warmup ratio is set to 0.03 for general-purpose SFT and 0.1 for reasoning-specific SFT, and it linearly decays from its maximum value to zero. Full training hyperparameters are provided in the table below.

| Stage | Phase | Tokens / Samples | BS | Sequence Length | Steps | LR | #GPUs | Training Time |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Pre-training | Phase1 | 2T tokens | 16 | 2k | 500k | 4.00E-03 | 16 x 8 | 4-5 days |
|  | Phase2 | 2T tokens  | 16 | 2k | 500k | 4.00E-03 | 16 x 8 | 4-5 days |
| Mid-training | Phase1 | 100B tokens  | 4 | 4k | 50K | 3.60E-04 | 16 x 8 | 1-2 days |
|  | Phase2 | 100B tokens | 4 | 4k | 50K | 3.60E-04 | 16 x 8 | 1-2 days |
| Post-training | General SFT | 866K samples | 4 | 4k | 2 epochs | 5.00E-06 | 16 x 8 | ~2h |
|  | Reasoning SFT | 6.2M samples | 8 | 32k | 4 epochs | 8.00E-05 | 16 x 8 | ~2.5days |

## Data Mix

### Pre-training

| Dataset | Rows | Tokens (B) | Phase1 Mix Ratio | Phase2 Mix Ratio |
| --- | --- | --- | --- | --- |
| [StarCoder](https://huggingface.co/datasets/bigcode/starcoderdata) | 206,640,114 | 263.8 | 10.66% | 0.52% |
| [OpenWebMath](https://huggingface.co/datasets/open-web-math/open-web-math) | 6,117,786 | 12.6 | 6.93% | 23.33% |
| [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) | 1,279,107,432 | 1300 | 63.75% | 54.83% |
| [Wiki](https://huggingface.co/datasets/allenai/dolmino-mix-1124/tree/main/data/wiki) | 7,222,303 | 3.7 | 5.03% | 0.14% |
| [Arxiv](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T/blob/main/urls/arxiv.txt) | 1,533,917 | 28 | 6.36% | 1.32% |
| [StackExchange](https://data.together.xyz/redpajama-data-1T/v1.0.0/stackexchange/stackexchange.jsonl) | 29,249,120 | 19.6 | 5.03% | 0.86% |
| [Algebraic stack](https://huggingface.co/datasets/EleutherAI/proof-pile-2/tree/main/algebraic-stack) | 3,404,331 | 12.6 | 2.25% | 1.26% |
| [Nemotron science](https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset/blob/main/SFT/science/science.jsonl) | 708,920 | 2 | -- | 0.03% |
| [Nemotron code](https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset/blob/main/SFT/code/code_v1.1.jsonl) | 10,108,883 | 16 | -- | 0.72% |
| [Nemotron math](https://huggingface.co/datasets/nvidia/Llama-Nemotron-Post-Training-Dataset/blob/main/SFT/math/math_v1.1.jsonl) | 22,066,397 | 15 | -- | 3.01% |
| [Cosmopedia](https://huggingface.co/datasets/HuggingFaceTB/cosmopedia) | 31,064,744 | 25 | -- | 2.70% |
| [Facebook natural reasoning](https://huggingface.co/datasets/facebook/natural_reasoning) | 1,145,824 | 1.8 | -- | 3.18% |
| [FineMath](https://huggingface.co/datasets/HuggingFaceTB/finemath/tree/main/finemath-3plus) | 48,283,984 | 34 | -- | 8.01% |
| [peS2o](https://huggingface.co/datasets/allenai/peS2o) | 38,800,000 | 50 | -- | 0.08% |
| **Total** |  |  | 100% | 100% |




### Mid-training


 | Dataset | Subset | Rows (M) | Phase1 Mix Ratio | Phase2 Mix Ratio | 
 | --- | --- | --- | --- | --- |
 | [Dolmino](https://huggingface.co/datasets/allenai/dolmino-mix-1124) | DCLM Baseline | 606 | 37.03% | 6.51% | 
 |  | FLAN | 57.3 | 4.10% | 0.72% | 
 |  | peS2o | 38.8 | 11.41% | 2.01% | 
 |  | Wiki | 6.17 | 2.66% | 0.47% | 
 |  | StackExchange | 2.48 | 2.12% | 2.00% | 
 |  | Math | 21 | 11.63% | 29.10% | 
 | Nemotron | [Nemotron-Pretraining-Code-v1](https://huggingface.co/datasets/nvidia/Nemotron-Pretraining-Code-v1) | 882 | 20.69% | 29.10% | 
 |  | [Nemotron-CC-Math-v1](https://huggingface.co/datasets/nvidia/Nemotron-CC-Math-v1) | 144 | 3.45% | 19.40% | 
 | StarCoder | [StarCoder](https://huggingface.co/datasets/bigcode/starcoderdata) | 206 | 6.90% | 9.70% | 
 | Benchmark training set | [TriviaQA (train)](https://huggingface.co/datasets/mandarjoshi/trivia_qa/tree/main/rc) <br> [OBQA (train)](https://huggingface.co/datasets/allenai/openbookqa/blob/main/main/train-00000-of-00001.parquet) <br> [NaturalQuestions (train)](https://github.com/google-research-datasets/natural-questions/blob/master/nq_open/NQ-open.train.jsonl) <br> [PIQA (train)](https://github.com/ybisk/ybisk.github.io/blob/master/piqa/data/train.jsonl) <br> [GSM8K (train)](https://huggingface.co/datasets/openai/gsm8k/blob/main/main/train-00000-of-00001.parquet) <br> [BoolQ (train)](https://huggingface.co/datasets/google/boolq/blob/main/data/train-00000-of-00001.parquet) <br> [ARC-Easy (train)](https://huggingface.co/datasets/allenai/ai2_arc/blob/main/ARC-Easy/train-00000-of-00001.parquet) <br> [ARC-Challenge (train)](https://huggingface.co/datasets/allenai/ai2_arc/blob/main/ARC-Challenge/train-00000-of-00001.parquet) | ~0.01 | -- | 0.97% | 
 | Total |  |  | 100.00% | 100.00% | 
 
### Post-training 
 | Phase | Dataset | Rows | 
 | --- | --- | --- |
 | General SFT | [Tulu-3-sft-olmo-2-mixture-0225](https://huggingface.co/datasets/allenai/tulu-3-sft-olmo-2-mixture-0225) | 866K samples |
 | Reasoning SFT | [OpenMathReasoning](https://huggingface.co/datasets/nvidia/OpenMathReasoning) | 3.2M samples |
 | | [OpenScienceReasoning-2](https://huggingface.co/datasets/nvidia/OpenScienceReasoning-2) | 803K samples |
 | | [OpenCodeReasoning-2](https://huggingface.co/datasets/nvidia/OpenCodeReasoning-2) | 2.16M samples |


# Citation

If you find our model useful for your research, please consider citing:

    @misc{mobilellm_r1_2025,
      title={MobileLLM-R1: Model Card},
      author={Zechun Liu*, Ernie Chang*, Changsheng Zhao*, Chia-Jung Chang, Wei Wen, Chen Lai, Rick Cao, Yuandong Tian, Raghuraman Krishnamoorthi, Yangyang Shi, Vikas Chandra},
      year={2025},
      url = {https://huggingface.co/mobilellm-r1}
    }

# Contact
Zechun Liu, Meta Inc (zechunliu at meta dot com)

Ernie Chang, Meta Inc (erniecyc at meta dot com)

Changsheng Zhao, Meta Inc (cszhao at meta dot com)

# License

MobileLLM-R1 is currently released under the FAIR Noncommercial Research License (see the license text above).