Improve model card: Add paper abstract for Phi-4-mini-flash-reasoning
Hi team,
This PR aims to improve the model card for `microsoft/Phi-4-mini-flash-reasoning` by adding the paper's abstract directly into the Markdown content. This provides users with a more immediate and comprehensive understanding of the model's research context and contributions without needing to click through to external links.
The current model card already includes links to the paper and the GitHub repository, as well as clear sample usage and correct metadata. This change focuses on enriching the "paper" information as specified in our model card guidelines.
Please let me know if any adjustments are needed!
Thanks,
Niels
README.md
CHANGED
```diff
@@ -32,6 +32,10 @@ The model belongs to the Phi-4 model family and supports 64K token context lengt
 🎉**Phi-4 models**: [[Phi-4-mini-reasoning](https://huggingface.co/microsoft/Phi-4-mini-reasoning)] | [[Phi-4-reasoning](https://huggingface.co/microsoft/Phi-4-reasoning)] | [[multimodal-instruct](https://huggingface.co/microsoft/Phi-4-multimodal-instruct) | [onnx](https://huggingface.co/microsoft/Phi-4-multimodal-instruct-onnx)];
 [[mini-instruct](https://huggingface.co/microsoft/Phi-4-mini-instruct) | [onnx](https://huggingface.co/microsoft/Phi-4-mini-instruct-onnx)]
 
+## Abstract
+
+Recent advances in language modeling have demonstrated the effectiveness of State Space Models (SSMs) for efficient sequence modeling. While hybrid architectures such as Samba and the decoder-decoder architecture, YOCO, have shown promising performance gains over Transformers, prior works have not investigated the efficiency potential of representation sharing between SSM layers. In this paper, we introduce the Gated Memory Unit (GMU), a simple yet effective mechanism for efficient memory sharing across layers. We apply it to create SambaY, a decoder-hybrid-decoder architecture that incorporates GMUs in the cross-decoder to share memory readout states from a Samba-based self-decoder. SambaY significantly enhances decoding efficiency, preserves linear pre-filling time complexity, and boosts long-context performance, all while eliminating the need for explicit positional encoding. Through extensive scaling experiments, we demonstrate that our model exhibits a significantly lower irreducible loss compared to a strong YOCO baseline, indicating superior performance scalability under large-scale compute regimes. Our largest model enhanced with Differential Attention, Phi4-mini-Flash-Reasoning, achieves significantly better performance than Phi4-mini-Reasoning on reasoning tasks such as Math500, AIME24/25, and GPQA Diamond without any reinforcement learning, while delivering up to 10x higher decoding throughput on 2K-length prompts with 32K generation length under the vLLM inference framework. We release our training codebase on open-source data at this https URL .
+
 ## Intended Uses
 
 ### Primary Use Cases
```
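As a quick sanity check of the placement, one can confirm that the new `## Abstract` heading appears before `## Intended Uses` in the updated Markdown. This is a minimal sketch; `has_abstract_section` is a hypothetical helper written for this PR, not part of any library:

```python
def has_abstract_section(markdown: str) -> bool:
    """Return True if an '## Abstract' heading appears before '## Intended Uses'."""
    # Collect only second-level headings; '### ...' lines do not match '## ' exactly.
    headings = [line.strip() for line in markdown.splitlines()
                if line.startswith("## ")]
    if "## Abstract" not in headings or "## Intended Uses" not in headings:
        return False
    return headings.index("## Abstract") < headings.index("## Intended Uses")

# Abbreviated stand-in for the updated model card content.
card = """\
🎉**Phi-4 models**: ...

## Abstract

Recent advances in language modeling ...

## Intended Uses

### Primary Use Cases
"""
print(has_abstract_section(card))  # → True
```

This only checks heading order, which is the one structural property the PR changes; the abstract text itself is copied verbatim from the paper.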