Update README.md
README.md (changed):
@@ -13,7 +13,7 @@ Satori (悟り) is a Japanese term meaning "sudden enlightenment" or "awakening.
 Along this journey, the Satori team has released two major research contributions:


-- **Satori**: Released concurrently with DeepSeek-R1, we propose a novel post-training paradigm that enables LLMs to perform an extended reasoning process with self-reflection: 1) a small-scale format tuning (FT) stage to internalize a certain reasoning format and 2) a large-scale self-improvement
+- **Satori (ICML 2025)**: Released concurrently with DeepSeek-R1, we propose a novel post-training paradigm that enables LLMs to perform an extended reasoning process with self-reflection: 1) a small-scale format tuning (FT) stage to internalize a certain reasoning format and 2) a large-scale self-improvement
 stage leveraging reinforcement learning (RL). Our approach results in Satori, a 7B LLM that achieves state-of-the-art reasoning performance.
 - **Satori-SWE**: This work addresses a particularly challenging domain for LLMs: real-world software engineering (SWE) tasks. We propose Evolutionary Test-Time Scaling (EvoScale), which treats LLM generation as an evolutionary process. By combining reinforcement learning (RL) training and EvoScale test-time scaling, our 32B model, Satori-SWE-32B, achieves performance comparable to models exceeding 100B parameters, while requiring only a small number of samples.

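The Satori-SWE bullet describes treating LLM generation as an evolutionary process. As a rough illustration only, here is a minimal sketch of such a sample-select-refine loop; the function names, the scoring interface, and the idea of conditioning the next round on surviving candidates are assumptions for illustration, not the Satori-SWE implementation or its API.

```python
# Hypothetical sketch of an evolutionary test-time scaling loop.
# `generate` and `score` are illustrative stand-ins, not the Satori-SWE API.
from typing import Callable, List, Tuple

def evo_scale(
    problem: str,
    generate: Callable[[str, int], List[str]],  # samples n candidates for a prompt
    score: Callable[[str, str], float],         # critic over (problem, candidate)
    rounds: int = 3,
    population: int = 8,
    survivors: int = 2,
) -> str:
    """Sample a population of candidate outputs, select the highest-scoring
    ones, and condition the next round of generation on those survivors so
    the model refines earlier attempts instead of resampling from scratch."""
    prompt = problem
    best: List[Tuple[float, str]] = []
    for _ in range(rounds):
        candidates = generate(prompt, population)
        # Rank candidates by critic score, highest first.
        scored = sorted(((score(problem, c), c) for c in candidates), reverse=True)
        best = scored[:survivors]
        # Feed the survivors back into the prompt for the next round.
        exemplars = "\n\n".join(c for _, c in best)
        prompt = f"{problem}\n\nPrevious attempts to improve on:\n{exemplars}"
    return best[0][1]

# Toy usage with stub generator/scorer (longer strings score higher):
if __name__ == "__main__":
    import random
    gen = lambda p, n: ["draft " + "x" * random.randint(1, 10) for _ in range(n)]
    print(evo_scale("fix the bug", gen, lambda p, c: float(len(c))))
```

The point of the loop is the claim in the bullet: because each round reuses the best prior samples, far fewer total samples are needed than with independent best-of-N sampling.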