pinned: false
---

# **About Us**
Satori (悟り) is a Japanese term meaning "sudden enlightenment" or "awakening." The Satori team is dedicated to the pursuit of Artificial General Intelligence (AGI), with a particular focus on enhancing the reasoning capabilities of large language models (LLMs), a crucial step toward this ultimate goal.

Along this journey, the Satori team has released two major research contributions:

- **Satori**: Released concurrently with DeepSeek-R1, we propose a novel post-training paradigm that enables LLMs to perform an extended reasoning process with self-reflection: 1) a small-scale format tuning (FT) stage to internalize a specific reasoning format, and 2) a large-scale self-improvement stage leveraging reinforcement learning (RL). Our approach results in Satori, a 7B LLM that achieves state-of-the-art reasoning performance.
- **Satori-SWE**: This work addresses a particularly challenging domain for LLMs: real-world software engineering (SWE) tasks. We propose Evolutionary Test-Time Scaling (EvoScale), which treats LLM generation as an evolutionary process. By combining RL training with EvoScale test-time scaling, our 32B model, Satori-SWE-32B, matches the performance of models exceeding 100B parameters while requiring only a small number of samples.
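As a rough illustration (not the paper's actual implementation), an evolutionary test-time scaling loop in the spirit of EvoScale alternates selection of high-scoring candidates with model-driven refinement. All names here (`generate`, `refine`, `score`) are hypothetical stand-ins for LLM sampling, self-revision, and a candidate verifier:

```python
import random

def evoscale(generate, refine, score, n_init=8, n_select=2, rounds=3):
    """Toy evolutionary test-time scaling loop: sample candidates,
    select the highest-scoring ones, and let the model refine them."""
    population = [generate() for _ in range(n_init)]
    for _ in range(rounds):
        # Selection: keep only the top-scoring candidates.
        population.sort(key=score, reverse=True)
        parents = population[:n_select]
        # Mutation: the model revises each survivor into a new candidate.
        population = parents + [refine(p) for p in parents]
    return max(population, key=score)

# Stand-ins: plain numbers play the role of generated SWE patches.
random.seed(0)
best = evoscale(
    generate=lambda: random.random(),    # stand-in for sampling from the LLM
    refine=lambda c: min(1.0, c + 0.1),  # stand-in for model self-revision
    score=lambda c: c,                   # stand-in for a patch verifier/reward
)
print(best)
```

The key idea this sketch captures is that extra test-time compute goes into a few rounds of refinement of promising candidates rather than into one large batch of independent samples.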

# **Resources**
If you are interested in our work, please refer to our blog and research papers for more technical details!
- [Blog](https://satori-reasoning.github.io/blog/satori/)
- [Satori](https://arxiv.org/pdf/2502.02508)
- [Satori-SWE](https://satori-reasoning.github.io)

# **Citation**
If you find our model and data helpful, please cite our paper: