maohaos2 committed
Commit 4dac3ae · verified · 1 Parent(s): 501a263

Update README.md

Files changed (1): README.md (+13, −9)

README.md CHANGED
@@ -7,18 +7,22 @@ sdk: static
 pinned: false
 ---
 
-# **Introduction**
-We aim to advance LLM reasoning to enable LLMs with autoregressive search capabilities, where a single LLM performs an extended reasoning process with self-reflection and self-exploration of new strategies.
-We achieve this through our proposed Chain-of-Action-Thought (COAT) reasoning and a new post-training paradigm: 1) a small-scale format tuning (FT) stage to internalize the COAT reasoning format and 2) a large-scale self-improvement
-stage leveraging reinforcement learning (RL). Our approach results in Satori, a 7B LLM trained on open-source model (Qwen-2.5-Math-7B) and open-source data (OpenMathInstruct-2 and NuminaMath). Key features of Satori include:
-- Capable of self-reflection and self-exploration without external guidance.
-- Achieve state-of-the-art reasoning performance mainly through self-improvement (RL).
-- Exhibit transferability of reasoning capabilities on unseen domains beyond math.
+# **About Us**
+Satori (悟り) is a Japanese term meaning "sudden enlightenment" or "awakening." The Satori team is dedicated to the pursuit of Artificial General Intelligence (AGI), with a particular focus on enhancing the reasoning capabilities of large language models (LLMs), a crucial step toward this ultimate goal.
+
+Along this journey, the Satori team has released two major research contributions:
+
+
+- **Satori**: Released concurrently with DeepSeek-R1, we propose a novel post-training paradigm that enables LLMs to perform an extended reasoning process with self-reflection: 1) a small-scale format tuning (FT) stage to internalize a certain reasoning format and 2) a large-scale self-improvement
+stage leveraging reinforcement learning (RL). Our approach results in Satori, a 7B LLM that achieves state-of-the-art reasoning performance.
+- **Satori-SWE**: This work addresses a particularly challenging domain for LLMs: real-world software engineering (SWE) tasks. We propose Evolutionary Test-Time Scaling (EvoScale), which treats LLM generation as an evolutionary process. By combining reinforcement learning (RL) training with EvoScale test-time scaling, our 32B model, Satori-SWE-32B, achieves performance comparable to models exceeding 100B parameters while requiring only a small number of samples.
+
 
 # **Resources**
-Please refer to our blog and research paper for more technical details of Satori.
+If you are interested in our work, please refer to our blog and research papers for more technical details!
 - [Blog](https://satori-reasoning.github.io/blog/satori/)
-- [Paper](https://arxiv.org/pdf/2502.02508)
+- [Satori](https://arxiv.org/pdf/2502.02508)
+- [Satori-SWE](https://satori-reasoning.github.io)
 
 # **Citation**
 If you find our model and data helpful, please cite our paper:
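The README describes EvoScale only at a high level: LLM generation is treated as an evolutionary process, where a population of candidate outputs is scored, the best are kept, and new candidates are regenerated from them. As a rough illustration of that select-and-regenerate idea (not the actual Satori-SWE implementation — every name below, including `evoscale`, the toy generator, and the toy scorer, is a hypothetical stand-in), the loop might be sketched as:

```python
import random

def evoscale(generate, score, pop_size=8, keep=2, rounds=3, seed=0):
    """Illustrative evolutionary test-time scaling loop:
    sample a population of candidates, keep the top `keep` by score,
    and regenerate (mutate) new candidates conditioned on them."""
    rng = random.Random(seed)
    # Initial population: independent samples with no parent.
    population = [generate(rng, parent=None) for _ in range(pop_size)]
    for _ in range(rounds):
        # Selection: keep the highest-scoring candidates as parents.
        population.sort(key=score, reverse=True)
        parents = population[:keep]
        # Mutation: refill the population from randomly chosen parents.
        population = parents + [
            generate(rng, parent=rng.choice(parents))
            for _ in range(pop_size - keep)
        ]
    return max(population, key=score)

# Toy stand-ins for the model and the reward: candidates are numbers,
# "mutation" perturbs a parent, and the score prefers values near 10.
def toy_generate(rng, parent=None):
    base = parent if parent is not None else rng.uniform(0, 20)
    return base + rng.gauss(0, 1)

def toy_score(x):
    return -abs(x - 10)

best = evoscale(toy_generate, toy_score)
print(round(best, 2))
```

In a real SWE setting the generator would be the LLM proposing a patch (conditioned on high-scoring parent patches) and the scorer a learned or test-based reward; the point of the loop is that selection plus regeneration concentrates sampling budget on promising candidates, which is how a small number of samples can substitute for a much larger model.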