christian-muertz committed · Commit df8d819 · verified · 1 parent: 1b4eb50

Update README.md

Files changed (1): README.md (+3 −4)

README.md CHANGED
@@ -9,7 +9,6 @@ base_model:
 ---
 
 # SWE-Star-32B
-
 ## Introduction
 
 [SWE-Star](https://huggingface.co/collections/LogicStar/swe-star) is a family of language models based on the [Qwen2.5-Coder](https://huggingface.co/collections/Qwen/qwen25-coder) family and trained on the [SWE-Star](https://huggingface.co/datasets/LogicStar/SWE-Star) dataset. The dataset contains approximately 250k agentic coding trajectories distilled from [Devstral-2-Small](https://huggingface.co/mistralai/Devstral-Small-2-24B-Instruct-2512) using [SWE-Smith](https://swesmith.com/) tasks.
@@ -37,7 +36,7 @@ It is worth noting that the agent was run on [MN5](https://www.bsc.es/marenostru
 Our models also achieve very high Pass@16 rates, making them strong candidates for further reinforcement learning. Our 32B model reaches a Pass@16 score of 75.5%:
 
 <picture>
-  <source srcset="https://cdn-uploads.huggingface.co/production/uploads/689f18d4da73199a8848954b/9mxYRYcmicls_3fSoR__B.png" media="(prefers-color-scheme: dark)">
-  <source srcset="https://cdn-uploads.huggingface.co/production/uploads/689f18d4da73199a8848954b/XR-t5Jbtv_AWTkZSDR8fn.png" media="(prefers-color-scheme: light)">
-  <img src="https://cdn-uploads.huggingface.co/production/uploads/689f18d4da73199a8848954b/XR-t5Jbtv_AWTkZSDR8fn.png" width="1000">
+  <source srcset="https://cdn-uploads.huggingface.co/production/uploads/689f18d4da73199a8848954b/s6wolafS7nAEWnXMj_Sw5.png" media="(prefers-color-scheme: dark)">
+  <source srcset="https://cdn-uploads.huggingface.co/production/uploads/689f18d4da73199a8848954b/0uaPi5mUsuNnyCksENTvw.png" media="(prefers-color-scheme: light)">
+  <img src="https://cdn-uploads.huggingface.co/production/uploads/689f18d4da73199a8848954b/0uaPi5mUsuNnyCksENTvw.png" width="1000">
 </picture>
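
The Pass@16 figure quoted in the README text is conventionally estimated with the standard unbiased pass@k estimator (Chen et al., 2021); the repo does not show its evaluation code, so this is only a minimal sketch of that formula, not the authors' implementation:

```python
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: given n samples per task of which
    c passed, return the probability that at least one of k randomly
    drawn samples passes, i.e. 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        # Fewer than k failing samples exist, so any k-subset
        # necessarily contains a passing one.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)


# With n = k = 16, a single passing sample already yields pass@16 = 1.0
# for that task; the reported 75.5% would be an average over tasks.
print(pass_at_k(16, 0, 16))   # no sample passes  -> 0.0
print(pass_at_k(16, 1, 16))   # one sample passes -> 1.0
```

Averaging `pass_at_k` over all benchmark tasks gives the aggregate score; with `n > k`, the combinatorial term corrects for the variance of simply resampling k attempts.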