Suu committed (verified) · commit 8ef8d60 · 1 parent: bd25c8d

Update README.md

Files changed (1)
  1. README.md +9 -9
README.md CHANGED
@@ -3,10 +3,10 @@ license: apache-2.0
 language:
 - en
 base_model:
-- Suu/Klear-Reasoner-8B-SFT
+- Kwai-Klear/Klear-Reasoner-8B-SFT
 datasets:
-- Suu/KlearReasoner-MathSub-30K
-- Suu/KlearReasoner-CodeSub-15K
+- Kwai-Klear/KlearReasoner-MathSub-30K
+- Kwai-Klear/KlearReasoner-CodeSub-15K
 metrics:
 - accuracy
 ---
@@ -19,10 +19,10 @@ We present Klear-Reasoner, a model with long reasoning capabilities that demonst
 |---|---|
 | 📝 Preprints | [Paper](https://arxiv.org/pdf/2508.07629) |
 | 🤗 Daily Paper | [Paper](https://huggingface.co/papers/2508.07629) |
-| 🤗 Model Hub | [Klear-Reasoner-8B](https://huggingface.co/Suu/Klear-Reasoner-8B) |
-| 🤗 Dataset Hub | [Math RL](https://huggingface.co/datasets/Suu/KlearReasoner-MathSub-30K) |
-| 🤗 Dataset Hub | [Code RL](https://huggingface.co/datasets/Suu/KlearReasoner-CodeSub-15K) |
-| 🐛 Issues & Discussions | [GitHub Issues](https://github.com/suu990901/KlearReasoner/issues) |
+| 🤗 Model Hub | [Klear-Reasoner-8B](https://huggingface.co/Kwai-Klear/Klear-Reasoner-8B) |
+| 🤗 Dataset Hub | [Math RL](https://huggingface.co/datasets/Kwai-Klear/KlearReasoner-MathSub-30K) |
+| 🤗 Dataset Hub | [Code RL](https://huggingface.co/datasets/Kwai-Klear/KlearReasoner-CodeSub-15K) |
+| 🐛 Issues & Discussions | [GitHub Issues](https://github.com/Kwai-Klear990901/KlearReasoner/issues) |
 | 📧 Contact | suzhenpeng13@163.com |
 
 ## 📌 Overview
@@ -72,7 +72,7 @@ When we expand the inference budget to 64K and adopt the YaRN method with a scal
 ## 🧪 Training
 ### Configure the experimental environment
 ```bash
-git clone https://github.com/suu990901/Klear_Reasoner
+git clone https://github.com/Kwai-Klear990901/Klear_Reasoner
 cd Klear_Reasoner
 pip install -r requirements.txt
 ```
@@ -124,7 +124,7 @@ For LiveCodeBench, please download the data from the official website.
 
 You can run the following commands to perform inference and evaluation:
 ```bash
-git clone https://github.com/suu990901/KlearReasoner
+git clone https://github.com/Kwai-Klear990901/KlearReasoner
 cd KlearReasoner/benchmarks
 python inference.py --model <KlearReasoner-8B_path> --n 64 --dataset_path ./benchmarks/aime24.qs.jsonl
 python judge_math.py <path_to_inference_results>
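The diff above is a mechanical rename of the Hugging Face org prefix from `Suu` to `Kwai-Klear` across Hub repo IDs. A minimal sketch of scripting such a rename (`rename_org` is a hypothetical helper, not part of the repository); anchoring the pattern on the trailing slash limits matches to `Suu/<repo>` IDs, so unrelated tokens such as the lowercase GitHub handle `suu990901` are left untouched:

```python
import re

# Old and new Hugging Face org prefixes, taken from the diff above.
OLD_ORG, NEW_ORG = "Suu", "Kwai-Klear"

def rename_org(text: str) -> str:
    """Replace `Suu/<repo>` Hub IDs with `Kwai-Klear/<repo>`.

    The word boundary plus trailing slash keeps the match specific:
    a bare "Suu" elsewhere in prose, or the case-sensitive handle
    "suu990901", is not rewritten.
    """
    return re.sub(rf"\b{OLD_ORG}/", f"{NEW_ORG}/", text)

# Example on a fragment of the README front matter:
before = "base_model:\n- Suu/Klear-Reasoner-8B-SFT\n"
print(rename_org(before))
```

A targeted pattern like this is the safer choice over a blanket search-and-replace, which would also rewrite any substring that merely contains the old org name.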