lewtun (HF Staff) committed
Commit df5b499 · verified · 1 Parent(s): 9d7933b

Update README.md

Files changed (1)
  1. README.md +3 -11
README.md CHANGED
@@ -9,7 +9,7 @@ tags:
 datasets:
 - AI-MO/NuminaMath-CoT
 model-index:
-- name: sft_deepseek-math-7b_aimo_v53.24
+- name: AI-MO/NuminaMath-7B-CoT
   results: []
 language:
 - en
@@ -46,7 +46,6 @@ NuminaMath 7B CoT is the model from Stage 1 and was fine-tuned on [AI-MO/NuminaM
 Here's how you can run the model using the `pipeline()` function from 🤗 Transformers:
 
 ```python
-import re
 import torch
 from transformers import pipeline
 
@@ -67,20 +66,13 @@ gen_config = {
 outputs = pipe(prompt, **gen_config)
 text = outputs[0]["generated_text"]
 print(text)
-
-# WARNING: This code will execute the python code in the string. We show this for eductional purposes only.
-# Please refer to our full pipeline for a safer way to execute code.
-python_code = re.findall(r"```python(.*?)```", text, re.DOTALL)[0]
-exec(python_code)
-```
-
-The above executes a single step of Python code - for more complex problems, you will want to run the logic for several steps to obtain the final solution.
+```
 
 ## Bias, Risks, and Limitations
 
 <!-- This section is meant to convey both technical and sociotechnical limitations. -->
 
-NuminaMath 7B TIR was created to solve problems in the narrow domain of competition-level mathematics. As a result, the model should not be used for general chat applications. With greedy decoding, we find the model is capable of solving problems at the level of [AMC 12](https://artofproblemsolving.com/wiki/index.php/2023_AMC_12A_Problems), but often struggles generate a valid solution on harder problems at the AIME and Math Olympiad level. The model also struggles to solve geometry problems, likely due to it's limited capacity and lack of other modalities like vision.
+NuminaMath 7B CoT was created to solve problems in the narrow domain of competition-level mathematics. As a result, the model should not be used for general chat applications. With greedy decoding, we find the model is capable of solving problems at the level of [AMC 12](https://artofproblemsolving.com/wiki/index.php/2023_AMC_12A_Problems), but often struggles to generate a valid solution on harder problems at the AIME and Math Olympiad level. The model also struggles to solve geometry problems, likely due to its limited capacity and lack of other modalities like vision.
 
 
 ## Training procedure
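The lines removed in this commit pulled a fenced Python block out of the model's generated text and passed it to `exec()`, which the removed warning flagged as unsafe outside the full pipeline. For readers of this diff, here is a minimal sketch of the extraction step alone, using the same regex as the removed snippet but without executing the result. The `text` value is a hypothetical stand-in for `outputs[0]["generated_text"]` (the fence is assembled programmatically so it renders cleanly inside this document):

```python
import re

# Assemble a sample model response containing a fenced Python block.
FENCE = "`" * 3  # literal ``` kept out of the source for clean rendering
text = (
    "The area is computed below.\n"
    f"{FENCE}python\n"
    "side = 4\n"
    "print(side ** 2)\n"
    f"{FENCE}\n"
)

# Same pattern the removed README code used to locate the code block.
pattern = FENCE + r"python(.*?)" + FENCE
matches = re.findall(pattern, text, re.DOTALL)
python_code = matches[0].strip()
print(python_code)
```

The removed README code then ran `exec(python_code)` to complete one step of tool-integrated reasoning; per the warning it carried, deferring actual execution to the project's full pipeline (with its sandboxed code runner) is the safer path.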