Update README.md
Browse files

README.md (CHANGED)

````diff
@@ -13,7 +13,10 @@ language:
 - en
 ---
 
-
+<div align="center">
+<h1>Palmyra-mini</h1>
+
+</div>
 
 ### Model Description
 
@@ -24,7 +27,7 @@ language:
 - **Parameters:** 1.7 billion
 
 
-## Model
+## Model Details
 
 The palmyra-mini-thinking-a model demonstrates exceptional performance in advanced mathematical reasoning and competitive programming. Its capabilities are highlighted by an outstanding score of 0.886 on the 'MATH500' benchmark, showcasing a robust ability to solve complex mathematical problems. The strength of the model in quantitative challenges is further confirmed by its score of 0.8287 on 'gsm8k (strict-match)', which demonstrates proficiency in multi-step arithmetic reasoning. Additionally, the model proves its aptitude for high-level problem-solving with a score of 0.8 on 'AMC23'. The model also shows strong potential in the coding domain, achieving a score of 0.5631 on 'Codeforces (pass_rate)' and 0.5481 on 'Olympiadbench (extractive_match)', indicating competence in generating correct solutions for programming challenges.
 
@@ -135,7 +138,7 @@ As with any language model, there is a potential for generating biased or inaccu
 
 To cite this model:
 ```
-@misc{Palmyra-mini,
+@misc{Palmyra-mini-thinking-a,
 author = {Writer Engineering team},
 title = {{Palmyra-mini: A powerful LLM designed for math and coding}},
 howpublished = {\url{https://dev.writer.com}},
````