aashish1904 committed on
Commit df14e83 · verified · 1 Parent(s): f117758

Upload README.md with huggingface_hub

Files changed (1): README.md (+195 -0)

README.md ADDED
---
tags:
- Coder
- Math
- qwen2
- thinking
- reasoning
model-index:
- name: Palmyra-mini-thinking-b
  results: []
license: apache-2.0
language:
- en
---

[![QuantFactory Banner](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)](https://hf.co/QuantFactory)

# QuantFactory/palmyra-mini-thinking-b-GGUF

This is a quantized version of [Writer/palmyra-mini-thinking-b](https://huggingface.co/Writer/palmyra-mini-thinking-b), created using llama.cpp.
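
To run the GGUF weights locally, a minimal sketch of fetching one file with `huggingface_hub` is shown below; the exact `filename` is an assumption, so check the repository's file list for the quantization you want:

```py
# Minimal sketch: download one GGUF file from this repo with huggingface_hub.
# The filename below is hypothetical -- check the repo's file list for the
# actual quantization names.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="QuantFactory/palmyra-mini-thinking-b-GGUF",
    filename="palmyra-mini-thinking-b.Q4_K_M.gguf",  # hypothetical filename
)
print(gguf_path)  # pass this path to a GGUF runtime such as llama.cpp
```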

# Original Model Card

<div align="center">
<h1>Palmyra-mini-thinking-b</h1>
</div>

<p align="center">
<img src="https://huggingface.co/Writer/palmyra-mini-thinking-b/resolve/main/logo-mini-b%20benchmark-performance.png?download=true" width="800"/>
</p>

### Model Description

- **Language(s) (NLP):** English
- **License:** Apache-2.0
- **Finetuned from model:** Qwen/Qwen2.5-1.5B
- **Context window:** 131,072 tokens
- **Parameters:** 1.7 billion

## Introduction

Palmyra-mini-thinking-b represents a significant step forward in generative AI, demonstrating strong capabilities in complex reasoning and problem-solving. The model excels at mathematical and programming challenges, showing a robust grasp of abstract concepts and logical structure. Its specialized training has honed its ability to tackle tasks that demand deep, multi-step thinking.

## Mathematical Prowess

The model's mathematical abilities are particularly noteworthy. It achieves a score of 0.925 on the AMC23 benchmark, indicating a strong grasp of advanced high-school mathematics. This is complemented by its performance on MATH500, where it scores 0.882, demonstrating proficiency across a wide range of mathematical problems. The model also shows its strength in competitive mathematics, scoring 0.6 on AIME24 (pass@1, avg-of-1) and 0.5733 on OlympiadBench (extractive_match). These scores highlight the model's capacity for sophisticated mathematical reasoning, making it a useful tool for both educational and research applications.

## Excellence in Competitive Programming

Beyond mathematics, Palmyra-mini-thinking-b performs strongly on competitive-programming benchmarks. Its score of 0.6343 on Codeforces (pass_rate) underscores its ability to understand complex algorithmic problems and generate correct, efficient code. This suggests the model is well suited to code generation, debugging, and algorithmic design, making it a valuable asset for software developers and computer-science researchers.

## Benchmark Scores (sampling params: temperature 0.6, top_p 0.95)

Pass@1 (avg-of-64)

| Benchmark | Pass@1 (avg-of-64) | Majority@64 |
| :-------- | :----------------- | :---------- |
| AIME24    | 59.43%             | 71.67%      |
| AIME25    | 49.69%             | 60.00%      |
| GPQA      | 42.01%             | 47.22%      |
| HMMT25    | 27.86%             | 30.00%      |
| HLE       | 5.22%              | N/A         |
| MMLU-PRO  | 55.49%             | 60.60%      |
| MATH500   | 93.80%             | 95.40%      |
| LCB       | 34.51%             | N/A         |

LCB here is LiveCodeBench, version v6_2408_2505.
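
For intuition about these two aggregation modes, here is a minimal illustrative sketch (not the `nemoskills` harness used to produce the numbers above) of how Pass@1 (avg-of-k) and Majority@k are computed from k sampled answers per problem:

```py
from collections import Counter

def pass1_avg_of_k(per_problem_flags):
    # Pass@1 (avg-of-k): per-problem fraction of correct samples,
    # averaged over all problems.
    return sum(sum(flags) / len(flags) for flags in per_problem_flags) / len(per_problem_flags)

def majority_at_k(per_problem_answers, gold):
    # Majority@k: the most frequent sampled answer is compared
    # against the reference answer for each problem.
    hits = 0
    for answers, ref in zip(per_problem_answers, gold):
        majority_answer, _ = Counter(answers).most_common(1)[0]
        hits += majority_answer == ref
    return hits / len(gold)

# Toy example with k=4 samples on 2 problems.
answers = [["4", "4", "5", "4"], ["7", "8", "8", "8"]]
gold = ["4", "8"]
flags = [[a == ref for a in ans] for ans, ref in zip(answers, gold)]
print(pass1_avg_of_k(flags))         # 0.75
print(majority_at_k(answers, gold))  # 1.0
```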

Pass@1 (avg-of-1)

| Benchmark | Score (%) |
| :--------------------------------------------------------------- | -----: |
| GSM8K (strict-match)                                              | 42.68% |
| Minerva Math (exact match)                                        |  7.08% |
| MMLU-PRO (exact match)                                            | 29.26% |
| MATH (Hendrycks)                                                  |  0.16% |
| IFEval (inst_level_loose_acc)                                     | 32.97% |
| MathQA (acc)                                                      | 30.45% |
| HumanEval (pass@1)                                                |  7.32% |
| BBH (get-answer)(exact match)                                     | 28.80% |
| MBPP                                                              | 16.80% |
| GPQA (diamond, pass@1: 8 samples)                                 | 39.58% |
| AIME24 (pass@1, avg-of-1)                                         | 60.00% |
| AIME25 (pass@1, avg-of-1)                                         | 50.00% |
| Livecodebench-codegen (livecodebench/code_generation_lite v4_v5)  | 28.73% |
| AMC23                                                             | 92.50% |
| MATH500                                                           | 88.20% |
| Minerva                                                           | 29.41% |
| OlympiadBench (extractive_match)                                  | 57.33% |
| Codecontests (pass_rate)                                          | 20.18% |
| Codeforces (pass_rate)                                            | 63.43% |
| Taco (pass_rate)                                                  | 34.56% |
| APPS (all_levels)                                                 |  5.84% |
| HMMT (Feb 2025) (extractive_match)                                | 23.33% |
| Average                                                           | 35.94% |

### Use with transformers

You can run conversational inference using the Transformers `Auto` classes with the `generate()` function. Here's an example:
```py
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "Writer/palmyra-mini-thinking-b"

tokenizer = AutoTokenizer.from_pretrained(model_id)

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
    attn_implementation="flash_attention_2",  # requires flash-attn; drop or use "sdpa" if unavailable
)

messages = [
    {
        "role": "user",
        "content": "You have a 3-liter jug and a 5-liter jug. How can you measure exactly 4 liters of water?"
    }
]

# Build the prompt with the model's chat template and move it to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages, tokenize=True, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

gen_conf = {
    "max_new_tokens": 256,
    "eos_token_id": tokenizer.eos_token_id,
    "do_sample": True,  # required for temperature/top_p to take effect
    "temperature": 0.3,
    "top_p": 0.9,
}

with torch.inference_mode():
    output_id = model.generate(input_ids, **gen_conf)

# Decode only the newly generated tokens, not the echoed prompt.
output_text = tokenizer.decode(output_id[0][input_ids.shape[1]:], skip_special_tokens=True)

print(output_text)
```

## Running with vLLM

```sh
vllm serve Writer/palmyra-mini-thinking-b
```

```sh
curl -X POST http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "Writer/palmyra-mini-thinking-b",
    "messages": [
      {
        "role": "user",
        "content": "You have a 3-liter jug and a 5-liter jug. How can you measure exactly 4 liters of water?"
      }
    ],
    "max_tokens": 8000,
    "temperature": 0.2
  }'
```
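
Since vLLM exposes an OpenAI-compatible API, you can also query the server from Python. A minimal sketch, assuming `pip install openai` and the default local port (the `api_key` value is a placeholder; vLLM does not check it by default):

```py
from openai import OpenAI

# Point the client at the local vLLM server started above.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="Writer/palmyra-mini-thinking-b",
    messages=[
        {
            "role": "user",
            "content": "You have a 3-liter jug and a 5-liter jug. "
                       "How can you measure exactly 4 liters of water?",
        }
    ],
    max_tokens=8000,
    temperature=0.2,
)
print(response.choices[0].message.content)
```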

## Ethical Considerations

As with any language model, there is a potential for generating biased or inaccurate information. Users should be aware of these limitations and use the model responsibly.

### Footnotes

- Base model: this model builds on NVIDIA's OpenReasoning-Nemotron-1.5B (`https://huggingface.co/nvidia/OpenReasoning-Nemotron-1.5B`).
- Evaluation methodology:
  - Pass@1 (avg-of-1): computed using `lm_eval` and `lighteval`.
  - Pass@1 (avg-of-64) and Majority@64: computed using `nemoskills`.

### Citation and Related Information

To cite this model:

```
@misc{Palmyra-mini-thinking-b,
  author = {Writer Engineering team},
  title = {{Palmyra-mini: A powerful LLM designed for math and coding}},
  howpublished = {\url{https://dev.writer.com}},
  year = 2025,
  month = sep
}
```

Contact: Hello@writer.com