---
license: fair-noncommercial-research-license
language:
- en
- pt
metrics:
- type: humaneval
  name: HumanEval zero-shot pass@1
  value: 88.41
base_model:
- Qwen/Qwen2.5-Coder-3B-Instruct
pipeline_tag: text-generation
tags:
- code
---



#### Model Details
<p class="justified-text">
<b>Nerdsking-python-coder-3B-i</b> is a 3B parameter partially uncensored model focused in <b> Python</b>, with <b>English</b> as main language. It was massively trained in python, therefore despite the fact it can code in other languages as well, the performance will be not in the same level as the one achieved while using python.
</p>
<i>Key Characteristics:</i>

- Parameter count: 3B
- Primary domain: Python programming
- Secondary capabilities: General coding, technical English
- Training focus: Python logic, standard library usage, algorithmic reasoning
- Alignment: Partially uncensored (developer-oriented)
<br>

#### Nerdsking Python Coder Family

🧠 <a href="https://huggingface.co/Nerdsking/nerdsking-python-coder-3B-i"> Nerdsking Python Coder 3B-i </a><br>
🧠 <a href="https://huggingface.co/Nerdsking/Nerdsking-python-coder-7B-i"> Nerdsking Python Coder 7B-i </a>
<br>


#### Benchmark
<p class="justified-text">
After intense refining, <b>Nerdsking-python-coder-3B-i</b> has achieved <b>88.41 in HumanEval (bf16)</b>, ranking it amongst the highest-performing Python-focused 3B models ever reported on HumanEval. Surpassing even much bigger models in that area. 
</p>
<i>Benchmark details (164 tasks):</i>

- Official HumanEval execution protocol (test suites executed via `exec()`)
- Zero-shot pass@1
- dtype = "bfloat16"
- temperature = 0.1
- do_sample = False
- Evaluated on fully merged weights
- Prompting: chat-formatted with a fixed system prompt ("You are an expert Python coding assistant.")
- Quantization: none (unquantized bf16 weights)
<p class="justified-text">
<i>The configuration above is fully disclosed to support reproducibility and fair comparison.</i>
</p>
<p class="justified-text">
<i> Note: Quantized variants (INT4/INT6) may exhibit lower HumanEval scores due to reduced numerical precision.</i></p>
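For context on what "strict pass@1 via `exec()`" means in practice, here is a minimal sketch of such a check, assuming the standard HumanEval task format (`prompt`, `test`, and `entry_point` fields); helper names are illustrative, not the actual harness code:

```python
# Minimal sketch of a strict HumanEval-style pass@1 check.
# `problems` is assumed to be a list of dicts with the standard HumanEval
# fields: "prompt", "test" (unit-test source defining check()), "entry_point".
# `generate_completion` stands in for the model call.

def passes(problem: dict, completion: str) -> bool:
    """Run the official test suite against one generated completion."""
    program = problem["prompt"] + completion + "\n" + problem["test"]
    namespace: dict = {}
    try:
        exec(program, namespace)  # define solution + tests (real harnesses sandbox this)
        namespace["check"](namespace[problem["entry_point"]])  # run the assertions
        return True
    except Exception:
        return False  # any failure, including a missing import, counts as a miss

def pass_at_1(problems, generate_completion) -> float:
    """Zero-shot pass@1: one sample per task, percentage that pass."""
    solved = sum(passes(p, generate_completion(p["prompt"])) for p in problems)
    return 100.0 * solved / len(problems)  # e.g. 145/164 tasks -> 88.41
```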

  <hr>
<br><b>IMPORTANT:</b> 5 of the model's "errors" during the benchmark were mere import errors (missing imports such as reduce, Optional, List, etc.), while the logic itself was perfect; therefore:
- the model's reasoning was correct
- the failure is syntactic/boilerplate, not conceptual<br>

We did not count these toward our score, but if those 5 extra tasks were counted as correct, our benchmark score would be <b>much higher</b>.
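As a hypothetical illustration of this failure mode (not an actual benchmark completion): the function below is logically correct, but a strict `exec()`-based harness rejects it with a NameError because `reduce` is never imported.

```python
# Logically correct completion that a strict harness still rejects:
# `reduce` is used but never imported, so calling it raises NameError.
def product(numbers: list) -> int:
    """Return the product of a list of integers."""
    return reduce(lambda a, b: a * b, numbers, 1)

# Adding the single missing boilerplate line makes the same logic pass:
# from functools import reduce
```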
<hr>
  

#### Comparison Table

<table>
  <thead>
    <tr>
      <th>Model name</th>
      <th>Approx. HumanEval Pass@1 (%)</th>
      <th>Notes / Source</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><strong>Nerdsking-python-coder-3B-i</strong></td>
      <td><strong>88.41</strong></td>
      <td>Evaluated score (zero-shot, strict HumanEval pass@1, using unquantized bf16 weights)</td>
    </tr>
    <tr>
      <td>StarCoder2-3B</td>
      <td>~33.6</td>
      <td>Reported in third-party performance overview; may differ by protocol</td>
    </tr>
    <tr>
      <td>Stable Code 3B*</td>
      <td>~32–33 (estimate)</td>
      <td>Indicative proxy from published code-task performance breakdowns (not a strict HumanEval pass@1)</td>
    </tr>
    <tr>
      <td>Wizard Coder 3B*</td>
      <td>~31.6 (estimate)</td>
      <td>Indicative proxy from published code-task performance breakdowns (not a strict HumanEval pass@1)</td>
    </tr>
    <tr>
      <td>StarCoder 3B*</td>
      <td>~21.6 (estimate)</td>
      <td>Indicative proxy from published code-task performance breakdowns (not a strict HumanEval pass@1)</td>
    </tr>
  </tbody>
</table>
<p class="justified-text">
  <em>*Estimated/proxy values where a standardized HumanEval pass@1 was not published for those three models. Scores can vary with prompt format, decoding parameters, and harness.</em>
</p>


<hr>

#### Benchmark tool used

https://github.com/nerdskingcom/gguf-humaneval-benchmark

Install it using:

```bash
pip install gguf-humaneval-benchmark
```

Instructions after install:

```bash
gguf-humaneval-benchmark --help
```

<hr>



#### S.o.n.n.
<p class="justified-text">
The model was treated under <b>"s.o.n.n."</b> (<i>single omni neural network</i>), a concept created by IPMN at Nerdsking.com that is both a precise way of fine tunning/altering existing models, as well a foundational concept for a broader AI architecture standard currently under active research and development.
</p>
<i>When applied to pre-existing models, allows:</i>

- parameter-preserving refinement methodology
- focused global behavioral shaping, instead of task-local adapters
- avoidance of fragmentation, common in multi-adapter or task-siloed approaches





#### Quick Start (Inference)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nerdsking/Nerdsking-python-coder-3B-i"

# Load the tokenizer and the unquantized bf16 weights (as used for the benchmark)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="bfloat16",
    device_map="auto",
)

prompt = "Write a Python function that checks if a number is prime."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
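
Since the benchmark used chat-formatted prompting with the system prompt quoted above, the sketch below shows one way to reproduce that setup via the standard transformers chat-template API. It reuses the model and tokenizer from the Quick Start; the exact harness code may differ.

```python
# Chat-formatted prompting, matching the benchmark setup described above.
messages = [
    {"role": "system", "content": "You are an expert Python coding assistant."},
    {"role": "user", "content": "Write a Python function that checks if a number is prime."},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant turn header
    return_tensors="pt",
).to(model.device)

# Deterministic decoding as in the benchmark (do_sample=False)
outputs = model.generate(inputs, max_new_tokens=200, do_sample=False)

# Strip the prompt tokens and print only the generated answer
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```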

#### Ethical & Safety Notes
<p class="justified-text">
This model is intended for technical and research use.
Due to relaxed alignment constraints, outputs should be reviewed before deployment in production or public-facing systems.
</p>

#### Citation

If you use this model in research or benchmarking, please cite:

Nerdsking-python-coder-3B-i,
IPMN / Nerdsking.com