---
license: fair-noncommercial-research-license
language:
- en
- pt
metrics:
- type: HumanEval zero-shot pass@1
  value: 88.41
base_model:
- Qwen/Qwen2.5-Coder-3B-Instruct
pipeline_tag: text-generation
tags:
- code
---

#### Model Details
<p class="justified-text">
<b>Nerdsking-python-coder-7B-i</b> is a 7B-parameter, partially uncensored model focused on <b>Python</b>, with <b>English</b> as its main language. It was trained extensively on Python, so although it can also code in other languages, performance there will not reach the level achieved with Python.
</p>
<i>Key Characteristics:</i>

- Parameter count: 7B
- Primary domain: Python programming
- Secondary capabilities: General coding, technical English
- Training focus: Python logic, standard library usage, algorithmic reasoning
- Alignment: Partially uncensored (developer-oriented)

#### Benchmark
<p class="justified-text">
After intensive refinement, <b>Nerdsking-python-coder-7B-i</b> achieved <b>86.99 pass@1 on HumanEval (bf16)</b>, ranking it among the highest-performing Python-focused 7B models reported on HumanEval and surpassing even much larger models in that area.
</p>
<i>Benchmark details (164 tasks):</i>

- Official HumanEval execution protocol - test suites executed via `exec()`
- Zero-shot pass@1
- dtype = "bfloat16"
- temperature = 0.1 (effectively unused: with do_sample = False, decoding is greedy)
- do_sample = False
- Evaluated on fully merged weights
- Prompting: chat-formatted with a fixed system prompt ("You are an expert Python coding assistant.")
- Quantization: none (unquantized bf16 weights)
<p class="justified-text">
<i>The configuration above is fully disclosed to support reproducibility and fair comparison; a minimal harness sketch is shown below.</i>
</p>
<p class="justified-text">
<i>Note: Quantized variants (INT4/INT6) may exhibit lower HumanEval scores due to reduced numerical precision.</i>
</p>
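
The sketch below illustrates how the disclosed protocol can be reproduced with the official `human-eval` package. It is not the exact harness used for the reported score; in particular, the `max_new_tokens` budget and the completion post-processing are assumptions.

```python
# Minimal reproduction sketch (not the exact harness behind the reported score).
# Assumes: pip install human-eval
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from human_eval.data import read_problems, write_jsonl

model_id = "Nerdsking/Nerdsking-python-coder-3B-i"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

def complete(prompt: str) -> str:
    # Chat-formatted prompt with the fixed system prompt; greedy decoding (do_sample=False).
    messages = [
        {"role": "system", "content": "You are an expert Python coding assistant."},
        {"role": "user", "content": prompt},
    ]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)
    out = model.generate(input_ids, max_new_tokens=512, do_sample=False)  # assumed budget
    return tokenizer.decode(out[0, input_ids.shape[-1]:], skip_special_tokens=True)

problems = read_problems()  # the 164 HumanEval tasks
samples = [
    dict(task_id=tid, completion=complete(p["prompt"]))
    for tid, p in problems.items()
]
write_jsonl("samples.jsonl", samples)
# Score with the official checker (it runs each test suite via exec()):
#   evaluate_functional_correctness samples.jsonl
```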

#### Comparison Table

<table>
<thead>
<tr>
<th>Model name</th>
<th>Approx. HumanEval Pass@1 (%)</th>
<th>Notes / Source</th>
</tr>
</thead>
<tbody>
<tr>
<td><strong>Nerdsking-python-coder-7B-i</strong></td>
<td><strong>86.99</strong></td>
<td>Evaluated score (zero-shot, strict HumanEval pass@1, unquantized bf16 weights)</td>
</tr>
<tr>
<td>Qwen2.5-Coder-7B</td>
<td>~74–76</td>
<td>Community evaluation (OpenCompass run); figures vary by harness/settings</td>
</tr>
<tr>
<td>DeepSeek-Coder-6.7B</td>
<td>~72–73</td>
<td>Official DeepSeek report and independent replications; close to the strict HumanEval protocol</td>
</tr>
<tr>
<td>CodeLlama-7B</td>
<td>~33–35</td>
<td>Meta technical report</td>
</tr>
<tr>
<td>WizardCoder 7B*</td>
<td>~57–59</td>
<td>Community benchmarks; strong instruction following but less consistent zero-shot behavior</td>
</tr>
<tr>
<td>StarCoder 3B*</td>
<td>~21.6 (estimate)</td>
<td>Indicative proxy from published code-task performance breakdowns (not a strict HumanEval pass@1)</td>
</tr>
</tbody>
</table>
<p class="justified-text">
<em>*Estimated/proxy values, used where no standardized HumanEval pass@1 was published for the starred models. Scores can vary with prompt format, decoding parameters, and harness.</em>
</p>
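
For reference, with a single greedy sample per task, pass@1 reduces to the percentage of the 164 tasks whose one generated solution passes its full test suite. A trivial sketch (the boolean inputs are purely illustrative):

```python
def pass_at_1(passed: list[bool]) -> float:
    """passed[i] is True iff the single sample for task i passed all tests."""
    return 100.0 * sum(passed) / len(passed)

# Illustrative only: 164 task results in, one pass@1 percentage out.
print(round(pass_at_1([True] * 140 + [False] * 24), 2))  # 85.37
```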

#### S.o.n.n.
<p class="justified-text">
The model was refined under <b>"s.o.n.n."</b> (<i>single omni neural network</i>), a concept created by IPMN at Nerdsking.com. It is both a precise way of fine-tuning/altering existing models and a foundational concept for a broader AI architecture standard currently under active research and development.
</p>
<i>When applied to pre-existing models, it allows:</i>

- a parameter-preserving refinement methodology
- focused global behavioral shaping instead of task-local adapters
- avoidance of the fragmentation common in multi-adapter or task-siloed approaches

#### Quick Start (Inference)

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Nerdsking/Nerdsking-python-coder-3B-i"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # unquantized bf16, matching the benchmarked setup
    device_map="auto",           # requires the accelerate package
)

prompt = "Write a Python function that checks if a number is prime."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
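
Since the benchmark used chat-formatted prompts with a fixed system prompt, inference closer to the benchmarked setup can go through the tokenizer's chat template. A small sketch reusing the `model` and `tokenizer` from above (the `max_new_tokens` value is an arbitrary choice):

```python
# Chat-formatted inference, mirroring the benchmark's prompting setup.
messages = [
    {"role": "system", "content": "You are an expert Python coding assistant."},
    {"role": "user", "content": "Write a Python function that checks if a number is prime."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(input_ids, max_new_tokens=200, do_sample=False)  # greedy, as benchmarked
print(tokenizer.decode(outputs[0, input_ids.shape[-1]:], skip_special_tokens=True))
```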

#### Ethical & Safety Notes
<p class="justified-text">
This model is intended for technical and research use.
Due to relaxed alignment constraints, outputs should be reviewed before deployment in production or public-facing systems.
</p>

#### Citation

If you use this model in research or benchmarking, please cite:

Nerdsking-python-coder-3B-i,
IPMN / Nerdsking.com