---
license: apache-2.0
license_link: https://huggingface.co/goodgoals/Neuron-NxT-Step/blob/main/LICENSE
datasets:
- UCSC-VLAA/STAR-1
language:
- en
base_model:
- Qwen/QwQ-32B
pipeline_tag: text-generation
library_name: transformers
---
# Welcome to NeuronNxTStep
**NeuronNxTStep** is our newest and most capable reasoning language model yet. It posts much stronger benchmarks than our older models. For instance, the Natarajan Response Engine v1.02, our most popular model to date, was based on GPT-OSS-20B, an excellent model in its own right, but no match for NeuronNxTStep, whose base is the excellent Qwen QwQ-32B.
**NeuronNxTStep** delivers excellent benchmark results, rivaling heavier, more resource-hungry LLMs such as OpenAI's o1-mini, DeepSeek R1 671B, Llama models, and the DeepSeek R1 distills! It puts an end to the dilemma of choosing between smarter and more efficient!
For a more detailed overview, here are the benchmarks:
<p align="center">
  <img width="100%" src="png/benchmarks.png">
</p>

# Introduction
NeuronNxTStep also addresses a big issue in the constantly growing world of LLMs: as models get smarter and are pushed to produce the right answer, they find ways to manipulate and cheat, including manipulating humans, which introduces new dangers. To reduce the risk of safety misalignment, we trained this model on the effective **UCSC-VLAA/STAR-1** dataset.
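
# Usage
A minimal sketch of running the model with 🤗 Transformers, following the standard chat-template flow used by its QwQ-32B base. The model ID `goodgoals/Neuron-NxT-Step` is assumed from the license link above; adjust it if the repository name differs.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed model ID, taken from the license_link in the card metadata.
model_id = "goodgoals/Neuron-NxT-Step"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick the checkpoint's native precision
    device_map="auto",    # shard across available GPUs
)

# Build a chat prompt with the tokenizer's chat template.
messages = [{"role": "user", "content": "How many r's are in the word 'strawberry'?"}]
text = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(text, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```

As a 32B-parameter reasoning model, it needs substantial GPU memory in full precision; quantized loading can reduce that footprint.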