Hexa-2B: NEF Serialization Prototype
Founder: Madhab, Engineering Student, Cox's Bazar, Bangladesh
Organization: Hexa Innovate
Format: NEF (Neural Essence Format)
Purpose: Infrastructure validation prototype, not a production inference model
What This Is
Hexa-2B is a 2-billion-parameter language model built as a technical proof of concept for the NEF serialization framework. The goal of this release is singular: demonstrate that NEF can correctly serialize, store, and load a billion-scale model on accessible hardware without depending on heavyweight standard AI libraries.
This is not a general-purpose chat model. Inference quality is intentionally deferred to the production training run. What this prototype proves is the infrastructure layer, and that is the point.
NEF: Neural Essence Format
NEF is a custom serialization framework built from scratch to avoid the overhead of standard formats (safetensors, GGUF, pickle) when loading open-weight models.
| Property | Detail |
|---|---|
| Layout | Flat binary, memory-mapped tensor access |
| Runtime deps | None |
| Target | Fast loading on mid-range and edge hardware |
| Status | Active development |
Repository: github.com/Hexa08/NEF
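The flat-binary, memory-mapped layout described above can be sketched in a few lines. This is only an illustrative sketch, not the actual NEF implementation: the header layout here (a length-prefixed JSON block recording each tensor's dtype, shape, and byte offset, followed by raw tensor bytes) is an assumption, and the real NEF format may differ.

```python
import json
import struct
import numpy as np

def save_flat(path, tensors):
    """Write tensors to a flat binary file: an 8-byte little-endian header
    length, a JSON header (dtype, shape, byte offset per tensor), then raw
    tensor bytes. Hypothetical layout -- the real NEF header may differ."""
    header, offset = {}, 0
    for name, arr in tensors.items():
        header[name] = {"dtype": str(arr.dtype), "shape": arr.shape, "offset": offset}
        offset += arr.nbytes
    blob = json.dumps(header).encode()
    with open(path, "wb") as f:
        f.write(struct.pack("<Q", len(blob)))
        f.write(blob)
        for arr in tensors.values():
            f.write(arr.tobytes())

def load_flat(path):
    """Memory-map the file and return zero-copy views into each tensor,
    so loading cost is independent of model size."""
    with open(path, "rb") as f:
        (hlen,) = struct.unpack("<Q", f.read(8))
        header = json.loads(f.read(hlen))
    base = 8 + hlen
    mm = np.memmap(path, dtype=np.uint8, mode="r")
    out = {}
    for name, meta in header.items():
        nbytes = np.dtype(meta["dtype"]).itemsize * int(np.prod(meta["shape"], dtype=np.int64))
        view = mm[base + meta["offset"] : base + meta["offset"] + nbytes]
        out[name] = view.view(meta["dtype"]).reshape(meta["shape"])
    return out
```

Because `load_flat` returns views into a memory map rather than copying bytes, tensors are paged in lazily by the OS, which is what makes this style of format attractive on mid-range and edge hardware.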
Technical Specs
| Property | Detail |
|---|---|
| Architecture | Mixture of Experts (MoE) |
| Parameters | 2 billion (0.27B active via MoE) |
| Serialization | NEF (Neural Essence Format) |
| Training hardware | Dual NVIDIA Tesla T4 (cloud compute credits) |
| Languages | English |
Benchmark Results
Early checkpoint evaluation (step 40,000) on standard zero-shot benchmarks against a GPT-2 124M baseline:
| Task | Hexa-2B (MoE) | GPT-2 124M | Delta (pp) |
|---|---|---|---|
| ARC Easy | 26.5% | 43.2% | -16.7 |
| ARC Challenge | 27.0% | 22.4% | +4.6 |
| OpenBookQA | 25.0% | 14.2% | +10.8 |
| WinoGrande | 47.9% | 51.3% | -3.4 |
| Average | 31.6% | 32.8% | -1.2 |
Zero-shot evaluation using the EleutherAI lm-evaluation-harness v0.4.2 at training step 40,000. Two of the four tasks already exceed the GPT-2 124M baseline. Full evaluation is pending the production training run.
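For reproducibility, a zero-shot run of the same four tasks with lm-evaluation-harness v0.4.2 would look roughly like the following. The Hugging Face repo id `Hexa08/Hexa-2B` is an assumption for illustration; substitute the actual checkpoint path.

```shell
pip install lm-eval==0.4.2
lm_eval --model hf \
  --model_args pretrained=Hexa08/Hexa-2B \
  --tasks arc_easy,arc_challenge,openbookqa,winogrande \
  --num_fewshot 0 \
  --batch_size 8
```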
Prototype Scope
This release validates the following:
- NEF correctly serializes 2.1B parameters to disk
- NEF correctly deserializes and loads the full model into memory
- The full pipeline runs on accessible hardware without enterprise infrastructure
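The round-trip checks in the list above can be expressed as a small validation helper. This is an illustrative sketch, not code from the NEF repository: the function name and the specific checks (name sets, dtype/shape metadata, byte-level hashes, total parameter count) are my own choices for demonstrating the idea.

```python
import hashlib
import numpy as np

def validate_roundtrip(original, reloaded, expected_params=None):
    """Check that a reloaded checkpoint is byte-identical to the original
    dict of tensors and, optionally, that the total parameter count matches.
    Returns the total number of parameters checked."""
    assert set(original) == set(reloaded), "tensor names differ"
    total = 0
    for name in original:
        a, b = np.asarray(original[name]), np.asarray(reloaded[name])
        # Metadata must survive serialization exactly.
        assert a.dtype == b.dtype and a.shape == b.shape, f"metadata mismatch: {name}"
        # Byte-level hash comparison catches any corruption in the payload.
        assert hashlib.sha256(a.tobytes()).digest() == hashlib.sha256(b.tobytes()).digest(), \
            f"bytes differ: {name}"
        total += a.size
    if expected_params is not None:
        assert total == expected_params, f"param count {total} != {expected_params}"
    return total
```

Comparing hashes rather than element values makes the check exact even for floating-point tensors, where NaN payloads and signed zeros would confuse a numeric equality test.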
Inference benchmarks and model quality evaluations are reserved for the next training run, which uses a larger, high-diversity multilingual corpus and a production-grade training configuration.
Founder
I am a Diploma in Engineering student from Cox's Bazar, Bangladesh. Every component of this project (the HexaDense architecture, the NEF serialization format, and the training pipeline) was engineered solo, with no external funding and no institutional backing.
Most billion-parameter models come from large teams with large budgets. This one did not. The constraint was the design brief.
Hexa-2B is the foundation. The production model is next.
About Hexa Innovate
Hexa Innovate is a student-led AI startup based in Bangladesh, focused on building efficient AI execution and serialization infrastructure for open-weight models at the edge.
GitHub: github.com/Hexa08
