JiRack Ternary 1.5B (Release + Google Turbo Quant)
High-Performance Ternary-Quantized Transformer | [PATENT PENDING]
Inventor: Konstantin Vladimirovich Grabko
Organization: CMS Manhattan JiRack Technology
Official Site: www.cmsmanhattan.com
JiRack Ternary 70B: Coming soon via SFT.
JiRack Ternary 405B: Coming soon (LoRA merge with the original model, following the 70B). Runs on a single Blackwell card with 96 GB VRAM using offloading.
JiRack Ternary 405B Blackwell: Enterprise-grade solution for Super GenAI. Move over, OpenAI. The JiRack Ternary 405B is here. Advanced performance for coding, mathematics, and business analytics.
JiRack Ternary 70B brings that same power to your home, as simple to use as a Game Boy. Your data and intellectual property stay highly secure, kept safely at home.
Recent updates:
- An MMLU benchmark fix is in progress to bring the model up to premium quality.
- Added Google Turbo Quant for the KV cache; JiRack now runs at roughly double speed (a minimal sketch of the idea follows this list).
- Added 20% of the Open Orca dataset to the instruction data.
- The full Open Orca dataset is being added to improve RAG responses.
- This makes JiRack Turbo Quant 1B a strong router in a large RAG system, redirecting requests to the correct context in JiRack Cloud RAG.
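The actual Turbo Quant kernel is not reproduced here. Purely as a hedged illustration of the general KV-cache quantization idea, below is a minimal per-token INT8 sketch in plain PyTorch; the function names and the symmetric scaling scheme are assumptions, not the real implementation:

```python
import torch

def quantize_kv(kv: torch.Tensor):
    """Per-token symmetric INT8 quantization of a KV-cache tensor.

    kv: [batch, heads, seq_len, head_dim] in fp16/fp32.
    Returns an int8 tensor plus per-token scales, so the cache takes
    ~2x less memory than fp16 (4x less than fp32).
    """
    # One scale per (batch, head, token) row, kept in the source dtype.
    scale = kv.abs().amax(dim=-1, keepdim=True).clamp(min=1e-8) / 127.0
    q = torch.round(kv / scale).clamp(-127, 127).to(torch.int8)
    return q, scale

def dequantize_kv(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    """Reconstruct an approximate float tensor before the attention matmul."""
    return q.to(scale.dtype) * scale

# Usage: quantize when a token's K/V is appended, dequantize on read.
k = torch.randn(1, 8, 128, 64, dtype=torch.float16)
k_q, k_scale = quantize_kv(k)
print((k - dequantize_kv(k_q, k_scale)).abs().max())  # small error
```

Halving the KV-cache footprint lets more of the cache stay in fast memory, which is one plausible source of the speedup claimed above.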
⚠️ IMPORTANT NOTICE: PROPRIETARY TECHNOLOGY
This model and all accompanying code, algorithms, and documentation are proprietary technology owned by Konstantin Vladimirovich Grabko.
© 2025 Konstantin Vladimirovich Grabko. All Rights Reserved. Patent Pending.
Allowed:
- Personal and non-commercial research use only
Strictly Prohibited without a written commercial license:
- Any commercial use (SaaS, mobile apps, edge devices, paid services, etc.)
- Creating and distributing derivative models for profit
- Removing or modifying any copyright or legal notices
- Patenting any part of this technology
Commercial users must obtain a signed license and pay a 5% royalty on net revenue.
Any unauthorized commercial use will be pursued legally under New York law.
Contact for commercial license: grabko@cmsmanhattan.com
⚠️ Intellectual Property Notice
The architecture, weights, and methods (BRE, SWA Fusion, and HD-FFN) contained herein are the proprietary intellectual property of Konstantin Vladimirovich Grabko.
- Status: Patent Pending (U.S. & International Claims Filed).
- Usage: Commercial use requires a signed copy of the CMS Manhattan JiRack License V.1.2.
Project Overview: The 1.5B "Wide-FFN" Advantage
JiRack-1.5B is a redesigned architecture powered by the JiRack BitNet v2.0 Specification. It is specifically optimized for high-throughput inference on non-NVIDIA hardware (AMD ROCm/HIP) and cloud-native environments (AWS Lightsail, GCP, Azure) without heavy CPU reliance.
Key Innovations:
- High-Density FFN (HD-FFN): Unlike standard 1B models, JiRack-1.5B utilizes an 8192-dimension intermediate layer within ternary constraints, offering 3B-class semantic quality (see the ternary sketch after this list).
- SWA Fusion (SwiGLU-Attention): A novel compute kernel that fuses the FFN and attention passes, maintaining thermal stability below 80°C even under heavy load.
- Buffered Routing Embedding (BRE): Minimizes data movement between HBM and compute units, specifically optimized for ROCm environments.
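The BRE and SWA Fusion kernels are proprietary and not shown here. As a hedged illustration of the ternary constraint behind HD-FFN, here is a minimal BitNet-style absmean quantization sketch in PyTorch; the helper names and per-tensor scaling are generic assumptions, not the JiRack kernels:

```python
import torch

def ternary_quantize(w: torch.Tensor):
    """BitNet-style absmean ternarization: map weights to {-1, 0, +1}
    with one per-tensor scale. Generic sketch, not the JiRack kernel."""
    scale = w.abs().mean().clamp(min=1e-8)       # absmean scale
    w_t = torch.round(w / scale).clamp(-1, 1)    # ternary codes
    return w_t.to(torch.int8), scale

def ternary_linear(x, w_t, scale):
    """Forward pass touching only {-1, 0, +1} values; a real kernel
    replaces the multiplies with adds/subtracts."""
    return (x @ w_t.to(x.dtype).T) * scale

# Example: the 8192-wide FFN up-projection under ternary constraints.
w = torch.randn(8192, 2048)        # [intermediate, hidden] (HD-FFN shape)
w_t, s = ternary_quantize(w)
x = torch.randn(1, 2048)
print(ternary_linear(x, w_t, s).shape)  # torch.Size([1, 8192])
```

Because every weight is one of three values, the matmul degenerates into additions and subtractions, which is what makes the wide 8192-dimension FFN affordable at this parameter budget.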
⚠️ Changes
Alpha v1.3 includes a RoPE fix, so responses are now high quality.
The old model has been renamed to model_tag.safetensors; it is intended for RAG with a counter-question-heavy style.
The model was trained with supervised fine-tuning (SFT) techniques.
It is easy to fine-tune with LoRA, just like a Llama 3.2 1B LoRA adapter (a minimal sketch follows below).
Triple compression of the model for production is coming soon.
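Since the base is Meta-Llama-3.2-1B, a standard Hugging Face PEFT recipe should carry over. The following is a minimal sketch, assuming the checkpoint loads through transformers and that q_proj/v_proj are the adapted modules; the local path is hypothetical:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Hypothetical local path to the JiRack checkpoint; substitute your own.
model_id = "./JiRackTernary1.5B"

model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

lora_cfg = LoraConfig(
    r=8,                                  # rank quoted in the figures below
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # common Llama attention targets
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()        # only LoRA weights are trainable
```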
Technical Specifications
| Feature | JiRack-1.5B Specification |
|---|---|
| Base Model | Meta-Llama-3.2-1B (Redesigned) |
| Hidden Size | 2048 |
| Intermediate Size | 8192 (Ultra-Wide for 1B class) |
| Layer Count | 16 Decoder Layers |
| Quantization | 1.58-bit (Ternary {-1, 0, 1}) |
| VRAM Requirement | ~2.5 GB (including KV-Cache & LoRA) |
| Size in RAM | Double or triple compression planned for production (see the packing sketch below the table) |
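As a back-of-the-envelope check on the table above, here is a rough memory estimate assuming 2-bit packed storage per ternary weight (the packing format is an assumption, not the actual JiRack layout):

```python
# Rough memory estimate for a ternary 1.5B-parameter model.
params = 1.5e9
bits_per_weight = 2                 # {-1, 0, +1} fits in 2 bits when packed
packed_gb = params * bits_per_weight / 8 / 1024**3
fp16_gb = params * 16 / 8 / 1024**3
print(f"packed ternary: {packed_gb:.2f} GB")  # ~0.35 GB of raw weights
print(f"fp16 baseline:  {fp16_gb:.2f} GB")    # ~2.79 GB of raw weights
# The ~2.5 GB VRAM figure above also covers KV-cache, activations,
# scales/metadata, and optional LoRA weights, not just packed matrices.
```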
Training & Fine-Tuning Efficiency
✅ Consumer GPU Ready: Fine-tuning via LoRA (r=8) requires only ~8 MB of additional weights (a quick size check follows below).
✅ 70% VRAM Reduction: Fits on an RTX 3060, GTX 1660 Ti, or high-end integrated GPUs.
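A quick sanity check on the ~8 MB LoRA figure, assuming r=8 adapters on the q/v projections of the 16-layer, 2048-hidden configuration above (actual size depends on the chosen target modules and dtype):

```python
# LoRA adds two low-rank factors, A (r x d_in) and B (d_out x r), per matrix.
r, hidden, layers = 8, 2048, 16
kv_dim = 512                        # Llama-3.2-1B k/v width under GQA
q_params = r * (hidden + hidden)    # q_proj: 2048 -> 2048
v_params = r * (hidden + kv_dim)    # v_proj: 2048 -> 512
total = layers * (q_params + v_params)
print(total, "params ->", round(total * 4 / 1e6, 1), "MB in fp32")  # ~3.4 MB
# Adding k_proj/o_proj (and optionally the FFN projections) pushes the
# adapter toward the ~8 MB quoted above.
```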
Chat Demo (Release + Google Turbo Quant)
Current Checkpoint: Feb 26, 2026 (Logic Verification Mode)
(ai_env) root@jirack1:~/JiRackTernary1/new# python chat_jirack_ternary1b_pt_turbo_packed_v1.py
--- Device: cpu (JiRack Turbo Engine) ---
--- Loading JiRack 1B (Auto-Quantizing Weights) ---
✅ Weights loaded. System will quantize them on the first pass.
User: Hello
JiRack: Hello! How are you?
(Time: 5.97s on CPU)
User: How are you dong ?
JiRack: I'm doing fine, I'm not sure what you're asking. What do you mean?
(Time: 5.65s on CPU)
User: What is capital of France ?
JiRack: It's a tiny city with a population of about 35,000 people. It's in the center of France, and is a economic center for the French government. It's also the capital of French-speaking France, and is home to the National Museum of Art and Natural Heritage.
(Time: 15.88s on CPU)
User: What is capital of France name ?
JiRack: The capital of Paris is Paris.
(Time: 2.34s on CPU)
User: What is Sun ?
JiRack: Sun is the largest planet in our solar system, which we call the Sun. It is 400 billion years old. It has 1.4 billion years in diameter, and 6.5 billion miles in diameter.
(Time: 12.34s on CPU)
Note: The current .pt file is ~6 GB due to heavy embedded metadata for research tracking. The final production version will be cleaned and compressed to ~1.5-2.0 GB.
Repository Contents
- JiRackTernaryPyTorch_1b.py: Core architecture file.
- invention_description.md: Detailed technical breakdown for patent examiners.
- performance_data.md: Benchmarks on ROCm/AMD hardware.
- NDA.md: Confidentiality agreement for commercial evaluators.
Contact & Licensing
For joint venture opportunities, hardware integration, or licensing inquiries:
- Email: grabko@cmsmanhattan.com
- Phone: +1 (516) 777-0945
- Location: New York, USA