---
viewer: false
license: other
license_name: cms-manhattan-jirack-v1.2
license_link: LICENSE
language:
- en
- ru
- fr
- de
- zh
- ja
tags:
- llama
pipeline_tag: text-generation
---

# 💎 JiRack Base dataset for the 1.5B model
|
|
**Dataset:** This dataset is formatted for the JiRack tokenizer. I recommend initializing the model with a 4K context window for initial stability, then scaling to 8K context using the specialized JiRack 8K datasets. This two-stage approach ensures robust positional encoding before extending the model's long-range dependency handling.
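
Below is a minimal, hypothetical sketch of that two-stage setup using Hugging Face `transformers`; the hidden size, layer count, intermediate size, and RoPE-scaling values are illustrative assumptions, not the official JiRack recipe.

```python
# Hypothetical two-stage context setup (illustrative values, not the official JiRack recipe).
from transformers import LlamaConfig, LlamaForCausalLM

# Stage 1: initialize with a 4K context window for initial stability.
config_4k = LlamaConfig(
    vocab_size=128259,             # matches the tokenizer check further below
    max_position_embeddings=4096,  # 4K window
    hidden_size=2048,              # assumed dimensions for a ~1.5B model
    intermediate_size=5632,
    num_hidden_layers=24,
    num_attention_heads=16,
)
model = LlamaForCausalLM(config_4k)

# Stage 2: once the 4K run is stable, extend positions to 8K via RoPE scaling
# before continuing training on the JiRack 8K datasets.
model.config.max_position_embeddings = 8192
model.config.rope_scaling = {"type": "linear", "factor": 2.0}
```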
|
|
**Training:** JiRack 1.5B: High-Efficiency Financial Modeling
- We are training a compact 1.5B-parameter model on an extensive 11-billion-token corpus. At a token-to-parameter ratio of roughly 7:1, we achieve exceptional knowledge density and reasoning capability in a lightweight architecture (the arithmetic is sketched after this list).
- Performance: JiRack Ternary Pro 1.5B takes about 28–36 hours per epoch on an NVIDIA Blackwell GPU with 96 GB of VRAM.
- Performance: JiRack Ternary Pro 10B takes about 7–9 days per epoch on an NVIDIA Blackwell GPU with 96 GB of VRAM.
- Optimization: tuned for secure, low-latency banking applications.
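
As a quick back-of-the-envelope check of the figures above (the 32-hour midpoint is an assumption taken from the quoted 28–36 h range):

```python
tokens = 11e9   # corpus size
params = 1.5e9  # model size

# Token-to-parameter ratio quoted above.
print(f"ratio: {tokens / params:.1f}:1")                      # ~7.3:1

# Implied training throughput at the quoted epoch time for the 1.5B model.
epoch_seconds = 32 * 3600                                     # midpoint of 28-36 hours
print(f"throughput: {tokens / epoch_seconds:,.0f} tokens/s")  # roughly 95,000 tokens/s
```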
|
|
**Inventor:** Konstantin Vladimirovich Grabko
**Organization:** CMS Manhattan JiRack Technology
**Official Site:** [www.cmsmanhattan.com](http://www.cmsmanhattan.com)
|
|
|
|
## Designed for Banking and Fintech Institutions
|
|
**Banks and Fintech:** Build secure, internal models tailored to the banking sector. We provide end-to-end solutions to pre-train models for fraud prevention, spam filtering, risk assessment, and Anti-Money Laundering (AML) detection.
- This is the base checkpoint, evaluated prior to fine-tuning on domain-specific datasets. The primary objective is to validate RoPE (Rotary Positional Embeddings) stability and coherence following the initial pre-training phase; a quick coherence probe is sketched below.
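
One way to run such a check is a short greedy-generation probe against the base checkpoint; the checkpoint path `./jirack_1.5b_base` is a placeholder, not a published artifact.

```python
# Quick coherence probe on the base checkpoint (paths are placeholders).
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("./jirack_code_tokenizer_fixed")
model = AutoModelForCausalLM.from_pretrained("./jirack_1.5b_base")  # hypothetical local path

prompt = "A wire transfer is flagged for manual review when"
inputs = tok(prompt, return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tok.decode(output[0], skip_special_tokens=True))
```

Degenerate or highly repetitive continuations at this stage tend to point to positional-encoding or data issues rather than to missing domain knowledge.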
|
|
|
|
⚠️ **IMPORTANT NOTICE — PROPRIETARY TECHNOLOGY**
|
|
**Allowed:**
- Personal and non-commercial research use only
|
|
**Strictly Prohibited without a written commercial license:**
- Any commercial use (SaaS, mobile apps, edge devices, paid services, etc.)
- Creating and distributing derivative models for profit
- Removing or modifying any copyright or legal notices
- Patenting any part of this technology
|
|
Commercial users **must** obtain a signed license and pay a **5% royalty** on net revenue.
|
|
Any unauthorized commercial use will be pursued legally under New York law.
|
|
Contact for commercial license: grabko@cmsmanhattan.com
A fixed price is available for FinTech.
|
|
## ⚠️ Fintech AI solutions
|
|
Custom AI Solutions with JiRack
|
|
- Deploy your own secure, high-performance model from scratch. I specialize in delivering the modern JiRack architecture on NVIDIA clusters, fully optimized for your private datasets.
- Let's build your sovereign AI today. DM for inquiries.
- Please contact CMS Manhattan for this solution.
|
|
|
|
## Test the tokenizer size

```bash
(venv_ji) root@jirack2:# python -c '
from transformers import AutoTokenizer
tok = AutoTokenizer.from_pretrained("./jirack_code_tokenizer_fixed")
print("Vocab size:", len(tok))
print("pad_token_id:", tok.pad_token_id)
print("eos_token_id:", tok.eos_token_id)
'
```

Output:
- Vocab size: 128259
- pad_token_id: 128001
- eos_token_id: 128001
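
As a follow-up, here is a hypothetical example of packing raw text into fixed 4K training blocks with this tokenizer; `corpus.txt` is a placeholder file name, not part of the released dataset.

```python
# Hypothetical packing of raw text into fixed 4K training blocks (file name is a placeholder).
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("./jirack_code_tokenizer_fixed")
block_size = 4096  # stage-1 context window

ids = []
with open("corpus.txt", encoding="utf-8") as f:
    for line in f:
        ids.extend(tok(line, add_special_tokens=False)["input_ids"])
        ids.append(tok.eos_token_id)  # separate records with EOS (id 128001)

# Drop the ragged tail and split into contiguous 4096-token blocks.
blocks = [ids[i:i + block_size] for i in range(0, len(ids) - block_size + 1, block_size)]
print(f"{len(blocks)} blocks of {block_size} tokens")
```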
| |
| |