---
viewer: false
license: other
license_name: cms-manhattan-jirack-v1.2
license_link: LICENSE
language:
- en
- ru
- fr
- de
- zh
- ja
tags:
- llama
pipeline_tag: text-generation
---
# 💎 JiRack Base dataset for 1.5B model
**Dataset:** The dataset is formatted for the JiRack tokenizer. I recommend initializing the model with a 4K context window for initial stability, then scaling to an 8K context using the specialized JiRack 8K datasets. This two-stage approach ensures robust positional encoding before extending the model's long-range dependency handling.
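The two-stage schedule above can be sketched as a pair of configuration fragments. This is only an illustration: the field names follow the Hugging Face `rope_scaling` convention, and the choice of linear scaling by the window ratio is an assumption, not the documented JiRack recipe.

```python
# Stage 1: pre-train at a 4K window to stabilize the positional encoding.
stage1 = {"max_position_embeddings": 4096, "rope_scaling": None}

# Stage 2: extend to 8K on the JiRack 8K datasets. Linear RoPE scaling by
# the ratio of the new window to the old one is one common choice
# (assumed here for illustration).
stage2 = {
    "max_position_embeddings": 8192,
    "rope_scaling": {"rope_type": "linear", "factor": 8192 / 4096},
}
```

The scaling factor is simply the ratio of the two context windows, so extending 4K to 8K means a factor of 2.0.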
**Training:** JiRack 1.5B: High-Efficiency Financial Modeling
- We are training a compact 1.5B-parameter model on an extensive 11-billion-token corpus. Training at a token-to-parameter ratio of nearly 7:1 achieves exceptional knowledge density and reasoning capability in a lightweight architecture.
- Performance: JiRack Ternary Pro 1.5B, about 28–36 hours per epoch on an NVIDIA Blackwell GPU with 96 GB VRAM
- Performance: JiRack Ternary Pro 10B, about 7–9 days per epoch on an NVIDIA Blackwell GPU with 96 GB VRAM
- Optimization: Optimized for secure, low-latency banking applications.
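For reference, the quoted token-to-parameter ratio works out directly from the two figures above:

```python
tokens = 11_000_000_000   # 11 billion training tokens
params = 1_500_000_000    # 1.5 billion parameters
ratio = tokens / params   # ~7.33, i.e. the "nearly 7:1" ratio stated above
```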
**Inventor:** Konstantin Vladimirovich Grabko
**Organization:** CMS Manhattan JiRack Technology
**Official Site:** [www.cmsmanhattan.com](http://www.cmsmanhattan.com)
Designed for Banking and Fintech Institutions
**Banks and Fintech** Build secure, internal models tailored for the banking sector. We provide end-to-end solutions to pre-train models for fraud prevention, spam filtering, risk assessment, and Anti-Money Laundering (AML) detection.
- This is the base checkpoint, evaluated prior to fine-tuning on domain-specific datasets. The primary objective is to validate RoPE (Rotary Positional Embeddings) stability and coherence following the initial pre-training phase.
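As a reminder of what that stability check exercises, RoPE rotates each channel pair of a query/key vector by a position-dependent angle, which preserves vector norms while encoding relative position. Below is a minimal pure-Python sketch of the rotation (not the model's actual implementation; the base frequency 10000 is the common default, assumed here):

```python
import math

def rope_rotate(x, pos, base=10000.0):
    """Rotate each pair (x[2i], x[2i+1]) by angle pos * base**(-2i/d).

    Because each pair undergoes a pure rotation, the vector's norm is
    unchanged at any position -- the property a stability check probes.
    """
    d = len(x)
    out = []
    for i in range(0, d, 2):
        theta = pos * base ** (-i / d)
        c, s = math.cos(theta), math.sin(theta)
        out.extend([x[i] * c - x[i + 1] * s, x[i] * s + x[i + 1] * c])
    return out

v = [1.0, 0.0, 1.0, 0.0]      # toy 4-dim vector, squared norm 2.0
r = rope_rotate(v, pos=100)   # rotated; squared norm is still 2.0
```

If the rotation is implemented correctly, activations keep the same magnitude no matter how large the position index grows, which is why coherence at long positions is a reasonable proxy for RoPE health.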
⚠️ **IMPORTANT NOTICE — PROPRIETARY TECHNOLOGY**
**Allowed:**
- Personal and non-commercial research use only
**Strictly Prohibited without a written commercial license:**
- Any commercial use (SaaS, mobile apps, edge devices, paid services, etc.)
- Creating and distributing derivative models for profit
- Removing or modifying any copyright or legal notices
- Patenting any part of this technology
Commercial users **must** obtain a signed license and pay **5% royalty** on net revenue.
Any unauthorized commercial use will be pursued legally under New York law.
Contact for commercial license: grabko@cmsmanhattan.com
There is a fixed price for FinTech.
## ⚠️ Fintech AI Solutions
Custom AI Solutions with JiRack
- Deploy your own secure, high-performance model from scratch. I specialize in delivering the modern JiRack architecture on NVIDIA clusters, fully optimized for your private datasets.
- Let's build your sovereign AI today. DM for inquiries.
- Please contact CMS Manhattan for the solution.
## Test Tokenizer Size

```shell
(venv_ji) root@jirack2:# python -c '
from transformers import AutoTokenizer
tok = AutoTokenizer.from_pretrained("./jirack_code_tokenizer_fixed")
print("Vocab size:", len(tok))
print("pad_token_id:", tok.pad_token_id)
print("eos_token_id:", tok.eos_token_id)
'
```

Output:

```
Vocab size: 128259
pad_token_id: 128001
eos_token_id: 128001
```
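Note that `pad_token_id` and `eos_token_id` share the same id (128001), so padding cannot be distinguished from a real end-of-sequence token by id alone; the attention mask has to carry that distinction, or the model can learn to never emit EOS. A minimal sketch of the usual label-masking pattern (the token ids below are made up for illustration):

```python
# pad_token_id == eos_token_id == 128001: the attention mask, not the
# token id, tells a real EOS apart from padding.
PAD_EOS = 128001
input_ids      = [512, 901, 77, PAD_EOS, PAD_EOS, PAD_EOS]  # hypothetical ids
attention_mask = [1,   1,   1,  1,       0,       0]        # first 128001 is the real EOS

# Standard causal-LM labeling: padded positions get -100 so they are
# ignored by the loss, while the genuine EOS keeps its label.
labels = [tid if m == 1 else -100
          for tid, m in zip(input_ids, attention_mask)]
```

With this masking the model is still trained to produce the EOS token that ends the sequence, while the trailing pad positions contribute nothing to the loss.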