
TECHNICAL SPECIFICATION: JiRack 236B Frontier Cluster

Document ID: CMS-JR-236B-PROC-2025
Project: JiRack Balanced Frontier Deployment
Lead Architect: Konstantin Vladimirovich Grabko


1. COMPUTE SYSTEM (High-Density GPU Nodes)

The deployment requires a minimum of two (2) fully populated 8-GPU nodes to achieve the necessary VRAM and throughput for the 108-layer dense architecture.

| Component | Minimum Specification | Required Feature for JiRack |
|---|---|---|
| Chassis | 6U–8U rackmount (e.g., NVIDIA HGX H100) | Support for 700W SXM5 GPUs |
| Accelerator | 16x NVIDIA H100 80GB (SXM5) | 1.28 TB aggregate HBM3 VRAM |
| GPU Interconnect | NVLink + NVSwitch (900 GB/s) | High-width tensor parallelism (TP) |
| Processor (CPU) | 2x Intel Xeon Platinum 8480+ (per node) | 224 total cores for BRE routing |
| System Memory | 2TB DDR5-4800 ECC | Weight caching for 108 layers |
| Local Storage | 8x 3.84TB NVMe U.2 (RAID 0) | Sequential read for fast model loading |
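As a back-of-envelope sanity check on the accelerator row above, the weight footprint of a 236B-parameter dense model can be compared against the 1.28 TB aggregate HBM3 pool. This is a minimal sketch assuming BF16 (2 bytes/param) weights; actual headroom depends on activation and KV-cache requirements, which are workload-dependent.

```python
# VRAM sanity check: 236B-parameter dense model on 16x H100 80GB.
# Assumes BF16 weights (2 bytes/param); KV cache and activations
# consume part of the remaining headroom at inference time.
PARAMS = 236e9
BYTES_PER_PARAM = 2          # BF16
GPUS = 16
HBM_PER_GPU_GB = 80

weights_gb = PARAMS * BYTES_PER_PARAM / 1e9    # weight footprint in GB
aggregate_gb = GPUS * HBM_PER_GPU_GB           # total HBM3 across the cluster
headroom_gb = aggregate_gb - weights_gb        # left for KV cache, activations

print(f"weights: {weights_gb:.0f} GB of {aggregate_gb} GB "
      f"({headroom_gb:.0f} GB headroom)")
```

At BF16 the weights alone occupy roughly 472 GB, which is why a single 8-GPU node (640 GB) is insufficient and the two-node minimum is specified.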

2. NETWORKING & FABRIC (Low-Latency Backplane)

Because the 236B model requires frequent cross-node communication (Pipeline Parallelism), standard Ethernet is unacceptable.

  • Primary Fabric: NVIDIA Quantum-2 InfiniBand (NDR 400Gb/s).
  • Adapters: 8x ConnectX-7 NDR InfiniBand cards per node (1:1 GPU-to-NIC ratio).
  • Switching: 1x QM9700 Quantum-2 Switch (64 ports of NDR).
  • Features: Must support GPUDirect RDMA and SHARPv3 (Scalable Hierarchical Aggregation and Reduction Protocol) to offload 14:1 GQA reduction tasks.
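To see why the fabric must be NDR-class, the per-microbatch activation handoff between pipeline stages can be estimated. This is an illustrative sketch only: the hidden dimension, sequence length, and micro-batch size below are assumed placeholder values, not published JiRack parameters.

```python
# Rough estimate of the pipeline-parallel stage handoff on one
# 400 Gb/s NDR link. HIDDEN_DIM, SEQ_LEN, and MICROBATCH are
# illustrative assumptions, not JiRack specifics.
HIDDEN_DIM = 12_288          # assumed model width
SEQ_LEN = 4_096              # assumed sequence length
MICROBATCH = 8               # assumed micro-batch size
BYTES = 2                    # BF16 activations

xfer_bytes = HIDDEN_DIM * SEQ_LEN * MICROBATCH * BYTES
link_bytes_per_s = 400e9 / 8             # 400 Gb/s -> 50 GB/s
xfer_ms = xfer_bytes / link_bytes_per_s * 1e3

print(f"stage handoff: {xfer_bytes/1e6:.0f} MB, "
      f"~{xfer_ms:.2f} ms per microbatch on one NDR link")
```

Even under these assumptions the handoff is hundreds of megabytes per microbatch, which standard Ethernet latency and congestion behavior cannot service at training cadence; the 1:1 GPU-to-NIC ratio keeps these transfers off a shared link.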

3. POWER & THERMAL MANAGEMENT

Operating a 236B Frontier model at 100% duty cycle generates extreme thermal load.

| Parameter | Specification |
|---|---|
| Peak Power Draw | 10.2 kW per server (~22 kW total including switching) |
| Voltage | 200–240V AC (3-phase Delta/Wye) |
| Cooling Type | Rear Door Heat Exchanger (RDHx) or direct-to-chip (D2C) liquid cooling |
| Ambient Ops | 18°C–24°C (64°F–75°F) with N+1 redundancy |
| Rack Rating | 48U heavy-duty (static load: 1500 kg+) |
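Facility planners typically need the heat load in BTU/hr rather than kW. A minimal conversion using the figures from the table above (the split between server and switching draw is inferred from the stated totals):

```python
# Thermal load check for the ~22 kW rack draw stated above.
# 1 W = 3.412 BTU/hr.
SERVER_KW = 10.2
SERVERS = 2
SWITCHING_KW = 22.0 - SERVER_KW * SERVERS   # remainder attributed to fabric

total_kw = SERVER_KW * SERVERS + SWITCHING_KW
btu_per_hr = total_kw * 1000 * 3.412

print(f"total: {total_kw:.1f} kW ~= {btu_per_hr:,.0f} BTU/hr")
```

Roughly 75,000 BTU/hr exceeds what standard air-cooled CRAC capacity per rack typically handles, which is why RDHx or D2C cooling is specified rather than optional.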

4. SOFTWARE STACK & COMPLIANCE

  • OS: Ubuntu 22.04 LTS (NVIDIA-Certified).
  • Driver: NVIDIA Data Center Driver (Current 5xx.x branch).
  • Middleware: CUDA 12.x, NCCL 2.x, and cuDNN.
  • Security: TPM 2.0 and Secure Boot (Mandatory for BRE architecture protection).
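A deployment script can gate installation on the minimum versions listed above. This is a minimal sketch of such a version gate; in practice the version strings would be read from `nvidia-smi` and `nvcc` output rather than hard-coded as they are here.

```python
# Minimal pre-deployment version gate for the stack above.
# Inputs are placeholder strings; a real script would query
# nvidia-smi / nvcc for the installed versions.
def meets_minimum(version: str, minimum: str) -> bool:
    """Compare dotted version strings numerically, e.g. '12.4' >= '12.0'."""
    to_tuple = lambda v: tuple(int(x) for x in v.split("."))
    return to_tuple(version) >= to_tuple(minimum)

assert meets_minimum("12.4", "12.0")       # CUDA 12.x requirement met
assert not meets_minimum("11.8", "12.0")   # CUDA 11.x rejected
print("version gate ok")
```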

5. LICENSING & ATTRIBUTION (MANDATORY)

  • Vendors and Integrators: Must be notified that this hardware is intended to host the JiRack 236B Architecture.
  • IP Ownership: All architectural logic remains the property of Konstantin Vladimirovich Grabko.
  • Compliance: The system must allow the proof_of_authorship buffer to be verified upon request.
  • Commercial Terms: Deployment is subject to the Commercial License Agreement V.1.2 (5% Net Revenue Royalty).