# openNemo
Pure-PyTorch drop-in replacement for NVIDIA's Nemotron-H architecture.
Removes all external CUDA kernel dependencies (mamba-ssm, causal-conv1d) and replaces them with native PyTorch operations, making the model fully compatible with bitsandbytes quantization (4-bit / 8-bit) and QLoRA fine-tuning on consumer GPUs.
By Empero AI
## Why?

NVIDIA's Nemotron-H is a hybrid Mamba2 + Transformer architecture, one of the most promising open model designs. But the original implementation depends on `mamba-ssm` and `causal-conv1d`, which ship pre-compiled Triton/CUDA kernels that:

- **Break bitsandbytes quantization**: the kernels call `F.linear` directly, which collides with bnb's `__torch_function__` hook on quantized weights (4-bit weights are stored as flat 1D blobs, causing shape mismatches)
- **Require specific CUDA versions**: kernel compilation failures are common on consumer setups
- **Cannot be pip-installed cleanly** on many systems without manual builds
This means you can't load Nemotron-H in 4-bit, you can't use QLoRA, and you can't train it efficiently on a single GPU. openNemo fixes all of that.
## What Changed

| Component | Original (NVIDIA) | openNemo |
|---|---|---|
| `rmsnorm_fn` | `mamba_ssm.ops.triton.layer_norm` | Pure PyTorch group-wise RMSNorm + SiLU gating |
| `mamba_split_conv1d_scan_combined` | `mamba_ssm.ops.triton.ssd_combined` | Removed; replaced by chunked `torch_forward` |
| `selective_state_update` | `mamba_ssm.ops.triton.selective_state_update` | Pure PyTorch SSM step |
| `causal_conv1d_fn` / `causal_conv1d_update` | `causal_conv1d` package | `nn.Conv1d` with causal padding / manual cache update |
| Forward routing | Fast path (kernels) vs slow path | Always uses the optimized torch path |
| `.model` accessor | Only `.backbone` | `.model` property alias (PEFT/LoRA compatible) |
All weight names are preserved — load original NVIDIA checkpoints directly with zero conversion.
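The `nn.Conv1d`-with-causal-padding replacement in the table above can be illustrated with a minimal sketch. This is not the repo's exact code; `causal_conv1d` here is an illustrative helper showing the core idea: left-pad by `kernel_size - 1` so a depthwise convolution at position `t` only sees inputs at positions `<= t`.

```python
import torch
import torch.nn.functional as F

def causal_conv1d(x, weight, bias=None):
    """Depthwise causal 1D convolution in pure PyTorch (illustrative).

    x: (batch, channels, seq_len); weight: (channels, kernel_size).
    Left padding of (kernel_size - 1) makes the convolution causal.
    """
    channels, kernel_size = weight.shape
    x = F.pad(x, (kernel_size - 1, 0))  # pad only on the left
    return F.conv1d(x, weight.unsqueeze(1), bias=bias, groups=channels)

# Causality check: perturbing the last timestep must not change earlier outputs.
x = torch.randn(2, 8, 16)
w = torch.randn(8, 4)
y1 = causal_conv1d(x, w)
x2 = x.clone()
x2[:, :, -1] += 1.0
y2 = causal_conv1d(x2, w)
assert y1.shape == (2, 8, 16)
assert torch.allclose(y1[:, :, :-1], y2[:, :, :-1])
```

Because the operation is plain `F.conv1d`, it composes cleanly with quantized weights and autograd, unlike the fused CUDA kernel it replaces.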
## Quickstart

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "empero-ai/openNemo-9B",
    quantization_config=bnb_config,
    trust_remote_code=True,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("empero-ai/openNemo-9B")
```
No `mamba-ssm` install needed. Just `pip install transformers bitsandbytes` and go.
## QLoRA Fine-Tuning

```python
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=64,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```
## Requirements

```
torch>=2.1
transformers>=4.40
bitsandbytes>=0.43   # for 4-bit quantization
peft>=0.10           # for LoRA/QLoRA
```
That's it. No mamba-ssm. No causal-conv1d. No CUDA kernel compilation.
## Architecture

Nemotron-H is a 52-layer hybrid model with three block types, defined by the pattern:

```
M-M-M-M*-M-M-M-M-M*-M-M-M-M-M*-M-M-M-M-M*-M-M-M-M-M-
```

- `M`: Mamba2 SSM block (the majority of layers)
- `*`: Grouped Query Attention block (4 in the pattern above)
- `-`: MLP block (feed-forward)
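The layer budget can be verified by counting symbols in the pattern string itself:

```python
from collections import Counter

# Hybrid block pattern from the model card: M = Mamba2, * = attention, - = MLP.
pattern = "M-M-M-M*-M-M-M-M-M*-M-M-M-M-M*-M-M-M-M-M*-M-M-M-M-M-"
counts = Counter(pattern)
print(counts["M"], counts["*"], counts["-"], len(pattern))  # 24 4 24 52
```

Each symbol is one layer, so the 52 characters account for all 52 layers: 24 Mamba2 blocks, 4 attention blocks, and 24 MLP blocks.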
openNemo preserves this exact architecture. The Mamba2 blocks use a chunked structured state-space duality (SSD) scan implemented in pure PyTorch, with the same algorithmic approach as the original torch_forward path.
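For single-token decoding, the recurrent step that stands in for `selective_state_update` can be sketched as a plain PyTorch recurrence. This is a simplified illustration (scalar per-channel decay, no multi-head grouping), not the repo's exact signature:

```python
import torch

def ssm_step(h, x, dt, A, B, C, D):
    """One recurrent SSM decode step in pure PyTorch (illustrative sketch).

    Shapes (batch b, channels c, state size s):
      h: (b, c, s)   running SSM state
      x: (b, c)      current token's input
      dt: (b, c)     per-channel discretization step
      A: (c,)        per-channel scalar decay rate (negative)
      B, C: (b, s)   input and output projections
      D: (c,)        skip connection
    """
    dA = torch.exp(dt * A)                                     # (b, c) decay
    dBx = dt.unsqueeze(-1) * B.unsqueeze(1) * x.unsqueeze(-1)  # (b, c, s)
    h = h * dA.unsqueeze(-1) + dBx                             # state update
    y = (h * C.unsqueeze(1)).sum(-1) + D * x                   # readout + skip
    return h, y

# Tiny smoke test with random tensors.
b, c, s = 2, 4, 8
h = torch.zeros(b, c, s)
x = torch.randn(b, c)
h, y = ssm_step(h, x, torch.rand(b, c), -torch.rand(c),
                torch.randn(b, s), torch.randn(b, s), torch.randn(c))
assert h.shape == (b, c, s) and y.shape == (b, c)
```

During prefill, the same recurrence is evaluated over chunks of the sequence at once (the chunked SSD scan), which trades the fused kernel's speed for full compatibility with quantized weights.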
## Files

| File | Description |
|---|---|
| `modeling_nemotron_h.py` | Full model implementation: all Mamba2/Attention/MLP blocks |
| `configuration_nemotron_h.py` | Model config (unchanged from NVIDIA's original) |
| `__init__.py` | Module exports |
## License
Apache 2.0 — same as the original NVIDIA release.
## Acknowledgments
Based on NVIDIA's Nemotron-H architecture. Original Mamba2 by Albert Gu and Tri Dao.
Base model: `nvidia/NVIDIA-Nemotron-Nano-12B-v2-Base`