# Nemotron-Cascade → GGUF converter

Converts the Hugging Face model nvidia/Nemotron-Cascade-8B to GGUF, then quantizes.
## Quick start
```bash
# Python 3.11 recommended (newer Python should also work)
python3.11 -m venv .venv
source .venv/bin/activate
pip install -U pip
pip install -r requirements.txt

python scripts/convert_nemotron_to_gguf.py --q8 --q4
# or
python scripts/convert_nemotron_to_gguf.py --all

# preview what would happen (no downloads/build)
python scripts/convert_nemotron_to_gguf.py --all --dry-run

# if you specifically want BF16 as an output (not enabled by --all)
python scripts/convert_nemotron_to_gguf.py --quant BF16
```
## Notes / prerequisites

- This script uses llama.cpp to do the actual GGUF conversion and quantization.
- You need build tools to compile llama.cpp (CMake and a C++ compiler).
- Disk and RAM requirements are significant for 8B-class models.
- On Debian/Ubuntu, llama.cpp may require the curl development headers (libcurl4-openssl-dev). If they are missing, the script automatically retries the build with curl disabled (-DLLAMA_CURL=OFF).
- Outputs are written to ./output/ by default.
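The curl fallback described above can be sketched roughly like this. This is a minimal illustration, not the script's actual code; the `configure` helper and its CMake invocation are placeholders:

```python
import subprocess

def configure(extra_flags=()):
    """Run the CMake configure step for llama.cpp (illustrative command)."""
    subprocess.run(["cmake", "-B", "build", *extra_flags], check=True)

def configure_with_curl_fallback(configure_fn=configure):
    """Try a normal configure; on failure, retry once with curl disabled."""
    try:
        configure_fn()
    except Exception:
        # Likely cause on Debian/Ubuntu: missing libcurl dev headers.
        # Retry the build with curl support turned off.
        configure_fn(("-DLLAMA_CURL=OFF",))
```

The key point is that the fallback is automatic: a failed first configure does not abort the run, it just drops curl support.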
## Quantization recommendation
For most users, prefer Q4_K_M (good quality/size) or Q8_0 (high quality).
Avoid the IQ* quantizations unless you specifically know you need them and you have an imatrix-calibrated workflow. Without an imatrix, IQ* quants can have worse quality-per-bit than K-quants.
The --all flag quantizes to every quant type supported by your local llama-quantize binary, excluding F32 (and COPY).
IQ* quant types are excluded by default; enable them with --iq (includes both IQ and non-IQ), or use --iq-only to generate only the IQ* quants.
MoE-specific quants like MXFP4_MOE are disabled by default; enable them with --moe.
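The interaction of `--all`, `--iq`, `--iq-only`, and `--moe` can be sketched as a simple filter over the available quant types. The names and defaults below are illustrative, not the script's actual implementation:

```python
def select_quants(available, include_iq=False, iq_only=False, include_moe=False):
    """Filter the quant types reported by llama-quantize (illustrative)."""
    skip_always = {"F32", "COPY"}  # never produced by --all
    selected = []
    for q in available:
        if q in skip_always:
            continue
        is_iq = q.startswith("IQ")
        is_moe = "MOE" in q
        if iq_only and not is_iq:
            continue  # --iq-only: keep only IQ* types
        if is_iq and not (include_iq or iq_only):
            continue  # IQ* excluded unless --iq/--iq-only
        if is_moe and not include_moe:
            continue  # MoE-specific types need --moe
        selected.append(q)
    return selected
```

For example, with the defaults only the K-quants and classic quants survive; adding `include_iq=True` brings the IQ* types back in alongside them.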
## Using an imatrix
If you already have an importance matrix file (imatrix), you can pass it to llama-quantize via:
```bash
python scripts/convert_nemotron_to_gguf.py --all --imatrix /path/to/imatrix.dat
```
This can improve quality for some quant types.
Note: Some quant types may be skipped unless you provide --imatrix, because llama-quantize warns they should not be used without one (e.g. Q2_K_S).
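That skip behavior can be sketched as follows. The set of imatrix-requiring types here is illustrative (Q2_K_S is the example from above); the actual list comes from llama-quantize's own warnings:

```python
# Illustrative: quant types that llama-quantize warns should not be
# used without an importance matrix.
IMATRIX_RECOMMENDED = {"Q2_K_S"}

def plan_quants(requested, imatrix_path=None):
    """Split requested quants into (to_run, skipped_without_imatrix)."""
    to_run, skipped = [], []
    for q in requested:
        if q in IMATRIX_RECOMMENDED and imatrix_path is None:
            skipped.append(q)
        else:
            to_run.append(q)
    return to_run, skipped
```

Passing `--imatrix` re-enables the skipped types rather than silently producing low-quality files.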
If you plan to upload generated .gguf files to Hugging Face, note that the Hub stores large files via Xet on supported repos.
If python3.11 is not available on your system, install it (or run with a newer python3).
Run `python scripts/convert_nemotron_to_gguf.py --help` for the full list of options.
## Model tree

JustACluelessKid2/Nemotron-Cascade-8B-GGUF is derived from the base model nvidia/Nemotron-Cascade-8B-Thinking.