# ferrotorch/serialize-parity-v1

Phase G.3 of ferrotorch's real-artifact-driven development (#1169). This bundle pins canonical references for ferrotorch-serialize's four format loaders/exporters so the Rust crate's parsers and emitters can be verified byte-exact against the upstream toolchains they target.
## Targets

- **.pth load**: `resnet18-pth/resnet18-5c106cde.pth` is the official torchvision checkpoint (https://download.pytorch.org/models/resnet18-5c106cde.pth). `reference_state_dict/<key>.bin` carries each tensor as `[u32 ndim][u32 shape...][f32 bytes]` (a minimal reader sketch follows this list). The Rust harness dumps the same per-tensor binaries via `ferrotorch_serialize::load_pytorch_state_dict` and compares byte-exact (max_abs = 0).
- **SafeTensors round-trip**: `safetensors-rt/resnet18.safetensors` is the same resnet18 state_dict re-saved via `safetensors.torch.save_file`. References are the same per-tensor binaries as the .pth target. The Rust harness compares byte-exact (max_abs = 0).
- **GGUF load**: `gguf/SmolLM-135M-Instruct-Q4_K_M.gguf` is the upstream `unsloth/SmolLM-135M-Instruct-GGUF` checkpoint. `reference_dequant/<name>.bin` carries dequantized f32 tensors for a deterministic stride-sampled subset of layers, produced by Python's `gguf.GGUFReader` (a dequantization sketch follows this list). The Rust harness reproduces those under max_abs <= 1e-4 (Q4_K group scaling has a known noise floor between implementations).
- **ONNX export**: `onnx-mlp/` carries:
  - `mlp_weights.bin`: fixed-seed (`torch.manual_seed(42)`) weights for a `Linear(4 -> 8) + ReLU + Linear(8 -> 2)` MLP. The Rust side reads these so its in-memory MLP matches torch's bit-for-bit before export.
  - `input_{zeros,ones,random}.bin`: three fixed inputs.
  - `torch_forward_{zeros,ones,random}.bin`: reference forward outputs from `torch.nn.Sequential`.

  The Rust harness builds the same MLP from `mlp_weights.bin`, exports it via `ferrotorch_serialize::export_onnx`, and dumps the Rust-side ferrotorch forward. The Python verifier then loads the Rust-emitted .onnx via `onnxruntime.InferenceSession` and asserts cosine_sim >= 0.9999 and max_abs <= 1e-5 for both (rust-onnx vs rust-ferrotorch) and (rust-onnx vs torch); a verification sketch follows this list.
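The per-tensor reference layout is simple enough to parse with numpy directly. A minimal sketch, assuming little-endian fields and a row-major f32 payload (byte order is not restated in this card); the Rust-side dump path is hypothetical:

```python
import numpy as np

def read_reference_tensor(path: str) -> np.ndarray:
    """Read one <key>.bin file laid out as [u32 ndim][u32 shape...][f32 bytes] (assumed little-endian)."""
    with open(path, "rb") as f:
        ndim = int(np.frombuffer(f.read(4), dtype="<u4")[0])
        shape = np.frombuffer(f.read(4 * ndim), dtype="<u4").astype(np.int64)
        data = np.frombuffer(f.read(), dtype="<f4")
    return data.reshape(shape)

# Byte-exact parity: the Rust-side dump of the same key must match with max_abs = 0.
ref = read_reference_tensor("reference_state_dict/conv1.weight.bin")
rust = read_reference_tensor("rust_dump/conv1.weight.bin")  # hypothetical dump location
assert ref.shape == rust.shape and float(np.max(np.abs(ref - rust))) == 0.0
```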
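The GGUF references can be regenerated from the checkpoint with gguf-py alone. A minimal sketch, assuming a recent gguf-py release that exposes `gguf.quants.dequantize`; the stride shown is illustrative, and the actual layer selection and on-disk layout are defined by `scripts/pin_pretrained_serialize_fixtures.py` (raw flat f32 files are written here only for illustration):

```python
import os
import numpy as np
from gguf import GGUFReader
from gguf.quants import dequantize  # assumption: available in recent gguf-py releases

reader = GGUFReader("gguf/SmolLM-135M-Instruct-Q4_K_M.gguf")
os.makedirs("reference_dequant", exist_ok=True)

STRIDE = 8  # illustrative stride-sampling; the pin script defines the real subset
for t in reader.tensors[::STRIDE]:
    # t.data is the raw (possibly Q4_K-quantized) payload; expand it to f32.
    f32 = dequantize(t.data, t.tensor_type).astype(np.float32).reshape(-1)
    f32.tofile(os.path.join("reference_dequant", f"{t.name}.bin"))
```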
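The ONNX acceptance criteria reduce to two numpy comparisons over onnxruntime outputs. A minimal sketch of the torch-side half of the check, assuming the input/output `.bin` files are flat little-endian f32 with a batch-of-one `[1, 4]` / `[1, 2]` layout and using a hypothetical path for the Rust-emitted model; the real verifier (`scripts/verify_serialize_inference.py`) additionally compares the ONNX outputs against the Rust-side ferrotorch forward:

```python
import numpy as np
import onnxruntime as ort

def cosine_sim(a: np.ndarray, b: np.ndarray) -> float:
    a = a.reshape(-1).astype(np.float64)
    b = b.reshape(-1).astype(np.float64)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

x = np.fromfile("onnx-mlp/input_random.bin", dtype=np.float32).reshape(1, 4)
torch_ref = np.fromfile("onnx-mlp/torch_forward_random.bin", dtype=np.float32).reshape(1, 2)

sess = ort.InferenceSession("rust_exported_mlp.onnx")  # hypothetical path to the Rust-emitted model
input_name = sess.get_inputs()[0].name
onnx_out = sess.run(None, {input_name: x})[0]

assert cosine_sim(onnx_out, torch_ref) >= 0.9999
assert float(np.max(np.abs(onnx_out - torch_ref))) <= 1e-5
```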
## Provenance

- Pin script: `scripts/pin_pretrained_serialize_fixtures.py`.
- Verifier: `scripts/verify_serialize_inference.py`.
- Rust dumps: `ferrotorch-serialize/examples/serialize_{pth,safetensors,gguf,onnx_export}_dump.rs`.
- Cargo test wrapper: `ferrotorch-serialize/tests/conformance_format_parity.rs`.
- Tracking issue: https://github.com/dollspace-gay/ferrotorch/issues/1169.
- SHA-256 of `bundle.tar` (pinned in `ferrotorch-hub/src/registry.rs`): `7c20267db5706421e7367c4d275346114a43ff6d55e6ff1aa11069bc45562296`.
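To confirm a downloaded `bundle.tar` matches the pinned digest before unpacking, a standard-library check is enough; the local file path is an assumption:

```python
import hashlib

PINNED = "7c20267db5706421e7367c4d275346114a43ff6d55e6ff1aa11069bc45562296"

h = hashlib.sha256()
with open("bundle.tar", "rb") as f:  # assumed local download path
    for chunk in iter(lambda: f.read(1 << 20), b""):
        h.update(chunk)

assert h.hexdigest() == PINNED, "bundle.tar does not match the pinned SHA-256"
```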
## Upstream licenses

- resnet18 weights: BSD-3-Clause (torchvision).
- SmolLM2-135M-Instruct-GGUF: Apache-2.0 (upstream `unsloth` mirror of HuggingFace's `HuggingFaceTB/SmolLM2-135M-Instruct`).
- ferrotorch fixtures themselves: Apache-2.0 / MIT.
## Loading the bundled GGUF with llama.cpp

```python
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="ferrotorch/serialize-parity-v1",
    filename="gguf/SmolLM2-135M-Instruct-Q8_0.gguf",
)
```