BEN2 β€” Background Erase Network 2 (ONNX, FP32)

FP32 conversion of the BEN2 FP16 ONNX model for CPU and older GPU compatibility.

The official onnx-community repo provides only an FP16 variant, which needs hardware with native FP16 support (typically a modern GPU). This FP32 version runs on CPU and on older GPUs (e.g. NVIDIA Maxwell/Kepler, Intel HD Graphics).

Model Specs

| Property | Value |
|---|---|
| File size | ~403 MB |
| Input shape | `(1, 3, 1024, 1024)` float32, ImageNet-normalized |
| Output shape | `(1, 1, 1024, 1024)` float32 |
| Postprocessing | Min-max normalization (not sigmoid) |
| Original model | onnx-community/BEN2-ONNX |

Preprocessing

Identical to the BEN2 FP16 model: resize to 1024x1024, scale pixel values to [0, 1], then apply ImageNet normalization (mean = [0.485, 0.456, 0.406], std = [0.229, 0.224, 0.225]).
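A minimal NumPy sketch of this preprocessing, plus the min-max postprocessing listed in the table above. Resizing to 1024x1024 is left to your image library, and the model filename and input name in the usage comment are assumptions — query the session for the real names:

```python
import numpy as np

IMAGENET_MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
IMAGENET_STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image: np.ndarray) -> np.ndarray:
    """(1024, 1024, 3) uint8 RGB -> (1, 3, 1024, 1024) float32, ImageNet-normalized."""
    x = image.astype(np.float32) / 255.0            # scale to [0, 1]
    x = (x - IMAGENET_MEAN) / IMAGENET_STD          # ImageNet normalization (broadcast over channels)
    x = x.transpose(2, 0, 1)[np.newaxis]            # HWC -> NCHW, add batch dimension
    return np.ascontiguousarray(x)

def postprocess(output: np.ndarray) -> np.ndarray:
    """(1, 1, 1024, 1024) float32 -> (1024, 1024) uint8 alpha mask."""
    m = output[0, 0]
    m = (m - m.min()) / (m.max() - m.min() + 1e-8)  # min-max normalization, not sigmoid
    return (m * 255).astype(np.uint8)

# Usage with onnxruntime (filename and input name are assumptions;
# check sess.get_inputs()/get_outputs() for the real names):
# import onnxruntime as ort
# sess = ort.InferenceSession("model_fp32.onnx", providers=["CPUExecutionProvider"])
# mask = postprocess(sess.run(None, {"pixel_values": preprocess(img)})[0])
```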

Conversion

Converted from the FP16 model using the ONNX Python API: all FP16 initializers, graph inputs, outputs, and intermediate value_info are cast to FP32.

License

Apache-2.0 β€” same as the original BEN2 model.

Acknowledgments

Original BEN2 model by PramaLLC (PramaLLC/BEN2); FP16 ONNX export by onnx-community (onnx-community/BEN2-ONNX).