βœ‚οΈ CrispCut β€” AI Background Removal for Designs

Purpose-built background removal for clip art, t-shirt designs, and print-on-demand assets.

Distilled from BiRefNet (220 M params β†’ 6.6 M params) with ~95 % quality retention. Exported as ONNX for browser deployment via ONNX Runtime Web.

Models

File                         Precision        Size      WASM (CPU)   WebGL (GPU)
onnx/crispcut-fast.onnx      INT8 quantized   6.5 MB    ~5–10 s      ~1–2 s
onnx/crispcut-quality.onnx   FP32             25.3 MB   ~15–25 s     ~3–6 s

Both models:

  • Architecture: MobileNetV2 + UNet (distilled from BiRefNet)
  • Trained at 1024Γ—1024 on design-specific content
  • ONNX opset 17
  • ImageNet normalisation (mean: [0.485, 0.456, 0.406], std: [0.229, 0.224, 0.225])
  • Single input tensor: input β€” shape [1, 3, 1024, 1024] (NCHW, float32)
  • Single output tensor: output β€” shape [1, 1, 1024, 1024] (logits β†’ apply sigmoid)
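The tensor contract above can be sketched as plain pre/post-processing helpers. This is an illustrative sketch only; the npm package handles this internally, and these function names are not part of its API:

```javascript
// ImageNet normalisation constants from the spec above.
const MEAN = [0.485, 0.456, 0.406];
const STD = [0.229, 0.224, 0.225];

// Interleaved RGBA bytes (e.g. canvas ImageData.data) -> [1, 3, H, W]
// float32 in NCHW layout, ImageNet-normalised.
function toNCHW(rgba, width, height) {
  const plane = width * height;
  const out = new Float32Array(3 * plane);
  for (let i = 0; i < plane; i++) {
    for (let c = 0; c < 3; c++) {
      out[c * plane + i] = (rgba[i * 4 + c] / 255 - MEAN[c]) / STD[c];
    }
  }
  return out;
}

// The output tensor holds logits; apply a sigmoid to get a [0, 1] alpha matte.
const sigmoid = (x) => 1 / (1 + Math.exp(-x));
```

The matte can then be written into the alpha channel of the original image to produce the cut-out.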

Distillation Details

                 Teacher (BiRefNet)   Student (CrispCut)
Parameters       220 M                6.6 M
Compression      —                    33× smaller
Quality          100 %                ~95 %

The student model uses a MobileNetV2 encoder with a UNet decoder, trained via knowledge distillation from the full BiRefNet teacher on design-specific data.

Usage with the npm package

npm i @crispcut/background-removal

import { cut } from '@crispcut/background-removal';

// Fast mode (default) — downloads crispcut-fast.onnx from this repo
const result = await cut(image);
img.src = result.url;

// Quality mode with GPU
const hqResult = await cut(image, { model: 'quality', gpu: true });
img.src = hqResult.url;

Models are fetched automatically from this repo at runtime. No server needed β€” everything runs in the browser.

πŸ“¦ npm: @crispcut/background-removal πŸ’» GitHub: bowespublishing/crispcut

Self-hosting

Download the .onnx files from the onnx/ folder and serve them from your own CDN:

cut(image, { modelUrl: '/models/crispcut-fast.onnx' });
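If the model files are served from a different origin than your page, the browser needs CORS headers on the responses. A minimal nginx sketch, assuming the files live under a /models/ path (the location and paths are illustrative):

```nginx
# Illustrative nginx config for self-hosting the .onnx files.
location /models/ {
    # ONNX Runtime Web fetches the model cross-origin, so allow CORS.
    add_header Access-Control-Allow-Origin "*";
    # Serve .onnx files as a generic binary type.
    types { application/octet-stream onnx; }
}
```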

Training Details

  • Teacher: BiRefNet (220 M parameters)
  • Student: MobileNetV2 + UNet (6.6 M parameters)
  • Dataset: Design-specific content (clip art, illustrations, t-shirt graphics, POD assets)
  • Resolution: 1024Γ—1024
  • Distillation method: Knowledge distillation with feature-level and output-level supervision
  • Fast model: INT8 dynamic quantization (via ONNX Runtime)
  • Quality model: Full FP32 precision
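The two supervision signals listed above can be sketched as losses over flat arrays. This is a conceptual sketch only; the actual training code is not published in this repo:

```javascript
// Feature-level supervision: MSE between intermediate teacher and
// student feature maps (assumed flattened to equal-length arrays).
function featureLoss(teacherFeat, studentFeat) {
  let sum = 0;
  for (let i = 0; i < teacherFeat.length; i++) {
    const d = teacherFeat[i] - studentFeat[i];
    sum += d * d;
  }
  return sum / teacherFeat.length;
}

// Output-level supervision: binary cross-entropy between the teacher's
// soft matte (sigmoid of its logits) and the student's predicted matte.
function outputLoss(teacherLogits, studentLogits) {
  const sig = (x) => 1 / (1 + Math.exp(-x));
  let sum = 0;
  for (let i = 0; i < teacherLogits.length; i++) {
    const t = sig(teacherLogits[i]);
    // Clamp to avoid log(0).
    const s = Math.min(Math.max(sig(studentLogits[i]), 1e-7), 1 - 1e-7);
    sum += -(t * Math.log(s) + (1 - t) * Math.log(1 - s));
  }
  return sum / teacherLogits.length;
}
```

In practice the total loss would be a weighted sum of the two terms; the weighting used here is not documented.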

License

AGPL-3.0 for open-source and personal use.

Commercial license required for closed-source or commercial products.

πŸ“© Contact: bowespublishing@gmail.com
