# ✂️ CrispCut – AI Background Removal for Designs

Purpose-built background removal for clip art, t-shirt designs, and print-on-demand assets.

Distilled from BiRefNet (220 M params → 6.6 M params) with ~95 % quality retention. Exported as ONNX for browser deployment via ONNX Runtime Web.
## Models
| File | Precision | Size | WASM (CPU) | WebGL (GPU) |
|---|---|---|---|---|
| `onnx/crispcut-fast.onnx` | INT8 quantized | 6.5 MB | ~5–10 s | ~1–2 s |
| `onnx/crispcut-quality.onnx` | FP32 | 25.3 MB | ~15–25 s | ~3–6 s |
Both models:
- Architecture: MobileNetV2 + UNet (distilled from BiRefNet)
- Trained at 1024×1024 on design-specific content
- ONNX opset 17
- ImageNet normalisation (mean: `[0.485, 0.456, 0.406]`, std: `[0.229, 0.224, 0.225]`)
- Single input tensor: `input`, shape `[1, 3, 1024, 1024]` (NCHW, float32)
- Single output tensor: `output`, shape `[1, 1, 1024, 1024]` (logits – apply sigmoid)
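The tensor spec above can be sketched in plain JavaScript. This is illustrative glue code, not part of the npm package (the helper names `toNCHW` and `applyMask` are made up here): it packs RGBA canvas pixels into a normalised NCHW tensor, applies sigmoid to the output logits, and writes the resulting mask back as an alpha channel.

```javascript
// ImageNet normalisation constants from the spec above.
const MEAN = [0.485, 0.456, 0.406];
const STD = [0.229, 0.224, 0.225];

// Pack RGBA pixel data (e.g. from canvas getImageData) into a
// normalised float32 tensor of shape [1, 3, height, width] (NCHW).
function toNCHW(rgba, width, height) {
  const plane = width * height;
  const out = new Float32Array(3 * plane);
  for (let i = 0; i < plane; i++) {
    for (let c = 0; c < 3; c++) {
      const v = rgba[i * 4 + c] / 255; // scale to [0, 1]
      out[c * plane + i] = (v - MEAN[c]) / STD[c];
    }
  }
  return out;
}

// The output tensor holds logits; sigmoid maps them to a [0, 1] mask.
const sigmoid = (x) => 1 / (1 + Math.exp(-x));

// Write a [0, 1] mask into the alpha channel of RGBA pixel data in place.
function applyMask(rgba, mask) {
  for (let i = 0; i < mask.length; i++) {
    rgba[i * 4 + 3] = Math.round(mask[i] * 255);
  }
  return rgba;
}
```

The npm package (below) does all of this for you; the sketch is only useful if you are driving ONNX Runtime Web yourself.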
## Distillation Details
|  | Teacher (BiRefNet) | Student (CrispCut) |
|---|---|---|
| Parameters | 220 M | 6.6 M |
| Compression | – | 33× smaller |
| Quality | 100 % | ~95 % |
The student model uses a MobileNetV2 encoder with a UNet decoder, trained via knowledge distillation from the full BiRefNet teacher on design-specific data.
## Usage with the npm package
```bash
npm i @crispcut/background-removal
```
```js
import { cut } from '@crispcut/background-removal';

// Fast mode (default) – downloads crispcut-fast.onnx from this repo
const result = await cut(image);
img.src = result.url;

// Quality mode with GPU
const hqResult = await cut(image, { model: 'quality', gpu: true });
```
Models are fetched automatically from this repo at runtime. No server needed – everything runs in the browser.
📦 npm: @crispcut/background-removal 💻 GitHub: bowespublishing/crispcut
## Self-hosting
Download the `.onnx` files from the `onnx/` folder and serve them from your own CDN:

```js
cut(image, { modelUrl: '/models/crispcut-fast.onnx' });
```
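If your CDN lives on a different origin than the page, the browser's fetch of the model file needs CORS headers. A minimal sketch, assuming nginx (the path is illustrative):

```nginx
# Allow cross-origin fetches of the .onnx model files
location /models/ {
    add_header Access-Control-Allow-Origin *;
}
```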
## Training Details
- Teacher: BiRefNet (220 M parameters)
- Student: MobileNetV2 + UNet (6.6 M parameters)
- Dataset: Design-specific content (clip art, illustrations, t-shirt graphics, POD assets)
- Resolution: 1024×1024
- Distillation method: Knowledge distillation with feature-level and output-level supervision
- Fast model: INT8 dynamic quantization (via ONNX Runtime)
- Quality model: Full FP32 precision
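The output-level half of the distillation objective can be illustrated with a toy loss: the student's sigmoid mask is pulled toward the teacher's. This is a sketch only – the actual training code is not part of this repo, and `outputDistillLoss` is a hypothetical name (the feature-level term, matching intermediate activations, is omitted here).

```javascript
const sigmoid = (x) => 1 / (1 + Math.exp(-x));

// Output-level distillation loss: mean squared error between the
// student's and teacher's mask probabilities (sigmoid of their logits).
function outputDistillLoss(studentLogits, teacherLogits) {
  let sum = 0;
  for (let i = 0; i < studentLogits.length; i++) {
    const d = sigmoid(studentLogits[i]) - sigmoid(teacherLogits[i]);
    sum += d * d;
  }
  return sum / studentLogits.length;
}
```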
## License
AGPL-3.0 for open-source and personal use.
Commercial license required for closed-source or commercial products.
📩 Contact: bowespublishing@gmail.com