---
license: cc-by-nc-4.0
library_name: coreml
pipeline_tag: image-segmentation
tags:
  - background-removal
  - image-segmentation
  - coreml
  - apple-neural-engine
  - birefnet
  - rmbg
language:
  - en
base_model: briaai/RMBG-2.0
---

RMBG-2 CoreML - Background Removal Model

CoreML conversion of BRIA AI's RMBG-2.0 background removal model, optimized for Apple Neural Engine (ANE).

Model Description

This is a native ML Program format CoreML model converted from RMBG-2.0 (BiRefNet architecture) for high-quality background removal on Apple devices.

Key Features

  • Native CoreML ML Program format - Full ANE compatibility
  • INT8 weight quantization - Reduced model size (233MB vs 461MB)
  • No custom layers - Uses only native CoreML operations
  • 1024x1024 input resolution - High-quality segmentation
  • Fast inference - ~3-5 seconds on Apple Silicon

Usage

With RMBG2Swift Package

import RMBG2Swift

// Simple one-liner
let rmbg = try await RMBG2()
let result = try await rmbg.removeBackground(from: image)

// Access the result
let outputImage = result.image  // Image with transparent background
let mask = result.mask          // Grayscale segmentation mask

Swift Package: github.com/VincentGourbin/RMBG2Swift

Manual Integration

import CoreML

// Load the model
let config = MLModelConfiguration()
config.computeUnits = .all  // Enable ANE

let model = try await MLModel.load(contentsOf: modelURL, configuration: config)

// Prepare input: an MLMultiArray shaped [1, 3, 1024, 1024] (RGB, NCHW)
// with ImageNet normalization already applied (see Model Details below).
// Note: MLDictionaryFeatureProvider's initializer throws, so it needs `try`.
let input = try MLDictionaryFeatureProvider(dictionary: ["input": inputArray])

// Run inference
let output = try model.prediction(from: input)

// Get mask from output_3 (full resolution)
let mask = output.featureValue(for: "output_3")?.multiArrayValue
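The "ImageNet normalization" mentioned in the comment above is ordinary per-channel arithmetic. As a minimal sketch (the `normalize` function name is illustrative, not a CoreML API), this is the value you would write into the input array for each 8-bit pixel channel:

```swift
// Per-channel ImageNet normalization constants from the model card.
let mean: [Float] = [0.485, 0.456, 0.406]
let std:  [Float] = [0.229, 0.224, 0.225]

/// Normalize one 8-bit channel value for channel c (0 = R, 1 = G, 2 = B).
/// Illustrative helper, not part of CoreML.
func normalize(_ value: UInt8, channel c: Int) -> Float {
    let scaled = Float(value) / 255.0      // [0, 255] -> [0, 1]
    return (scaled - mean[c]) / std[c]     // subtract mean, divide by std
}
```

Each normalized value is then stored at the NCHW offset `c * 1024 * 1024 + y * 1024 + x` of the `[1, 3, 1024, 1024]` input array.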

Model Details

| Property | Value |
|----------|-------|
| Architecture | BiRefNet |
| Input Size | 1024 x 1024 |
| Input Format | RGB, NCHW [1, 3, 1024, 1024] |
| Normalization | ImageNet (mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) |
| Output | 4 scales (output_0 to output_3); use output_3 for full resolution |
| Format | ML Program (.mlpackage) |
| Weight Quantization | INT8 (symmetric linear) |
| Model Size | ~233 MB |
| Minimum OS | macOS 13+ / iOS 16+ |
| Compute Units | All (CPU, GPU, ANE) |
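To composite a transparent result, each float from the full-resolution mask (output_3) becomes an 8-bit alpha byte. A minimal sketch, assuming mask values are nominally in [0, 1] (the function name is illustrative; clamping guards against slight numeric overshoot):

```swift
/// Convert one mask value in [0, 1] to an 8-bit alpha byte.
/// Illustrative helper; clamps out-of-range values before scaling.
func alphaByte(from maskValue: Float) -> UInt8 {
    let clamped = min(max(maskValue, 0), 1)   // guard against overshoot
    return UInt8((clamped * 255).rounded())   // [0, 1] -> [0, 255]
}
```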

Performance

Tested on Apple Silicon:

| Device | Compute Units | Inference Time |
|--------|---------------|----------------|
| M1 Pro | .all (ANE) | ~5s |
| M1 Pro | .cpuAndGPU | ~3s |

Note: ANE performance varies by device, and .cpuAndGPU can be faster than .all on some machines. INT8 quantization reduces model size and memory use while maintaining output quality.

Files

Two model versions are available:

| File | Size | Description |
|------|------|-------------|
| RMBG-2-native.mlpackage/ | 461 MB | Original FP32 model |
| RMBG-2-native-int8.mlpackage/ | 233 MB | INT8 quantized model (recommended) |

Recommended: Use the INT8 version for better memory efficiency and smaller download size with equivalent quality.
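For intuition, "symmetric linear" INT8 quantization maps each weight tensor to signed bytes with a single per-tensor scale. A simplified sketch of the idea (Core ML Tools performs this during model conversion; the helper names below are illustrative):

```swift
/// Quantize a weight tensor to INT8 with one symmetric per-tensor scale.
/// Illustrative only; real conversions may use per-channel scales.
func quantizeSymmetric(_ weights: [Float]) -> (q: [Int8], scale: Float) {
    let maxAbs = weights.map(abs).max() ?? 1
    let scale = maxAbs == 0 ? 1 : maxAbs / 127        // map max |w| to 127
    let q = weights.map { Int8(min(127, max(-127, ($0 / scale).rounded()))) }
    return (q, scale)
}

/// Recover approximate float weights at inference time.
func dequantize(_ q: [Int8], scale: Float) -> [Float] {
    q.map { Float($0) * scale }
}
```

Each weight costs 1 byte instead of 4, which is why the quantized package is roughly half the FP32 size (233 MB vs 461 MB, with non-weight overhead making up the difference).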

License

Creative Commons Attribution-NonCommercial 4.0 (CC BY-NC 4.0)

  • Free for non-commercial use: Research, personal projects, education
  • Commercial use requires a separate license from BRIA AI

For commercial licensing, contact: bria.ai/contact-us

Attribution

This CoreML conversion is based on the RMBG-2.0 model by BRIA AI.

Related Links