Tags: Image Segmentation · Transformers · PyTorch · ONNX · Safetensors · Transformers.js · remove-background · background · background-removal · vision · custom_code · legal liability
Instructions to use briaai/RMBG-2.0 with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use briaai/RMBG-2.0 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-segmentation", model="briaai/RMBG-2.0", trust_remote_code=True)

# Load model directly
from transformers import AutoModelForImageSegmentation

model = AutoModelForImageSegmentation.from_pretrained(
    "briaai/RMBG-2.0", trust_remote_code=True, dtype="auto"
)
```
- Transformers.js
How to use briaai/RMBG-2.0 with Transformers.js:
```javascript
// npm i @huggingface/transformers
import { pipeline } from '@huggingface/transformers';

// Allocate pipeline
const pipe = await pipeline('image-segmentation', 'briaai/RMBG-2.0');
```
- Inference
- Notebooks
- Google Colab
- Kaggle
Possible to include an ONNX model that supports dynamic batching?
#16
by a0g04p1 - opened
Hi,
I was looking at your ONNX model files, and it looks like only one input image can be passed at a time. Is it possible to enable dynamic batching when creating the ONNX model?
It would be great if you could provide the ONNX generation code, or an ONNX model with batching support.
Thanks!
I did the conversion a long time ago, but if I remember correctly, I ran into some issues when trying to export with a dynamic batch size. Maybe someone else can give it a go :)
It was adapted from the notebook in the BiRefNet README: https://github.com/ZhengPeng7/BiRefNet