---
license: other
tags:
- vision
- image-segmentation
datasets:
- coco
widget:
- src: http://images.cocodataset.org/val2017/000000039769.jpg
  example_title: Cats
---

# Mask2Former

Mask2Former model trained on COCO panoptic segmentation (large-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/).

Disclaimer: The team releasing Mask2Former did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA, [MaskFormer](https://arxiv.org/abs/2107.06278), both in terms of performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance without introducing additional computation, and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.

![model image](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/mask2former_architecture.jpg)

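To make the "masks + labels" paradigm concrete, here is a minimal sketch of how per-query class probabilities and binary mask probabilities can be combined into a dense semantic map (toy random tensors stand in for real model outputs; this roughly mirrors what the processor's semantic postprocessing does internally):

```python
import torch

# toy shapes: 2 queries, 3 classes (+ 1 "no object" class), 4x4 masks
batch_size, num_queries, num_labels, height, width = 1, 2, 3, 4, 4
class_queries_logits = torch.randn(batch_size, num_queries, num_labels + 1)
masks_queries_logits = torch.randn(batch_size, num_queries, height, width)

# drop the trailing "no object" class and turn logits into probabilities
masks_classes = class_queries_logits.softmax(dim=-1)[..., :-1]
masks_probs = masks_queries_logits.sigmoid()

# weight each query's mask by its class probabilities and sum over queries,
# giving one score map per class; argmax then yields a semantic map
semantic_logits = torch.einsum("bqc,bqhw->bchw", masks_classes, masks_probs)
semantic_map = semantic_logits.argmax(dim=1)  # shape (batch_size, height, width)
```
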
## Intended uses & limitations

You can use this particular checkpoint for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other fine-tuned versions on a task that interests you.

### How to use

Here is how to use this model:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

# load Mask2Former fine-tuned on COCO panoptic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-large-coco-panoptic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-large-coco-panoptic")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# model predicts class_queries_logits of shape `(batch_size, num_queries, num_labels + 1)`
# and masks_queries_logits of shape `(batch_size, num_queries, height, width)`
class_queries_logits = outputs.class_queries_logits
masks_queries_logits = outputs.masks_queries_logits

# you can pass them to processor for postprocessing
result = processor.post_process_panoptic_segmentation(outputs, target_sizes=[image.size[::-1]])[0]
# we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs)
predicted_panoptic_map = result["segmentation"]
```
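
Besides the segmentation map, `post_process_panoptic_segmentation` also returns a `segments_info` list describing each detected segment. Continuing from the snippet above, a quick sketch of how to print the detected segments, mapping each segment's `label_id` to a class name via `model.config.id2label`:

```python
# each segment has an id (matching the values in the panoptic map),
# a class label and a confidence score
for segment in result["segments_info"]:
    label = model.config.id2label[segment["label_id"]]
    print(f"segment {segment['id']}: {label} (score: {segment['score']:.3f})")
```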

For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former).