Image Segmentation
Transformers
PyTorch
ONNX
Safetensors
Transformers.js
English
segformer
vision
nvidia/mit-b5
How to use jonathandinu/face-parsing with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-segmentation", model="jonathandinu/face-parsing")
```

```python
# Load model directly
from transformers import AutoImageProcessor, SegformerForSemanticSegmentation

processor = AutoImageProcessor.from_pretrained("jonathandinu/face-parsing")
model = SegformerForSemanticSegmentation.from_pretrained("jonathandinu/face-parsing")
```
How to use jonathandinu/face-parsing with Transformers.js:
```javascript
// npm i @huggingface/transformers
import { pipeline } from '@huggingface/transformers';

// Allocate pipeline
const pipe = await pipeline('image-segmentation', 'jonathandinu/face-parsing');
```
How to use this model
#2
by cenahwang - opened
Hi, I'm not sure how to run inference with this model. Thanks!
Hi, I'm wondering about this too. Did you find a solution?
```python
from transformers import SegformerImageProcessor, AutoModelForSemanticSegmentation
from PIL import Image
import requests
import matplotlib.pyplot as plt
import torch.nn as nn

processor = SegformerImageProcessor.from_pretrained("jonathandinu/face-parsing")
model = AutoModelForSemanticSegmentation.from_pretrained("jonathandinu/face-parsing")

url = "https://plus.unsplash.com/premium_photo-1673210886161-bfcc40f54d1f?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxzZWFyY2h8MXx8cGVyc29uJTIwc3RhbmRpbmd8ZW58MHx8MHx8&w=1000&q=80"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits.cpu()  # shape: (batch, num_labels, h, w) at reduced resolution

# Upsample the logits to the original image size
# (PIL's image.size is (width, height), so reverse it for interpolate)
upsampled_logits = nn.functional.interpolate(
    logits,
    size=image.size[::-1],
    mode="bilinear",
    align_corners=False,
)

# Per-pixel class-id map
pred_seg = upsampled_logits.argmax(dim=1)[0]
pred_seg_np = pred_seg.detach().numpy()

plt.imshow(pred_seg_np)
plt.axis("off")
plt.show()
```
Thank you very much, I'm able to get the output now.
I'm also wondering how to get the segmentation parts separately, with their names, like the hosted widget on Hugging Face does.
For example, left eye, nose, etc.
@Senem check config.json in the repo files; the label definitions are in the id2label property.
Then just filter the output by the corresponding class id.
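As a sketch of that filtering step: assuming `pred_seg_np` is the per-pixel class-id array from the snippet above, you can build one boolean mask per part. The `id2label` dict below is a hypothetical subset for illustration only — check `config.json` (or use `model.config.id2label` at runtime) for the real mapping.

```python
import numpy as np

# Hypothetical id2label subset -- the real mapping lives in the model's config.json
# (at runtime, prefer model.config.id2label over hardcoding).
id2label = {0: "background", 1: "skin", 2: "nose", 4: "l_eye", 5: "r_eye"}

def masks_by_label(pred_seg_np, id2label):
    """Return {label_name: boolean mask} for every class present in the prediction."""
    return {
        name: pred_seg_np == class_id
        for class_id, name in id2label.items()
        if (pred_seg_np == class_id).any()
    }

# Tiny fake prediction map standing in for the model output
pred_seg_np = np.array([[0, 0, 2],
                        [1, 1, 2],
                        [4, 1, 5]])

masks = masks_by_label(pred_seg_np, id2label)
print(sorted(masks))        # labels actually present in the prediction
print(masks["nose"].sum())  # number of pixels assigned to "nose" -> 2
```

Each mask can then be used to crop or recolor just that region of the original image, e.g. zeroing out every pixel where the mask is False.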
Finally got around to adding a proper README. Thanks for bearing with me, and for the examples/help here!
jonathandinu changed discussion status to closed