wuchendi committed
Commit 22b45c9 · 1 Parent(s): da8a9cc

docs(README): Add MODNet model usage instructions and example code


- Added a complete example of portrait matting with Transformers.js
- Showed an example input image and its output mask

Files changed (1): README.md (+81 -1)
README.md CHANGED
@@ -10,4 +10,84 @@ pipeline_tag: image-segmentation
 
 # wuchendi/MODNet (Matting Objective Decomposition Network)
 
- Trimap-Free Portrait Matting in Real Time
+ > Trimap-Free Portrait Matting in Real Time
+
+ - Repository: <https://huggingface.co/wuchendi/MODNet>
+ - SwanLab/MODNet: <https://swanlab.cn/@wudi/MODNet/overview>
+
+ ### 📦 Usage with [Transformers.js](https://www.npmjs.com/package/@huggingface/transformers)
+
+ First, install the `@huggingface/transformers` library from NPM:
+
+ ```bash
+ pnpm add @huggingface/transformers
+ ```
+
+ Then, use the following code to perform **portrait matting** with the `wuchendi/MODNet` model:
+
+ ```ts
+ /* eslint-disable no-console */
+ import { AutoModel, AutoProcessor, RawImage } from '@huggingface/transformers'
+
+ async function main() {
+   try {
+     console.log('🚀 Initializing MODNet...')
+
+     // Load model
+     console.log('📦 Loading model...')
+     const model = await AutoModel.from_pretrained('wuchendi/MODNet', {
+       dtype: 'fp32',
+       progress_callback: (progress: any) => {
+         if (typeof progress.progress === 'number') {
+           console.log(`Model loading progress: ${progress.progress.toFixed(2)}%`)
+         }
+       }
+     })
+     console.log('✅ Model loaded successfully')
+
+     // Load processor
+     console.log('🔧 Loading processor...')
+     const processor = await AutoProcessor.from_pretrained('wuchendi/MODNet')
+     console.log('✅ Processor loaded successfully')
+
+     // Load image from URL
+     const url = 'https://res.cloudinary.com/dhzm2rp05/image/upload/samples/logo.jpg'
+     console.log('🖼️ Loading image:', url)
+     const image = await RawImage.fromURL(url)
+     console.log('✅ Image loaded successfully', `Dimensions: ${image.width}x${image.height}`)
+
+     // Pre-process image
+     console.log('🔄 Preprocessing image...')
+     const { pixel_values } = await processor(image)
+     console.log('✅ Image preprocessing completed')
+
+     // Generate alpha matte
+     console.log('🎯 Generating alpha matte...')
+     const startTime = performance.now()
+     const { output } = await model({ input: pixel_values })
+     const inferenceTime = performance.now() - startTime
+     console.log('✅ Alpha matte generated', `Time: ${inferenceTime.toFixed(2)}ms`)
+
+     // Save output mask (scale the [0, 1] matte to [0, 255] and resize to the input size)
+     console.log('💾 Saving output...')
+     const mask = await RawImage.fromTensor(output[0].mul(255).to('uint8')).resize(image.width, image.height)
+     await mask.save('src/assets/mask.png')
+     console.log('✅ Output saved to src/assets/mask.png')
+   } catch (error) {
+     console.error('❌ Error during processing:', error)
+     throw error
+   }
+ }
+
+ main().catch(console.error)
+ ```
+
+ ### 🖼️ Example Result
+
+ | Input Image | Output Mask |
+ | ----------- | ----------- |
+ | ![Input image](https://cdn.jsdelivr.net/gh/WuChenDi/MODNet/examples/src/assets/Input.jpg) | ![Output mask](https://cdn.jsdelivr.net/gh/WuChenDi/MODNet/examples/src/assets/mask.png) |