Files changed (4)
  1. README.md +57 -1
  2. config.json +23 -0
  3. gitattributes.txt +35 -0
  4. preprocessor_config.json +27 -0
README.md CHANGED
@@ -1,3 +1,59 @@
  ---
- license: unknown
+ library_name: transformers
+ pipeline_tag: image-segmentation
+ tags:
+ - image-segmentation
+ - mask-generation
+ - transformers.js
+ - vision
+ - background-removal
+ - portrait-matting
+ license: apache-2.0
+ language:
+ - en
  ---
+ # U-2-Netp
+
+ ## Model Description
+ U-2-Netp is a lightweight variant of the U-2-Net model for image segmentation, in particular mask generation. It retains the core architectural design of U-2-Net while being optimised for faster inference and reduced memory usage.
+
+ ## Usage
+ Perform mask generation with `BritishWerewolf/U-2-Netp`.
+
+ ### Example
+ ```javascript
+ import { AutoModel, AutoProcessor, RawImage } from '@huggingface/transformers';
+
+ const img_url = 'https://huggingface.co/ybelkada/segment-anything/resolve/main/assets/car.png';
+ const image = await RawImage.read(img_url);
+
+ const processor = await AutoProcessor.from_pretrained('BritishWerewolf/U-2-Netp');
+ const processed = await processor(image);
+
+ const model = await AutoModel.from_pretrained('BritishWerewolf/U-2-Netp', {
+     dtype: 'fp32',
+ });
+
+ const output = await model({ input: processed.pixel_values });
+ // {
+ //   mask: Tensor {
+ //     dims: [ 1, 320, 320 ],
+ //     type: 'uint8',
+ //     data: Uint8Array(102400) [ ... ],
+ //     size: 102400
+ //   }
+ // }
+ ```
+
+ ## Model Architecture
+ U-2-Netp is a simplified version of the original U-2-Net architecture, designed to be more lightweight while still achieving strong segmentation performance. The model consists of several stages of down-sampling and up-sampling paths, using Residual U-blocks (RSU) for enhanced feature representation.
+
+ ### Inference
+ To use the model for inference, follow the example above. The `AutoProcessor` and `AutoModel` classes from Transformers.js make it easy to load the processor and model.
+
+ ## Credits
+ * [`rembg`](https://github.com/danielgatis/rembg) for the ONNX model.
+ * The original U-2-Net authors: https://github.com/xuebinqin/U-2-Net.
+
+ ## Licence
+ This model is licensed under the Apache License 2.0 to match the original U-2-Net model.
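
The `mask` tensor returned above is a 320×320 `uint8` saliency map. A minimal sketch of compositing it back onto a same-sized RGB image for background removal, using plain typed arrays in place of the library's `Tensor`/`RawImage` types (the `applyMask` helper is illustrative, not part of Transformers.js):

```javascript
// Composite interleaved RGB pixels with a uint8 mask (0-255) into
// RGBA pixels, so masked-out regions become transparent. Both inputs
// are assumed to cover the same width x height.
function applyMask(rgb, mask) {
  const rgba = new Uint8Array(mask.length * 4);
  for (let i = 0; i < mask.length; i++) {
    rgba[i * 4 + 0] = rgb[i * 3 + 0];
    rgba[i * 4 + 1] = rgb[i * 3 + 1];
    rgba[i * 4 + 2] = rgb[i * 3 + 2];
    rgba[i * 4 + 3] = mask[i]; // alpha taken from the predicted mask
  }
  return rgba;
}
```

In practice the mask would first be resized from 320×320 back to the source image's dimensions before compositing.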
config.json ADDED
@@ -0,0 +1,23 @@
+ {
+   "_name_or_path": "u2netp",
+   "model_type": "u2net",
+   "architectures": [
+     "U2NetModel"
+   ],
+   "transformers.js_config": {
+     "dtype": "fp32"
+   },
+   "input_name": "input.1",
+   "input_shape": [1, 3, 320, 320],
+   "output_composite": "1959",
+   "output_names": [
+     "1959",
+     "1960",
+     "1961",
+     "1962",
+     "1963",
+     "1964",
+     "1965"
+   ],
+   "output_shape": [1, 320, 320]
+ }
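
The seven `output_names` presumably correspond to U-2-Net's fused map plus its six side outputs, with `output_composite` naming the fused map that consumers should read. A sketch of selecting it, assuming the raw ONNX session returns an object keyed by output name (the `compositeOutput` helper is hypothetical, not a Transformers.js API):

```javascript
// The relevant fields from config.json above.
const config = {
  output_composite: '1959',
  output_names: ['1959', '1960', '1961', '1962', '1963', '1964', '1965'],
};

// Pick the fused ("composite") saliency map out of the named outputs.
function compositeOutput(outputs, cfg = config) {
  if (!(cfg.output_composite in outputs)) {
    throw new Error(`missing composite output ${cfg.output_composite}`);
  }
  return outputs[cfg.output_composite];
}
```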
gitattributes.txt ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
preprocessor_config.json ADDED
@@ -0,0 +1,27 @@
+ {
+   "processor_class": "U2NetProcessor",
+   "image_processor_type": "U2NetImageProcessor",
+   "do_convert_rgb": true,
+   "do_normalize": true,
+   "do_pad": true,
+   "do_rescale": true,
+   "do_resize": true,
+   "keep_aspect_ratio": true,
+   "image_mean": [
+     0.485,
+     0.456,
+     0.406
+   ],
+   "image_std": [
+     0.229,
+     0.224,
+     0.225
+   ],
+   "pad_size": {
+     "width": 320,
+     "height": 320
+   },
+   "size": {
+     "longest_edge": 320
+   }
+ }
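
Taken together, these settings describe a resize-to-longest-edge, pad-to-320×320, rescale-to-[0, 1], then mean/std normalise pipeline (the mean and std are the usual ImageNet values). A sketch of that arithmetic under this reading of the config (function names here are illustrative, not the Transformers.js implementation):

```javascript
const SIZE = 320;                       // size.longest_edge / pad_size
const MEAN = [0.485, 0.456, 0.406];     // image_mean, per RGB channel
const STD  = [0.229, 0.224, 0.225];     // image_std, per RGB channel

// do_resize + keep_aspect_ratio: scale so the longest edge becomes
// SIZE; the shorter edge is then padded out to SIZE (do_pad).
function resizeDims(width, height, longestEdge = SIZE) {
  const scale = longestEdge / Math.max(width, height);
  return {
    width: Math.round(width * scale),
    height: Math.round(height * scale),
  };
}

// do_rescale + do_normalize for one uint8 value of channel ch.
function normalise(value, ch) {
  const rescaled = value / 255;               // do_rescale to [0, 1]
  return (rescaled - MEAN[ch]) / STD[ch];     // do_normalize
}
```

For example, a 640×480 input would be resized to 320×240, then padded to the 320×320 tensor shape that `input_shape` in config.json expects.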