---
library_name: pytorch
---

# MobileNetV2

MobileNetV2 introduces an efficient convolutional architecture based on inverted residual blocks and linear bottlenecks, enabling high accuracy at very low computational cost for mobile and embedded vision applications.

Original paper: [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381)

This model uses the MobileNetV2 architecture with a width multiplier of 1.0, providing a balanced trade-off between accuracy and efficiency. It is optimized for low-latency, low-power inference and is commonly used for on-device image classification, as well as a backbone for mobile-friendly detection and segmentation models.
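To illustrate the inverted residual block mentioned above, here is a minimal PyTorch sketch (not the production implementation): a 1x1 pointwise expansion, a 3x3 depthwise convolution, and a 1x1 linear projection (no activation after the bottleneck), with a skip connection when stride is 1 and the channel counts match. The class name and default `expand_ratio=6` follow the paper's convention.

```python
import torch
from torch import nn


class InvertedResidual(nn.Module):
    """Sketch of a MobileNetV2 inverted residual block:
    1x1 expansion -> 3x3 depthwise conv -> 1x1 linear bottleneck."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1, expand_ratio: int = 6):
        super().__init__()
        hidden = in_ch * expand_ratio
        # Residual connection only when the input and output shapes match.
        self.use_residual = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            # 1x1 pointwise expansion to a wider representation
            nn.Conv2d(in_ch, hidden, kernel_size=1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # 3x3 depthwise convolution (groups == channels)
            nn.Conv2d(hidden, hidden, kernel_size=3, stride=stride,
                      padding=1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # 1x1 linear projection back down -- deliberately no activation
            nn.Conv2d(hidden, out_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.block(x)
        return x + out if self.use_residual else out


# Example: a stride-1 block preserves the spatial and channel dimensions.
block = InvertedResidual(32, 32)
y = block(torch.randn(1, 32, 56, 56))
```

The linear (activation-free) projection is the key design choice: applying ReLU after the narrow bottleneck would destroy information in the low-dimensional space, which is what the paper's "linear bottleneck" refers to.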
Model configuration:
- Reference implementation: [MobileNetV2](https://pytorch.org/vision/stable/models/generated/torchvision.models.mobilenet_v2.html)
- Original weights: [MobileNet_V2_Weights.IMAGENET1K_V2](https://download.pytorch.org/models/mobilenet_v2-7ebf99e0.pth)
- Input resolution: 3x224x224 (CxHxW)
- Supported Cooper versions:
  - Cooper SDK: [2.5.2]
  - Cooper Foundry: [2.2]
| Model | Device | Model Link |
| :-----: | :-----: | :-----: |
| MobileNetV2 | N1-655 | [Model_Link](https://huggingface.co/Ambarella/MobileNetV2/blob/main/n1-655_mobilenet_v2.bin) |
| MobileNetV2 | CV72 | [Model_Link](https://huggingface.co/Ambarella/MobileNetV2/blob/main/cv72_mobilenet_v2.bin) |
| MobileNetV2 | CV75 | [Model_Link](https://huggingface.co/Ambarella/MobileNetV2/blob/main/cv75_mobilenet_v2.bin) |