MobileNetV2 introduces an efficient convolutional architecture based on inverted residual blocks and linear bottlenecks, enabling high accuracy at very low computational cost for mobile and embedded vision applications.
Original paper: MobileNetV2: Inverted Residuals and Linear Bottlenecks
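The two ideas named in the paper title can be illustrated with a minimal sketch of one inverted residual block (not the model card's device-compiled artifact): a 1x1 expansion convolution, a 3x3 depthwise convolution, and a linear 1x1 projection, with a skip connection when stride and channel counts allow. The class name and hyperparameters here are illustrative, not taken from the card.

```python
import torch
from torch import nn

class InvertedResidual(nn.Module):
    """Sketch of a MobileNetV2 inverted residual block:
    1x1 expand -> 3x3 depthwise -> linear 1x1 project,
    with a residual skip when input/output shapes match."""
    def __init__(self, in_ch, out_ch, stride=1, expand=6):
        super().__init__()
        hidden = in_ch * expand
        self.use_res = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # Depthwise: one filter per channel (groups == channels).
            nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            # Linear bottleneck: no activation after the projection.
            nn.Conv2d(hidden, out_ch, 1, bias=False),
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_res else out

x = torch.randn(1, 32, 56, 56)
y = InvertedResidual(32, 32)(x)
print(tuple(y.shape))  # (1, 32, 56, 56)
```

Keeping the projection linear (no ReLU6) is what the paper calls the linear bottleneck: activations in the narrow representation are not clipped, which the authors argue preserves information at low channel counts.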
## MobileNetV2
This model uses the MobileNetV2 architecture with width multiplier = 1.0, providing a balanced trade-off between accuracy and efficiency. It is optimized for low-latency, low-power inference and is commonly used for on-device image classification, as well as serving as a backbone for mobile-friendly detection and segmentation models.
Model Configuration:
- Reference implementation: MobileNetV2
- Original weights: MobileNet_V2_Weights.IMAGENET1K_V2
- Input resolution: 3x224x224 (CxHxW)
- Supported Cooper versions:
  - Cooper SDK: [2.5.2]
  - Cooper Foundry: [2.2]
| Model | Device | Model Link |
|---|---|---|
| MobileNetV2 | N1-655 | Model_Link |
| MobileNetV2 | CV72 | Model_Link |
| MobileNetV2 | CV75 | Model_Link |