
MobileNetV2 introduces an efficient convolutional architecture based on inverted residual blocks and linear bottlenecks, enabling high accuracy at very low computational cost for mobile and embedded vision applications.
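To make the two ideas above concrete, here is a minimal NumPy sketch of a single stride-1 inverted residual block: a 1×1 expansion with ReLU6, a depthwise 3×3 convolution with ReLU6, and a 1×1 projection with no activation (the linear bottleneck), plus the residual skip connection. The weights are random and the shapes are illustrative; this is not the model's actual implementation.

```python
import numpy as np

def relu6(x):
    # ReLU6 activation used throughout MobileNetV2
    return np.clip(x, 0.0, 6.0)

def conv1x1(x, w):
    # Pointwise convolution: (H, W, C_in) @ (C_in, C_out) -> (H, W, C_out)
    return x @ w

def depthwise3x3(x, w):
    # Depthwise 3x3 convolution, stride 1, "same" padding.
    # x: (H, W, C), w: (3, 3, C) -- one filter per channel.
    h, wd, _ = x.shape
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(x)
    for i in range(h):
        for j in range(wd):
            patch = xp[i:i + 3, j:j + 3, :]          # (3, 3, C)
            out[i, j, :] = np.sum(patch * w, axis=(0, 1))
    return out

def inverted_residual(x, t=6):
    # x: (H, W, C). Expand -> depthwise -> linearly project back to C;
    # in the stride-1, equal-channel case the skip connection applies.
    c = x.shape[-1]
    rng = np.random.default_rng(0)           # illustrative random weights
    w_expand = rng.standard_normal((c, c * t)) * 0.1    # 1x1 expansion
    w_dw = rng.standard_normal((3, 3, c * t)) * 0.1     # depthwise 3x3
    w_project = rng.standard_normal((c * t, c)) * 0.1   # 1x1 projection
    h = relu6(conv1x1(x, w_expand))
    h = relu6(depthwise3x3(h, w_dw))
    h = conv1x1(h, w_project)   # no activation: the "linear" bottleneck
    return x + h                # inverted residual skip connection

x = np.random.default_rng(1).standard_normal((8, 8, 16))
y = inverted_residual(x)
print(y.shape)  # (8, 8, 16)
```

Note the inversion relative to a classic residual block: the skip connects the narrow bottleneck tensors, while the expensive depthwise computation happens in the expanded (wide) representation.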

Original paper: MobileNetV2: Inverted Residuals and Linear Bottlenecks

MobileNetV2

This model uses the MobileNetV2 architecture with width multiplier = 1.0, providing a balanced trade-off between accuracy and efficiency. It is optimized for low-latency, low-power inference and is commonly used for on-device image classification, and as a backbone for mobile-friendly detection and segmentation models.
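The width multiplier scales every layer's channel count, with the result typically rounded to a hardware-friendly multiple of 8. The rounding rule below is a common convention in MobileNet implementations, shown here as an illustrative sketch rather than this model's exact code:

```python
def make_divisible(v, divisor=8, min_value=None):
    # Round a scaled channel count to the nearest multiple of `divisor`,
    # never going below `min_value` or dropping more than 10% of `v`.
    if min_value is None:
        min_value = divisor
    new_v = max(min_value, int(v + divisor / 2) // divisor * divisor)
    if new_v < 0.9 * v:
        new_v += divisor
    return new_v

# With width multiplier 1.0 (this model) the base channels are unchanged:
print(make_divisible(32 * 1.0))   # 32
# A narrower multiplier shrinks every layer, e.g. 0.75:
print(make_divisible(32 * 0.75))  # 24
```

With multiplier 1.0 the channel counts match the paper's baseline configuration; smaller multipliers trade accuracy for a roughly quadratic reduction in multiply-adds.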

Model Configuration:

| Model | Device | Model Link |
| --- | --- | --- |
| MobileNetV2 | N1-655 | Model_Link |
| MobileNetV2 | CV72 | Model_Link |
| MobileNetV2 | CV75 | Model_Link |