---
license: apache-2.0
datasets:
- imagenet-1k
metrics:
- accuracy
pipeline_tag: image-classification
tags:
- pytorch
- torch-dag
---
# Model Card for fbnetv3_g_pruned_75
This is a pruned version of the [timm/fbnetv3_g.ra2_in1k](https://huggingface.co/timm/fbnetv3_g.ra2_in1k) model in the [torch-dag](https://github.com/TCLResearchEurope/torch-dag) format.
This model has roughly 75% of the original model's FLOPs with a minimal drop in metrics.
| Model | KMAPPs* | M Parameters | Accuracy (224x224) | Accuracy (240x240) |
| ----------- | ----------- | ----------- | ------------------ | ----------------- |
| **timm/fbnetv3_g.ra2_in1k (baseline)** | 42.75 | 16.6 | 80.61% | 81.32% |
| **fbnetv3_g_pruned_75 (ours)** | 32.15 **(75%)** | 13.1 **(79%)** | 80.30% **(↓ 0.31%)** | 80.25% **(↓ 1.07%)** |
\***KMAPPs**: thousands of FLOPs per input pixel
`KMAPPs(model) = FLOPs(model) / (H * W * 1000)`, where `(H, W)` is the input resolution.
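As a quick sanity check of the formula above, the baseline's 42.75 KMAPPs at a 224x224 input corresponds to roughly 2.1 GFLOPs (the function name below is ours, for illustration):

```python
def kmapps_to_flops(kmapps: float, h: int, w: int) -> float:
    """Invert KMAPPs(model) = FLOPs(model) / (H * W * 1000)."""
    return kmapps * h * w * 1000

# Baseline timm/fbnetv3_g.ra2_in1k at 224x224 input resolution:
print(kmapps_to_flops(42.75, 224, 224))  # 2145024000.0, i.e. ~2.1 GFLOPs
```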
The accuracy was calculated on the ImageNet-1k validation dataset. For details about image pre-processing, please refer to the original repository.
## Model Details
### Model Description
- **Developed by:** [TCL Research Europe](https://github.com/TCLResearchEurope/)
- **Model type:** Classification / feature backbone
- **License:** Apache 2.0
- **Finetuned from model:** [timm/fbnetv3_g.ra2_in1k](https://huggingface.co/timm/fbnetv3_g.ra2_in1k)
### Model Sources
- **Repository:** [timm/fbnetv3_g.ra2_in1k](https://huggingface.co/timm/fbnetv3_g.ra2_in1k)
## How to Get Started with the Model
To load the model, you first need to install the [torch-dag](https://github.com/TCLResearchEurope/torch-dag#3-installation) library, which can be done with `pip`:
```shell
pip install torch-dag
```
Then clone this repository:
```shell
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install
git clone https://huggingface.co/TCLResearchEurope/fbnetv3_g_pruned_75
```
Now you are ready to load the model:
```python
import torch
import torch_dag

# Load the pruned model from the cloned repository directory.
model = torch_dag.io.load_dag_from_path('./fbnetv3_g_pruned_75')
model.eval()

# Run a forward pass on a dummy 224x224 RGB input.
out = model(torch.ones(1, 3, 224, 224))
print(out.shape)
```
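The output above is a batch of raw logits over the ImageNet-1k classes. As an illustration of how such logits are typically interpreted, here is a minimal stdlib-only sketch of converting them to top-k class probabilities (with a real tensor you would instead use `torch.softmax` followed by `torch.topk` on `out`; `topk_probs` is our own helper name):

```python
import math

def topk_probs(logits, k=5):
    """Softmax over raw logits, then return the k most likely
    (class_index, probability) pairs, sorted by probability."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    ranked = sorted(enumerate(probs), key=lambda p: p[1], reverse=True)
    return ranked[:k]

# Toy 3-class example; a real output would have 1000 entries.
print(topk_probs([1.0, 3.0, 2.0], k=2))
```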