---
library_name: litert
pipeline_tag: image-classification
tags:
  - vision
  - image-classification
  - google
  - computer-vision
datasets:
  - imagenet-1k
base_model:
  - google/efficientnet-b2
model-index:
  - name: EfficientNet_B2
    results:
      - task:
          type: image-classification
          name: Image Classification
        dataset:
          name: ImageNet-1k
          type: imagenet-1k
          config: default
          split: validation
        metrics:
          - name: Top 1 Accuracy (Full Precision)
            type: accuracy
            value: 0.8061
          - name: Top 5 Accuracy (Full Precision)
            type: accuracy
            value: 0.953
          - name: Top 1 Accuracy (Dynamic Quantized wi8 afp32)
            type: accuracy
            value: 0.8012
          - name: Top 5 Accuracy (Dynamic Quantized wi8 afp32)
            type: accuracy
            value: 0.9501
---

# EfficientNet B2

EfficientNet B2 model pre-trained on ImageNet-1k. Originally introduced by Tan and Le in the influential paper *EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks*, this model uses compound scaling to systematically balance network depth, width, and resolution, enabling higher accuracy with significantly better efficiency than traditional architectures.
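
For reference, the compound scaling rule uses a single coefficient $\phi$ to scale depth, width, and input resolution together (a brief restatement of the formulation in the paper):

$$
d = \alpha^{\phi}, \qquad w = \beta^{\phi}, \qquad r = \gamma^{\phi}, \qquad \text{s.t. } \alpha \cdot \beta^{2} \cdot \gamma^{2} \approx 2, \quad \alpha, \beta, \gamma \geq 1
$$

where $d$, $w$, and $r$ scale network depth, channel width, and image resolution respectively. The paper finds $\alpha \approx 1.2$, $\beta \approx 1.1$, $\gamma \approx 1.15$ by grid search on the B0 baseline, and the B1 through B7 variants (including B2) correspond to increasing values of $\phi$.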

## Model description

The model was converted from a PyTorch Vision (torchvision) checkpoint.

The original model has:
- acc@1 (on ImageNet-1K): 80.608%
- acc@5 (on ImageNet-1K): 95.31%
- num_params: 9,109,994
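
As a quick cross-check of the source checkpoint, the same numbers can be reproduced from torchvision directly (a minimal sketch, assuming a recent torchvision release that exposes the `EfficientNet_B2_Weights` enum):

```python
# Cross-check the torchvision checkpoint that the LiteRT file was converted from.
import torchvision.models as models

weights = models.EfficientNet_B2_Weights.IMAGENET1K_V1  # pretrained ImageNet-1k weights
model = models.efficientnet_b2(weights=weights)

# Parameter count of the original model (expected: 9,109,994).
print(sum(p.numel() for p in model.parameters()))

# Evaluation-time preprocessing (resize/crop sizes, mean/std) mirrored by the script below.
print(weights.transforms())
```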

## Intended uses & limitations

The model files were converted from pretrained PyTorch Vision weights. They may be subject to licenses or terms and conditions derived from PyTorch Vision and from the dataset used for training; it is your responsibility to determine whether you have permission to use the models for your use case.

## Use
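
The script below downloads the LiteRT model file from this repository and classifies a single image with the LiteRT `CompiledModel` API. It assumes the `ai-edge-litert`, `huggingface_hub`, `pillow`, and `numpy` packages are installed (for example via pip).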

```python
#!/usr/bin/env python3
import argparse
import json

import numpy as np
from PIL import Image
from huggingface_hub import hf_hub_download
from ai_edge_litert.compiled_model import CompiledModel


def preprocess(img: Image.Image) -> np.ndarray:
    """Resize the short side to 288, center-crop to 288x288, normalize, and return CHW float32."""
    img = img.convert("RGB")
    w, h = img.size
    s = 288
    # Resize so that the shorter side equals 288 pixels.
    if w < h:
        img = img.resize((s, int(round(h * s / w))), Image.BICUBIC)
    else:
        img = img.resize((int(round(w * s / h)), s), Image.BICUBIC)
    # Center crop to 288x288.
    left = (img.size[0] - s) // 2
    top = (img.size[1] - s) // 2
    img = img.crop((left, top, left + s, top + s))

    # Scale to [0, 1] and normalize with the ImageNet mean/std.
    x = np.asarray(img, dtype=np.float32) / 255.0
    x = (x - np.array([0.485, 0.456, 0.406], dtype=np.float32)) / np.array(
        [0.229, 0.224, 0.225], dtype=np.float32
    )
    # HWC -> CHW.
    return np.transpose(x, (2, 0, 1))


def main():
    ap = argparse.ArgumentParser()
    ap.add_argument("--image", required=True)
    args = ap.parse_args()

    # Download the LiteRT model and the ImageNet-1k label map.
    model_path = hf_hub_download("litert-community/efficientnet_b2", "efficientnet_b2.tflite")
    labels_path = hf_hub_download(
        "huggingface/label-files", "imagenet-1k-id2label.json", repo_type="dataset"
    )
    with open(labels_path, "r", encoding="utf-8") as f:
        id2label = {int(k): v for k, v in json.load(f).items()}

    img = Image.open(args.image)
    x = preprocess(img)

    # Compile the model and allocate its input/output buffers.
    model = CompiledModel.from_file(model_path)
    inp = model.create_input_buffers(0)
    out = model.create_output_buffers(0)

    # Write the preprocessed image into the input buffer and run inference.
    inp[0].write(x)
    model.run_by_index(0, inp, out)

    # Read the logits back: buffer size in bytes divided by 4 gives the number of float32 values.
    req = model.get_output_buffer_requirements(0, 0)
    y = out[0].read(req["buffer_size"] // np.dtype(np.float32).itemsize, np.float32)

    pred = int(np.argmax(y))
    label = id2label.get(pred, f"class_{pred}")

    print(f"Top-1 class index: {pred}")
    print(f"Top-1 label: {label}")


if __name__ == "__main__":
    main()
```
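
Assuming the script above is saved as `classify.py` (an illustrative name), it can be invoked as `python classify.py --image path/to/image.jpg`. If the top-5 predictions with probabilities are wanted rather than only the top-1 class, the raw logits `y` and the `id2label` map from the script can be post-processed with a softmax, for example:

```python
import numpy as np

def top5(y: np.ndarray, id2label: dict) -> list:
    """Return the five most likely (label, probability) pairs from raw logits."""
    p = np.exp(y - y.max())        # numerically stable softmax
    p /= p.sum()
    idx = np.argsort(p)[::-1][:5]  # indices of the five largest probabilities
    return [(id2label.get(int(i), f"class_{int(i)}"), float(p[i])) for i in idx]
```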

## BibTeX entry and citation info

```bibtex
@article{Tan2019EfficientNetRM,
  title={EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks},
  author={Mingxing Tan and Quoc V. Le},
  journal={ArXiv},
  year={2019},
  volume={abs/1905.11946}
}
```