grkw committed
Commit dc82a3f · 1 Parent(s): c3d4cfc

Add ipynb and more info on the app

Files changed (2)
  1. app.py +1 -1
  2. flower_classifier.ipynb +0 -0
app.py CHANGED
@@ -20,7 +20,7 @@ def predict(img):
     return predictions_with_names
 
 title = "<h1>Flower Classifier</h1>"
-description = "<p>An introductory project for using fastai to fine-tune an image classifier model, Gradio to demo it on a web interface, and HuggingFace Spaces for deployment to production. The Oxford Flowers 102 dataset, created by the University of Oxford’s Visual Geometry Group, consists of 8,189 images spanning 102 flower species, designed to challenge fine-grained image classification models. With varying lighting, backgrounds, and an uneven class distribution, it serves as a benchmark for testing model robustness and optimizing classification accuracy, making it popular for transfer learning experiments with models like VGG16, ResNet, and EfficientNet.</p>"
+description = "<p>An introductory project using fastai for transfer learning using an image classification model, Gradio to demo it on a web app, and HuggingFace Spaces for deployment. I used the ResNet34 architecture on the Oxford Flowers 102 dataset, with a random 80%/20% train/test split, input resizing to 224x224x3, batch data augmentation, a learning rate found by `lr_find()`, only 2 training epochs, and the rest of the hyperparameters as fastai defaults. As someone who's learned neural networks from the bottom up with a strong theoretical foundation, it was fun to see how \"easy\" ML can be for simpler tasks, as the model achieves 91% test accuracy (while a random guess would yield 1% accuracy)!</p><p>Feel free to browse the example images below (10 are from the test set, and 2 are my own out-of-distribution images) or upload your own image of a flower. The model may have overfit to the training distribution, as it doesn't generalize well to images with cluttered backgrounds (see my dahlia photo and my tulip photo) and has 100% certainty of correct guesses for some examples in the test set.</p><p>The Oxford Flowers 102 dataset, created by the University of Oxford’s Visual Geometry Group, consists of 8,189 images spanning 102 flower species, designed to challenge fine-grained image classification models. With varying lighting, backgrounds, and an uneven class distribution, it serves as a benchmark for testing model robustness and optimizing classification accuracy, making it popular for transfer learning experiments with models like VGG16, ResNet, and EfficientNet."
 labels_table = """<p>Classes included in training:<p>
 <table>
 <tr>
flower_classifier.ipynb ADDED
The diff for this file is too large to render. See raw diff
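For context on the diff above: `predict` returns class probabilities keyed by flower names, the `{label: probability}` dictionary shape that Gradio's `Label` output component accepts. A minimal, dependency-free sketch of that final mapping step (the real app gets its probabilities from a fastai `Learner`; the `CLASS_NAMES` list and the softmax step here are illustrative assumptions, not the app's actual code):

```python
import math

# Hypothetical subset of the 102 Oxford Flowers class names,
# standing in for the app's full label vocabulary.
CLASS_NAMES = ["daffodil", "rose", "sunflower", "tulip"]

def softmax(logits):
    """Convert raw model scores into probabilities summing to 1."""
    shifted = [x - max(logits) for x in logits]  # for numerical stability
    exps = [math.exp(x) for x in shifted]
    total = sum(exps)
    return [e / total for e in exps]

def predictions_with_names(logits):
    """Pair each class probability with its flower name, i.e. the
    {name: probability} dict a Gradio Label component expects."""
    probs = softmax(logits)
    return dict(zip(CLASS_NAMES, probs))
```

With fastai, `Learner.predict` already returns a probability tensor, so the app only needs the final `zip` with the vocabulary; this sketch just makes that mapping explicit.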