sdtemple committed
Commit d0f24d5 · verified · 1 Parent(s): 2cd38a9

Push model using huggingface_hub.
Files changed (3)
  1. README.md +6 -65
  2. config.json +7 -8
  3. model.safetensors +1 -1
README.md CHANGED
@@ -1,69 +1,10 @@
  ---
- license: mit
- datasets:
- - sdtemple/colored-shapes
- language:
- - en
- metrics:
- - accuracy
- - precision
- - recall
- - roc_auc
- pipeline_tag: image-classification
  tags:
- - tutorial
+ - model_hub_mixin
+ - pytorch_model_hub_mixin
  ---

- This model predicts the shape (circle, rectangle, diamond, or triangle) of the 1 colored shape (8 colors) in a 224 x 224 x 3 image.
-
- This model is a part of a how to tutorial on fitting PyTorch models.
-
- The model is trained on 2000 examples for each color and shape combo (64,000 samples in total) simulated according to [https://github.com/sdtemple/zootopia3](https://github.com/sdtemple/zootopia3).
-
- The model is tested/evaluated on the dataset [https://huggingface.co/datasets/sdtemple/colored-shapes](https://huggingface.co/datasets/sdtemple/colored-shapes), which has slightly smaller shapes simulated (out of distribution) relative to the training data. The metrics below can be +- a few points depending on random seed.
-
- - Accuracy: 75%
- - Min precision (triangle): 57%
- - Max precision (rectangle): 98%
- - Min recall (diamond): 66%
- - Max recall (triangle): 84%
- - AUROC (macro-averaged): 92%
- - Min AUROC (diamond): 90%
- - Max AUROC (circle): 94%
-
- Compared to [https://huggingface.co/sdtemple/color-prediction-model](https://huggingface.co/sdtemple/color-prediction-model), it is harder to predict the shape than the color of the object.
-
- The model architecture is the following. In light experimentation, I found it important to have multiple convolutions and that too many parameters leads to noisy validation losses by epoch.
-
- ```
- MyCNN(
-   (conv_block): Sequential(
-     (0): Conv2d(3, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
-     (1): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
-     (2): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
-     (3): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
-     (4): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
-     (5): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
-     (6): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
-     (7): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
-     (8): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
-     (9): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
-     (10): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
-     (11): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
-     (12): Conv2d(16, 16, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
-     (13): BatchNorm2d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
-     (14): AvgPool2d(kernel_size=2, stride=2, padding=0)
-   )
-   (linear_block): Sequential(
-     (0): Linear(in_features=784, out_features=16, bias=True)
-     (1): BatchNorm1d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
-     (2): ReLU()
-     (3): Dropout(p=0.2, inplace=False)
-     (4): Linear(in_features=16, out_features=16, bias=True)
-     (5): BatchNorm1d(16, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
-     (6): ReLU()
-     (7): Dropout(p=0.2, inplace=False)
-   )
-   (output_block): Linear(in_features=16, out_features=4, bias=True)
- )
- ```
+ This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
+ - Code: [More Information Needed]
+ - Paper: [More Information Needed]
+ - Docs: [More Information Needed]
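As a sanity check on the removed architecture listing: `in_features=784` in the linear block follows from the pooling arithmetic. The five stride-2 pooling layers (four `MaxPool2d` plus the final `AvgPool2d`) halve the 224×224 input five times, and the last conv keeps 16 channels. A quick check in plain Python:

```python
# Derive the flattened feature size feeding the first Linear layer.
# 224 is halved by each of the 5 stride-2 pooling layers; the final
# conv layer outputs 16 channels.
height = width = 224
channels = 16
num_pools = 5  # 4 MaxPool2d + 1 AvgPool2d, each kernel_size=2, stride=2

for _ in range(num_pools):
    height //= 2
    width //= 2

in_features = channels * height * width
print(height, width, in_features)  # 7 7 784
```

This matches `Linear(in_features=784, out_features=16)` in the printed model.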
config.json CHANGED
@@ -1,16 +1,15 @@
  {
- "model_type": "custom_pytorch_model",
- "num_classes": 8,
+ "dropout": 0.2,
  "height": 224,
- "width": 224,
- "num_input_channels": 3,
+ "hidden_dim": 16,
+ "kernel_size": 3,
+ "num_classes": 4,
  "num_cnn_channels": 16,
  "num_cnn_layers": 3,
- "hidden_dim": 16,
+ "num_input_channels": 3,
  "num_layers": 1,
- "kernel_size": 3,
- "stride": 1,
  "padding": 1,
  "pooling": 2,
- "dropout": 0.2
+ "stride": 1,
+ "width": 224
  }
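Two things are going on in the config.json diff: `num_classes` is corrected from 8 (the number of colors) to 4 (the number of shapes this model actually predicts), and the keys are reordered alphabetically, which is consistent with a serializer that sorts keys. The exact writer huggingface_hub uses is an assumption here; this only illustrates the ordering:

```python
import json

# Same key/value pairs as the updated config.json, written in arbitrary order.
config = {
    "num_classes": 4,  # corrected from 8: four shapes, not eight colors
    "height": 224,
    "width": 224,
    "num_input_channels": 3,
    "num_cnn_channels": 16,
    "num_cnn_layers": 3,
    "hidden_dim": 16,
    "num_layers": 1,
    "kernel_size": 3,
    "stride": 1,
    "padding": 1,
    "pooling": 2,
    "dropout": 0.2,
}

# sort_keys=True reproduces the alphabetical layout seen in the new file,
# with "dropout" first and "width" last.
print(json.dumps(config, indent=2, sort_keys=True))
```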
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5d306b2d9c887c34cd57c40d7d4071b80ebde619a64b40ca06deac2562d92eba
+ oid sha256:fdc3af4f663dd7fc1d86088921aa6d93a75d5db77a419efcb3e97d5329e18886
  size 96584
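The model.safetensors change only swaps the Git LFS pointer's `oid`: the weights changed while the serialized size stayed at 96584 bytes. A Git LFS `oid sha256:<hex>` is just the SHA-256 of the file's raw bytes, so a downloaded file can be checked against its pointer with the standard library (the byte string below is a stand-in, not the real weights):

```python
import hashlib

def lfs_oid(data: bytes) -> str:
    # Git LFS pointers record the SHA-256 of the raw file contents
    # as "oid sha256:<hex digest>".
    return hashlib.sha256(data).hexdigest()

# Stand-in bytes for illustration; with the real model.safetensors this
# should print the oid from the new pointer above.
print(lfs_oid(b"example weights"))
```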