metrics:
- accuracy
---

# Towards Errorless Training ImageNet-1k

This repository hosts code and models for the manuscript, *Towards Errorless Training ImageNet-1k*, which is available at [ADD LINK to arXiv preprint].
We provide 6 models trained on the ImageNet-1k dataset, which we list in the table below.
Each model is a featured model of architecture 17x40x2.
That is, each model is made up of 17x40x2 = 1360 FNNs, all with homogeneous architecture (900-256-25 or 900-256-77-25),
working in parallel to produce 1360 predictions, which determine a final prediction using the majority-voting protocol.
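
The majority-voting step can be sketched as follows. This is a minimal illustration, not code from this repository; the function name `majority_vote` and the NumPy-based implementation are our own:

```python
import numpy as np

def majority_vote(predictions):
    """Combine per-FNN class predictions into one final prediction.

    predictions: 1-D integer array of predicted class labels, one per
    FNN (1360 entries for a 17x40x2 model).  Returns the class label
    predicted most often across the ensemble.
    """
    values, counts = np.unique(predictions, return_counts=True)
    return values[np.argmax(counts)]

# Toy example: 5 "FNN" predictions over ImageNet-1k class indices.
print(majority_vote(np.array([3, 3, 7, 3, 7])))  # -> 3
```

With 1360 voters, ties are possible in principle; `np.argmax` breaks them toward the smallest class index here, and a real implementation would need to pick its own tie-breaking rule.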

We trained the 6 models using the following transformation of the 64x64 downsampled ImageNet-1k dataset:
- downsampled images to 32x32, using the mean values of non-overlapping 2x2 grid cells, and
- trimmed off the top row, bottom row, left-most column, and right-most column.

This transformation results in 30x30 images, hence 900-dimensional input vectors.
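
The preprocessing described above (2x2 mean pooling of a 64x64 image, then trimming a one-pixel border) can be sketched as follows. This is our own illustration assuming a single-channel image, not the repository's actual code:

```python
import numpy as np

def transform(image):
    """Preprocess one 64x64 single-channel image as described above.

    1. Downsample to 32x32 by averaging non-overlapping 2x2 grid cells.
    2. Trim the top row, bottom row, left-most and right-most columns,
       giving a 30x30 image.
    Returns the image flattened into a 900-dimensional vector.
    """
    assert image.shape == (64, 64)
    pooled = image.reshape(32, 2, 32, 2).mean(axis=(1, 3))  # 2x2 mean pooling -> 32x32
    trimmed = pooled[1:-1, 1:-1]                            # drop one-pixel border -> 30x30
    return trimmed.reshape(-1)                              # flatten -> 900-dim vector

x = transform(np.random.rand(64, 64))
print(x.shape)  # (900,)
```

The `reshape(32, 2, 32, 2)` trick groups each non-overlapping 2x2 cell onto its own pair of axes so a single `mean` call performs the pooling without loops.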

For a thorough description of our models trained on the ImageNet-1k dataset, please read our preprint linked above.

| Model | Training Method | FNN Architecture | Accuracy (%) |