Update README.md
For a thorough description of our models trained on the ImageNet-1k dataset, please read our preprint,
*Towards Errorless Training ImageNet-1k*, which is available at [ADD LINK to arXiv preprint].
In `../ImageNet-1k/MATLAB`, we provide parameters for 6 models, which are listed in the table below.

Each model has the following architecture: 17x40x2 = 1360 FNNs, all with a homogeneous architecture (900-256-25 or 900-256-77-25),
working in parallel to produce 1360 predictions, which determine the final prediction using a majority-voting protocol.
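The majority-voting step can be sketched as follows. This is an illustrative Python snippet, not the repository's MATLAB implementation; the function name and toy data are ours:

```python
from collections import Counter

def majority_vote(predictions):
    """Return the class label predicted by the most FNNs.

    predictions: list of integer class labels, one per FNN
    (1360 entries for the 17x40x2 ensemble described above).
    """
    counts = Counter(predictions)
    # most_common(1) returns [(label, count)] for the top label
    return counts.most_common(1)[0][0]

# Toy ensemble of 7 "FNN" predictions instead of 1360
print(majority_vote([3, 3, 5, 3, 2, 5, 3]))  # prints 3
```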
We trained the models on the following transformation of the 64x64 downsampled ImageNet-1k dataset:

- downsampled the images to 32x32, using the mean values of non-overlapping 2x2 grid cells, and
- trimmed off the top row, bottom row, left-most column, and right-most column.

This transformed data results in 30x30 images, and hence 900-dimensional input vectors.
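The two preprocessing steps above can be sketched in plain Python (the repository's actual preprocessing is in MATLAB; this is only an illustration of the transformation on a dummy image):

```python
def pool2x2(img):
    """Downsample a 2-D image by averaging non-overlapping 2x2 grid cells."""
    h, w = len(img), len(img[0])
    return [
        [
            (img[r][c] + img[r][c + 1] + img[r + 1][c] + img[r + 1][c + 1]) / 4.0
            for c in range(0, w, 2)
        ]
        for r in range(0, h, 2)
    ]

def trim_border(img):
    """Drop the top row, bottom row, left-most column, and right-most column."""
    return [row[1:-1] for row in img[1:-1]]

# 64x64 dummy image -> 32x32 after pooling -> 30x30 after trimming
img = [[float(r * 64 + c) for c in range(64)] for r in range(64)]
small = trim_border(pool2x2(img))
print(len(small), len(small[0]))  # prints 30 30
```

Flattening a 30x30 result row by row yields the 900-dimensional input vector each FNN consumes.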
| Model | Training Method | FNN Architecture | Accuracy (%) |
|-------|-----------------|------------------|--------------|