Levi-Heath committed on
Commit
1e71483
·
verified ·
1 Parent(s): 2b3a9d9

Update README.md

Files changed (1)
  1. README.md +11 -8
README.md CHANGED
@@ -12,17 +12,20 @@ metrics:
 - accuracy
 ---
 
-### Description of the ImageNet-1k Featured Model
-For a thorough description of our models trained on the ImageNet-1k dataset, please read our preprint,
-*Towards Errorless Training ImageNet-1k*, which is available at [ADD LINK to arXiv preprint].
-In ../ImageNet-1k/MATLAB, we give parameters for 6 models, which are listed in the table below.
-Each model has the following architecture: 17x40x2=1360 FNNs, all with homogeneous architecture (900-256-25 or 900-256-77-25),
-working in parrallel to produce 1360 predictions which determine a final prediction using the majority voting protocol.
-We trained models using the following transformation of the 64x64 downsampled ImageNet-1k dataset:
+# Towards Errorless Training ImageNet-1k
+This repository hosts code and models for the manuscript, *Towards Errorless Training ImageNet-1k*, which is available at [ADD LINK to arXiv preprint].
+We give 6 models trained on the ImageNet-1k dataset, which we list in the table below.
+Each model is a featured model of architecture 17x40x2.
+That is, each model is made up of 17x40x2=1360 FNNs, all with homogeneous architecture (900-256-25 or 900-256-77-25),
+working in parallel to produce 1360 predictions which determine a final prediction using the majority voting protocol.
+
+We trained the 6 models using the following transformation of the 64x64 downsampled ImageNet-1k dataset:
 - downsampled images to 32x32, using the mean values of non-overlapping 2x2 grid cells and
 - trimmed off top row, bottom row, left-most column, and right-most column.
 
-This transformed data results in 30x30 images, hence 900-dimensional input vectors.
+This transformed data results in 30x30 images, hence 900-dimensional input vectors.
+
+For a thorough description of our models trained on the ImageNet-1k dataset, please read our preprint linked above.
 
 
 | Model | Training Method | FNN Architecture | Accuracy (%) |
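The preprocessing transformation and majority-voting protocol described in the updated README can be sketched as follows. This is a minimal NumPy illustration under our own assumptions; the function names are ours, not taken from the repository's MATLAB code:

```python
import numpy as np

def preprocess(image64):
    """Transform one 64x64 image into a 900-dimensional input vector:
    mean-pool non-overlapping 2x2 cells to 32x32, then trim the top row,
    bottom row, left-most column, and right-most column to get 30x30."""
    img = np.asarray(image64, dtype=float).reshape(64, 64)
    # Mean value of each non-overlapping 2x2 grid cell -> 32x32.
    pooled = img.reshape(32, 2, 32, 2).mean(axis=(1, 3))
    # Trim one row/column on every side -> 30x30.
    trimmed = pooled[1:-1, 1:-1]
    return trimmed.ravel()  # 900-dimensional vector

def majority_vote(predictions):
    """Final label is the most common prediction among the FNN outputs
    (1360 of them in the featured 17x40x2 models)."""
    labels, counts = np.unique(np.asarray(predictions), return_counts=True)
    return labels[np.argmax(counts)]
```

For example, `preprocess` applied to any 64x64 array yields a vector of shape `(900,)`, matching the 900-unit input layer of the 900-256-25 and 900-256-77-25 FNNs.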