---
license: mit
---

This is a prototype linear stack meant to benchmark the newest CantorConv layer implementation, with standard linear layers replaced by CantorLinear where available.
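
The "replace standard linear layers with CantorLinear when available" pattern can be sketched as a try/except factory. `cantor_layers` and the `CantorLinear` import path below are hypothetical names for illustration; the real layer lives in the geovocab2 repo linked at the end of this README, and `PlainLinear` here is only a minimal stand-in for a standard linear layer:

```python
import random

class PlainLinear:
    """Minimal stand-in for a standard linear layer: y = W x + b."""
    def __init__(self, in_features, out_features):
        self.w = [[random.gauss(0.0, 0.02) for _ in range(in_features)]
                  for _ in range(out_features)]
        self.b = [0.0] * out_features

    def __call__(self, x):
        # One dot product per output neuron, plus bias.
        return [sum(wi * xi for wi, xi in zip(row, x)) + bi
                for row, bi in zip(self.w, self.b)]

def make_linear(in_features, out_features):
    """Prefer CantorLinear when its module is importable, else fall back
    to a standard linear layer."""
    try:
        from cantor_layers import CantorLinear  # hypothetical import path
        return CantorLinear(in_features, out_features)
    except ImportError:
        return PlainLinear(in_features, out_features)
```

With a factory like this, the same benchmark stack can be built for both runs, and the only variable is which layer class resolves at import time.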

Preliminary MNIST runs show that CantorLinear, which embeds a Cantor-fingerprinted direct neuron learning mask with alpha weights, changes accuracy by roughly ±4% (commonly +2% over standard linear layers) and changes training time by about 4% in either direction (typically ~4% faster, ~4% slower in the worst case).

Preliminary MNIST runs also show that CantorConv with its Cantor-fingerprinted learning mask reaches accuracy nearly identical to a traditional convolution, within ±2% either way depending on noise. Next I will test a Cantor ResNet by feeding it orderly ImageNet features from my repos to see how it fares:
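
This README does not spell out how the Cantor fingerprint is built, so here is a minimal sketch under two loud assumptions: that the mask follows the classic middle-thirds Cantor rule over neuron indices (an index survives if its base-3 expansion avoids the digit 1), and that the alpha weights are per-neuron scalars blending the mask into the effective weights. Both function names are hypothetical:

```python
def cantor_mask(n):
    """0/1 mask over n neuron indices: index i survives if its base-3
    representation contains no digit 1 (middle-thirds Cantor rule).
    This construction is an assumption, not the repo's exact recipe."""
    mask = []
    for i in range(n):
        x, keep = i, 1
        while x:
            if x % 3 == 1:  # digit falls in the removed middle third
                keep = 0
                break
            x //= 3
        mask.append(keep)
    return mask

def effective_weights(weights, mask, alpha):
    """Blend the fingerprint into the weights, one row per output neuron:
    alpha -> 1 applies the mask fully, alpha -> 0 leaves weights intact.
    Treating alpha as one learnable scalar per neuron is an assumption."""
    return [[w * (a * m + (1.0 - a)) for w in row]
            for row, m, a in zip(weights, mask, alpha)]
```

Under this reading, masked neurons are progressively silenced as their alpha grows, which would explain why the mask acts as a "direct neuron learning mask" rather than a per-weight one.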

https://github.com/AbstractEyes/lattice_vocabulary/blob/master/src/geovocab2/train/model/encoder/cantor_resnet.py