Universal mean field upper bound for the generalisation gap of deep neural networks
Abstract
Replica mean field theory combined with statistical learning theory provides a stringent upper bound on the generalisation gap of DNNs, showing it approaches zero faster than 2 N_out/P as dataset size increases.
Modern deep neural networks (DNNs) represent a formidable challenge for theorists: according to the commonly accepted probabilistic framework that describes their performance, these architectures should overfit due to the huge number of parameters to train, but in practice they do not. Here we employ results from replica mean field theory to compute the generalisation gap of machine learning models with quenched features, in the teacher-student scenario and for regression problems with quadratic loss function. Notably, this framework includes the case of DNNs where the last layer is optimised given a specific realisation of the remaining weights. We show how these results, combined with ideas from statistical learning theory, provide a stringent asymptotic upper bound on the generalisation gap of fully trained DNNs as a function of the dataset size P. In particular, in the limit of large P and N_out (where N_out is the size of the last layer), with N_out ≪ P, the generalisation gap approaches zero faster than 2 N_out/P, for any choice of both architecture and teacher function. This result greatly improves on existing bounds from statistical learning theory. We test our predictions on a broad range of architectures, from toy fully connected neural networks with a few hidden layers to state-of-the-art deep convolutional neural networks.
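To make the setup concrete, the following minimal sketch (not the authors' code) implements the quenched-features scenario the abstract describes: a teacher-student regression with quadratic loss where only the last layer of size N_out is trained by least squares, so the empirical generalisation gap can be compared against the asymptotic scale 2 N_out/P. The tanh feature map, the specific teacher function, and all sizes are illustrative assumptions; the bound itself is asymptotic in large P and N_out, so finite-size numbers are only indicative.

```python
# Illustrative sketch of the paper's setting (assumed details, not the authors' code):
# quenched random features, last layer trained by least squares, teacher-student
# regression with quadratic loss, gap compared against the 2*N_out/P scale.
import numpy as np

rng = np.random.default_rng(0)

D = 50          # input dimension
N_out = 20      # size of the last (trained) layer, with N_out << P
P_train = 2000  # training-set size P
P_test = 5000   # test-set size for estimating the population loss

# Quenched (frozen) feature map: a fixed random projection plus nonlinearity.
W = rng.normal(size=(N_out, D)) / np.sqrt(D)
features = lambda X: np.tanh(X @ W.T)

# Teacher function (an arbitrary choice; the bound holds for any teacher).
w_teacher = rng.normal(size=D)
teacher = lambda X: np.tanh(X @ w_teacher / np.sqrt(D))

X_train = rng.normal(size=(P_train, D))
X_test = rng.normal(size=(P_test, D))
y_train, y_test = teacher(X_train), teacher(X_test)

# Optimise only the last layer under quadratic loss (ordinary least squares).
Phi = features(X_train)
a, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

mse = lambda X, y: np.mean((features(X) @ a - y) ** 2)
gap = mse(X_test, y_test) - mse(X_train, y_train)
print(f"generalisation gap: {gap:.4f}   vs   2*N_out/P = {2 * N_out / P_train:.4f}")
```

Rerunning this sketch at increasing P (with N_out ≪ P) should show the measured gap shrinking at least as fast as 2 N_out/P, which is the behaviour the bound asserts.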