Method
For fair comparison, we adopt four widely used backbone networks: ConvNet, ResNet12, ResNet18, and WRN, as used in EGNN [@kim2019edge], MetaOptNet [@lee2019meta], CloserLook [@chen19closerfewshot], and LEO [@rusu2018meta], respectively. ConvNet consists mainly of four Conv-BN-ReLU blocks, with a dropout layer [@srivastava2014dropout] in each of the last two blocks. ResNet12 and ResNet18 follow the architecture described in [@he2016deep]: both have four stages, with one residual block per stage for ResNet12 and two for ResNet18. WRN was first proposed in [@zagoruyko2016wide]; it has three residual blocks, and we set the network depth to 28 as in [@rusu2018meta]. The final feature map of every backbone is processed by global average pooling, followed by a fully-connected layer with batch normalization [@ioffe2015batch], to obtain a 128-dimensional instance embedding.
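The shared embedding head described above (global average pooling, then a fully-connected layer with batch normalization producing a 128-dimensional embedding) can be sketched in PyTorch as follows. This is a minimal illustration, not the authors' released code; the module name `EmbeddingHead` and the channel count of 640 (typical for ResNet12 backbones) are our assumptions.

```python
import torch
import torch.nn as nn


class EmbeddingHead(nn.Module):
    """Global average pooling -> FC -> BatchNorm, yielding a 128-d embedding.

    The pooling/FC/BN order and the 128-d output follow the text;
    everything else here is an illustrative assumption.
    """

    def __init__(self, in_channels: int, embed_dim: int = 128):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # global average pooling
        self.fc = nn.Linear(in_channels, embed_dim)  # fully-connected layer
        self.bn = nn.BatchNorm1d(embed_dim)          # batch normalization

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from any of the four backbones
        x = self.pool(x).flatten(1)   # (B, C)
        return self.bn(self.fc(x))    # (B, 128)


# Example: a batch of 4 ResNet12-style feature maps (640 channels, 5x5 spatial)
head = EmbeddingHead(in_channels=640)
emb = head(torch.randn(4, 640, 5, 5))
```

The same head is attached to all four backbones, so only `in_channels` changes per architecture.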
During training we apply data augmentation, including horizontal flip, random crop, and color jitter (brightness, contrast, and saturation), following [@gidaris2018dynamic; @ye2018learning]. In each meta-training iteration we randomly sample 28 meta-task episodes. All experiments use the Adam optimizer with an initial learning rate of $10^{-3}$, decayed by a factor of 0.1 every 15,000 iterations, and a weight decay of $10^{-5}$.