
Method

In the context of few-shot learning, the objective of meta-learning algorithms is to produce a network that quickly adapts to new classes using little data. Concretely stated, meta-learning algorithms find parameters that can be fine-tuned in a few optimization steps, on a few data points, in order to achieve good generalization on a task $\mathcal{T}_i$ consisting of a small number of data samples from a distribution and label space that was not seen during training. The task is characterized as n-way, k-shot if the meta-learning algorithm must adapt to classify data from $\mathcal{T}_i$ after seeing $k$ examples from each of the $n$ classes in $\mathcal{T}_i$.
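As a concrete illustration, an n-way, k-shot task can be drawn from a labeled dataset by sampling $n$ classes and then $k$ adaptation examples per class, holding out further examples for evaluation. The sketch below is a minimal, hypothetical sampler (the function name and the `q_queries` parameter are our own; they are not part of any specific library):

```python
import numpy as np

def sample_task(data_by_class, n_way=5, k_shot=1, q_queries=15, rng=None):
    """Sample an n-way, k-shot task: k adaptation (support) examples and
    q held-out (query) examples from each of n randomly chosen classes."""
    if rng is None:
        rng = np.random.default_rng()
    classes = rng.choice(len(data_by_class), size=n_way, replace=False)
    support, query = [], []
    for label, c in enumerate(classes):
        # Shuffle the class's examples, then split into support and query.
        idx = rng.permutation(len(data_by_class[c]))
        support += [(data_by_class[c][i], label) for i in idx[:k_shot]]
        query += [(data_by_class[c][i], label)
                  for i in idx[k_shot:k_shot + q_queries]]
    return support, query
```

Note that labels are re-indexed to $0, \dots, n-1$ within each task, since the task's label space is disjoint from the one seen during training.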

Meta-learning schemes typically rely on bi-level optimization with an inner loop and an outer loop. An iteration of the outer loop begins by sampling a "task," which comprises two sets of labeled data: the support data, $\mathcal{T}_i^s$, and the query data, $\mathcal{T}_i^q$. Then, in the inner loop, the model being trained is fine-tuned on the support data. Finally, the routine returns to the outer loop, where the meta-learning algorithm minimizes the loss on the query data with respect to the pre-fine-tuned weights. This minimization is carried out by differentiating through the inner-loop computation and updating the network parameters to make the inner-loop fine-tuning as effective as possible. Note that, in contrast to standard transfer learning (which uses classical training and simple first-order gradient information to update parameters), meta-learning algorithms differentiate through the entire fine-tuning loop. A formal description of this process, following [@goldblum2019robust], is given in the algorithm below.

:::: algorithm ::: algorithmic
**Input:** base model $F_\theta$; fine-tuning algorithm $A$; learning rate $\gamma$; distribution over tasks, $p(\mathcal{T})$.

1. Initialize $\theta$, the weights of $F$.
2. Sample a batch of tasks, $\{\mathcal{T}_i\}_{i=1}^n$, where $\mathcal{T}_i \sim p(\mathcal{T})$ and $\mathcal{T}_i = (\mathcal{T}_i^s, \mathcal{T}_i^q)$.
3. Fine-tune the model on each $\mathcal{T}_i$ (inner loop). The new network parameters are written $\theta_i = A(\theta, \mathcal{T}_i^s)$.
4. Compute the gradient $g_i = \nabla_{\theta} \mathcal{L}(F_{\theta_i}, \mathcal{T}_i^q)$.
5. Update the base model parameters (outer loop): $\theta \leftarrow \theta - \frac{\gamma}{n} \sum_i g_i$.
::: ::::

A variety of meta-learning algorithms exist, differing mostly in how they fine-tune on support data during the inner loop. Some meta-learning approaches, such as MAML, update all network parameters using gradient descent during fine-tuning [@finn2017model]. Because differentiating through the inner loop is memory- and compute-intensive, the fine-tuning process consists of only a few (sometimes just one) SGD steps.
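The key point is that MAML's outer-loop gradient passes through the inner SGD step itself. On a toy quadratic loss this can be written out exactly, since the derivative of the fine-tuned parameters with respect to the initial ones has a closed form. This is a sketch on a stand-in objective, not the full network setting (the quadratic loss and function names are illustrative assumptions):

```python
import numpy as np

def maml_meta_gradient(theta, t_support, t_query, alpha=0.1):
    """One-step MAML meta-gradient on a toy quadratic task loss
    L_t(theta) = 0.5 * ||theta - t||^2, a hypothetical stand-in for a
    network loss. The meta-gradient differentiates THROUGH the inner
    SGD step, unlike ordinary fine-tuning."""
    # Inner loop: one SGD step on the support loss.
    inner_grad = theta - t_support            # gradient of support loss
    theta_prime = theta - alpha * inner_grad  # fine-tuned parameters
    # Outer loop: gradient of the query loss w.r.t. the PRE-fine-tuned
    # theta. For this quadratic, d(theta_prime)/d(theta) = (1 - alpha) I,
    # so the chain rule gives:
    return (1.0 - alpha) * (theta_prime - t_query)
```

The $(1 - \alpha)$ factor is exactly the term that a first-order approximation discards; for a real network it becomes a Jacobian involving second derivatives, which is what makes unrolling the inner loop expensive.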

Reptile, which functions as a zeroth-order approximation to MAML, avoids unrolling the inner loop and differentiating through the SGD steps. Instead, after fine-tuning on support data, Reptile moves the central parameter vector in the direction of the fine-tuned parameters during the outer loop [@nichol2018reptile]. In many cases, Reptile achieves better performance than MAML without having to differentiate through the fine-tuning process.
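The Reptile outer-loop update described above can be sketched in a few lines; note that no gradient flows through the fine-tuning routine, which is treated as a black box (the function names here are our own, and `fine_tune` stands in for any task-adaptation procedure):

```python
import numpy as np

def reptile_outer_step(theta, fine_tune, tasks, epsilon=0.1):
    """One Reptile outer-loop step: fine-tune on each task's support
    data, then move the central parameters toward the average of the
    fine-tuned solutions. No differentiation through fine-tuning is
    required. `fine_tune` is any (theta, task) -> theta' routine."""
    deltas = [fine_tune(theta, task) - theta for task in tasks]
    return theta + epsilon * np.mean(deltas, axis=0)
```

Compared with the outer-loop update in the algorithm above, the query-loss gradient $g_i$ is replaced by the parameter displacement $\theta - \theta_i$, which is why no second-order information is needed.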

Another class of algorithms freezes the feature extraction layers during the inner loop; only the linear classifier layer is trained during fine-tuning. Such methods include R2-D2 and MetaOptNet [@bertinetto2018meta; @lee2019meta]. The advantage of this approach is that the fine-tuning problem is now a convex optimization problem. Unlike MAML, which simulates the fine-tuning process using only a few gradient updates, last-layer meta-learning methods can use differentiable optimizers to exactly minimize the fine-tuning objective and then differentiate the solution with respect to feature inputs. Moreover, differentiating through these solvers is computationally cheap compared to MAML's differentiation through SGD steps on the whole network. While MetaOptNet relies on an SVM loss, R2-D2 simplifies the process even further by using a quadratic objective with a closed-form solution. R2-D2 and MetaOptNet achieve stronger performance than MAML and are able to harness larger architectures without overfitting.
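For R2-D2's ridge-regression head, the closed form makes the inner loop a single linear solve. The sketch below computes that solve on frozen support features; in the actual meta-learning setting the solution would additionally be differentiated with respect to the features, which is omitted here (function names are illustrative):

```python
import numpy as np

def ridge_head(features, labels_onehot, lam=1.0):
    """Closed-form ridge-regression classifier head, in the spirit of
    R2-D2: W = (X^T X + lam I)^(-1) X^T Y, fit on frozen support
    features X with one-hot labels Y. The quadratic objective makes
    the inner-loop optimum exact rather than approximated by SGD."""
    X, Y = features, labels_onehot
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

def predict(W, features):
    """Assign each query example to the class with the largest score."""
    return np.argmax(features @ W, axis=1)
```

Because `ridge_head` is a composition of differentiable linear-algebra operations, backpropagating through it is cheap relative to unrolling SGD over the whole network, which is the computational advantage noted above.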

:::: table* ::: center

| Model        | SVM                 | RR                  | ProtoNet            | MAML                |
|--------------|---------------------|---------------------|---------------------|---------------------|
| MetaOptNet-M | 62.64 $\pm$ 0.31 %  | 60.50 $\pm$ 0.30 %  | 51.99 $\pm$ 0.33 %  | 55.77 $\pm$ 0.32 %  |
| MetaOptNet-C | 56.18 $\pm$ 0.31 %  | 55.09 $\pm$ 0.30 %  | 41.89 $\pm$ 0.32 %  | 46.39 $\pm$ 0.28 %  |
| R2-D2-M      | 51.80 $\pm$ 0.20 %  | 55.89 $\pm$ 0.31 %  | 47.89 $\pm$ 0.32 %  | 53.72 $\pm$ 0.33 %  |
| R2-D2-C      | 48.39 $\pm$ 0.29 %  | 48.29 $\pm$ 0.29 %  | 28.77 $\pm$ 0.24 %  | 44.31 $\pm$ 0.28 %  |

::: ::::

Another last-layer method, ProtoNet, takes a metric-learning approach: in its inner loop, it classifies examples by the proximity of their features to class centroids [@snell2017prototypical]. Again, the feature extractor's parameters are frozen in the inner loop, and the extracted features are used to create class centroids, which then determine the network's class boundaries. Because calculating class centroids is mathematically simple, this algorithm is able to backpropagate efficiently through the calculation to adjust the feature extractor.
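The ProtoNet inner loop amounts to a mean and a nearest-centroid lookup, which is why it is so cheap to differentiate through. A minimal sketch (function name and the choice of squared Euclidean distance follow [@snell2017prototypical]; the variable names are our own):

```python
import numpy as np

def prototype_classify(support_feats, support_labels, query_feats):
    """ProtoNet inner loop: average each class's support features into
    a centroid (prototype), then label each query example by its
    nearest centroid under squared Euclidean distance."""
    classes = np.unique(support_labels)
    protos = np.stack([support_feats[support_labels == c].mean(axis=0)
                       for c in classes])
    # Pairwise squared distances: (num_queries, num_classes).
    dists = ((query_feats[:, None, :] - protos[None, :, :]) ** 2).sum(-1)
    return classes[np.argmin(dists, axis=1)]
```

Every operation here (mean, subtraction, argmin over differentiable distances when replaced by a softmax in training) is a simple tensor computation, so gradients flow back to the feature extractor with negligible overhead.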

In this work, "classically trained" models are trained, using cross-entropy loss and SGD, on all classes simultaneously, and the feature extractors are adapted to new tasks using the same fine-tuning procedures as the meta-learned models for fair comparison. This approach represents the industry-standard method of transfer learning using pre-trained feature extractors.

Several datasets have been developed for few-shot learning. We focus our attention on two: mini-ImageNet and CIFAR-FS. Mini-ImageNet is a pruned and downsized version of the ImageNet classification dataset, consisting of 60,000 $84 \times 84$ RGB images from $100$ classes [@vinyals2016matching]. These 100 classes are split into $64$, $16$, and $20$ classes for the training, validation, and testing sets, respectively. The CIFAR-FS dataset samples images from CIFAR-100 [@bertinetto2018meta]. CIFAR-FS is split in the same way as mini-ImageNet: 60,000 $32 \times 32$ RGB images from $100$ classes, divided into $64$, $16$, and $20$ classes for the training, validation, and testing sets, respectively.