{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T12:13:55.575819Z"
},
"title": "Large-Scale Zero-Shot Image Classification from Rich and Diverse Textual Descriptions",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Bujwid",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "KTH Royal Institute of Technology Stockholm",
"location": {
"country": "Sweden"
}
},
"email": "bujwid@kth.se"
},
{
"first": "Josephine",
"middle": [],
"last": "Sullivan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "KTH Royal Institute of Technology Stockholm",
"location": {
"country": "Sweden"
}
},
"email": "sullivan@kth.se"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We study the impact of using rich and diverse textual descriptions of classes for zero-shot learning (ZSL) on ImageNet. We create a new dataset ImageNet-Wiki that matches each Im-ageNet class to its corresponding Wikipedia article. We show that merely employing these Wikipedia articles as class descriptions yields much higher ZSL performance than prior works. Even a simple model using this type of auxiliary data outperforms state-of-theart models that rely on standard features of word embedding encodings of class names. These results highlight the usefulness and importance of textual descriptions for ZSL, as well as the relative importance of auxiliary data type compared to algorithmic progress. Our experimental results also show that standard zero-shot learning approaches generalize poorly across categories of classes.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "We study the impact of using rich and diverse textual descriptions of classes for zero-shot learning (ZSL) on ImageNet. We create a new dataset ImageNet-Wiki that matches each Im-ageNet class to its corresponding Wikipedia article. We show that merely employing these Wikipedia articles as class descriptions yields much higher ZSL performance than prior works. Even a simple model using this type of auxiliary data outperforms state-of-theart models that rely on standard features of word embedding encodings of class names. These results highlight the usefulness and importance of textual descriptions for ZSL, as well as the relative importance of auxiliary data type compared to algorithmic progress. Our experimental results also show that standard zero-shot learning approaches generalize poorly across categories of classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Zero-shot learning (ZSL) relates information from different modalities (e.g., text or attributes with images), and the hope is that a sparsity of information or training data in one modality can be compensated by the other. This is important when the cost of creation or collection of training data greatly differs between the different modalities. Natural language descriptions and existing text repositories can provide this rich and practically accessible information about visual concepts and classes when no accompanying labeled images are available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recent works on zero-shot image classification (Xian et al., 2019; Schonfeld et al., 2019; Xian et al., 2018) show the field's substantial progress on many of the standard ZSL benchmarks. However, most of those benchmarks mostly cover either a very small or narrow set of classes (e.g., only bird species), where human-made class attributes are often used as auxiliary data. Unfortunately, on ImageNet, where such attributes are not available, the performance is still very low. Therefore, in this work, rather than focus on algorithmic development, we instead study how the auxiliary data type impacts performance. We evaluate the benefits of using textual descriptions of the ImageNet classes. We collect text from Wikipedia articles describing each class in ImageNet, illustrated in Figure 1 . Throughout the paper, we refer to the dataset of ImageNet classes and their corresponding Wikipedia articles, as well as the extracted text, as ImageNet-Wiki. 1 The use of textual descriptions for ZSL has been studied before on smaller datasets (Reed et al., 2016; Elhoseiny et al., 2016 Elhoseiny et al., , 2017 Zhu et al., 2018) . However, this is the first work to use such textual descriptions on a large dataset, with a broad set of classes. The major differences between commonly used datasets are highlighted in Table 1 . The large-scale setup enables us to study the main challenges of a more realistic and practical zero-shot image classification scenario and study the generalization of models to novel groups of classes (e.g., animal species in general), not only individual classes (e.g., specific animal species).",
"cite_spans": [
{
"start": 47,
"end": 66,
"text": "(Xian et al., 2019;",
"ref_id": "BIBREF22"
},
{
"start": 67,
"end": 90,
"text": "Schonfeld et al., 2019;",
"ref_id": "BIBREF19"
},
{
"start": 91,
"end": 109,
"text": "Xian et al., 2018)",
"ref_id": "BIBREF21"
},
{
"start": 956,
"end": 957,
"text": "1",
"ref_id": null
},
{
"start": 1042,
"end": 1061,
"text": "(Reed et al., 2016;",
"ref_id": "BIBREF16"
},
{
"start": 1062,
"end": 1084,
"text": "Elhoseiny et al., 2016",
"ref_id": "BIBREF5"
},
{
"start": 1085,
"end": 1109,
"text": "Elhoseiny et al., , 2017",
"ref_id": "BIBREF6"
},
{
"start": 1110,
"end": 1127,
"text": "Zhu et al., 2018)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 786,
"end": 794,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1316,
"end": 1323,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our experimental results lead us to two significant observations. First, models using the textual descriptions perform much better than those using class name features. Remarkably, a simple ZSL model based on DeViSE (Frome et al., 2013) and trained on text from ImageNet-Wiki clearly outperforms more recent, state-of-the-art models that use word embedding features of class names as the auxiliary data. For the CADA-VAE model (Schonfeld et al., 2019) , ImageNet-Wiki leads to almost a 12 pp. improvement (38.63% vs. 50.50%) in top-5 accuracy on ImageNet mp500 test split, which is far higher than achieved in any prior work. These White rhinoceros By mean body mass, the white rhinoceros falls behind only the two larger extant species of elephant as the largest land animal and terrestrial mammal alive today (...) It has a massive body and large head, a short neck and broad chest. (...)",
"cite_spans": [
{
"start": 216,
"end": 236,
"text": "(Frome et al., 2013)",
"ref_id": "BIBREF7"
},
{
"start": 427,
"end": 451,
"text": "(Schonfeld et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Class names: carryall, holdall, tote, tote bag",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "?",
"sec_num": null
},
{
"text": "Tote bag A tote bag is a large and often unfastened bag with parallel handles that emerge from the sides of its pouch (...) Training data for unseen classes or or or or or or Auxiliary data Auxiliary data Auxiliary data Auxiliary data Auxiliary data Auxiliary data Figure 1 : ImageNet classes and their corresponding auxiliary data: Wikipedia articles or class names. Above illustrates the difference between class names, the standard source of auxiliary data used on ImageNet, and the textual descriptions we collected. The short, selected article fragments shown contain information that is visually discriminative for the classes. In ZSL, for seen classes, we have access to both auxiliary data and images. However, for unseen classes only auxiliary data is available for training, with no images. animals, plants, objects, other results strongly suggest that algorithmic development on ZSL models is not sufficient for further substantial progress. It is necessary to also consider the quality and type of auxiliary data used. Fortunately, creating or collecting already available textual information about classes is practical, viable, and much less labor-intensive than collecting and annotating images with labels or attributes. Our second main observation is that, regardless of the type of auxiliary data used, ZSL models generalize poorly from one large class category to another. For example, excluding all animal classes from the training set leads to large performance drops on animal species in the test set. Therefore, the field's algorithmic progress measured on small datasets with few categories (Table 1) might not be well aligned with the end goal of using ZSL to scale image classifiers to diverse sets of unseen classes from many diverse categories. Though textual descriptions are already available for smaller datasets (CUB, NABirds) we believe ImageNet-Wiki will facilitate further research in this area. 
The textual descriptions for ZSL lead to significant performance improvements, but they also have several theoretical and practical advantages over other types of auxiliary data, as we detail in Table 2 . Moreover, we believe a large-scale zero-shot learning setup with rich and diverse textual descriptions can be a useful for studying general multimodal machine learning, specifically on the interaction between language and vision. This is because the task is challenging and requires effective modeling of the interaction between modalities for better-thanrandom accuracy (unlike say VQA where relying on language priors may be possible (Zhang et al., 2016; Goyal et al., 2017) ). Additionally, ImageNet- Text descriptions + Rich in information: likely to be discriminative + Easy to collect: many existing resources + Easy aggregation: allows conceptually easy aggregation of text from multiple sources \u2212 High noise: contains non-relevant information",
"cite_spans": [
{
"start": 118,
"end": 123,
"text": "(...)",
"ref_id": null
},
{
"start": 2571,
"end": 2591,
"text": "(Zhang et al., 2016;",
"ref_id": "BIBREF23"
},
{
"start": 2592,
"end": 2611,
"text": "Goyal et al., 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 265,
"end": 273,
"text": "Figure 1",
"ref_id": null
},
{
"start": 1614,
"end": 1623,
"text": "(Table 1)",
"ref_id": "TABREF0"
},
{
"start": 2125,
"end": 2132,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Wikipedia title & article:",
"sec_num": null
},
{
"text": "Wiki covers a broad range of many classes, with Wikipedia articles that are long, complex, diverse, and written by many authors. We expect that this experimental setting will lead to more progress as methods become more tailored to relating text language and vision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Wikipedia title & article:",
"sec_num": null
},
{
"text": "We demonstrate that simply using better auxiliary data in the form of Wikipedia description of classes outperforms all prior works on zero-shot learning on the mp500 test split of ImageNet. Additionally, we show ZSL models generalize very poorly to novel super-categories of classes. Finally, we introduce ImageNet-Wiki, which provides text from Wikipedia articles of ImageNet classes, which we hope will facilitate the research on ZSL and the interaction between language and vision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of the contributions",
"sec_num": null
},
{
"text": "Zero-shot learning The problem of zero-shot learning (ZSL) was initially defined and introduced by Larochelle et al. (2008) . A significant fraction of the recent works on ZSL uses attributes as auxiliary data, and small or medium-sized datasets (Xian et al., 2018; Romera-Paredes and Torr, 2015; Changpinyo et al., 2016; Schonfeld et al., 2019) . The datasets generally used contain either animal species (CUB, AWA1, AWA2), flower species (Oxford Flower Dataset), or a broader set of classes (SUN, aPY). Some of the works experiment on Im-ageNet, but due to the lack of attribute annotations, they use word2vec embeddings of class names instead (Xian et al., 2018; Changpinyo et al., 2016; Schonfeld et al., 2019) . Creating discriminative attribute annotations becomes more challenging as the number of classes grows, and the differences between classes become more nuanced. We would like the ImageNet-Wiki dataset to stimulate future research on large-scale ZSL as it provides a semantically rich source of auxiliary data, shown by our experiments, for ImageNet.",
"cite_spans": [
{
"start": 99,
"end": 123,
"text": "Larochelle et al. (2008)",
"ref_id": "BIBREF10"
},
{
"start": 246,
"end": 265,
"text": "(Xian et al., 2018;",
"ref_id": "BIBREF21"
},
{
"start": 266,
"end": 296,
"text": "Romera-Paredes and Torr, 2015;",
"ref_id": "BIBREF17"
},
{
"start": 297,
"end": 321,
"text": "Changpinyo et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 322,
"end": 345,
"text": "Schonfeld et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 646,
"end": 665,
"text": "(Xian et al., 2018;",
"ref_id": "BIBREF21"
},
{
"start": 666,
"end": 690,
"text": "Changpinyo et al., 2016;",
"ref_id": "BIBREF0"
},
{
"start": 691,
"end": 714,
"text": "Schonfeld et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "2"
},
{
"text": "Zero-shot learning from text The portion of ZSL research that is more closely related to our work uses text as auxiliary data. Reed et al. (2016) collected a dataset with multiple single-sentence image-level descriptions for two datasets: CUB-2011 containing bird species and Oxford Flowers dataset with flower species. The general idea of utilizing Wikipedia articles for ZSL has already been studied by Elhoseiny et al. (2016 Elhoseiny et al. ( , 2017 on datasets with bird species (CUB-2011, NABirds).",
"cite_spans": [
{
"start": 127,
"end": 145,
"text": "Reed et al. (2016)",
"ref_id": "BIBREF16"
},
{
"start": 405,
"end": 427,
"text": "Elhoseiny et al. (2016",
"ref_id": "BIBREF5"
},
{
"start": 428,
"end": 453,
"text": "Elhoseiny et al. ( , 2017",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "2"
},
{
"text": "The usefulness of such data was more recently additionally supported by Zhu et al. (2018) and Chen et al. (2020) . These works were, however, limited to relatively small datasets with only closely related classes compared to ImageNet, which is a much larger dataset and covers a more diverse set of classes.",
"cite_spans": [
{
"start": 72,
"end": 89,
"text": "Zhu et al. (2018)",
"ref_id": "BIBREF24"
},
{
"start": 94,
"end": 112,
"text": "Chen et al. (2020)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related works",
"sec_num": "2"
},
{
"text": "ImageNet classes correspond to a subset of synsets representing nouns in WordNet 3.0. Each class is defined by its wnid (WordNet ID), class phrases (which we also refer to as class names in this paper), and a gloss (brief description), for example (n02129165; \"lion, king of beasts, Panthera leo\"; large gregarious predatory feline of Africa and India having a tawny coat with a shaggy mane in the male). The synsets have a hierarchical structure, such that our example class is a descendant of classes like \"big cat\" and \"animal\". We create an automatic matching of ImageNet classes to their corresponding Wikipedia pages. 2 The matching is based on the similarity between the synset words and Wikipedia titles, as well as their ancestor categories. Unfortunately, such matching occasionally produces false-positives, as it is a difficult problem. One reason for this could be that classes with similar names might represent very different concepts -e.g. (n02389026; \"sorrel\"; a horse of a brownish orange to light brown color) with the herb called \"sorrel\". To ensure highquality correspondences in ImageNet-Wiki, we first compared all automatic correspondences from both Niemann and Gurevych (2011) and Matuschek and Gurevych (2013) , then also manually verified all the matches, and modified them if necessary. The quality of the automatic matches often depends more, for example, on how irrelevant articles are filtered out than on the exact matching approach taken. We refer the interested reader to the source code we provide for the full details.",
"cite_spans": [
{
"start": 624,
"end": 625,
"text": "2",
"ref_id": null
},
{
"start": 1206,
"end": 1235,
"text": "Matuschek and Gurevych (2013)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "ImageNet-Wikipedia correspondences",
"sec_num": "3"
},
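The title-similarity matching described above can be sketched as follows. This is a hypothetical, simplified version: the function name and the choice of `difflib` string similarity are ours, and the released matching code additionally uses ancestor categories and filters out irrelevant articles.

```python
from difflib import SequenceMatcher

def best_wiki_match(class_phrases, wiki_titles):
    """Toy matcher: return the Wikipedia title most similar to any of the
    class phrases (case-insensitive character-level similarity)."""
    def sim(a, b):
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()
    # Score each candidate title by its best similarity to any class phrase.
    return max(wiki_titles, key=lambda title: max(sim(p, title) for p in class_phrases))
```

For example, `best_wiki_match(["tote bag", "carryall"], ["Sorrel", "Tote bag"])` picks "Tote bag". A real pipeline would still need the manual verification pass described above, since similar names can denote very different concepts.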
{
"text": "The text for each class is encoded into a single feature vector. Having features of fixed dimensionality for each class allows us to use standard ZSL approaches. As text encoders we use either ALBERT (Lan et al., 2019) , which is an NLP model based on Transformer (Vaswani et al., 2017) , or word embeddings models: GloVe (Pennington et al., 2014) or word2vec . For ALBERT we consider two official pretrained models: ALBERT-base and ALBERTxxlarge, which have hidden layer sizes of 768 and 4096 respectively. ALBERT (Lan et al., 2019) , similarly to its predecessor BERT (Devlin et al., 2019) , uses learnable positional encoding, which has fixed size. Therefore in order to encode pages that can be longer than the maximal length, we split the text into partially overlapping sequences of 256 tokens with an overlap of 50 tokens. We encode each of the sequences with ALBERT, and average the hidden states from the last layer over each token. Such representations of all the sequences are then also averaged to get the final feature vector. We compared alternative ways to extract features in Appendix E.",
"cite_spans": [
{
"start": 200,
"end": 218,
"text": "(Lan et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 264,
"end": 286,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 322,
"end": 347,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF14"
},
{
"start": 515,
"end": 533,
"text": "(Lan et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 570,
"end": 591,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding textual descriptions for zero-shot learning",
"sec_num": "4"
},
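The overlapping-window encoding described above can be sketched as follows; `encode_window` is a hypothetical stand-in for a forward pass through ALBERT that returns the last-layer hidden states of one window, and the exact window-placement rule is our assumption (the paper only specifies 256-token windows with a 50-token overlap).

```python
import numpy as np

WINDOW, OVERLAP = 256, 50
STRIDE = WINDOW - OVERLAP  # consecutive windows share 50 tokens

def split_windows(tokens):
    """Split a token sequence into partially overlapping windows of WINDOW tokens."""
    if len(tokens) <= WINDOW:
        return [tokens]
    windows, start = [], 0
    while True:
        windows.append(tokens[start:start + WINDOW])
        if start + WINDOW >= len(tokens):
            return windows
        start += STRIDE

def encode_article(tokens, encode_window):
    """encode_window(window) -> (len(window), hidden) last-layer states.
    Average states over the tokens of each window, then over windows."""
    window_vecs = [encode_window(w).mean(axis=0) for w in split_windows(tokens)]
    return np.mean(window_vecs, axis=0)
```

A 500-token article would be split into three windows here, each sharing 50 tokens with its predecessor; the two averaging steps give every window equal weight regardless of its length.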
{
"text": "For word embedding models, either GloVe (Pennington et al., 2014) or word2vec , we use official pre-trained weights. To encode the text we simply embed each token and then average the embedding features over the whole sequence. Finally, if a class has multiple corresponding articles, we average the representations obtained from each of them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Encoding textual descriptions for zero-shot learning",
"sec_num": "4"
},
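A minimal sketch of this averaging scheme, assuming `embeddings` maps tokens to pre-trained vectors; skipping out-of-vocabulary tokens is our own assumption, not something the paper specifies.

```python
import numpy as np

def encode_text(tokens, embeddings):
    """Embed each token and average over the whole sequence
    (tokens missing from the vocabulary are skipped here)."""
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    return np.mean(vecs, axis=0)

def class_feature(articles, embeddings):
    """A class with several matched articles gets the average of their encodings."""
    return np.mean([encode_text(a, embeddings) for a in articles], axis=0)
```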
{
"text": "To compare the quality of different types of auxiliary data, we run experiments on two existing ZSL models. We choose to run all experiments on the standard ZSL task, instead of generalized ZSL (GZSL). That means we predict only unseen classes and not the union between seen and unseen classes. Although GZLS is the more practical long-term scenario, the standard ZSL setup better isolates the usefulness and discriminativeness of auxiliary data from the model's level of overfitting to the seen classes. As we introduce no new methods and focus on the importance of auxiliary data instead, the standards ZSL setup is more informative in our case.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We run experiments with a state-of-the-art CADA-VAE (Schonfeld et al., 2019) model. Although it was initially proposed for GZSL, we found that it also works well for the standard ZSL setup. To compare the relative importance of the quality of the model vs. the quality of information contained in auxiliary data, the second model we use is a simple approach based on linear projections. We refer to it as Simple ZSL.",
"cite_spans": [
{
"start": 52,
"end": 76,
"text": "(Schonfeld et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Zero-shot learning models",
"sec_num": "5.1"
},
{
"text": "CADA-VAE (Schonfeld et al., 2019) The model is based on two Variational Auto-Encoders (VAEs) (Diederik et al., 2014) -one for image features and another for auxiliary data. It aligns the latent spaces of the two VAEs, such that the latent features of an image encoded with the image feature VAE and its class features encoded with auxiliary data VAE should be close to each other. After training the VAEs, a linear classifier is trained from the latent space vectors, obtained from the text description of the unseen classes, to the class labels of these unseen classes. 3 At test time the latent space features obtained from the image VAE are passed to the classifier. We use our own implementation of the model which yields similar results to those reported in the original paper. The details can be found in Appendix F, along with a discussion on the small differences between our and the original implementation.",
"cite_spans": [
{
"start": 9,
"end": 33,
"text": "(Schonfeld et al., 2019)",
"ref_id": "BIBREF19"
},
{
"start": 571,
"end": 572,
"text": "3",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Zero-shot learning models",
"sec_num": "5.1"
},
{
"text": "Simple ZSL model We also create a simple zeroshot learning model inspired by DeViSE (Frome et al., 2013) . Unlike the original work, we do not fine-tune the visual model and use two separate linear transformations for image features and auxiliary data to map the different modalities to a joint embedding space. The transformations are trained using a hinge loss which is defined: Assume there are K classes and x is the visual feature representation of a sample with class label 1 \u2264 y \u2264 K then the loss is",
"cite_spans": [
{
"start": 84,
"end": 104,
"text": "(Frome et al., 2013)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Zero-shot learning models",
"sec_num": "5.1"
},
{
"text": "K i=1 i =y max 0, m \u2212 \u03c6 x (x) T \u03c6 t (t y ) + \u03c6 x (x) T \u03c6 t (t i ) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Zero-shot learning models",
"sec_num": "5.1"
},
{
"text": "where t_i is the feature vector for the auxiliary data of class i, m \u2265 0 is a scalar margin, \u03c6_x(x) = W_x x + b_x, and \u03c6_t(t_i) = W_t t_i + b_t. The trainable parameters of the model are W_x, W_t, b_x, and b_t. At test time, to classify samples, we use nearest neighbors between the projected visual features and the projected class text features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Zero-shot learning models",
"sec_num": "5.1"
},
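A minimal NumPy sketch of the hinge loss in Eq. (1) and of the nearest-neighbor prediction rule. The function names and the margin value are hypothetical; the paper does not report its margin, and the actual training would of course run this inside a gradient-based optimizer.

```python
import numpy as np

def simple_zsl_loss(x, class_texts, y, Wx, bx, Wt, bt, margin=0.1):
    """Hinge loss of Eq. (1): sum over classes i != y of
    max(0, m - phi_x(x)^T phi_t(t_y) + phi_x(x)^T phi_t(t_i))."""
    px = Wx @ x + bx                 # phi_x(x): shape (d,)
    pt = class_texts @ Wt.T + bt     # phi_t(t_i) for all K classes: shape (K, d)
    scores = pt @ px                 # compatibility with each class: shape (K,)
    losses = np.maximum(0.0, margin - scores[y] + scores)
    losses[y] = 0.0                  # the i = y term is excluded from the sum
    return losses.sum()

def predict(x, class_texts, Wx, bx, Wt, bt):
    """Test-time rule: the class whose projected text feature has the
    highest compatibility with the projected visual feature."""
    return int(np.argmax((class_texts @ Wt.T + bt) @ (Wx @ x + bx)))
```

With identity projections, a sample whose image feature already matches its class text vector incurs zero loss once all wrong-class scores are below the margin, which is exactly the behavior Eq. (1) encodes.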
{
"text": "Our textual descriptions are assessed by comparing them to using word2vec embeddings of class names as the auxiliary data provided by Changpinyo et al. (2016) . The latter is a standard feature used in zero-shot learning on ImageNet (Xian et al., 2018 (Xian et al., , 2019 Schonfeld et al., 2019) . These word2vec representations were also learned on Wikipedia data. However, this is different from encoding explicit textual descriptions of classes.",
"cite_spans": [
{
"start": 134,
"end": 158,
"text": "Changpinyo et al. (2016)",
"ref_id": "BIBREF0"
},
{
"start": 233,
"end": 251,
"text": "(Xian et al., 2018",
"ref_id": "BIBREF21"
},
{
"start": 252,
"end": 272,
"text": "(Xian et al., , 2019",
"ref_id": "BIBREF22"
},
{
"start": 273,
"end": 296,
"text": "Schonfeld et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5.2"
},
{
"text": "Data We use standard splits between ImageNet classes proposed by Xian et al. (2018) . The train and val splits consist of 750 and 250 classes, which contain images from ILSVRC-2012 (Russakovsky et al., 2015) . Even though the original setup consists of multiple sets of separate test classes, we only use one of them: mp500 -the 500 most populated classes. We leave studying other splits for future work since it requires providing the corresponding Wikipedia articles for the additional classes in those splits.",
"cite_spans": [
{
"start": 65,
"end": 83,
"text": "Xian et al. (2018)",
"ref_id": "BIBREF21"
},
{
"start": 181,
"end": 207,
"text": "(Russakovsky et al., 2015)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5.2"
},
{
"text": "Features To represent images we use the 2048dimensional features provided by Xian et al. (2018) , which come from the last layer of a ResNet-101 before pooling. The model was pre-trained on ILSVRC 2012 part of ImageNet, which corresponds to the trainval ZSL split we use. Word2vec vector features of ImageNet class names (refered to as word2vec*) are from Changpinyo et al. (2016) have 500-dimensions and were trained on Wikipedia data. These are standard features used on ImageNet and were used by all the models we compare against. For encoding the article text with word2vec, we instead use standard pre-trained 300dimensional vectors provided by . GloVe features are also 300-dimensional and are from Pennington et al. (2014) . In all the experiments, we keep both image and auxiliary features fixed.",
"cite_spans": [
{
"start": 77,
"end": 95,
"text": "Xian et al. (2018)",
"ref_id": "BIBREF21"
},
{
"start": 356,
"end": 380,
"text": "Changpinyo et al. (2016)",
"ref_id": "BIBREF0"
},
{
"start": 705,
"end": 729,
"text": "Pennington et al. (2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5.2"
},
{
"text": "Choice of the hyper-parameters The hyperparameter values of the models are selected independently for each model and type of auxiliary data feature. The models are trained on the set of train classes and their performance evaluated on the val classes. We use both random and manual searches over possible settings, and the same number of runs was made on each model variant. More details are in Appendix F. The val set consists of a subset of classes that were used to train the image feature extractor, which violates ZSL assumptions. Although that is likely to lead to over-optimistic numbers, val performance is used only for the hyperparameter selection.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5.2"
},
{
"text": "Evaluation We evaluate the models using the mean per class top-1 and top-5 accuracy which are standard ZSL metrics. All the hyperparameter values used for evaluation on mp500 set were solely determined based on val performance of the models. For the final evaluation, we train the models on train+val with 1000 classes (750 train + 250 val) and use separate 500 classes from mp500 for measuring performance. Since ImageNet-Wiki contains auxiliary features for 489 out of 500 classes from mp500, unless stated differently, for a fair com-parison, we compute the accuracies of the models using Wikipedia articles assuming 0 accuracy on the remaining classes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup",
"sec_num": "5.2"
},
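The mean per-class top-k accuracy used above can be sketched as follows; this is our own implementation sketch of the standard metric, not the authors' evaluation code.

```python
import numpy as np

def mean_per_class_topk(scores, labels, k=5):
    """Mean per-class top-k accuracy: compute top-k accuracy separately
    for each class, then average over classes, so every class counts
    equally regardless of how many test images it has."""
    scores, labels = np.asarray(scores), np.asarray(labels)
    topk = np.argsort(-scores, axis=1)[:, :k]       # k highest-scoring classes per sample
    hit = (topk == labels[:, None]).any(axis=1)     # top-k hit indicator per sample
    return float(np.mean([hit[labels == c].mean() for c in np.unique(labels)]))
```

Averaging per class (rather than over all samples) matters here because the mp500 classes have very different image counts.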
{
"text": "Comparison of different auxiliary data encodings First, in Table 3 , we compare ways to encode the auxiliary data on the val set only. We observe that using the whole Wiki articles works better than using just the abstracts (first paragraphs of the articles before any section starts). Also, word embedding encoding of the text appears to work better than the more complex ALBERT model (Lan et al., 2019) with GloVe (Pennington et al., 2014) being the best feature extractor. For completeness of the comparison, we also try to encode class names with ALBERT and GloVe, although AL-BERT was designed to work with longer sequences, therefore it can perform poorly with simple class names. Evaluating models and feature types on the test set In Table 4 , we compare zero-shot learning accuracies of various models on the mp500 test split. Our experiments compare two different types of auxiliary information, class names encoded with word2vec, which is a standard approach on Im- ",
"cite_spans": [
{
"start": 386,
"end": 404,
"text": "(Lan et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 416,
"end": 441,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 59,
"end": 66,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 742,
"end": 749,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.3"
},
{
"text": "C M T S A E C O N S E S J E D E V I S E A L E L A T E M E S Z S L S Y N C C A D A -V A E S im p le Z S L C A D A -V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5.3"
},
{
"text": "Top-1 accuracy (%) ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ZSL Method",
"sec_num": null
},
{
"text": "Class names Wiki articles C M T S A E C O N S E S J E D E V I S E A L E L A T E M E S Z S L S Y N C C A D A -V A E S im p le Z S L C A D A -V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "ZSL Method",
"sec_num": null
},
{
"text": "Top-5 accuracy (%) Figure 2 : Reported results for zero-shot learning on the ImageNet dataset and the mp500 test split. Color indicates the auxiliary data type used. Methods in boldface correspond to the implementations in our paper, the other numbers were reported by Xian et al. (2018) . We chose the best performing variants of our models from Table 4. ageNet, as well as using Wikipedia articles that we extract, encoded with two variants of ALBERT and two variants of word embedding encodings. The results show that models using the textual descriptions consistently achieve higher accuracies. The increased performance is even more prominent on the simple ZSL model. When using textual descriptions, such a simple model outperforms much more complex and generally better CADA-VAE trained with word2vec features of class names. This demonstrates not only the high quality of information the Wikipedia articles convey about the classes but also that the information can be effectively extracted even by simple models. In Figure 2 , we additionally compare the results we achieve with those reported by Xian et al. (2018) of different zero-shot learning methods. The general setup we used is the same and uses the same image features, data splits and word2vec vectors of the class names. The comparison demonstrates that the models using Wikipedia articles outperform all the prior methods, highlighting the relative impor- Changpinyo et al. (2016) and is different from word2vec which are standard pre-trained vectors. We report the mean and standard deviation values from 5 training runs with different random seeds.",
"cite_spans": [
{
"start": 269,
"end": 287,
"text": "Xian et al. (2018)",
"ref_id": "BIBREF21"
},
{
"start": 1106,
"end": 1124,
"text": "Xian et al. (2018)",
"ref_id": "BIBREF21"
},
{
"start": 1427,
"end": 1451,
"text": "Changpinyo et al. (2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 19,
"end": 27,
"text": "Figure 2",
"ref_id": null
},
{
"start": 347,
"end": 355,
"text": "Table 4.",
"ref_id": "TABREF5"
},
{
"start": 1025,
"end": 1033,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "ZSL Method",
"sec_num": null
},
{
"text": "Model Auxiliary data Features top-1 (%) top-5 (%)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Result on mp500",
"sec_num": null
},
{
"text": "CADA-VAE (Schonfeld et al., 2019) Figure 3 : Within and across category generalization of ZSL. Top-5 per class accuracy on CADA-VAE. We show the effect of excluding a category of classes from the training set on ZSL performance for all mp500 test set categories. Specifically we compare excluding all animal classes vs. excluding the same number of random non-animal classes. We repeat the same process for the plant class. Different models trained on different auxiliary data are compared. The performance numbers are from a single run each. The performance on the unseen animal classes drops dramatically when the animal classes are removed from the training set, the red bars in the first grouping of plots. The same trend, though not as pronounced, can be seen for the remaining classes, the gray bars in the last grouping of plots. The number of test classes is slightly higher for models using class name data (marked with *) since Wiki articles are missing for some classes.",
"cite_spans": [
{
"start": 9,
"end": 33,
"text": "(Schonfeld et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 34,
"end": 42,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Result on mp500",
"sec_num": null
},
{
"text": "tance of the auxiliary data type and algorithmic development.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Result on mp500",
"sec_num": null
},
{
"text": "Generalization outside the seen groups of classes The hierarchical structure of WordNet entities, used as ImageNet classes, allows us to study different groups of classes separately. In Figure 3 , we split the set of both train and test classes into three disjoint partitions: animals, plants, and remaining and compare the test performance on all subgroups when some of them are excluded from the training set. First of all, the performance on plants is generally much lower than the much more populated animals group. Moreover, we see that all the models have poor accuracy on animals when that group is excluded from the training set. These results show the models' inability to generalize to unseen classes that are semantically far from seen classes. Therefore, they also suggest that studying ZSL on ImageNet might be more practical than datasets with a less diverse set of classes.",
"cite_spans": [],
"ref_spans": [
{
"start": 186,
"end": 194,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Result on mp500",
"sec_num": null
},
{
"text": "Analyzing the impact of class similarity Even though ImageNet classes represent distinct concepts, since WordNet and Wikipedia were created independently, naturally, the granularity of the concepts they define can be different. As a consequence, some of the classes can be mapped to multiple articles. For example, class (n02965783, \"car mirror\") was assigned two more detailed articles: Table 5 : Performance on subsets of mp500 that excludes classes that are similar to those in the training set.",
"cite_spans": [],
"ref_spans": [
{
"start": 388,
"end": 395,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Result on mp500",
"sec_num": null
},
{
"text": "We take two CADA-VAE models (single runs), trained to predict all unseen classes and present their mean per-class accuracies of different subsets of the mp500 classes. Test classes are excluded based on their overlap with the train+val classes in either: Wikipedia pages correspondences or WordNet phrases (defining the classes). Each row corresponds to a different subset of excluded classes. Examples of excluded classes are: Row 3: (n03792972, \"mountain tent\") is removed as train+val contains (n02769963, \"backpacking tent\", \"pack tent\") and all are mapped to \"Tent\". Row 4: (n04543772, \"wagon wheel\") is removed as it maps to [\"Wagon\", \"Wheel\"], and class (n02974003, \"car wheel\") in train+val maps to \"Wheel\". Row 5: (n03222516, \"doorbell, bell, buzzer\") is removed as there is (n03017168, \"chime, bell, gong\") in train+val. The numbers shown in bold display the results with the biggest drop in performance. ALBERT \"Rear-view mirror\", \"Wing mirror\". In fact, Word-Net synsets used as ImageNet classes consist of one or more (closely related) concepts that are considered equivalent. On the other hand, two different classes can be assigned the same set of articles. Potential overlap of sources of auxiliary information (Wiki pages or class phrases) can be present due to a high degree of similarity between corresponding classes, or compound classes being defined on other classes (e.g. wagon wheel being a wheel of a wagon). This phenomenon is associated not only with the particular auxiliary data we propose. As we observe in Table 5 , models using both sources of data have lower performance on subsets of test classes which exclude those that are similar to the training classes under various criteria defined for those types of auxiliary data. 
As shown in the last row, the model using Wikipedia text still outperforms the one using class phrases on the subset of test classes that exclude those that are similar to training classes in either Wikipedia pages or class phrases. However, as demonstrated in our results in Figure 3 , certain degree of similarity between seen and unseen classes is necessary for reasonable performance. Figure 4 : A selected subset of confusion matrices on mp500 test set. All the models are CADA-VAE (Schonfeld et al., 2019) trained on different types of auxiliary data. The rows correspond to true labels, columns to predicted classes -the matrices are rownormalized. The subset of classes was selected by choosing one class and including others that were the most frequently mutually confused by any of the models.",
"cite_spans": [
{
"start": 915,
"end": 921,
"text": "ALBERT",
"ref_id": null
},
{
"start": 2245,
"end": 2269,
"text": "(Schonfeld et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 1537,
"end": 1544,
"text": "Table 5",
"ref_id": null
},
{
"start": 2034,
"end": 2042,
"text": "Figure 3",
"ref_id": null
},
{
"start": 2147,
"end": 2155,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Result on mp500",
"sec_num": null
},
{
"text": "very few exceptions, this is true for the ImageNet classes we consider. However, it is possible to find cases where this would not be possible due to the different granularities of the classes and Wikipedia pages.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative evaluation of prediction",
"sec_num": null
},
{
"text": "The ZSL methods we use can utilize the information from our textual descriptions effectively, however, they only rely on features from pretrained, off-the-shelf models. Given that the textual de-scriptions are complex and contain only a sparse amount of visually relevant information, future research is needed to find methods more effective at fully utilizing our textual descriptions. Additionally, there is a potential of using our textual descriptions for different purposes, such as pretraining of image recognition models, similarly to the recent works by Desai and Johnson (2020) and Radford et al. (2021) . Finally, the issue of generalization to groups of classes remains an open challenge.",
"cite_spans": [
{
"start": 562,
"end": 586,
"text": "Desai and Johnson (2020)",
"ref_id": "BIBREF2"
},
{
"start": 591,
"end": 612,
"text": "Radford et al. (2021)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative evaluation of prediction",
"sec_num": null
},
{
"text": "We demonstrate that simply employing Wikipedia articles as auxiliary data for ZSL on ImageNet yields a state-of-the-art performance, even on a relatively weak model. Additionally, we show that standard ZSL models poorly generalize across categories of classes. Finally, we share the ImageNet-Wiki dataset with Wikipedia articles describing corresponding ImageNet classes to facilitate further the research on ZSL, as well as on the interaction between language and vision more generally.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "7"
},
{
"text": "We anticipate that the dataset we share will make research on multimodal machine learning more accessible, especially to research communities with fewer resources available. More speculatively, we hope that in the future, work on incorporating information from multiple modalities could potentially contribute towards progress on the robustness and flexibility of machine learning models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Broader Impact",
"sec_num": null
},
{
"text": "We would also like to point out the potential risk of the models reinforcing biases present in sources of text, in particular, in the data we use. The authors of Wikipedia generally represent a very nondiverse social group. 4 Moreover, the content of the database has been shown to contain biases, including the selection of topics it covers. 5 This challenge could potentially be mitigated to some degree with 4 https://en.wikipedia.org/wiki/Gender_ bias_on_Wikipedia (Accessed: 16 Nov 2020).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Broader Impact",
"sec_num": null
},
{
"text": "5 https://en.wikipedia.org/wiki/ Wikipedia#Coverage_of_topics_and_ systemic_bias (Accessed: 16 Nov 2020). more research progress on interpretable machine learning and bias reduction methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Broader Impact",
"sec_num": null
},
{
"text": "In Figure 5 , we show the relation between the top-5 class accuracy and the length of the textual description of that class. In general, the classes with longer corresponding text tend to yield better accuracy, although the general trend is not strong. However, the impact of the text length appears to be different among different class groups. On animal classes, the relation is negligible. However, on plant classes the relation is much stronger. This observation is especially important since the plant classes generally compose a relatively small fraction of seen classes, and the model has relatively low performance on this group (see Figure 3) . This result indicates that using longer textual descriptions can partially mitigate the effect of a group of classes not being well represented in the seen classes. Figure 5 : The relation between the top-5 accuracy on a given class and the length of its Wikipedia pages, separately for different groups of classes from mp500 test set. The model used is CADA-VAE trained on the Wikipedia articles using ALBERT-xxlarge features. The lengths are expressed in the log 10 of the number of characters. In case a class has more than one page, we sum the lengths of all pages. Note different ranges of axes. Figure 6 : Subsets of confusion matrices on mp500 test set. All the models are CADA-VAE (Schonfeld et al., 2019) trained on different types of auxiliary data. The rows correspond to true labels, columns to predicted classes -the matrices are row-normalized. The subset of classes was selected by choosing one class and including others that were the most frequently mutually confused by any of the models. The general performance differs between different subsets of classes -the pattern is less diagonal on the left than on the right column.",
"cite_spans": [
{
"start": 1343,
"end": 1367,
"text": "(Schonfeld et al., 2019)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 5",
"ref_id": null
},
{
"start": 642,
"end": 651,
"text": "Figure 3)",
"ref_id": null
},
{
"start": 819,
"end": 827,
"text": "Figure 5",
"ref_id": null
},
{
"start": 1255,
"end": 1263,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "B The effect of text length on accuracy",
"sec_num": null
},
{
"text": "In Figure 6 , we show confusion matrices for more subsets of classes. On one of the subsets (left), the predictions are generally more noisy. However, the mispredictions generally fall within similar classes (different species of insects on the left side). Additionally, we include the full confusion matrices in the supplementary material as standalone files.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 11,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "C Additional confusion matrices",
"sec_num": null
},
{
"text": "For completeness, in Figure 7 , we also show the additional results of models trained on datasets that excluded different groups of classes. The main trend is, however, similar to the observed in Figure 3 . We observe that models perform very poorly on animal classes if the training set excluded animals. Additionally, the performance on plant classes is generally lower -likely due to the smaller set of classes in the training set.",
"cite_spans": [],
"ref_spans": [
{
"start": 21,
"end": 29,
"text": "Figure 7",
"ref_id": null
},
{
"start": 196,
"end": 204,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "D Additional across-category generalization results",
"sec_num": null
},
{
"text": "E Different ALBERT encodings Table 6 compares different ways to encode the textual descriptions with ALBERT (Lan et al., 2019) , such as how to extract features from individual pages and aggregate the page features for classes with more than one corresponding Wikipedia article. ALBERT (Lan et al., 2019) Figure 7 : Within and across category generalization of ZSL. We show the effect of excluding a category of classes from the training set on ZSL performance for all mp500 test set categories. Specifically we compare excluding all animal classes vs. excluding the same number of random non-animal classes. We repeat the same process for the plant class. Different models trained on different auxiliary data are compared. The performance numbers are from a single run each. The performance on the unseen animal classes drops dramatically when the animal classes are removed from the training set, the red bars in the first grouping of plots. The same trend, though not as pronounced, can be seen for the remaining classes, the gray bars in the last grouping of plots. The number of test classes is slightly higher for models using class name data (marked with *) since Wiki articles are missing for some classes.",
"cite_spans": [
{
"start": 108,
"end": 126,
"text": "(Lan et al., 2019)",
"ref_id": "BIBREF9"
},
{
"start": 286,
"end": 304,
"text": "(Lan et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 29,
"end": 36,
"text": "Table 6",
"ref_id": null
},
{
"start": 305,
"end": 313,
"text": "Figure 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "D Additional across-category generalization results",
"sec_num": null
},
{
"text": "output used for classification (\"[CLS]\"), which we additionally evaluate for extracting features. However, averaging features over each tokens seem to work much better. Additionally, averaging features among multiple pages works best. All other our experiments use averaging both the page represen- Table 6 : Validation set comparison of different textual inputs and encodings for zero-shot learning on Im-ageNet. The reported metrics are mean per class top-1 and top-5 accuracies of the validation classes. 750 train classes were used as seen and 250 val classes as unseen. All other hyparameters of Wikipedia models are the same and were chose based on the performance of the model from the first row. The model using class names had hyperparameters tuned independently. [CLS] represents the special output in ALBERT, typically used for classification tasks. tations and for aggregating page features (as in row 1, Table 6 ).",
"cite_spans": [
{
"start": 773,
"end": 778,
"text": "[CLS]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 299,
"end": 306,
"text": "Table 6",
"ref_id": null
},
{
"start": 917,
"end": 924,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "D Additional across-category generalization results",
"sec_num": null
},
{
"text": "We individually tune model hyperparameters for three different types of auxiliary data: class names with word2vec features, Wikipedia articles with either ALBERT-base or ALBERT-xxlarge features. We use Adam optimizer and a random search to generate 40 different hyperparameter settings and choose the best setting for each model variant. The ranges of values used for different hyperparameters were as following: batch size \u2208 {32, 128, 256, 512, 1024}, target projection dimension defined by the shapes of W t and W x \u2208 {32, 128, 256, 512, 1024}, margin m \u2208 (0.1, 100.) or with equal probability m = 1, \u03b2 1 (for Adam) \u2208 {0.5, 0.9} learning rate \u2208 (0.00003, 0.01). The values are sampled uniformly in the log space of the ranges. The best configurations of hyperparameter values for each model variant are attached in the supplementary material together with the model weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "F Model details F.1 Simple ZSL",
"sec_num": null
},
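{
"text": "The log-uniform sampling of the continuous ranges described above can be summarized by the following short sketch (pseudocode; the variable names are illustrative and not taken from our code): draw u from Uniform(log(low), log(high)) and set value = exp(u), e.g. low = 0.00003 and high = 0.01 for the learning rate. Discrete hyperparameters such as the batch size are drawn uniformly from the listed sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "F Model details F.1 Simple ZSL",
"sec_num": null
},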
{
"text": "Implementation details We use our own implementation of CADA-VAE [35] and verify that it performs similarly to the original. However, we there are two modification that we do. The official implementation of CADA-VAE 6 appears to 6 https://github.com/edgarschnfld/ CADA-VAE-PyTorch incorrectly implement reparameterization trick of Variation Auto-Encoders [11] . It misses the factor of 0.5 when transforming log(\u03c3 2 ) into \u03c3. Instead, we use \u03c3 = exp(0.5 \u2022 log(\u03c3 2 )). Also, unlike the original authors, instead of using a sum to aggregate loss function over samples in a batch, we use a mean. However, this difference should not be important as it can be compensated for by different learning rate or scaling factors of the loss function.",
"cite_spans": [
{
"start": 65,
"end": 69,
"text": "[35]",
"ref_id": null
},
{
"start": 355,
"end": 359,
"text": "[11]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "F.2 CADA-VAE",
"sec_num": null
},
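{
"text": "The corrected reparameterization step can be written as a short sketch (pseudocode; the tensor names are illustrative and not taken from our code): given encoder outputs mu and logvar = log(\u03c3 2 ), compute sigma = exp(0.5 \u2022 logvar) and sample z = mu + sigma \u2022 eps with eps drawn from N(0, I). Omitting the 0.5 factor, as in the official implementation, uses exp(logvar) = \u03c3 2 as the standard deviation, which effectively samples with variance \u03c3 4 instead of \u03c3 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "F.2 CADA-VAE",
"sec_num": null
},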
{
"text": "Hyperparameters Like for Simple ZSL, the best values of hyperparameters are chosen individually for each of the three different auxiliary data sources. However, for CADA-VAE we use a combination of manual and random search over hyperparameter values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "F.2 CADA-VAE",
"sec_num": null
},
{
"text": "The random search consisted of 42 runs on each of the model variant using the following ranges: batch size \u2208 {32, 128, 256, 512, 1024}, VAE latent space dimension \u2208 {32, 128, 256, 512, 1024}, VAE image features encoder \u2208 { [1560, 1560] , [2048, 1024] (0.1, 30) or with probability 0.3 fixed \u03b2 = 1, cross reconstruction loss factor \u2208 (0.25, 50.), distributional alignment loss factor \u2208 (0.25, 100.). Same as for Simple ZSL, the values were sampled uniformly in the log space of the ranges. The loss function factors were used for linearly increasing loss function coefficients.",
"cite_spans": [
{
"start": 223,
"end": 229,
"text": "[1560,",
"ref_id": null
},
{
"start": 230,
"end": 235,
"text": "1560]",
"ref_id": null
},
{
"start": 238,
"end": 244,
"text": "[2048,",
"ref_id": null
},
{
"start": 245,
"end": 250,
"text": "1024]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 251,
"end": 260,
"text": "(0.1, 30)",
"ref_id": null
}
],
"eq_spans": [],
"section": "F.2 CADA-VAE",
"sec_num": null
},
{
"text": "The dataset, code and trained models available at: https://bujwid.eu/p/zsl-imagenet-wiki",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use enwiki-20200120 dump from https:// dumps.wikimedia.org/backup-index.html (Accessed: 22 Jan 2020).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This describes the ZSL setting. Originally for GZSL, the latent features from the image samples are also used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "The schedules for the coefficients were the same as in the original work, that is: \u03b2 (for scaling VAE KL-divergence loss) was increased from epoch 0 to 93 (or fixed \u03b2 = 1, cross reconstruction loss from epoch 21 to 75, and distribution alignment from epoch 6 to 22. The rest of the hyperparameters were constant for the random search. We used AMSGrad optimizer, learning rate of VAEs of 0.00015. For the linear classifier we use: learning rate of 0.001, Adam optimizer, batch size of 32 and 200 sampled latent space vectors from each class. Additionally, we tried 58 evaluation runs on the variant using word2vec features, and only a subset of those settings on the models using ALBERT-base (21 runs), or ALBERT-xxlarge (19 runs) features.The best configurations of hyperparameter values for each model variant are attached in the supplementary material together with the model weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Synthesized classifiers for zeroshot learning",
"authors": [
{
"first": "Soravit",
"middle": [],
"last": "Changpinyo",
"suffix": ""
},
{
"first": "Wei-Lun",
"middle": [],
"last": "Chao",
"suffix": ""
},
{
"first": "Boqing",
"middle": [],
"last": "Gong",
"suffix": ""
},
{
"first": "Fei",
"middle": [],
"last": "Sha",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soravit Changpinyo, Wei-Lun Chao, Boqing Gong, and Fei Sha. 2016. Synthesized classifiers for zero- shot learning. In Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Canzsl: Cycle-consistent adversarial networks for zero-shot learning from natural language",
"authors": [
{
"first": "Zhi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Yadan",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Zi",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision",
"volume": "",
"issue": "",
"pages": "874--883",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhi Chen, Jingjing Li, Yadan Luo, Zi Huang, and Yang Yang. 2020. Canzsl: Cycle-consistent adversarial networks for zero-shot learning from natural lan- guage. In Proceedings of the IEEE/CVF Winter Con- ference on Applications of Computer Vision, pages 874-883.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Virtex: Learning visual representations from textual annotations",
"authors": [
{
"first": "Karan",
"middle": [],
"last": "Desai",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2006.06666"
]
},
"num": null,
"urls": [],
"raw_text": "Karan Desai and Justin Johnson. 2020. Virtex: Learn- ing visual representations from textual annotations. arXiv preprint arXiv:2006.06666.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Autoencoding variational bayes",
"authors": [
{
"first": "Max",
"middle": [],
"last": "P Kingma Diederik",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Welling",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the International Conference on Learning Representations (ICLR)",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P Kingma Diederik, Max Welling, et al. 2014. Auto- encoding variational bayes. In Proceedings of the International Conference on Learning Representa- tions (ICLR), volume 1.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Write a classifier: Predicting visual classifiers from unstructured text",
"authors": [
{
"first": "Mohamed",
"middle": [],
"last": "Elhoseiny",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Elgammal",
"suffix": ""
},
{
"first": "Babak",
"middle": [],
"last": "Saleh",
"suffix": ""
}
],
"year": 2016,
"venue": "IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)",
"volume": "39",
"issue": "",
"pages": "2539--2553",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohamed Elhoseiny, Ahmed Elgammal, and Babak Saleh. 2016. Write a classifier: Predicting visual classifiers from unstructured text. IEEE Transac- tions on Pattern Analysis and Machine Intelligence (TPAMI), 39(12):2539-2553.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Link the head to the\" beak\": Zero shot learning from noisy text description at part precision",
"authors": [
{
"first": "Mohamed",
"middle": [],
"last": "Elhoseiny",
"suffix": ""
},
{
"first": "Yizhe",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Han",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohamed Elhoseiny, Yizhe Zhu, Han Zhang, and Ahmed Elgammal. 2017. Link the head to the\" beak\": Zero shot learning from noisy text descrip- tion at part precision. In Proceedings of the Confer- ence on Computer Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Devise: A deep visual-semantic embedding model",
"authors": [
{
"first": "Andrea",
"middle": [],
"last": "Frome",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Shlens",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems (NIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Andrea Frome, Greg S Corrado, Jon Shlens, Samy Ben- gio, Jeff Dean, Marc'Aurelio Ranzato, and Tomas Mikolov. 2013. Devise: A deep visual-semantic em- bedding model. In Advances in Neural Information Processing Systems (NIPS).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Making the V in VQA matter: Elevating the role of image understanding in Visual Question Answering",
"authors": [
{
"first": "Yash",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Tejas",
"middle": [],
"last": "Khot",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Summers-Stay",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2017,
"venue": "Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yash Goyal, Tejas Khot, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2017. Making the V in VQA matter: Elevating the role of image under- standing in Visual Question Answering. In Confer- ence on Computer Vision and Pattern Recognition (CVPR).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Albert: A lite bert for self-supervised learning of language representations",
"authors": [
{
"first": "Zhenzhong",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Mingda",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Piyush",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. Albert: A lite bert for self-supervised learning of language representations. In International Con- ference on Learning Representations.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Zero-data learning of new tasks",
"authors": [
{
"first": "Hugo",
"middle": [],
"last": "Larochelle",
"suffix": ""
},
{
"first": "Dumitru",
"middle": [],
"last": "Erhan",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 23rd national conference on Artificial intelligence",
"volume": "2",
"issue": "",
"pages": "646--651",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hugo Larochelle, Dumitru Erhan, and Yoshua Bengio. 2008. Zero-data learning of new tasks. In Proceed- ings of the 23rd national conference on Artificial intelligence-Volume 2, pages 646-651.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Dijkstra-wsa: A graph-based approach to word sense alignment",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Matuschek",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2013,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "151--164",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Matuschek and Iryna Gurevych. 2013. Dijkstra-wsa: A graph-based approach to word sense alignment. Transactions of the Association for Computational Linguistics, 1:151-164.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems, pages 3111-3119.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "The people's web meets linguistic knowledge: automatic sense alignment of wikipedia and wordnet",
"authors": [
{
"first": "Elisabeth",
"middle": [],
"last": "Niemann",
"suffix": ""
},
{
"first": "Iryna",
"middle": [],
"last": "Gurevych",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Ninth International Conference on Computational Semantics",
"volume": "",
"issue": "",
"pages": "205--214",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elisabeth Niemann and Iryna Gurevych. 2011. The people's web meets linguistic knowledge: automatic sense alignment of wikipedia and wordnet. In Pro- ceedings of the Ninth International Conference on Computational Semantics, pages 205-214. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language process- ing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Learning deep representations of fine-grained visual descriptions",
"authors": [
{
"first": "Scott",
"middle": [],
"last": "Reed",
"suffix": ""
},
{
"first": "Zeynep",
"middle": [],
"last": "Akata",
"suffix": ""
},
{
"first": "Honglak",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Bernt",
"middle": [],
"last": "Schiele",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "49--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Scott Reed, Zeynep Akata, Honglak Lee, and Bernt Schiele. 2016. Learning deep representations of fine-grained visual descriptions. In Proceedings of the IEEE Conference on Computer Vision and Pat- tern Recognition, pages 49-58.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "An embarrassingly simple approach to zero-shot learning",
"authors": [
{
"first": "Bernardino",
"middle": [],
"last": "Romera-Paredes",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Torr",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bernardino Romera-Paredes and Philip Torr. 2015. An embarrassingly simple approach to zero-shot learn- ing. In Proceedings of the International Conference on Machine Learning (ICML).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Imagenet large scale visual recognition challenge",
"authors": [
{
"first": "Olga",
"middle": [],
"last": "Russakovsky",
"suffix": ""
},
{
"first": "Jia",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Krause",
"suffix": ""
},
{
"first": "Sanjeev",
"middle": [],
"last": "Satheesh",
"suffix": ""
},
{
"first": "Sean",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Zhiheng",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Andrej",
"middle": [],
"last": "Karpathy",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Khosla",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Bernstein",
"suffix": ""
}
],
"year": 2015,
"venue": "International journal of computer vision",
"volume": "115",
"issue": "3",
"pages": "211--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, An- drej Karpathy, Aditya Khosla, Michael Bernstein, et al. 2015. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3):211-252.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Generalized zero-and few-shot learning via aligned variational autoencoders",
"authors": [
{
"first": "Edgar",
"middle": [],
"last": "Schonfeld",
"suffix": ""
},
{
"first": "Sayna",
"middle": [],
"last": "Ebrahimi",
"suffix": ""
},
{
"first": "Samarth",
"middle": [],
"last": "Sinha",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Darrell",
"suffix": ""
},
{
"first": "Zeynep",
"middle": [],
"last": "Akata",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "8247--8255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Edgar Schonfeld, Sayna Ebrahimi, Samarth Sinha, Trevor Darrell, and Zeynep Akata. 2019. Gener- alized zero-and few-shot learning via aligned vari- ational autoencoders. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recog- nition, pages 8247-8255.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in neural information pro- cessing systems, pages 5998-6008.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Zero-shot learning-a comprehensive evaluation of the good, the bad and the ugly",
"authors": [
{
"first": "Yongqin",
"middle": [],
"last": "Xian",
"suffix": ""
},
{
"first": "Christoph",
"middle": [
"H"
],
"last": "Lampert",
"suffix": ""
},
{
"first": "Bernt",
"middle": [],
"last": "Schiele",
"suffix": ""
},
{
"first": "Zeynep",
"middle": [],
"last": "Akata",
"suffix": ""
}
],
"year": 2018,
"venue": "IEEE transactions on pattern analysis and machine intelligence",
"volume": "41",
"issue": "9",
"pages": "2251--2265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yongqin Xian, Christoph H Lampert, Bernt Schiele, and Zeynep Akata. 2018. Zero-shot learning-a comprehensive evaluation of the good, the bad and the ugly. IEEE transactions on pattern analysis and machine intelligence, 41(9):2251-2265.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "f-vaegan-d2: A feature generating framework for any-shot learning",
"authors": [
{
"first": "Yongqin",
"middle": [],
"last": "Xian",
"suffix": ""
},
{
"first": "Saurabh",
"middle": [],
"last": "Sharma",
"suffix": ""
},
{
"first": "Bernt",
"middle": [],
"last": "Schiele",
"suffix": ""
},
{
"first": "Zeynep",
"middle": [],
"last": "Akata",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "10275--10284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yongqin Xian, Saurabh Sharma, Bernt Schiele, and Zeynep Akata. 2019. f-vaegan-d2: A feature gener- ating framework for any-shot learning. In Proceed- ings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 10275-10284.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Yin and Yang: Balancing and answering binary visual questions",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yash",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Summers-Stay",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2016,
"venue": "Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Zhang, Yash Goyal, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2016. Yin and Yang: Balancing and answering binary visual questions. In Conference on Computer Vision and Pattern Recog- nition (CVPR).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A generative adversarial approach for zero-shot learning from noisy texts",
"authors": [
{
"first": "Yizhe",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Mohamed",
"middle": [],
"last": "Elhoseiny",
"suffix": ""
},
{
"first": "Bingchen",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xi",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Ahmed",
"middle": [],
"last": "Elgammal",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "1004--1013",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yizhe Zhu, Mohamed Elhoseiny, Bingchen Liu, Xi Peng, and Ahmed Elgammal. 2018. A genera- tive adversarial approach for zero-shot learning from noisy texts. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1004-1013.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"text": "Distinctive features of all elephants include a long trunk, tusks, large ear flaps, massive legs, and tough but sensitive skin. (...) Class names: hippopotamus, hippo, river horse, Hippopotamus amphibius Wikipedia title & article: Hippopotamus (...) Hippos are recognisable by their barrelshaped torsos, wide-opening mouths revealing large canine tusks, nearly hairless bodies, columnar legs and large size (...) Class names: purse Wikipedia title & article: Handbag A handbag, commonly known as a purse in North American English, is a handled medium-to-large bag used to carry personal items. (...) Class names: plastic bag Wikipedia title & article: Plastic bag A plastic bag, poly bag, or pouch is a type of container made of thin, flexible, plastic film, Nonwoven fabric, or plastic textile. (...)Training data for seen classes ?",
"num": null
},
"TABREF0": {
"content": "<table><tr><td>Dataset</td><td colspan=\"2\"># images # classes</td><td colspan=\"2\">Used auxiliary data</td><td>Semantic classes</td></tr><tr><td/><td/><td/><td>Other</td><td>Wiki articles Attrib.</td></tr><tr><td>aP/aY</td><td>15k</td><td>32</td><td/><td/><td>person, animals, objects</td></tr><tr><td>AwA1</td><td>30k</td><td>50</td><td/><td/><td>animals</td></tr><tr><td>AwA2</td><td>37k</td><td>50</td><td/><td/><td>animals</td></tr><tr><td>LAD</td><td>78k</td><td>230</td><td/><td/><td>animals, fruits, objects, other</td></tr><tr><td>SUN</td><td>14k</td><td>717</td><td/><td/><td>scenes</td></tr><tr><td>CUB</td><td>11k</td><td colspan=\"2\">200 short image capts. ( \u2020)</td><td>( \u2021)</td><td>birds</td></tr><tr><td>NABirds</td><td>48k</td><td>1,011</td><td/><td>( \u2021)</td><td>birds</td></tr><tr><td>ImageNet-Wiki</td><td>1.3M*</td><td>1,000*</td><td>class names</td><td>(this work)</td></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": ""
},
"TABREF1": {
"content": "<table><tr><td>Auxiliary</td><td/></tr><tr><td>data</td><td>Advantages / Disadvantages</td></tr><tr><td>type</td><td/></tr><tr><td>Attributes</td><td>+ Little noise: attributes selected by experts</td></tr><tr><td/><td>+ Potentially highly discriminative: depend-</td></tr><tr><td/><td>ing on the selected features, might be</td></tr><tr><td/><td>highly informative</td></tr><tr><td/><td>\u2212 Difficult to scale: as the number of classes</td></tr><tr><td/><td>increases, the more attributes are required</td></tr><tr><td/><td>to discriminate</td></tr><tr><td/><td>\u2212 Simple relations: attributes may be condi-</td></tr><tr><td/><td>tional and difficult to quantify (e.g. zebra</td></tr><tr><td/><td>has brown and white stripes at birth)</td></tr><tr><td/><td>\u2212 Expensive annotation: requires domain</td></tr><tr><td/><td>knowledge to define discriminative at-</td></tr><tr><td/><td>tributes</td></tr><tr><td/><td>\u2212 Low flexibility: difficult to aggregate mul-</td></tr><tr><td/><td>tiple independent datasets as attributes not</td></tr><tr><td/><td>standardized</td></tr><tr><td>Word</td><td>+ Very simple approach: little or no labor</td></tr><tr><td>embedding</td><td>required to create data</td></tr><tr><td>of class names</td><td>\u2212 Sensitive to linguistic issues: names may partially overlap or not reflect well seman-</td></tr><tr><td/><td>tic similarity</td></tr><tr><td/><td>\u2212 Little information: the name gives little</td></tr><tr><td/><td>discriminability of classes</td></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": "Advantages and disadvantages of different auxiliary data types for zero-shot image classification."
},
"TABREF2": {
"content": "<table><tr><td/><td/><td colspan=\"2\">Result on val</td></tr><tr><td>Auxiliary data</td><td>Features</td><td colspan=\"2\">top-1 (%) top-5 (%)</td></tr><tr><td>Wiki article</td><td>GloVe</td><td>40.24</td><td>77.96</td></tr><tr><td>Wiki article</td><td>word2vec</td><td>36.14</td><td>72.80</td></tr><tr><td>Wiki article</td><td>ALBERT</td><td>31.29</td><td>64.92</td></tr><tr><td>Wiki abstract</td><td>ALBERT</td><td>22.71</td><td>52.84</td></tr><tr><td>Class names</td><td>ALBERT</td><td>14.19</td><td>30.53</td></tr><tr><td>Gloss</td><td>ALBERT</td><td>19.67</td><td>48.09</td></tr><tr><td colspan=\"2\">Class names &amp; gloss ALBERT</td><td>21.38</td><td>49.75</td></tr><tr><td>Class names</td><td>GloVe</td><td>29.47</td><td>59.26</td></tr><tr><td>Gloss</td><td>GloVe</td><td>22.41</td><td>54.91</td></tr><tr><td colspan=\"2\">Class names &amp; gloss GloVe</td><td>33.26</td><td>65.47</td></tr><tr><td>Class names*</td><td>word2vec*</td><td>27.87</td><td>61.91</td></tr></table>",
"type_str": "table",
"num": null,
"html": null,
"text": "Validation set comparison of different textual inputs and encodings for zero-shot learning on ImageNet. 750 train classes were used as seen and 250 val classes as unseen. Hyperparameters for each feature type were tuned only on Wiki article data. All ALBERT encodings are of type ALBERT (xxlarge). word2vec* refers to the features provided by Xian et al. (2018) typically used for ZSL, which are different from the standard pretrained word2vec model."
},
"TABREF5": {
"content": "<table/>",
"type_str": "table",
"num": null,
"html": null,
"text": "Zero-shot learning results of the models we evaluate on the ImageNet dataset. word2vec* uses features from"
}
}
}
}