{
"paper_id": "N16-1043",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:36:48.290626Z"
},
"title": "Multimodal Semantic Learning from Child-Directed Input",
"authors": [
{
"first": "Angeliki",
"middle": [],
"last": "Lazaridou",
"suffix": "",
"affiliation": {},
"email": "angeliki.lazaridou@unitn.it"
},
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupa\u0142a",
"suffix": "",
"affiliation": {},
"email": "g.chrupala@uvt.nl"
},
{
"first": "Raquel",
"middle": [],
"last": "Fern\u00e1ndez",
"suffix": "",
"affiliation": {},
"email": "raquel.fernandez@uva.nl"
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": "",
"affiliation": {},
"email": "marco.baroni@unitn.it"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Children learn the meaning of words by being exposed to perceptually rich situations (linguistic discourse, visual scenes, etc). Current computational learning models typically simulate these rich situations through impoverished symbolic approximations. In this work, we present a distributed word learning model that operates on child-directed speech paired with realistic visual scenes. The model integrates linguistic and extra-linguistic information (visual and social cues), handles referential uncertainty, and correctly learns to associate words with objects, even in cases of limited linguistic exposure.",
"pdf_parse": {
"paper_id": "N16-1043",
"_pdf_hash": "",
"abstract": [
{
"text": "Children learn the meaning of words by being exposed to perceptually rich situations (linguistic discourse, visual scenes, etc). Current computational learning models typically simulate these rich situations through impoverished symbolic approximations. In this work, we present a distributed word learning model that operates on child-directed speech paired with realistic visual scenes. The model integrates linguistic and extra-linguistic information (visual and social cues), handles referential uncertainty, and correctly learns to associate words with objects, even in cases of limited linguistic exposure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Computational models of word learning typically approximate the perceptual context that learners are exposed to through artificial proxies, e.g., representing a visual scene via a collection of symbols such as cat and dog, signaling the presence of a cat, a dog, etc. (Yu and Ballard, 2007; Fazly et al., 2010 , inter alia). 1 While large amounts of data can be generated in this way, they will not display the complexity and richness of the signal found in the natural environment a child is exposed to. We take a step towards a more realistic setup by introducing a model that operates on naturalistic images of the objects present in a communicative episode. Inspired by recent computational models of meaning (Bruni et al., 2014; Kiros et al., 2014; Silberer and Lapata, 2014) , that integrate distributed linguistic and visual information, we build upon the Multimodal Skip-Gram (MSG) model of Lazaridou et al. (2015) . and enhance it to handle cross-referential uncertainty. Moreover, we extend the cues commonly used in multimodal learning (e.g., objects in the environment) to include social cues (e.g., eyegaze, gestures, body posture, etc.) that reflect speakers' intentions and generally contribute to the unfolding of the communicative situation (Stivers and Sidnell, 2005) . As a first step towards developing full-fleged learning systems that leverage all signals available within a communicative setup, in our extended model we incorporate information regarding the objects that caregivers are holding.",
"cite_spans": [
{
"start": 268,
"end": 290,
"text": "(Yu and Ballard, 2007;",
"ref_id": "BIBREF24"
},
{
"start": 291,
"end": 309,
"text": "Fazly et al., 2010",
"ref_id": "BIBREF4"
},
{
"start": 713,
"end": 733,
"text": "(Bruni et al., 2014;",
"ref_id": "BIBREF2"
},
{
"start": 734,
"end": 753,
"text": "Kiros et al., 2014;",
"ref_id": "BIBREF11"
},
{
"start": 754,
"end": 780,
"text": "Silberer and Lapata, 2014)",
"ref_id": "BIBREF21"
},
{
"start": 899,
"end": 922,
"text": "Lazaridou et al. (2015)",
"ref_id": "BIBREF13"
},
{
"start": 1258,
"end": 1285,
"text": "(Stivers and Sidnell, 2005)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Like the original MSG, our model learns multimodal word embeddings by reading an utterance sequentially and making, for each word, two sets of predictions: (a) the preceding and following words, and (b) the visual representations of objects co-occurring with the utterance. However, unlike Lazaridou et al. (2015), we do not assume we know the right object to be associated with a word. We consider instead a more realistic scenario where multiple words in an utterance co-occur with multiple objects in the corresponding scene. Under this referential uncertainty, the model needs to induce word-object associations as part of learning, relying on current knowledge about word-object affinities as well as on any social clues present in the scene.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Social MSG Model",
"sec_num": "2"
},
{
"text": "Similar to the standard skipgram, the model's parameters are context word embeddings W and tar-get word embeddings W. The model aims at optimizing these parameters with respect to the following multi-task loss function for an utterance w with associated set of objects U :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Social MSG Model",
"sec_num": "2"
},
{
"text": "L(w, U ) = T t=1 ( ling (w, t) + vis (w t , U )) (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Social MSG Model",
"sec_num": "2"
},
{
"text": "where t ranges over the positions in the utterance w, such that w t is t th word. The linguistic loss function is the standard skip-gram loss (Mikolov et al., 2013) . The visual loss is defined as:",
"cite_spans": [
{
"start": 142,
"end": 164,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Social MSG Model",
"sec_num": "2"
},
{
"text": "vis (w t , U ) = S s=1 \u03bb\u03b1(w t , u s )g(w t , u s ) +(1 \u2212 \u03bb)h(u s )g(w t , u s ) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Social MSG Model",
"sec_num": "2"
},
{
"text": "where w t stands for the column of W corresponding to word w t , u s is the vector associated with object U s , and g the penalty function",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Social MSG Model",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "g(w t , u s ) = u max(0, \u03b3 \u2212 cos(w t , u s ) + cos(w t , u )),",
"eq_num": "(3)"
}
],
"section": "Attentive Social MSG Model",
"sec_num": "2"
},
{
"text": "which is small when projections to the visual space w t of words from the utterance are similar to the vectors representing co-occurring objects, and at the same time they are dissimilar to vectors u representing randomly sampled objects. The first term in Eq. 2 is the penalty g weighted by the current wordobject affinity \u03b1, inspired by the \"attention\" of Bahdanau et al. (2015) . If \u03b1 is set to a constant 1, the model treats all words in an utterance as equally relevant for each object. Alternatively it can be used to encourage the model to place more weight on words which it already knows are likely to be related to a given object, by defining it as the (exponentiated) cosine similarity between word and object normalized over all words in the utterance:",
"cite_spans": [
{
"start": 358,
"end": 380,
"text": "Bahdanau et al. (2015)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Social MSG Model",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b1(w t , u s ) = exp(cos(w t , u s )) r exp(cos(w r , u s ))",
"eq_num": "(4)"
}
],
"section": "Attentive Social MSG Model",
"sec_num": "2"
},
{
"text": "The second term of Eq. 2 is the penalty weighted by the social salience h of the object, which could be based on various cues in the scene. In our experiments we set it to 1 if the caregiver holds the object, 0 otherwise. We experiment with three versions of the model. With \u03bb = 1 and \u03b1 frozen to 1, the model reduces to the original MSG, but now trained with referential uncertainty. The Attentive MSG sets \u03bb = 1 and calculates \u03b1(w t , u s ) using Equation 4 (we use the term \"attentive\" to emphasize the fact that, when processing a word, the model will pay more attention to the more relevant objects). Finally, Attentive Social MSG further sets \u03bb = 1 2 , boosting the importance of socially salient objects.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Social MSG Model",
"sec_num": "2"
},
{
"text": "All other hyperparameters are set to the values found by Lazaridou et al. (2015) to be optimal after tuning, except hidden layer size that we set to 200 instead of 300 due to the small corpus (see Section 3). We train the MSG models with stochastic gradient descent for one epoch.",
"cite_spans": [
{
"start": 57,
"end": 80,
"text": "Lazaridou et al. (2015)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Attentive Social MSG Model",
"sec_num": "2"
},
{
"text": "3 The Illustrated Frank et al. Corpus Frank et al. (2007) present a Bayesian crosssituational learning model for simulating early word learning in first language acquisition. The model is tested on a portion of the Rollins section of the CHILDES Database (MacWhinney, 2000) consisting of two transcribed video files (me03 and di06), of approximately 10 minutes each, where a mother and a pre-verbal infant play with a set of toys. By inspecting the video recordings, the authors manually annotated each utterance in the transcripts with a list of object labels (e.g., ring, hat, cow) corresponding to all midsize objects judged to be visible to the infant while the utterance took place, as well as various social cues. The dataset includes a gold-standard lexicon consisting of 36 words paired with 17 object labels (e.g., hat=hat, pig=pig, piggie=pig). 2 Aiming at creating a more realistic version of the original dataset, akin to simulating a real visual scene, we replaced symbolic object labels with actual visual representations of objects. To construct such visual representations, we sample for each object 100 images from the respective ImageNet (Deng et al., 2009) entry, and from each image we extract a 4096-dimensional visual vector using the Caffe toolkit (Jia et al., 2014) , together with the pretrained convolutional neural network of Krizhevsky et al. (2012) . 3 These vectors are finally averaged to obtain a single visual representation of each object. Concerning social cues, since infants rarely follow the caregivers' eye gaze but rather attend to objects held by them (Yu and Smith, 2013) , we include in our corpus only information on whether the caregiver is holding any of the objects present in the scene. Note however that this signal, while informative, can also be ambiguous or even misleading with respect to the actual referents of a statement. Several aspects make IFC a challenging dataset. 
Firstly, we are dealing with language produced in an interactive setting rather than written discourse. For example, compare the first sentence in the Wikipedia entry for hat (\"A hat is a head covering\") to the third utterance in Figure 1 , corresponding to the first occurrence of hat in our corpus. Secondly, there is a large amount of referential uncertainty, with up to 7 objects present per utterance (2 on average) and with only 33% of utterances explicitly including a word directly associated with a possible referent (i.e., not taking into account pronouns). For instance, the first, second and last utterances in Figure 1 do not explicitly mention any of the objects present in the scene. This uncertainty also extends to social cues: only in 23% of utterances does the mother explicitly name an object that she is holding in her hands. Finally, models must induce wordobject associations from minimal exposure to input rather than from large amounts of training data. Indeed, the IFC is extremely small by any standards: 624 utterances making up 2,533 words in total, with 8/37 test words occurring only once. ",
"cite_spans": [
{
"start": 38,
"end": 57,
"text": "Frank et al. (2007)",
"ref_id": "BIBREF5"
},
{
"start": 1156,
"end": 1175,
"text": "(Deng et al., 2009)",
"ref_id": "BIBREF3"
},
{
"start": 1271,
"end": 1289,
"text": "(Jia et al., 2014)",
"ref_id": "BIBREF6"
},
{
"start": 1353,
"end": 1377,
"text": "Krizhevsky et al. (2012)",
"ref_id": "BIBREF12"
},
{
"start": 1380,
"end": 1381,
"text": "3",
"ref_id": null
},
{
"start": 1593,
"end": 1613,
"text": "(Yu and Smith, 2013)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [
{
"start": 2157,
"end": 2165,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 2550,
"end": 2556,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Attentive Social MSG Model",
"sec_num": "2"
},
{
"text": "We follow the evaluation protocol of Frank et al. (2007) and Kievit-Kylar et al. (2013) . Given 37 test words and the corresponding 17 objects (see Table 2 ), all found in the corpus, we rank the objects with respect to each word. A mean Best-F score is then derived by computing, for each word, the top F score across the precision-recall curve, and averaging it across the words. MSG rankings are obtained by directly ordering the visual representations of the objects by cosine similarity to the MSG word vectors. Table 1 reports our results compared to those in earlier studies, all of which did not use actual visual representations of objects but rather arbitrary symbolic IDs. Bayesian CSL is the original Bayesian cross-situational model of Frank et al. (2007) , also including social cues (not limited, like us, to mother's touch). BEAGLE is the best semantic-space result across a range of distributional models and word-object matching methods from Kievit-Kylar et al. (2013) . Their distributional models were trained in a batch mode, and by treating object IDs as words so that standard word-vector-based similarity methods could be used to rank objects with respect to words. Plain MSG is outperforming nearly all earlier approaches by a large margin. The only method bettering it is the BEAGLE+PMI combination of Kievit-Kylar et al. (PMI measures direct co-occurrence of test words and object IDs). The latter was obtained through a grid search of all possible model combinations performed directly on the test set, and relied on a weight parameter optimized on the corpus by assuming access to gold annotation.",
"cite_spans": [
{
"start": 37,
"end": 56,
"text": "Frank et al. (2007)",
"ref_id": "BIBREF5"
},
{
"start": 61,
"end": 87,
"text": "Kievit-Kylar et al. (2013)",
"ref_id": "BIBREF10"
},
{
"start": 750,
"end": 769,
"text": "Frank et al. (2007)",
"ref_id": "BIBREF5"
},
{
"start": 961,
"end": 987,
"text": "Kievit-Kylar et al. (2013)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 148,
"end": 156,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 518,
"end": 525,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "It is thus not comparable to the untuned MSG.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Plain MSG, then, performs remarkably well, even without any mechanism attempting to track wordobject matching across scenes. Still, letting the model pay more attention to the objects currently most tightly associated to a word (AttentiveMSG) brings a large improvement over plain MSG, and a further improvement is brought about by giving more weight to objects touched by the mother (AttentiveSocialMSG). As concrete examples, plain MSG associated the word cow with a pig, whereas AttentiveMSG correctly shifts attention to the cow. In turn, AttentiveSocialMSG associates to the right object several words that AttentiveMSG wrongly pairs with the hand holding them, instead.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "One might fear the better performance of our models might be due to the skip-gram method being superior to the older distributional semantic approaches tested by Kievit-Kylar et al. (2013) , independently of the extra visual information we exploit. In other words, it could be that MSG has simply learned to treat, say, the lamb visual vector as an arbitrary signature, functioning as a semantically opaque ID for the relevant object, without exploiting the visual resemblance between lamb and sheep. In this case, we should obtain similar performance when arbitrarily shuffling the visual vectors across object types (e.g., consistently replacing each occurrence of the lamb visual vector with, say, the hand visual vector). The lower results obtained in this control condition (ASMSG+shuffled visual vector) confirm that our performance boost is largely due to exploitation of genuine visual information.",
"cite_spans": [
{
"start": 162,
"end": 188,
"text": "Kievit-Kylar et al. (2013)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Since our approach is incremental (unlike the vast majority of traditional distributional models that operate on batch mode), it can in principle exploit the fact that the linguistic and visual flows in the corpus are meaningfully ordered (discourse and visual environment will evolve in a coherent manner: a hat appears on the scene, it's there for a while, in the meantime a few statements about hats are uttered, etc.). The dramatic quality drop in the ASMSG+randomized sentences condition, where AttentiveSocialMSG was trained on IFC after randomizing sentence order, confirms the coherent situation flow is crucial to our good performance. Minimal exposure. Given the small size of the input corpus, good performance on the word-object association already counts as indirect evidence that MSG, like children, can learn from small amounts of data. In Table 2 we take a more specific look at this challenge by reporting AttentiveSocialMSG performance on the task of ranking object visual representations for test words that occurred only once in IFC, considering both the standard evaluation set and a much larger confusion set including visual vectors for 5.1K distinct objects (those of Lazaridou et al. (2015) ). Remarkably, in all but one case, the model associates the test word to the right object from the small set, and to either the right object or another relevant visual concept (e.g., a ranch for moocows) when the extended set is considered. The exception is kitty, and even for this word the model ranks the correct object as second in the smaller set, and well above chance for the larger one. Our approach, just like humans (Trueswell et al., 2013) , can often get a word meaning right based on a single exposure to it.",
"cite_spans": [
{
"start": 1192,
"end": 1215,
"text": "Lazaridou et al. (2015)",
"ref_id": "BIBREF13"
},
{
"start": 1643,
"end": 1667,
"text": "(Trueswell et al., 2013)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 855,
"end": 862,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "Generalization. Unlike the earlier models relying on arbitrary IDs, our model is learning to associate words to actual feature-based visual representations. Thus, once the model is trained on IFC, we can test its generalization capabilities to associate known words with new object instances that belong to the right category. We focus on 19 words in our test set corresponding to objects that were normed for visual similarity to other objects by Silberer and Lapata (2014) . Each test word was paired with 40 ImageNet pictures evenly divided between images of the gold object (not used in IFC), of a highly visually similar object, of a mildly visually similar object and of a dissimilar one (for duck: duck, chicken, finch and garage, respectively). The pictures were represented by vectors obtained with the same method outlined in Section 3, and were ranked by similarity to a test word AttentiveSocialMSG representation. Average Precision@10 for retrieving gold object instances is at 62% (chance: 25%). In the majority of cases the top-10 intruders are instances of the most visually related concepts (60% of intruders, vs. 33% expected by chance). For example, the model retrieves pictures of sheep for the word lamb, or bulls for cow. Intriguingly, this points to classic overextension errors that are commonly reported in child language acquisition (Rescorla, 1980) .",
"cite_spans": [
{
"start": 448,
"end": 474,
"text": "Silberer and Lapata (2014)",
"ref_id": "BIBREF21"
},
{
"start": 1359,
"end": 1375,
"text": "(Rescorla, 1980)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "While there is work on learning from multimodal data (Roy, 2000; Yu, 2005, a.o.) as well as work on learning distributed representations from childdirected speech (Baroni et al., 2007; Kievit-Kylar and Jones, 2011, a.o.) , to the best of our knowledge ours is the first method which learns distributed representations from multimodal child-directed data. For example, in comparison to Yu (2005) 's model, our approach (1) induces distributed representations for words, based on linguistic and visual context, and (2) operates entirely on distributed representations through similarity measures without positing a categorical level on which to learn wordsymbol/category-symbol associations. This leads to rich multimodal conceptual representations of words in terms of distributed multimodal features, while in Yu's approach words are simply distributions over categories. It is therefore not clear how Yu's approach could capture phenomena such as predicting appearance from a verbal description or representing abstract words-all tasks that our model is at least in principle well-suited for. Note also that Frank et al. (2007) 's Bayesian model we compare against could be extended to include realistic visual data in a similar vein to Yu's, but it would then have the same limitations.",
"cite_spans": [
{
"start": 53,
"end": 64,
"text": "(Roy, 2000;",
"ref_id": "BIBREF19"
},
{
"start": 65,
"end": 80,
"text": "Yu, 2005, a.o.)",
"ref_id": null
},
{
"start": 163,
"end": 184,
"text": "(Baroni et al., 2007;",
"ref_id": "BIBREF1"
},
{
"start": 185,
"end": 220,
"text": "Kievit-Kylar and Jones, 2011, a.o.)",
"ref_id": null
},
{
"start": 385,
"end": 394,
"text": "Yu (2005)",
"ref_id": "BIBREF26"
},
{
"start": 1109,
"end": 1128,
"text": "Frank et al. (2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Our work is also related to research on reference resolution in dialogue systems, such as Kennington and Schlangen (2015) . However, unlike Kennington and Schlangen, who explicitly train an object recognizer associated with each word of interest, with at least 65 labeled positive training examples per word, our model does not have any comparable form of supervision and our data exhibits much lower frequencies of object and word (co-)occurrence. Moreover, reference resolution is only an aspect of what we do: Besides being able to associate a word with a visual extension, our model is simultaneously learning word representations that allow us to deal with a variety of other tasks-for example, as mentioned above, guessing the appearance of the object denoted by a new word from a purely verbal description, grouping concepts into categories by their similarity, or having both abstract and concrete words represented in the same space.",
"cite_spans": [
{
"start": 90,
"end": 121,
"text": "Kennington and Schlangen (2015)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5"
},
{
"text": "Our very encouraging results suggest that multimodal distributed models are well-suited to simulating human word learning. We think the most pressing issue to move ahead in this direction is to construct larger corpora recording the linguistic and visual environment in which children acquire language, in line with the efforts of the Human Speechome Project (Roy, 2009; Roy et al., 2015) . Having access to such data will enable us to design agents that acquire semantic knowledge by leveraging all available cues present in multimodal communicative setups, such as learning agents that can automatically predict eye-gaze (Recasens * et al., 2015) and incorporate this knowledge into the semantic learning process.",
"cite_spans": [
{
"start": 359,
"end": 370,
"text": "(Roy, 2009;",
"ref_id": "BIBREF20"
},
{
"start": 371,
"end": 388,
"text": "Roy et al., 2015)",
"ref_id": "BIBREF18"
},
{
"start": 623,
"end": 648,
"text": "(Recasens * et al., 2015)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "See K\u00e1d\u00e1r et al. (2015) for a recent review of this line of work, and another learning model using, like ours, real visual input.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://langcog.stanford.edu/materials/ nipsmaterials.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "To match the hidden layer size, we average every k = 4096/200 original non-overlapping visual dimensions into a single dimension.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Marco Marelli for useful advice and Brent Kievit-Kylar for help implementing the Best-F measure. We acknowledge the European Network on Integrating Vision and Language for a Short-Term Scientific Mission grant, awarded to Raquel Fern\u00e1ndez to visit the University of Trento.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of ICLR Conference Track",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proceedings of ICLR Conference Track, San Diego, CA. Published online: http://www.iclr.cc/doku.php?id= iclr2015:main.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "ISA meets Lara: An incremental word space model for cognitively plausible simulations of semantic learning",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Lenci",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Onnis",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the ACL Workshop on Cognitive Aspects of Computational Language Acquisition",
"volume": "",
"issue": "",
"pages": "49--56",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni, Alessandro Lenci, and Luca Onnis. 2007. ISA meets Lara: An incremental word space model for cognitively plausible simulations of semantic learning. In Proceedings of the ACL Workshop on Cognitive As- pects of Computational Language Acquisition, pages 49-56.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Multimodal distributional semantics",
"authors": [
{
"first": "Elia",
"middle": [],
"last": "Bruni",
"suffix": ""
},
{
"first": "Nam",
"middle": [
"Khanh"
],
"last": "Tran",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Artificial Intelligence Research",
"volume": "49",
"issue": "",
"pages": "1--47",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Elia Bruni, Nam Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Arti- ficial Intelligence Research, 49:1-47.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Imagenet: A large-scale hierarchical image database",
"authors": [
{
"first": "Jia",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Lia-Ji",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Fei-Fei",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of CVPR",
"volume": "",
"issue": "",
"pages": "248--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jia Deng, Wei Dong, Richard Socher, Lia-Ji Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchi- cal image database. In Proceedings of CVPR, pages 248-255, Miami Beach, FL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A probabilistic computational model of cross-situational word learning",
"authors": [
{
"first": "Afsaneh",
"middle": [],
"last": "Fazly",
"suffix": ""
},
{
"first": "Afra",
"middle": [],
"last": "Alishahi",
"suffix": ""
},
{
"first": "Suzanne",
"middle": [],
"last": "Stevenson",
"suffix": ""
}
],
"year": 2010,
"venue": "Cognitive Science",
"volume": "34",
"issue": "",
"pages": "1017--1063",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Afsaneh Fazly, Afra Alishahi, and Suzanne Steven- son. 2010. A probabilistic computational model of cross-situational word learning. Cognitive Science, 34:1017-1063.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Bayesian framework for cross-situational word-learning",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Goodman",
"suffix": ""
},
{
"first": "Joshua",
"middle": [],
"last": "Tenenbaum",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "457--464",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Frank, Noah Goodman, and Joshua Tenenbaum. 2007. A Bayesian framework for cross-situational word-learning. In Proceedings of NIPS, pages 457- 464, Vancouver, Canada.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Caffe: Convolutional architecture for fast feature embedding",
"authors": [
{
"first": "Yangqing",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Shelhamer",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Donahue",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Karayev",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Long",
"suffix": ""
},
{
"first": "Ross",
"middle": [],
"last": "Girshick",
"suffix": ""
},
{
"first": "Sergio",
"middle": [],
"last": "Guadarrama",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Darrell",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1408.5093"
]
},
"num": null,
"urls": [],
"raw_text": "Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. 2014. Caffe: Convo- lutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning word meanings from images of natural scenes",
"authors": [
{
"first": "Akos",
"middle": [],
"last": "K\u00e1d\u00e1r",
"suffix": ""
},
{
"first": "Afra",
"middle": [],
"last": "Alishahi",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Chrupa\u0142a",
"suffix": ""
}
],
"year": 2015,
"venue": "Traitement Automatique des Langues",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akos K\u00e1d\u00e1r, Afra Alishahi, and Grzegorz Chrupa\u0142a. 2015. Learning word meanings from images of natural scenes. Traitement Automatique des Langues. In press, preprint available at http://grzegorz.chrupala.me/papers/tal-2015.pdf.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Simple learning and compositional application of perceptually grounded word meanings for incremental reference resolution",
"authors": [
{
"first": "Casey",
"middle": [],
"last": "Kennington",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Schlangen",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Conference for the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Casey Kennington and David Schlangen. 2015. Simple learning and compositional application of perceptually grounded word meanings for incremental reference resolution. In Proceedings of the Conference for the Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "The Semantic Pictionary project",
"authors": [
{
"first": "Brent",
"middle": [],
"last": "Kievit-Kylar",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of CogSci",
"volume": "",
"issue": "",
"pages": "2229--2234",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brent Kievit-Kylar and Michael Jones. 2011. The Semantic Pictionary project. In Proceedings of CogSci, pages 2229-2234, Austin, TX.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Naturalistic word-concept pair learning with semantic spaces",
"authors": [
{
"first": "Brent",
"middle": [],
"last": "Kievit-Kylar",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Kachergis",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Jones",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of CogSci",
"volume": "",
"issue": "",
"pages": "2716--2721",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brent Kievit-Kylar, George Kachergis, and Michael Jones. 2013. Naturalistic word-concept pair learning with semantic spaces. In Proceedings of CogSci, pages 2716-2721, Berlin, Germany.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Unifying visual-semantic embeddings with multimodal neural language models",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zemel",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the NIPS Deep Learning and Representation Learning Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryan Kiros, Ruslan Salakhutdinov, and Richard Zemel. 2014. Unifying visual-semantic embeddings with multimodal neural language models. In Proceedings of the NIPS Deep Learning and Representation Learning Workshop, Montreal, Canada. Published online: http://www.dlworkshop.org/accepted-papers.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "ImageNet classification with deep convolutional neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of NIPS",
"volume": "",
"issue": "",
"pages": "1097--1105",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton. 2012. ImageNet classification with deep convolutional neural networks. In Proceedings of NIPS, pages 1097-1105, Lake Tahoe, Nevada.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Combining language and vision with a multimodal skip-gram model",
"authors": [
{
"first": "Angeliki",
"middle": [],
"last": "Lazaridou",
"suffix": ""
},
{
"first": "Nghia The",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of NAACL",
"volume": "",
"issue": "",
"pages": "153--163",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angeliki Lazaridou, Nghia The Pham, and Marco Baroni. 2015. Combining language and vision with a multimodal skip-gram model. In Proceedings of NAACL, pages 153-163, Denver, CO.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The CHILDES Project: Tools for analyzing talk",
"authors": [
{
"first": "Brian",
"middle": [],
"last": "MacWhinney",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brian MacWhinney. 2000. The CHILDES Project: Tools for analyzing talk. Lawrence Erlbaum Associates, 3rd edition.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Efficient estimation of word representations in vector space",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. http://arxiv.org/abs/1301.3781/.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Where are they looking?",
"authors": [
{
"first": "Adria",
"middle": [],
"last": "Recasens",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Khosla",
"suffix": ""
},
{
"first": "Carl",
"middle": [],
"last": "Vondrick",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Torralba",
"suffix": ""
}
],
"year": 2015,
"venue": "Advances in Neural Information Processing Systems (NIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adria Recasens*, Aditya Khosla*, Carl Vondrick, and Antonio Torralba. 2015. Where are they looking? In Advances in Neural Information Processing Systems (NIPS). * indicates equal contribution.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Overextension in early language development",
"authors": [
{
"first": "Leslie",
"middle": [],
"last": "Rescorla",
"suffix": ""
}
],
"year": 1980,
"venue": "Journal of Child Language",
"volume": "7",
"issue": "2",
"pages": "321--335",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leslie Rescorla. 1980. Overextension in early language development. Journal of Child Language, 7(2):321-335.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Predicting the birth of a spoken word",
"authors": [
{
"first": "Brandon",
"middle": [
"C"
],
"last": "Roy",
"suffix": ""
},
{
"first": "Michael",
"middle": [
"C"
],
"last": "Frank",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "DeCamp",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Miller",
"suffix": ""
},
{
"first": "Deb",
"middle": [],
"last": "Roy",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the National Academy of Sciences",
"volume": "112",
"issue": "41",
"pages": "12663--12668",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brandon C. Roy, Michael C. Frank, Philip DeCamp, Matthew Miller, and Deb Roy. 2015. Predicting the birth of a spoken word. Proceedings of the National Academy of Sciences, 112(41):12663-12668.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A computational model of word learning from multimodal sensory input",
"authors": [
{
"first": "Deb",
"middle": [],
"last": "Roy",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the International Conference of Cognitive Modeling (ICCM2000)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deb Roy. 2000. A computational model of word learning from multimodal sensory input. In Proceedings of the International Conference of Cognitive Modeling (ICCM2000), Groningen, Netherlands.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "New horizons in the study of child language acquisition",
"authors": [
{
"first": "Deb",
"middle": [],
"last": "Roy",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Interspeech",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deb Roy. 2009. New horizons in the study of child language acquisition. In Proceedings of Interspeech.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Learning grounded meaning representations with autoencoders",
"authors": [
{
"first": "Carina",
"middle": [],
"last": "Silberer",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "721--732",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Carina Silberer and Mirella Lapata. 2014. Learning grounded meaning representations with autoencoders. In Proceedings of ACL, pages 721-732, Baltimore, Maryland.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Introduction: Multimodal interaction",
"authors": [
{
"first": "Tanya",
"middle": [],
"last": "Stivers",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Sidnell",
"suffix": ""
}
],
"year": 2005,
"venue": "Semiotica",
"volume": "",
"issue": "",
"pages": "1--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tanya Stivers and Jack Sidnell. 2005. Introduction: Multimodal interaction. Semiotica, pages 1-20.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Propose but verify: Fast mapping meets cross-situational word learning",
"authors": [
{
"first": "John",
"middle": [],
"last": "Trueswell",
"suffix": ""
},
{
"first": "Tamara",
"middle": [],
"last": "Medina",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Hafri",
"suffix": ""
},
{
"first": "Lila",
"middle": [],
"last": "Gleitman",
"suffix": ""
}
],
"year": 2013,
"venue": "Cognitive Psychology",
"volume": "66",
"issue": "1",
"pages": "126--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John Trueswell, Tamara Medina, Alon Hafri, and Lila Gleitman. 2013. Propose but verify: Fast mapping meets cross-situational word learning. Cognitive Psychology, 66(1):126-156.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A unified model of early word learning: Integrating statistical and social cues",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Dana",
"middle": [
"H"
],
"last": "Ballard",
"suffix": ""
}
],
"year": 2007,
"venue": "Neurocomputing",
"volume": "70",
"issue": "",
"pages": "2149--2165",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Yu and Dana H. Ballard. 2007. A unified model of early word learning: Integrating statistical and social cues. Neurocomputing, 70(13-15):2149-2165.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Joint attention without gaze following: human infants and their parents coordinate visual attention to objects through eye-hand coordination",
"authors": [
{
"first": "Chen",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Linda",
"middle": [
"B"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2013,
"venue": "PLoS ONE",
"volume": "8",
"issue": "11",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen Yu and Linda B. Smith. 2013. Joint attention without gaze following: human infants and their parents coordinate visual attention to objects through eye-hand coordination. PLoS ONE, 8(11).",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "The emergence of links between lexical acquisition and object categorization: A computational study",
"authors": [
{
"first": "C",
"middle": [],
"last": "Yu",
"suffix": ""
}
],
"year": 2005,
"venue": "Connection Science",
"volume": "17",
"issue": "3",
"pages": "381--397",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Yu. 2005. The emergence of links between lexical acquisition and object categorization: A computational study. Connection Science, 17(3):381-397.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "look like with the hat on do i look pretty good with the hat on"
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Fragment of the IFC corpus where symbolic labels ring and hat have been replaced by real images. Red frames mark objects being touched by the caregiver."
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Figure 1 exemplifies our version of the corpus, the Illustrated Frank et al. Corpus (IFC)."
},
"TABREF0": {
"num": null,
"content": "<table><tr><td>Model</td><td>Best-F</td></tr><tr><td>MSG</td><td>.64 (.04)</td></tr><tr><td>AttentiveMSG</td><td>.70 (.04)</td></tr><tr><td>AttentiveSocialMSG</td><td>.73 (.03)</td></tr><tr><td colspan=\"2\">ASMSG+shuffled visual vectors .65 (.06)</td></tr><tr><td colspan=\"2\">ASMSG+randomized sentences .59 (.03)</td></tr><tr><td>BEAGLE</td><td>.55</td></tr><tr><td>PMI</td><td>.53</td></tr><tr><td>Bayesian CSL</td><td>.54</td></tr><tr><td>BEAGLE+PMI</td><td>.83</td></tr></table>",
"html": null,
"text": "Best-F results for the MSG variations and alternative models on word-object matching. For all MSG models, we report Best-F mean and standard deviation over 100 iterations.",
"type_str": "table"
},
"TABREF2": {
"num": null,
"content": "<table/>",
"html": null,
"text": "Test words occurring only once in IFC, together with corresponding gold objects, AttentiveSocialMSG top visual neighbours among the test items and in a larger 5.1K-objects set, and ranks of gold object in the two confusion sets.",
"type_str": "table"
}
}
}
}