| { |
| "paper_id": "2021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T14:58:09.166150Z" |
| }, |
| "title": "The impact of answers in referential visual dialog", |
| "authors": [ |
| { |
| "first": "Mauricio", |
| "middle": [], |
| "last": "Mazuecos", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "CONICET", |
| "location": { |
| "country": "Argentina" |
| } |
| }, |
| "email": "mmazuecos@mi.unc.edu.ar" |
| }, |
| { |
| "first": "Patrick", |
| "middle": [], |
| "last": "Blackburn", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Roskilde", |
| "location": { |
| "country": "Denmark" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Luciana", |
| "middle": [], |
| "last": "Benotti", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "CONICET", |
| "location": { |
| "country": "Argentina" |
| } |
| }, |
| "email": "luciana.benotti@unc.edu.ar" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "In the visual dialog task GuessWhat?! two players maintain a dialog in order to identify a secret object in an image. Computationally, this is modeled using a question generation module and a guesser module for the questioner role and an answering model, the oracle, to answer the generated questions. This raises a question: what's the risk of having an imperfect oracle model?. Here we present work in progress on the study of the impact of different oracles in human generated questions in GuessWhat?!. We show that having access to better quality answers has a direct impact on the guessing task for human dialog and argue that better answers could help train better question generation models.", |
| "pdf_parse": { |
| "paper_id": "2021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "In the visual dialog task GuessWhat?! two players maintain a dialog in order to identify a secret object in an image. Computationally, this is modeled using a question generation module and a guesser module for the questioner role and an answering model, the oracle, to answer the generated questions. This raises a question: what's the risk of having an imperfect oracle model?. Here we present work in progress on the study of the impact of different oracles in human generated questions in GuessWhat?!. We show that having access to better quality answers has a direct impact on the guessing task for human dialog and argue that better answers could help train better question generation models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Collaborative reference resolution is a task that has attracted a lot of attention in recent years with the introduction of GuessWhat?! . GuessWhat?! is a cooperative two-player referential visual dialogue game. One player (the Oracle) is assigned an object in an image and the other player (the Questioner) has to guess the referent by asking yes/no questions. An example of a dialog in the GuessWhat?! dataset can be seen in Figure 1 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 427, |
| "end": 435, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this task, much work has been done on the question generation policies Lee et al., 2018; Shekhar et al., 2019; Pang and Wang, 2020b,a) , the linguistic capabilities of these questioner models (Shukla et al., 2019a ) and on improving guessing models (Pang and Wang, 2020a) .", |
| "cite_spans": [ |
| { |
| "start": 74, |
| "end": 91, |
| "text": "Lee et al., 2018;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 92, |
| "end": 113, |
| "text": "Shekhar et al., 2019;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 114, |
| "end": 137, |
| "text": "Pang and Wang, 2020b,a)", |
| "ref_id": null |
| }, |
| { |
| "start": 195, |
| "end": 216, |
| "text": "(Shukla et al., 2019a", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 252, |
| "end": 274, |
| "text": "(Pang and Wang, 2020a)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Most of the work on the questioner models was performed employing a simple oracle model to play the GuessWhat?! game. This oracle model was too simple and it struggled to answer questions that asked for anything beyond the available annotation information, thus pushing models to produce those type of questions ; a new SOTA for the oracle task based on LXMERT (Tan and Bansal, 2019) was proposed using this approach. It seems reasonable to investigate the impact on the question generation policy learned by questioner models and on task success. In this work we will focus on the latter 1 . We will show the impact of having access to better answers in the guessing task by evaluating a guesser model with questions from the human corpus that were answered by different oracle models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In the next section we review previous work. Then we explain the GuessWhat?! task, the models, and the experiments. Finally we argue that having access to better answers could be the difference between success and failure in the guessing task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "GuessWhat?! (de Vries et al., 2017) is a cooperative guessing game in which two players hold a dialog intended to identify a secret object in a picture. We call this object the target object. The two players have different roles: the Questioner has to pose questions and guess the object at the end of the dialog, and the Oracle has to answer these questions. The corpus comprises more than 155K dialogs with more than 821K question-answer pairs made across 67K images extracted from the MSCOCO dataset (Lin et al., 2014) .", |
| "cite_spans": [ |
| { |
| "start": 503, |
| "end": 521, |
| "text": "(Lin et al., 2014)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "GuessWhat?! is a simplification of the collaborative process of referring studied by Clark and Wilkes-Gibbs (1986) . The process of multimodal reference resolution had received attention from the vision and computational linguistics communities (Pineda and Garza, 1997; Schlangen et al., 2009) . The task requires both reference resolution capabilities and the ability to ground the language expressions to objects in the real world (Roy, 2005) .", |
| "cite_spans": [ |
| { |
| "start": 95, |
| "end": 114, |
| "text": "Wilkes-Gibbs (1986)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 245, |
| "end": 269, |
| "text": "(Pineda and Garza, 1997;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 270, |
| "end": 293, |
| "text": "Schlangen et al., 2009)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 433, |
| "end": 444, |
| "text": "(Roy, 2005)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In this task, much work has been done on training question generation policies to perform the questioner role Abbasnejad et al., 2019; Shukla et al., 2019b; Shekhar et al., 2019; Pang and Wang, 2020b ) with different levels of task success at guessing the target objects. Most of the approaches receive some sort of reward that weights to some extent the task success at the game. Being a two player game, the task success will be conditioned by both the questioner performance and the oracle performance. Most works use the same oracle model proposed with the GuessWhat?! dataset (de Vries et al., 2017) 2 .", |
| "cite_spans": [ |
| { |
| "start": 110, |
| "end": 134, |
| "text": "Abbasnejad et al., 2019;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 135, |
| "end": 156, |
| "text": "Shukla et al., 2019b;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 157, |
| "end": 178, |
| "text": "Shekhar et al., 2019;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 179, |
| "end": 199, |
| "text": "Pang and Wang, 2020b", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We previously showed that the baseline oracle proposed by de Vries et al. (2017) does not have the same performance for human and model generated questions, and that performance was linked to the type of question . Most RLbased models would not ask for information other than the type of object and its location, exactly the two manually annotated features that the oracle receives. As a result the grammatical and lexical diversity of the generated questions is poor.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Following this line, proposed a more complex oracle model based on the multimodal transformer LXMERT (Tan and Bansal, 2019) . This model achieved SOTA for the Oracle task and proved to perform better across most question types except for object questions (due to not having access to the gold standard category label for the target).", |
| "cite_spans": [ |
| { |
| "start": 101, |
| "end": 123, |
| "text": "(Tan and Bansal, 2019)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The impact of answers has previously been noted for the VisDial task (Das et al., 2017) . Guo et al. (2019) show that a visual dialog model with integration of better answers achieves better performance in the Visual Dialog Challenge 2018. We show the impact the answer has for the GuessWhat?! task.", |
| "cite_spans": [ |
| { |
| "start": 69, |
| "end": 87, |
| "text": "(Das et al., 2017)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 90, |
| "end": 107, |
| "text": "Guo et al. (2019)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Previous Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In the GuessWhat?! game there are two roles: the Questioner, that makes questions and guesses the target object at the end of the dialog, and the Oracle that answers those questions. At the beginning of a game, the oracle is assigned an target object in the image and the questioner has to pose yes/no questions in order to identify the target. An usual computational modeling for each player divides the Questioner role into two components: the Question Generator and the Guesser. In our experiments we make use of a guesser model and oracle models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "GuessWhat?! task and Models", |
| "sec_num": "3" |
| }, |
| { |
| "text": "For the Oracle models we used the baseline model (de Vries et al., 2017) as well as the LXMERT based model .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "GuessWhat?! task and Models", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The baseline models (Figure 2a ) use a set of features passed through a multilayer perceptron (MLP). We consider a subset of the features according to previous work (de Vries et al., 2017): The question (Q), the spatial information of the target (Sp), the target object's category extracted from the MSCOCO (Ca) and the visual features of crop of the target (Cr), extracted with a ResNet152 (He et al., 2016) .", |
| "cite_spans": [ |
| { |
| "start": 391, |
| "end": 408, |
| "text": "(He et al., 2016)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 20, |
| "end": 30, |
| "text": "(Figure 2a", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "GuessWhat?! task and Models", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The LXMERT based model (Figure 2b ) receives the question, the visual features of 36 regions of the image (the same as in (Anderson et al., 2018) ) and the crop of the target inserted in the 36th position of the regions. Notice that this model has no access to the category's label for the target object.", |
| "cite_spans": [ |
| { |
| "start": 122, |
| "end": 145, |
| "text": "(Anderson et al., 2018)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 23, |
| "end": 33, |
| "text": "(Figure 2b", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "GuessWhat?! task and Models", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We use the Guesser model ( Figure 3) proposed by Shekhar et al. (2019) . This guesser model adds an encoding of both vision (the image) and language modalities (the full history). A single MLP processes each object's spatial information and category and outputs a score for each object. These concatenated scores combined with the vision and language encoding output the probability of each object being the target used to make the final guess. ", |
| "cite_spans": [ |
| { |
| "start": 49, |
| "end": 70, |
| "text": "Shekhar et al. (2019)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 27, |
| "end": 36, |
| "text": "Figure 3)", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "GuessWhat?! task and Models", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In this section we show the results of our experiments. Our experiments were performed using a single 1080ti GPU. We retrained all of our models following the procedure stated in their respective paper using the publicly available source code. For our experiments we take the human dialogs from the test set of GuessWhat?!. We keep the human posed questions but change the answers with the ones given by the different oracle models we employed. We stick to the successful games in the test set, as failed and incomplete games tend to contain malformed questions, misunderstandings or are not finished and, thus, guessing is not possible with the information available.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We measure the task success of the guesser at guessing the target object in these resulting dialogs. In Table 1 have a strong performance, just points below the more complex model based on LXMERT. To test the impact of the target category (Ca) we discard it (Q+Sp) or change it for the crop of the target (Q+Sp+Cr). Our hypothesis was that the category was playing a heavy role on the performance of the oracle models. The first and second row of Table 1 show that leaving the category out reduces up to 13 points of task success on the guesser. Replacing the category with the crop improves the performance adding information about the target but still is almost 7 points below the Q+Sp+Ca baseline. The LXMERT model, despite not having the gold stardard labels for the categories achieves a similar and even higher performance when paired with the guesser model. This shows that the LXMERT oracle can be used in settings with no annotation for the objects categories.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 104, |
| "end": 111, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 447, |
| "end": 455, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In Figure 4 we see an example of a dialog and the answers given by different oracles. In this example, models other than LXMERT miss at least one answers and fail at guessing. This shows that missing a single answer can end up in failure at guessing Following this we compute the percentage of incorrect answers (IAns) and the average amount of them per failed game (Avg IAns/failure) for each model. In Table 2 we see the result of this analysis. There is a negative correlation (\u22120.93 Pearson's R) between task success and IAns as well as for Avg IAns/failure (\u22120.87 Pearson's R).This suggest that future evaluated models should take into account the oracle's performance when it comes to gameplay between agents. The oracle could be hindering the real potential of questioner and guesser models as they would learn to either exploit the oracle's annotation or risk failure.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 3, |
| "end": 11, |
| "text": "Figure 4", |
| "ref_id": null |
| }, |
| { |
| "start": 404, |
| "end": 411, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In this paper we described work in progress on studying the impact of having access to better answers in GuessWhat?!. Dialogs with better answer quality had a higher task success when sent to an automatic guesser. The task success when using the widely used Q+Sp+Ca and the LXMERT oracles are comparable although LXMERT does not require the object manual annotations. Task success drops when the gold standard category label for the target is not a feature. This suggest that the MLP oracles rely strongly on the manual annotations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The next step will be to investigate the impact of the answer quality on the quality of the generated questions. In order to do so we will perform a similar analysis for the different questioner and guesser models proposed in the literature. We hypothesize that this could lead the question generation policies to have richer linguistic capabilities and to learn better strategies for identifying the target object.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The code for reproducing this paper is available at github.com/mmazuecos/ReInAct2021-Impact-of-answers", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We could not confirm nor deny thatAbbasnejad et al. (2019) employed that Oracle mode. They stated that they kept the classical GuessWhat?! setup.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Whats to know? uncertainty as a guide to asking goal-oriented questions", |
| "authors": [ |
| { |
| "first": "Ehsan", |
| "middle": [], |
| "last": "Abbasnejad", |
| "suffix": "" |
| }, |
| { |
| "first": "Qi", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Javen", |
| "middle": [], |
| "last": "Shi", |
| "suffix": "" |
| }, |
| { |
| "first": "Anton", |
| "middle": [], |
| "last": "van den Hengel", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ehsan Abbasnejad, Qi Wu, Javen Shi, and Anton van den Hengel. 2019. Whats to know? uncertainty as a guide to asking goal-oriented questions. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR).", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Bottom-up and top-down attention for image captioning and visual question answering", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Anderson", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaodong", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Buehler", |
| "suffix": "" |
| }, |
| { |
| "first": "Damien", |
| "middle": [], |
| "last": "Teney", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Gould", |
| "suffix": "" |
| }, |
| { |
| "first": "Lei", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In CVPR.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Referring as a collaborative process", |
| "authors": [ |
| { |
| "first": "Herbert", |
| "middle": [ |
| "H." |
| ], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Deanna", |
| "middle": [], |
| "last": "Wilkes-Gibbs", |
| "suffix": "" |
| } |
| ], |
| "year": 1986, |
| "venue": "Cognition", |
| "volume": "22", |
| "issue": "1", |
| "pages": "1--39", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Herbert H. Clark and Deanna Wilkes-Gibbs. 1986. Referring as a collaborative process. Cognition, 22(1):1 -39.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Visual Dialog", |
| "authors": [ |
| { |
| "first": "Abhishek", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "Satwik", |
| "middle": [], |
| "last": "Kottur", |
| "suffix": "" |
| }, |
| { |
| "first": "Khushi", |
| "middle": [], |
| "last": "Gupta", |
| "suffix": "" |
| }, |
| { |
| "first": "Avi", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "Deshraj", |
| "middle": [], |
| "last": "Yadav", |
| "suffix": "" |
| }, |
| { |
| "first": "Jos\u00e9", |
| "middle": [ |
| "M.F." |
| ], |
| "last": "Moura", |
| "suffix": "" |
| }, |
| { |
| "first": "Devi", |
| "middle": [], |
| "last": "Parikh", |
| "suffix": "" |
| }, |
| { |
| "first": "Dhruv", |
| "middle": [], |
| "last": "Batra", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos\u00e9 M.F. Moura, Devi Parikh, and Dhruv Batra. 2017. Visual Dialog. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Imagequestion-answer synergistic network for visual dialog", |
| "authors": [ |
| { |
| "first": "Dalu", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| }, |
| { |
| "first": "Chang", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Dacheng", |
| "middle": [], |
| "last": "Tao", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)", |
| "volume": "", |
| "issue": "", |
| "pages": "10426--10435", |
| "other_ids": { |
| "DOI": [ |
| "10.1109/CVPR.2019.01068" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dalu Guo, Chang Xu, and Dacheng Tao. 2019. Image- question-answer synergistic network for visual dia- log. In 2019 IEEE/CVF Conference on Computer Vi- sion and Pattern Recognition (CVPR), pages 10426- 10435.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Deep residual learning for image recognition", |
| "authors": [ |
| { |
| "first": "Kaiming", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiangyu", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Shaoqing", |
| "middle": [], |
| "last": "Ren", |
| "suffix": "" |
| }, |
| { |
| "first": "Jian", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "2016 IEEE Conference on Computer Vision and Pattern Recognition", |
| "volume": "", |
| "issue": "", |
| "pages": "770--778", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In 2016 IEEE Conference on Computer Vi- sion and Pattern Recognition, CVPR 2016, Las Ve- gas, NV, USA, June 27-30, 2016, pages 770-778. IEEE Computer Society.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Answerer in questioner's mind: Information theoretic approach to goal-oriented visual dialog", |
| "authors": [ |
| { |
| "first": "Sang-Woo", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Yu-Jung", |
| "middle": [], |
| "last": "Heo", |
| "suffix": "" |
| }, |
| { |
| "first": "Byoung-Tak", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Advances in Neural Information Processing Systems 31", |
| "volume": "", |
| "issue": "", |
| "pages": "2579--2589", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sang-Woo Lee, Yu-Jung Heo, and Byoung-Tak Zhang. 2018. Answerer in questioner's mind: Information theoretic approach to goal-oriented visual dialog. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems 31, pages 2579-2589. Curran Associates, Inc.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Microsoft coco: Common objects in context. Cite arxiv:1405.0312Comment: 1) updated annotation pipeline description and figures; 2) added new section describing datasets splits", |
| "authors": [ |
| { |
| "first": "Tsung-Yi", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Maire", |
| "suffix": "" |
| }, |
| { |
| "first": "Serge", |
| "middle": [], |
| "last": "Belongie", |
| "suffix": "" |
| }, |
| { |
| "first": "Lubomir", |
| "middle": [], |
| "last": "Bourdev", |
| "suffix": "" |
| }, |
| { |
| "first": "Ross", |
| "middle": [], |
| "last": "Girshick", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Hays", |
| "suffix": "" |
| }, |
| { |
| "first": "Pietro", |
| "middle": [], |
| "last": "Perona", |
| "suffix": "" |
| }, |
| { |
| "first": "Deva", |
| "middle": [], |
| "last": "Ramanan", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "Lawrence" |
| ], |
| "last": "Zitnick", |
| "suffix": "" |
| }, |
| { |
| "first": "Piotr", |
| "middle": [], |
| "last": "Doll\u00e1r", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tsung-Yi Lin, Michael Maire, Serge Belongie, Lubomir Bourdev, Ross Girshick, James Hays, Pietro Perona, Deva Ramanan, C. Lawrence Zitnick, and Piotr Doll\u00e1r. 2014. Microsoft coco: Common objects in context. Cite arxiv:1405.0312Comment: 1) updated annotation pipeline description and fig- ures; 2) added new section describing datasets splits;", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "On the role of effective and referring questions in GuessWhat?", |
| "authors": [ |
| { |
| "first": "Mauricio", |
| "middle": [], |
| "last": "Mazuecos", |
| "suffix": "" |
| }, |
| { |
| "first": "Alberto", |
| "middle": [], |
| "last": "Testoni", |
| "suffix": "" |
| }, |
| { |
| "first": "Raffaella", |
| "middle": [], |
| "last": "Bernardi", |
| "suffix": "" |
| }, |
| { |
| "first": "Luciana", |
| "middle": [], |
| "last": "Benotti", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "! In Proceedings of the First Workshop on Advances in Language and Vision Research", |
| "volume": "", |
| "issue": "", |
| "pages": "19--25", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mauricio Mazuecos, Alberto Testoni, Raffaella Bernardi, and Luciana Benotti. 2020. On the role of effective and referring questions in GuessWhat?! In Proceedings of the First Workshop on Advances in Language and Vision Research, pages 19-25, Online. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Guessing state tracking for visual dialogue", |
| "authors": [ |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Pang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaojie", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "ECCV", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wei Pang and Xiaojie Wang. 2020a. Guessing state tracking for visual dialogue. In ECCV.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "The Thirty-Second Innovative Applications of Artificial Intelligence Conference", |
| "authors": [ |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Pang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaojie", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence", |
| "volume": "2020", |
| "issue": "", |
| "pages": "11831--11838", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wei Pang and Xiaojie Wang. 2020b. Visual dialogue state tracking for question generation. In The Thirty- Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Appli- cations of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 11831- 11838. AAAI Press.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "A model for multimodal reference resolution", |
| "authors": [ |
| { |
| "first": "Luis", |
| "middle": [ |
| "A." |
| ], |
| "last": "Pineda", |
| "suffix": "" |
| }, |
| { |
| "first": "E.", |
| "middle": [ |
| "Gabriela" |
| ], |
| "last": "Garza", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Referring Phenomena in a Multimedia Context and their Computational Treatment", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Luis. A. Pineda and E. Gabriela Garza. 1997. A model for multimodal reference resolution. In Referring Phenomena in a Multimedia Context and their Com- putational Treatment.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Grounding words in perception and action: computational insights", |
| "authors": [ |
| { |
| "first": "Deb", |
| "middle": [], |
| "last": "Roy", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Trends in Cognitive Sciences", |
| "volume": "9", |
| "issue": "8", |
| "pages": "389--396", |
| "other_ids": { |
| "DOI": [ |
| "10.1016/j.tics.2005.06.013" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Deb Roy. 2005. Grounding words in perception and action: computational insights. Trends in Cognitive Sciences, 9(8):389-396.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Incremental reference resolution: The task, metrics for evaluation, and a Bayesian filtering model that is sensitive to disfluencies", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Schlangen", |
| "suffix": "" |
| }, |
| { |
| "first": "Timo", |
| "middle": [], |
| "last": "Baumann", |
| "suffix": "" |
| }, |
| { |
| "first": "Michaela", |
| "middle": [], |
| "last": "Atterer", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the SIGDIAL 2009 Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "30--37", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Schlangen, Timo Baumann, and Michaela Atterer. 2009. Incremental reference resolution: The task, metrics for evaluation, and a Bayesian filtering model that is sensitive to disfluencies. In Proceedings of the SIGDIAL 2009 Conference, pages 30-37, London, UK. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Beyond task success: A closer look at jointly learning to see, ask, and Guess-What", |
| "authors": [ |
| { |
| "first": "Ravi", |
| "middle": [], |
| "last": "Shekhar", |
| "suffix": "" |
| }, |
| { |
| "first": "Aashish", |
| "middle": [], |
| "last": "Venkatesh", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Baumg\u00e4rtner", |
| "suffix": "" |
| }, |
| { |
| "first": "Elia", |
| "middle": [], |
| "last": "Bruni", |
| "suffix": "" |
| }, |
| { |
| "first": "Barbara", |
| "middle": [], |
| "last": "Plank", |
| "suffix": "" |
| }, |
| { |
| "first": "Raffaella", |
| "middle": [], |
| "last": "Bernardi", |
| "suffix": "" |
| }, |
| { |
| "first": "Raquel", |
| "middle": [], |
| "last": "Fern\u00e1ndez", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "2578--2587", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ravi Shekhar, Aashish Venkatesh, Tim Baumg\u00e4rtner, Elia Bruni, Barbara Plank, Raffaella Bernardi, and Raquel Fern\u00e1ndez. 2019. Beyond task success: A closer look at jointly learning to see, ask, and GuessWhat. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, pages 2578-2587. ACL.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "What should I ask? using conversationally informative rewards for goal-oriented visual dialog", |
| "authors": [ |
| { |
| "first": "Pushkar", |
| "middle": [], |
| "last": "Shukla", |
| "suffix": "" |
| }, |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Elmadjian", |
| "suffix": "" |
| }, |
| { |
| "first": "Richika", |
| "middle": [], |
| "last": "Sharan", |
| "suffix": "" |
| }, |
| { |
| "first": "Vivek", |
| "middle": [], |
| "last": "Kulkarni", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Turk", |
| "suffix": "" |
| }, |
| { |
| "first": "William", |
| "middle": [ |
| "Yang" |
| ], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "6442--6451", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pushkar Shukla, Carlos Elmadjian, Richika Sharan, Vivek Kulkarni, Matthew Turk, and William Yang Wang. 2019a. What should I ask? using conversationally informative rewards for goal-oriented visual dialog. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6442-6451, Florence, Italy. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "What should I ask? using conversationally informative rewards for goal-oriented visual dialog", |
| "authors": [ |
| { |
| "first": "Pushkar", |
| "middle": [], |
| "last": "Shukla", |
| "suffix": "" |
| }, |
| { |
| "first": "Carlos", |
| "middle": [], |
| "last": "Elmadjian", |
| "suffix": "" |
| }, |
| { |
| "first": "Richika", |
| "middle": [], |
| "last": "Sharan", |
| "suffix": "" |
| }, |
| { |
| "first": "Vivek", |
| "middle": [], |
| "last": "Kulkarni", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Turk", |
| "suffix": "" |
| }, |
| { |
| "first": "William", |
| "middle": [ |
| "Yang" |
| ], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "6442--6451", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pushkar Shukla, Carlos Elmadjian, Richika Sharan, Vivek Kulkarni, Matthew Turk, and William Yang Wang. 2019b. What should I ask? using conversationally informative rewards for goal-oriented visual dialog. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6442-6451, Florence, Italy. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "End-to-end optimization of goal-driven and visually grounded dialogue systems", |
| "authors": [ |
| { |
| "first": "Florian", |
| "middle": [], |
| "last": "Strub", |
| "suffix": "" |
| }, |
| { |
| "first": "Harm", |
| "middle": [], |
| "last": "de Vries", |
| "suffix": "" |
| }, |
| { |
| "first": "Jeremie", |
| "middle": [], |
| "last": "Mary", |
| "suffix": "" |
| }, |
| { |
| "first": "Bilal", |
| "middle": [], |
| "last": "Piot", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [], |
| "last": "Courville", |
| "suffix": "" |
| }, |
| { |
| "first": "Olivier", |
| "middle": [], |
| "last": "Pietquin", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Florian Strub, Harm de Vries, Jeremie Mary, Bilal Piot, Aaron Courville, and Olivier Pietquin. 2017. End-to-end optimization of goal-driven and visually grounded dialogue systems. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI).", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "LXMERT: Learning cross-modality encoder representations from transformers", |
| "authors": [ |
| { |
| "first": "Hao", |
| "middle": [], |
| "last": "Tan", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohit", |
| "middle": [], |
| "last": "Bansal", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hao Tan and Mohit Bansal. 2019. LXMERT: Learning cross-modality encoder representations from transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Hong", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Association for Computational Linguistics", |
| "authors": [ |
| { |
| "first": "China", |
| "middle": [], |
| "last": "Kong", |
| "suffix": "" |
| } |
| ], |
| "year": null, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kong, China. Association for Computational Lin- guistics.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "They are not all alike: Answering different spatial questions requires different grounding strategies", |
| "authors": [ |
| { |
| "first": "Alberto", |
| "middle": [], |
| "last": "Testoni", |
| "suffix": "" |
| }, |
| { |
| "first": "Claudio", |
| "middle": [], |
| "last": "Greco", |
| "suffix": "" |
| }, |
| { |
| "first": "Tobias", |
| "middle": [], |
| "last": "Bianchi", |
| "suffix": "" |
| }, |
| { |
| "first": "Mauricio", |
| "middle": [], |
| "last": "Mazuecos", |
| "suffix": "" |
| }, |
| { |
| "first": "Agata", |
| "middle": [], |
| "last": "Marcante", |
| "suffix": "" |
| }, |
| { |
| "first": "Luciana", |
| "middle": [], |
| "last": "Benotti", |
| "suffix": "" |
| }, |
| { |
| "first": "Raffaella", |
| "middle": [], |
| "last": "Bernardi", |
| "suffix": "" |
| } |
| ], |
| "year": 2020, |
| "venue": "Proceedings of the Third International Workshop on Spatial Language Understanding", |
| "volume": "", |
| "issue": "", |
| "pages": "29--38", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alberto Testoni, Claudio Greco, Tobias Bianchi, Mauricio Mazuecos, Agata Marcante, Luciana Benotti, and Raffaella Bernardi. 2020. They are not all alike: Answering different spatial questions requires different grounding strategies. In Proceedings of the Third International Workshop on Spatial Language Understanding, pages 29-38, Online. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Guesswhat?! visual object discovery through multi-modal dialogue", |
| "authors": [ |
| { |
| "first": "Harm", |
| "middle": [], |
| "last": "de Vries", |
| "suffix": "" |
| }, |
| { |
| "first": "Florian", |
| "middle": [], |
| "last": "Strub", |
| "suffix": "" |
| }, |
| { |
| "first": "Sarath", |
| "middle": [], |
| "last": "Chandar", |
| "suffix": "" |
| }, |
| { |
| "first": "Olivier", |
| "middle": [], |
| "last": "Pietquin", |
| "suffix": "" |
| }, |
| { |
| "first": "Hugo", |
| "middle": [], |
| "last": "Larochelle", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [ |
| "C" |
| ], |
| "last": "Courville", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition", |
| "volume": "", |
| "issue": "", |
| "pages": "4466--4475", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Harm de Vries, Florian Strub, Sarath Chandar, Olivier Pietquin, Hugo Larochelle, and Aaron C. Courville. 2017. Guesswhat?! visual object discovery through multi-modal dialogue. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, pages 4466-4475. IEEE Computer Society.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Goaloriented visual question generation via intermediate rewards", |
| "authors": [ |
| { |
| "first": "Junjie", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Qi", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Chunhua", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "Jian", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianfeng", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| }, |
| { |
| "first": "Anton", |
| "middle": [], |
| "last": "van den Hengel", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Computer Vision -ECCV 2018 -15th", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Junjie Zhang, Qi Wu, Chunhua Shen, Jian Zhang, Jianfeng Lu, and Anton van den Hengel. 2018. Goal-oriented visual question generation via intermediate rewards. In Computer Vision -ECCV 2018 -15th", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "September 8-14, Proceedings, Part V", |
| "authors": [], |
| "year": null, |
| "venue": "", |
| "volume": "11209", |
| "issue": "", |
| "pages": "189--204", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "European Conference, Munich, Germany, September 8-14, Proceedings, Part V, volume 11209 of Lecture Notes in Computer Science, pages 189-204. Springer.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "Oracle models: (a) baseline oracle model; (b) LXMERT-based oracle model.", |
| "uris": null, |
| "num": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "text": "Guesser model (Shekhar et al., 2019).", |
| "uris": null, |
| "num": null, |
| "type_str": "figure" |
| }, |
| "TABREF1": { |
| "content": "<table><tr><td colspan=\"2\">Answers from Guesser Task Success</td></tr><tr><td>Q+Sp</td><td>46.9</td></tr><tr><td>Q+Sp+Cr</td><td>52.7</td></tr><tr><td>Q+Sp+Ca</td><td>59.4</td></tr><tr><td>LXMERT</td><td>59.7</td></tr><tr><td>Human</td><td>62.2</td></tr><tr><td colspan=\"2\">Table 1: Task success of the Guesser model in human</td></tr><tr><td colspan=\"2\">generated questions from the GuessWhat?! test set</td></tr><tr><td colspan=\"2\">with answers from different Oracles.</td></tr></table>", |
| "html": null, |
| "text": "we see the task success for the guesser in each setup. We see that the Q+Sp+Ca answers", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF2": { |
| "content": "<table><tr><td>Question</td><td>Human</td><td>Q+Sp</td><td>Q+Sp+Cr</td><td>Q+Sp+Ca</td><td>LXMERT</td></tr><tr><td>1. is it a ship?</td><td>yes</td><td>yes</td><td>no</td><td>yes</td><td>yes</td></tr><tr><td>2. is it white?</td><td>yes</td><td>no</td><td>no</td><td>yes</td><td>yes</td></tr><tr><td>3. is it under the plane slightly left?</td><td>yes</td><td>yes</td><td>no</td><td>no</td><td>yes</td></tr><tr><td>Status</td><td>Success</td><td>Failure</td><td>Failure</td><td>Failure</td><td>Success</td></tr></table>", |
| "html": null, |
| "text": "Example of different answers for the same human dialog given by the different Oracles. Human corresponds to the human answers given in the corpus. In this example, the guesser model correctly guesses the target for the human and LXMERT oracles, while the game resulted in failure for the rest of the Oracle models.", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF3": { |
| "content": "<table><tr><td>Oracle</td><td>IAns</td><td>Avg IAns/failure</td></tr><tr><td>Q+Sp</td><td>31.73%</td><td>2.08</td></tr><tr><td>Q+Sp+Cr</td><td>25.60%</td><td>1.78</td></tr><tr><td>Q+Sp+Ca</td><td>22.03%</td><td>1.66</td></tr><tr><td>LXMERT</td><td>16.05%</td><td>1.27</td></tr></table>", |
| "html": null, |
| "text": "Percentage of incorrect answers (IAns) with respect to the human answers, and average number of incorrect answers per failed game, in the GuessWhat?! test set for the different oracle models evaluated.", |
| "num": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |