| { |
| "paper_id": "K19-1039", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T07:05:51.479057Z" |
| }, |
| "title": "A Case Study on Combining ASR and Visual Features for Generating Instructional Video Captions", |
| "authors": [ |
| { |
| "first": "Jack", |
| "middle": [], |
| "last": "Hessel", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Cornell University", |
| "location": {} |
| }, |
| "email": "jhessel@cs.cornell.edu" |
| }, |
| { |
| "first": "Bo", |
| "middle": [], |
| "last": "Pang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Cornell University", |
| "location": {} |
| }, |
| "email": "bopang@google.com" |
| }, |
| { |
| "first": "Zhenhai", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Cornell University", |
| "location": {} |
| }, |
| "email": "zhenhai@google.com" |
| }, |
| { |
| "first": "Radu", |
| "middle": [], |
| "last": "Soricut", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Cornell University", |
| "location": {} |
| }, |
| "email": "rsoricut@google.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Instructional videos get high-traffic on video sharing platforms, and prior work suggests that providing time-stamped, subtask annotations (e.g., \"heat the oil in the pan\") improves user experiences. However, current automatic annotation methods based on visual features alone perform only slightly better than constant prediction. Taking cues from prior work, we show that we can improve performance significantly by considering automatic speech recognition (ASR) tokens as input. Furthermore, jointly modeling ASR tokens and visual features results in higher performance compared to training individually on either modality. We find that unstated background information is better explained by visual features, whereas fine-grained distinctions (e.g., \"add oil\" vs. \"add olive oil\") are disambiguated more easily via ASR tokens.", |
| "pdf_parse": { |
| "paper_id": "K19-1039", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Instructional videos get high-traffic on video sharing platforms, and prior work suggests that providing time-stamped, subtask annotations (e.g., \"heat the oil in the pan\") improves user experiences. However, current automatic annotation methods based on visual features alone perform only slightly better than constant prediction. Taking cues from prior work, we show that we can improve performance significantly by considering automatic speech recognition (ASR) tokens as input. Furthermore, jointly modeling ASR tokens and visual features results in higher performance compared to training individually on either modality. We find that unstated background information is better explained by visual features, whereas fine-grained distinctions (e.g., \"add oil\" vs. \"add olive oil\") are disambiguated more easily via ASR tokens.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Instructional videos increasingly dominate user attention on online video platforms. For example, 86% of YouTube users report using the platform often to learn new things, and 70% of users report using videos to solve problems related to work, school, or hobbies (O'Neil-Hart, 2018) .", |
| "cite_spans": [ |
| { |
| "start": 263, |
| "end": 282, |
| "text": "(O'Neil-Hart, 2018)", |
| "ref_id": "BIBREF35" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Prior work in user experience has investigated the best way of presenting instructional videos to users. Kim et al. (2014) , for example, compare two options; first: presenting users with the video alone, and second: presenting the video with an additional structured representation, including a timeline populated with task subgoals. Users interacting with the structured video representation reported higher satisfaction, and external judges rated the work they completed using the videos as having higher quality. Margulieux et al. (2012) and Weir et al. (2015) Figure 1: Illustration of a multimodal dense instructional video captioning task. Models are given access to both video frames and ASR tokens, and must generate a recipe instruction step for each video segment. The speaker in the video sometimes (but not always) references literal objects and actions.", |
| "cite_spans": [ |
| { |
| "start": 105, |
| "end": 122, |
| "text": "Kim et al. (2014)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 517, |
| "end": 541, |
| "text": "Margulieux et al. (2012)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 546, |
| "end": 564, |
| "text": "Weir et al. (2015)", |
| "ref_id": "BIBREF52" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "proves user experiences. Thus, presenting instructional videos with additional structured annotations is likely to benefit users. These studies rely on human annotation of timestamped subtask goals, e.g., timed captions created through crowdsourcing. However, humanin-the-loop annotation is infeasible to deploy for popular video sharing platforms like YouTube that receive hundreds of hours of uploads per minute. In this work, we address the task of automatically producing captions for instructional videos at the level of video segments. Ideally, generated captions provide a literal, imperative description of the procedural step occurring for a given video segment, e.g., in the cooking context we consider, \"add the oil to the pan.\"", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Producing segment-level captions is a sub-task of dense video captioning, where prior work has mostly focused on visual-only models. Dense captioning is a difficult task, particularly in the instructional video domain, as fine-grained distinctions may be difficult or impossible to make with visual features alone. Visual information can be ambiguous (e.g., distinguishing between \"olive oil\" vs. \"vegetable oil\") or incomplete (e.g., preparation steps may occur off-camera). In our study, a first important finding is that, for the dataset considered, current state-of-the-art, visual-features-only models only slightly outperform a constant prediction baseline, e.g., by 1.5 BLEU/METEOR points.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To improve performance in this difficult setting, we consider the automatic speech recognition (ASR) tokens generated by YouTube. These publicly available tokens are an ASR model's attempts to map words spoken in videos into text. However, while a promising potential source for signal, it is not always trivial to transform even accurate ASR into the desired imperative target: while there are cases of clear correspondence between the literal actions in the video and the ASR tokens, in other cases, the mapping is imperfect (Fig. 1 ). For example, when finishing a dish, a user says \"that's perfection in my book right there\" rather than \"put the dish on a plate and serve.\" There are also cases where no ASR tokens are available at all. Despite these potential difficulties, previous work has demonstrated that ASR can be informative in a variety of instructional video understanding tasks (Naim et al., 2014 (Naim et al., , 2015 Malmaud et al., 2015; Sener et al., 2015; Alayrac et al., 2016; ; though less work has focused on instructional caption generation, which is known to be difficult and sensitive to input perturbations (Chen et al., 2018) .", |
| "cite_spans": [ |
| { |
| "start": 894, |
| "end": 912, |
| "text": "(Naim et al., 2014", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 913, |
| "end": 933, |
| "text": "(Naim et al., , 2015", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 934, |
| "end": 955, |
| "text": "Malmaud et al., 2015;", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 956, |
| "end": 975, |
| "text": "Sener et al., 2015;", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 976, |
| "end": 997, |
| "text": "Alayrac et al., 2016;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 1134, |
| "end": 1153, |
| "text": "(Chen et al., 2018)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 527, |
| "end": 534, |
| "text": "(Fig. 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We find that incorporating ASR-token-based features significantly improves performance over visual-features-only models (e.g., CIDEr improves 0.53 \u21d2 1.0, BLEU-4 improves 4.3 \u21d2 8.5). We also show that combining ASR tokens and visual features results in the highest performing models, suggesting that the modalities contain complementary information.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We conclude by asking: what information is captured by the visual features that is not captured by the ASR tokens (and vice versa)? Auxiliary experiments examining performance of models in predicting the presence/absence of individual word types suggest that visual signals are superior for identifying unspoken, implicit aspects of scenes; for instance, in order to mix ingredi-ents, they must be placed in a bowl -and although bowls are often visually present in the scene, \"bowl\" is often not explicitly mentioned by the speaker. Conversely, ASR features readily disambiguate between fine-grained entities, e.g., \"olive oil\" vs.\"vegetable oil\", a task that is difficult (and sometimes impossible) for visual features alone.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Narrated instructional videos. While several works have matched audio and video signals in an unconstrained setting (Arandjelovic and Zisserman, 2017; Tian et al., 2018) , our work builds upon previous efforts to utilize accompanying speech signals to understand online instructional videos, specifically. Several works focus on learning video-instruction alignments, and match a fixed set of instructions to temporal video segments (Regneri et al., 2013; Naim et al., 2015; Malmaud et al., 2015; Hendricks et al., 2017; Kuehne et al., 2017) . Another line of previous work uses speech to extract and align language fragments, e.g., verb-noun pairs, with instructional videos (Gupta and Mooney, 2010; Motwani and Mooney, 2012; Alayrac et al., 2016; Huang et al., , 2018 Hahn et al., 2018) . Sener et al. (2015) , as part of their parsing pipeline, train a 3-gram language model on segmented ASR token inputs to produce recipe steps. Dense Video Captioning. Recent work in computer vision addresses dense video captioning (Krishna et al., 2017; Wang et al., 2018) , a supervised task that involves (i) segmenting the input video, and, (ii) generating a natural language description for each segment. Here, we focus on the second subtask of generating descriptions given a ground-truth segmentation; this setting isolates the language generation part of the modeling process. 1 Most related to the present work are several dense captioning approaches that have been applied to instructional videos (Zhou et al., 2018b,c) . Zhou et al. (2018c) achieve stateof-the-art performance on the dataset we consider; their model is video-only, and combines a region proposal network (Ren et al., 2015) and a Transformer (Vaswani et al., 2017) decoder. Multimodal Video Captioning. 
Several works have employed multimodal signals to caption the MSR-VTT dataset (Xu et al., 2016) , which consists of 2K video clips from 20 general categories (e.g., \"news\", \"sports\") with an average duration of 10 seconds per clip. In particular, Ramanishka et al. (2016) ; Xu et al. (2017) ; Hori et al. (2017) ; Shen et al. (2017) ; Chuang et al. 2017; Hao et al. (2018) all report small performance gains when incorporating audio features on top of visual features. However -we suspect that instructional video domain is significantly different than MSR-VTT (where the audio information does not necessarily correspond to human speech), as we find that ASR-only models significantly surpass the state-of-the-art video model in our case. Palaskar et al. (2019) and Shi et al. (2019) , contemporaneous with the submission of the present work, also examine ASR as a source of signal for generating how-to video captions.", |
| "cite_spans": [ |
| { |
| "start": 116, |
| "end": 150, |
| "text": "(Arandjelovic and Zisserman, 2017;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 151, |
| "end": 169, |
| "text": "Tian et al., 2018)", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 433, |
| "end": 455, |
| "text": "(Regneri et al., 2013;", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 456, |
| "end": 474, |
| "text": "Naim et al., 2015;", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 475, |
| "end": 496, |
| "text": "Malmaud et al., 2015;", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 497, |
| "end": 520, |
| "text": "Hendricks et al., 2017;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 521, |
| "end": 541, |
| "text": "Kuehne et al., 2017)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 676, |
| "end": 700, |
| "text": "(Gupta and Mooney, 2010;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 701, |
| "end": 726, |
| "text": "Motwani and Mooney, 2012;", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 727, |
| "end": 748, |
| "text": "Alayrac et al., 2016;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 749, |
| "end": 769, |
| "text": "Huang et al., , 2018", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 770, |
| "end": 788, |
| "text": "Hahn et al., 2018)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 791, |
| "end": 810, |
| "text": "Sener et al. (2015)", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 1021, |
| "end": 1043, |
| "text": "(Krishna et al., 2017;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 1044, |
| "end": 1062, |
| "text": "Wang et al., 2018)", |
| "ref_id": "BIBREF51" |
| }, |
| { |
| "start": 1496, |
| "end": 1518, |
| "text": "(Zhou et al., 2018b,c)", |
| "ref_id": null |
| }, |
| { |
| "start": 1521, |
| "end": 1540, |
| "text": "Zhou et al. (2018c)", |
| "ref_id": "BIBREF58" |
| }, |
| { |
| "start": 1708, |
| "end": 1730, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF49" |
| }, |
| { |
| "start": 1847, |
| "end": 1864, |
| "text": "(Xu et al., 2016)", |
| "ref_id": "BIBREF53" |
| }, |
| { |
| "start": 2016, |
| "end": 2040, |
| "text": "Ramanishka et al. (2016)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 2043, |
| "end": 2059, |
| "text": "Xu et al. (2017)", |
| "ref_id": "BIBREF54" |
| }, |
| { |
| "start": 2062, |
| "end": 2080, |
| "text": "Hori et al. (2017)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 2083, |
| "end": 2101, |
| "text": "Shen et al. (2017)", |
| "ref_id": "BIBREF43" |
| }, |
| { |
| "start": 2124, |
| "end": 2141, |
| "text": "Hao et al. (2018)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 2509, |
| "end": 2531, |
| "text": "Palaskar et al. (2019)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 2536, |
| "end": 2553, |
| "text": "Shi et al. (2019)", |
| "ref_id": "BIBREF44" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We focus on YouCook2 (Zhou et al., 2018b) , the largest human-captioned dataset of instructional videos publicly available. 2 It contains 2000 YouTube cooking videos, for a total of 176 hours, and spans 89 different recipes. Each video averages at 5.26 minutes, and is annotated with an average of 7.7 temporal segments (i.e., start/end points) corresponding to semantically distinct recipe steps. Each segment is associated with an imperative caption, e.g., \"add the oil to the pan\", for an average of 8.8 words per caption.", |
| "cite_spans": [ |
| { |
| "start": 21, |
| "end": 41, |
| "text": "(Zhou et al., 2018b)", |
| "ref_id": "BIBREF57" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "3" |
| }, |
| { |
| "text": "At the time of analysis (June 2018), over 25% of the YouCook2 videos had been removed from YouTube, and therefore we do not consider them. As a result, all our experiments operate on a subset of the YouCook2 data. While this makes direct comparison with previous and future work more difficult, our performance metrics can be viewed as lower bounds, as they are trained on less data compared to, e.g., (Zhou et al., 2018c) . Unless noted otherwise, our analyses are conducted over 1.4K videos and the 10.6K annotated segments contained therein.", |
| "cite_spans": [ |
| { |
| "start": 402, |
| "end": 422, |
| "text": "(Zhou et al., 2018c)", |
| "ref_id": "BIBREF58" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We collected the ASR tokens automatically generated by YouTube (available through the YouTube Data API 3 with trackKind = ASR), which are then mapped to their temporally corresponding video segments. We start by asking the following questions: How much narration do users provide for instructional videos? And: can YouTube's ASR system detect that speech?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Closer Look at ASR tokens", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Not surprisingly, speakers in videos tend to be more verbose than the annotated groundtruth captions: we find the length distribution of ASR tokens per segment to be roughly log-normal, with mean/median length being 42/28 tokens respectively (compared to a mean of 9 tokens/segment for captions). Over the 10.6K available segments, only 1.6% of them have zero associated tokens. Furthermore, based on automatic language identification provided by the YouTube API and some manual verification, we estimated that less than 1% of videos contain completely non-English speech (but we do not discard them from our experiments).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Closer Look at ASR tokens", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We also investigate the words-per-minute (WPM) ratio, based on the video segment length. The mean value of 134 WPM is slightly lower than, but comparable to, previously reported figures of English speaking rates (Yuan et al., 2006) , which indicates that, for this set of video segments, words are being detected at rates comparable to everyday English speech.", |
| "cite_spans": [ |
| { |
| "start": 212, |
| "end": 231, |
| "text": "(Yuan et al., 2006)", |
| "ref_id": "BIBREF55" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Closer Look at ASR tokens", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "To better understand the generation task, we computed lower and upper bounds for generation performance using a constant-prediction baseline and human performance, respectively. Lower bound: constant. For all segments at test time, we predict \"heat some oil in a pan and add salt and pepper to the pan and stir.\" This sentence is constructed by examining the most common ngrams in the corpus and pasting them together. Upper bound: human estimate. We conducted a small-scale experiment to estimate human performance for the segment-level captioning task. Two of the authors of this paper, after being trained on segment-level captions from three videos, attempted to mirror that style of annotation for the segments of 20 randomly sampled videos, totalling over 140 segment annotations each. 4 Both human annotators report low-confidence with the task, in particular, they found it difficult to maintain a consistent level of specificity in terms of how many factual details to include (e.g., \"mix together\" vs. \"mix the peppers and mushrooms together.\") Results: We compute corpus-level performance statistics using four standard generation evaluation metrics: ROUGE-L (Lin, 2004) , CIDEr (Vedantam et al., 2015) , (Papineni et al., 2002) and METEOR (Banerjee and Lavie, 2005 ) (higher is better in all cases).", |
| "cite_spans": [ |
| { |
| "start": 792, |
| "end": 793, |
| "text": "4", |
| "ref_id": null |
| }, |
| { |
| "start": 1170, |
| "end": 1181, |
| "text": "(Lin, 2004)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 1190, |
| "end": 1213, |
| "text": "(Vedantam et al., 2015)", |
| "ref_id": "BIBREF50" |
| }, |
| { |
| "start": 1216, |
| "end": 1239, |
| "text": "(Papineni et al., 2002)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 1251, |
| "end": 1276, |
| "text": "(Banerjee and Lavie, 2005", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Closer Look at the Generation Task", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Note that our evaluation is micro-averaged at the segment level, and differs slightly from prior work on this dataset, which has mostly reported metrics macro-averaged at the video level. We switched the evaluation because some metrics like BLEU-4 exhibit undesirable sparsity artifacts when macro-averaging, e.g., any video without a correct 4-gram gets a zero BLEU score, even if there are many 1/2/3-grams correct. Segment-level averaging, the standard evaluation practice in fields like machine translation, is insensitive to this sparsity concern, and (we believe) provides a more robust perspective on performance. This comparison highlights the gap that remains between the simplest possible baseline, several computer vision based models, and (roughly) how well humans perform at this task. Given that Sun et al. (2019a) is a highly tuned computer vision model transfer learned from a corpus of over 300K cooking videos, from the perspective of building video captioning systems in practice, we suspect that incorporating additional modalities like ASR is more likely to result in performance gains versus building better computer vision models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Closer Look at the Generation Task", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "In addition to the constant prediction baseline, we explore a series of ASR-based baseline methods: ASR as the Caption (ASC) This baseline returns the test-time ASR token sequence as the caption. While the result is not a coherent, imperative step, performance of this method offers insight into the extent of word overlap between the ASR sequence and the target groundtruth, as measured by the captioning metrics. Filtered ASR (FASC) Given that the ASR token sequences are much longer than groundtruth captions ( \u00a7 3.1), the performance of ASC incurs a length (or precision-based) penalty for several metrics. The FASC baseline strengthens ASC by removing word types that are less likely to appear in groundtruth captions, e.g., \"ah\", \"he\", \"hello,\" or \"wish\". Specifically, we only keep words with high P (w | GT ) P (w | ASR) values, i.e., words that would be indicative of the groundtruth class if we were to build a Naive-Bayes classifier with addone smoothing; probabilities are computed only over the training set to reduce the risk of overfitting. This baseline produces outputs that are shorter compared to ASC, but it is unlikely to yield fluent, readable text. ASR-based Retrieval (RET) This retrieval baseline memorizes the recipe steps in the training set, and represents them each as tf-idf vectors. At testtime, the ASR sequence is converted into a tf-idf vector and compared to each training-set caption via cosine similarity. 5 The training caption that is most similar to the test-time ASR according to this metric is returned as the \"generated\" caption. Note that, although a memorization-based technique, this baseline method produces de-facto captions as outputs.", |
| "cite_spans": [ |
| { |
| "start": 1443, |
| "end": 1444, |
| "text": "5", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Models", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We explore neural encoder-decoder models based on Transformer Networks (Vaswani et al., 2017) . In contrast to RNNs, Transformers abandon recurrence in favor of a mix of different types of feedforward layers, e.g., in the case of the Transformer decoder, self-attention layers, cross-attention layers (attending to the encoder outputs), and fully connected feed-forward layers. We explore two variants of the Transformer, corresponding to different hypotheses about what information might be useful for captioning instructional videos. ASR Transformer (AT) This model learns to map ASR-token sequences directly to captions using a standard sequence-to-sequence Transformer architecture. The model's parameters are optimized to maximize the probability of the ground-truth instructions, conditioned on the input ASR sequences.", |
| "cite_spans": [ |
| { |
| "start": 71, |
| "end": 93, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF49" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Transformer-based Neural Models", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Multimodal model (AT+Video) We incorporate video features into the ASR transformer (Fig 2) . For ease of comparison with prior and future work, we use features extracted from ResNet34 (He et al., 2016) pretrained on the ImageNet classification task; these features are provided in the YouCook2 data release. Each video is initially uniformly sampled at 512 frames, with an average of 30 frames per captioned-segment. To represent each video segment, first, k frames are randomly sampled with replacement. The sampled frames are temporally sorted to preserve ordering information, and their corresponding ResNet34 feature vectors are projected to the Transformer encoder hidden dimension via a width-1 1D convolution. We use k = 10 for all our experiments. The encoder self-attention layers perform cross-modal attention operations between the visual features and the ASR-token-based features. For each output token, the decoder attends to previously predicted tokens, and encoder outputs for all input frames / ASR tokens.", |
| "cite_spans": [ |
| { |
| "start": 184, |
| "end": 201, |
| "text": "(He et al., 2016)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 83, |
| "end": 90, |
| "text": "(Fig 2)", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Transformer-based Neural Models", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We perform 10-fold cross-validation with randomly sampled 80/10/10 train/dev/test splits (split at the video-level), using the same splits for all models. After discarding the videos that were deleted at the time of data collection, each split contains roughly 1.1K training videos (averaging 8.3K training segments). We report mean performance over these splits according to four standard captioning accuracy metrics, introduced in \u00a73.2. ROUGE-L, CIDEr, BLEU-4, and METEOR. We perform both Wilcoxon signed-rank tests (Dem\u0161ar, 2006) and two-sided corrected resampled t-tests (Nadeau and Bengio, 2000) to estimate statistical significance. To be conservative and reduce the chance of Type I error, we take whichever p-value is larger between these two tests. Transformer-based model details. For each cross-validation split, we use a batch size of 128, tie the Transformer model's feed forward and model dimensions d f f n = d model , and optimize regularized cross-entropy loss using Adam (Kingma and Ba, 2015) with lr = .001. We train models for 100K steps, storing checkpoint files periodically. For each split, we train 8 model variants, conducting a grid search over model dimension, number of encoder/decoder layers, and L2 regularization: we consider all model parameter settings in (d model , N layer , \u03bb reg ) \u2208 {128, 256} \u00d7 {2, 3} \u00d7 {.0005, .001} for each cross-validation split independently, and select the highest performing, checkpointed model according to ROUGE-L over the development set for that fold. Transformer models are implemented using tensor2tensor (Vaswani et al., 2018) and Tensorflow (Abadi et al., 2015) . The vocabulary (average size 800) is determined separately using the training data for each cross-validation split. Words are considered if they occur at least 5 times in the ground-truth of the current training set. 6 This leads to an OOV rate of \u223c60% in the input. We truncate inputs at 80 tokens (\u223c10-15%", |
| "cite_spans": [ |
| { |
| "start": 518, |
| "end": 532, |
| "text": "(Dem\u0161ar, 2006)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 575, |
| "end": 600, |
| "text": "(Nadeau and Bengio, 2000)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 1573, |
| "end": 1595, |
| "text": "(Vaswani et al., 2018)", |
| "ref_id": "BIBREF48" |
| }, |
| { |
| "start": 1611, |
| "end": 1631, |
| "text": "(Abadi et al., 2015)", |
| "ref_id": null |
| }, |
| { |
| "start": 1851, |
| "end": 1852, |
| "text": "6", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "\"so i just want to go ahead and remove all of this fat from our chicken... cut it into about one inch pieces so you want pieces\" cut the chicken into pieces \"... color them and then shape them \u2026 tongs so as not to burn yourself it goes with total tacos in a frying pan ...'\" \"fattoush salad but you can add in cilantro and some other herbs if you prefer to do that instead of the parsley and one\" \"out of the ball now we're going to cut it and divide it\"", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "\"get the colored variety the kashmiri variety is very good one and a half tablespoon of coriander\" of transcripts are truncated in this process). For simplicity, decoding is done greedily in all cases. Generation Experiment Results. Table 2 reports the performance of each model. For unimodal models, simple baselines like FASC (filtered ASR) and RET (training-caption retrieval) outperform the state-of-the-art video-only model of Sun et al. (2019a) , according to the four automatic evaluation metrics. Overall, AT yields the best unimodal performance. Combining ASR and visual signals into a multimodal representation performs even better: the AT+Video model tends to outperform AT (and Sun et al. (2019a) ), according to ROUGE-L, CIDEr, and METEOR (p <.01). Since AT and AT+Video have identical architectures and differ only in the available inputs, this result provides strong evidence that it is indeed the multimodality of AT+Video that leads to the (statistically significant) performance gains over the strongest unimodal models. We present some output examples in Fig. 3 .", |
| "cite_spans": [ |
| { |
| "start": 432, |
| "end": 450, |
| "text": "Sun et al. (2019a)", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 685, |
| "end": 708, |
| "text": "(and Sun et al. (2019a)", |
| "ref_id": "BIBREF45" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 233, |
| "end": 240, |
| "text": "Table 2", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 1074, |
| "end": 1080, |
| "text": "Fig. 3", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In addition to the automatic quality metrics, we measure how diverse the generated captions are for each model, using the following metrics: vocabulary coverage (the percent of the vocabulary that each algorithm predicted at least once at test time); proportion not copied (the percent of generated captions that do not appear verbatim in the training set); and output uniqueness (the percent of generated captions that are unique). These metrics are useful because they can highlight undesirable, degenerate model behavior. 7 As an upper bound, we compute these metrics for the ground-truth (GT) test-time targets. Note that even the ground-truth targets do not achieve 100% on these diversity metrics: for vocabulary coverage, not all vocabulary items appear in the ground-truth captions for a given cross-validation split; similarly, proportion not copied and output uniqueness fall short of 100% because there are repeated captions in the label set.", |
| "cite_spans": [ |
| { |
| "start": 532, |
| "end": 533, |
| "text": "7", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Diversity of Generated Captions", |
| "sec_num": "5.1" |
| }, |
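The three diversity metrics above are simple set computations. As a minimal sketch (the function name and toy captions below are illustrative, not from the paper):

```python
# Sketch of the three diversity metrics from Section 5.1: vocabulary
# coverage, proportion not copied, and output uniqueness. The toy
# captions and vocabulary here are made up for illustration.

def diversity_metrics(generated, train_captions, vocab):
    # vocabulary coverage: fraction of the vocab predicted at least once
    gen_tokens = {tok for cap in generated for tok in cap.split()}
    vocab_coverage = len(gen_tokens & vocab) / len(vocab)
    # proportion not copied: outputs that are not verbatim training captions
    not_copied = sum(cap not in train_captions for cap in generated) / len(generated)
    # output uniqueness: fraction of distinct outputs
    unique = len(set(generated)) / len(generated)
    return vocab_coverage, not_copied, unique

train = {"add the oil", "chop the onion"}
vocab = {"add", "the", "oil", "chop", "onion", "pan"}
gen = ["add the oil", "add the oil", "chop the pan"]
cov, notc, uniq = diversity_metrics(gen, train, vocab)
```

A constant-prediction baseline would score low on both vocabulary coverage and uniqueness under these definitions, which is exactly the degenerate behavior the metrics are meant to expose.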
| { |
| "text": "[Fig. 4 axis residue: Vocab Cov., Not Copied, Unique; scale 30%-100%]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Vocab", |
| "sec_num": null |
| }, |
| { |
| "text": "Figure 4: The multimodal model AT+Video produces slightly more diverse captions than its unimodal counterparts. (Legend: AT, AT+Video, GT.)", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 12, |
| "end": 20, |
| "text": "Figure 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "AT", |
| "sec_num": null |
| }, |
| { |
| "text": "According to all metrics, AT+Video outputs are slightly more diverse than the AT outputs (Fig. 4). This observation suggests that the multimodal model is not simply exploiting a degeneracy to achieve its performance improvements.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 96, |
| "end": 104, |
| "text": "(Fig. 4)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "AT", |
| "sec_num": null |
| }, |
| { |
| "text": "We now turn to the question of why multimodal models produce better captions: what type of signal does video contain that speech does not (and vice versa)? Our initial idea was to quantitatively compare the captions generated by AT versus AT+Video; however, because the dataset is relatively small, we were unable to make observations about the generated captions that were statistically significant. 8", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Complementarity of Video and ASR", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Instead, we examine properties of the ASR-token-based and visual features directly. Following a procedure inspired by (Lu et al., 2008; Berg et al., 2012; Dai et al., 2018; Mahajan et al., 2018), we consider the auxiliary task of predicting the presence/absence of unigrams in the ground-truth captions from features extracted from the corresponding segments. We train two unimodal classifiers, one using ASR-token-based features and one using visual features, and measure their relative capacity to predict different word types; the goal is to measure which word types are most predictable from the ASR tokens and, conversely, which are most predictable from the visual features.", |
| "cite_spans": [ |
| { |
| "start": 123, |
| "end": 140, |
| "text": "(Lu et al., 2008;", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 141, |
| "end": 159, |
| "text": "Berg et al., 2012;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 160, |
| "end": 177, |
| "text": "Dai et al., 2018;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 178, |
| "end": 199, |
| "text": "Mahajan et al., 2018)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Complementarity of Video and ASR", |
| "sec_num": "6" |
| }, |
| { |
| "text": "For each segment, we predict the unigram distribution of its corresponding caption using a unimodal softmax classifier: for simplicity, we use a 2-layer, residual deep averaging network (Iyyer et al., 2015) for both the visual and ASR-based classifier. We measure per-word-type performance using AUC, which is word-frequency independent.", |
| "cite_spans": [ |
| { |
| "start": 186, |
| "end": 206, |
| "text": "(Iyyer et al., 2015)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Complementarity of Video and ASR", |
| "sec_num": "6" |
| }, |
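A deep averaging network of the kind cited above is just mean-pooling followed by a small feed-forward stack. Below is a minimal forward-pass sketch of a 2-layer residual deep averaging network with a softmax output over caption unigrams; the dimensions, random initialization, and function name are illustrative assumptions, not the paper's configuration:

```python
import numpy as np

# Illustrative forward pass of a 2-layer residual deep averaging network
# (in the spirit of Iyyer et al., 2015) mapping a set of per-segment
# feature vectors to a distribution over caption unigrams.
# All sizes and weights below are toy values.

rng = np.random.default_rng(0)
d, vocab_size = 16, 8
W1 = rng.normal(size=(d, d)) * 0.1
W2 = rng.normal(size=(d, d)) * 0.1
W_out = rng.normal(size=(d, vocab_size)) * 0.1

def predict_unigrams(feature_vectors):
    h = feature_vectors.mean(axis=0)      # "averaging": mean-pool the inputs
    h = h + np.maximum(W1 @ h, 0.0)       # residual layer 1 (ReLU)
    h = h + np.maximum(W2 @ h, 0.0)       # residual layer 2 (ReLU)
    logits = W_out.T @ h
    exp = np.exp(logits - logits.max())   # numerically stable softmax
    return exp / exp.sum()

probs = predict_unigrams(rng.normal(size=(5, d)))  # 5 feature vectors -> unigram dist.
```

The same architecture serves both modalities; only the input features (ASR-token embeddings vs. visual features) differ.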
| { |
| "text": "Specifically, for each word type w (e.g., w = beer) we measure how well w is predicted by the classifier based on ASR / spoken tokens, AUC_{t,w} (e.g., AUC_{t,beer} = 98), and, conversely, how well w is predicted by the visual classifier, AUC_{v,w} (AUC_{v,beer} = 68). For a given word type, we measure its overall difficulty by averaging AUC_{t,w} and AUC_{v,w}; we call this AUC_{\u00b5,w} (AUC_{\u00b5,beer} = 83). Similarly, we measure the difference in difficulty by subtracting AUC_{v,w} from AUC_{t,w} to give AUC_{\u2206,w} (AUC_{\u2206,beer} = 30), with higher values indicating that a word type is predicted better by the spoken-token features than by the visual features. We plot AUC_{t,w} versus AUC_{v,w} for 382 words in Fig. 5 (results are averaged over 10 cross-validation splits). Absolute Performance. Points in the upper-right quadrant of Fig. 5 represent words that are easy to predict from both the visual and the ASR-token-based features, whereas points in the lower-left represent words that are more difficult. Specific ingredients, e.g., \"nori\" and \"mozzarella,\" are often easy to detect, as are actions closely associated with particular objects (e.g., \"dough\" is almost always the object being \"knead\"-ed). Conversely, pronouns (e.g., \"it\") and conjunctions (e.g., \"or\") are universally difficult to predict. Visual vs. ASR-token-based features. In general, ASR-token-based features carry greater predictive power, as evidenced by the skew towards the bottom right in the scatterplot in Fig. 5. One pattern in the cases where speech features perform better (Fig. 5c) is that the words are often modifiers, e.g., white (pepper), sea (salt), dried (chilies), olive (oil), etc. Indeed, small, detailed distinctions may often be difficult to make from visual features, e.g., \"vegetable oil\" and \"olive oil\" may look identical in most YouTube videos.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 688, |
| "end": 694, |
| "text": "Fig. 5", |
| "ref_id": null |
| }, |
| { |
| "start": 804, |
| "end": 810, |
| "text": "Fig. 5", |
| "ref_id": null |
| }, |
| { |
| "start": 1451, |
| "end": 1457, |
| "text": "Fig. 5", |
| "ref_id": null |
| }, |
| { |
| "start": 1522, |
| "end": 1530, |
| "text": "(Fig. 5c", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Complementarity of Video and ASR", |
| "sec_num": "6" |
| }, |
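The per-word quantities above can be sketched with a plain rank-based AUC. The classifier scores below are toy values chosen for illustration, not numbers from the paper:

```python
# Sketch of the per-word-type quantities AUC_{t,w}, AUC_{v,w},
# AUC_{mu,w}, and AUC_{delta,w}. auc() is the standard rank-based AUC
# (probability a positive outscores a negative, ties counted as 0.5).

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y]
    neg = [s for s, y in zip(scores, labels) if not y]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# toy data for one word type w over 6 segments
labels   = [1, 1, 1, 0, 0, 0]               # does w appear in the GT caption?
asr_s    = [0.9, 0.8, 0.7, 0.2, 0.1, 0.3]   # ASR-token-based classifier scores
visual_s = [0.6, 0.4, 0.7, 0.5, 0.3, 0.8]   # visual classifier scores

auc_t = auc(asr_s, labels)
auc_v = auc(visual_s, labels)
auc_mu = (auc_t + auc_v) / 2     # overall difficulty of w
auc_delta = auc_t - auc_v        # > 0: better predicted from ASR tokens
```

Because AUC is computed per word type from ranks, it is insensitive to how frequent each word is, which is why it is used here rather than accuracy.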
| { |
| "text": "Nonetheless, there are word types better predicted by the visual features (Fig. 5d). Often, these are cases that require unstated background knowledge, i.e., references to objects not explicitly mentioned by the speaker(s). To quantify this observation, for each word type we compute the likelihood that it is stated by the speaker in the video, given that it appears in the ground-truth caption, i.e., P(w \u2208 ASR | w \u2208 GT). Aside from trivial cases (e.g., words misspelled in the GT never appear in the ASR), words that are often unstated include action words (e.g., \"place\", \"crush\") and cookware (e.g., \"pan\", \"wok\", \"pot\"). Words that are often stated include specific ingredients (e.g., \"honey\", \"coconut\", \"ginger\"). In contrast to word frequency (which is uncorrelated with AUC_{\u2206,w}, Spearman \u03c1 \u2248 0), stated rate is correlated with AUC_{\u2206,w} (\u03c1 = 0.44, p < .01).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 64, |
| "end": 72, |
| "text": "(Fig. 5d", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Complementarity of Video and ASR", |
| "sec_num": "6" |
| }, |
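The "stated rate" P(w ∈ ASR | w ∈ GT) is an empirical conditional probability over segments, which can then be rank-correlated with AUC_delta. A minimal sketch, with toy segments and a tie-free Spearman implementation (tie handling omitted for brevity):

```python
# Sketch of the stated-rate statistic and a Spearman rank correlation.
# The segments and inputs below are toy data, not the paper's.

def stated_rate(word, segments):
    # segments: list of (asr_tokens, gt_tokens) pairs
    in_gt = [(asr, gt) for asr, gt in segments if word in gt]
    return sum(word in asr for asr, _ in in_gt) / len(in_gt)

def spearman(xs, ys):
    # rank correlation; assumes no ties (no tie-averaging)
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

segments = [({"add", "honey"}, {"add", "honey"}),   # "honey" stated
            ({"now", "stir"}, {"stir", "pan"}),
            ({"the", "pan"}, {"pan", "honey"})]     # "honey" in GT but unstated
rate = stated_rate("honey", segments)
rho = spearman([1, 2, 3, 4], [2, 4, 5, 7])          # monotone -> rho = 1.0
```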
| { |
| "text": "The results in Table 2 indicate that, while adding visual information yields statistically significant improvements over the ASR-only model, the improvements are not large in magnitude. This leaves open the question of whether (a) visual information simply does not provide much additional signal on top of ASR, or (b) we need better visual modeling. We take a first step toward addressing this question by experimenting with an \"oracle\" object detector that provides perfect-precision predictions. 9 If even oracle object detection does not help, then the answer is more likely (a) rather than (b).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 15, |
| "end": 22, |
| "text": "Table 2", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Oracle Object Detection", |
| "sec_num": "7" |
| }, |
| { |
| "text": "As part of a YouCook2 data release, bounding-box annotations for selected objects in the recipe text (Zhou et al., 2018a) were provided. Unfortunately, while these could have served as an oracle, the annotations are only available for a small fraction of the data. Instead, we consider the set of 62 object labels made available. We simulate a high-precision, oracle object detector by identifying, per video segment, the overlap between (morphology-normalized) ground-truth caption mentions and the 62 available object labels. 10 For instance, for the ground-truth caption \"put the mushrooms in the pan\", the oracle object detector yields \"mushroom\" and \"pan\". 89% of segments receive at least one oracle object. The oracle object detections are then fed into the Transformer encoder (in random order), either by themselves (Oracle) or along with the ASR token sequence (AT+Oracle). We perform the same cross-validation experiments as described in \u00a75, and report the average ROUGE-L (we observe similar trends with other metrics). Because the AT+Oracle model achieves large improvements over AT+Video, we suspect that building higher-quality visual representations is a promising avenue for future work.", |
| "cite_spans": [ |
| { |
| "start": 101, |
| "end": 121, |
| "text": "(Zhou et al., 2018a)", |
| "ref_id": "BIBREF56" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Oracle Object Detection", |
| "sec_num": "7" |
| }, |
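The simulated oracle is an intersection of normalized caption tokens with the fixed label set. Below is a sketch under stated assumptions: the tiny suffix-stripping "stemmer" and four-label set stand in for the paper's morphology normalization and 62 YouCook2 object labels:

```python
# Sketch of the simulated perfect-precision oracle object detector:
# per segment, intersect morphology-normalized ground-truth caption
# tokens with a fixed object-label set. Labels and normalization here
# are illustrative stand-ins.

OBJECT_LABELS = {"mushroom", "pan", "oil", "chicken"}  # toy subset, not the real 62

def normalize(token):
    # crude morphology normalization: strip a plural "s"
    return token[:-1] if token.endswith("s") and len(token) > 3 else token

def oracle_objects(caption):
    tokens = {normalize(t) for t in caption.lower().split()}
    return sorted(tokens & OBJECT_LABELS)

objs = oracle_objects("Put the mushrooms in the pan")  # -> ["mushroom", "pan"]
```

The resulting object set is fed to the encoder in random order, so the decoder receives an unordered bag of detections rather than a sequence.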
| { |
| "text": "How weak an oracle can still produce high performance? Fig. 6 shows the performance of models using subsets of the 62 objects (the most frequent 10% through 90% of objects) over one cross-validation fold. AT+Oracle outperforms AT+Video while detecting just 6 object types, and the oracle by itself (which is only given access to object sets) achieves performance comparable to AT+Video with 30 object types. These results suggest that, at least for this task, the Transformer decoder is likely not the main performance bottleneck, as it is able to paste together unordered object detections into captions effectively.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 76, |
| "end": 82, |
| "text": "Fig. 6", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "10%", |
| "sec_num": null |
| }, |
| { |
| "text": "In this work, we demonstrate the impact of incorporating both visual and ASR-token-based features into instructional video captioning models. Additional experiments investigate the complementarity of the visual and speech signals.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8" |
| }, |
| { |
| "text": "Our oracle experiments suggest that the performance bottleneck likely derives from the input encoding, as the decoder is able to paste together even simple sets of object detections into high-quality captions. Future work would thus be well-suited to investigate better models of the input data. Given the small size of the dataset, transfer learning may prove fruitful, e.g., pre-training the encoder with an unsupervised, auxiliary task; work contemporaneous with our submission from the computer vision community suggests that transfer learning is indeed a promising direction (Sun et al., 2019b,a; Miech et al., 2019).", |
| "cite_spans": [ |
| { |
| "start": 570, |
| "end": 591, |
| "text": "(Sun et al., 2019b,a;", |
| "ref_id": null |
| }, |
| { |
| "start": 592, |
| "end": 611, |
| "text": "Miech et al., 2019)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "8" |
| }, |
| { |
| "text": "We find that state-of-the-art models perform poorly even on just this subtask (see \u00a73.2), so we reserve the full task for future work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "How2 (Sanabria et al., 2018) tackles the different task of predicting video uploader-provided descriptions/captions, which are not always appropriate summarizations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "https://developers.google.com/youtube/v3/docs/captions 4 These preliminary experiments are not meant to provide a definitive, exact measure of inter-annotator agreement.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We tried several variants of this method, e.g., comparing test ASR to train ASR, but found that comparing test ASR to train captions performed the best.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Different vocabulary creation schemes, e.g., sub-word tokenization, led to small performance decreases.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For instance, the constant prediction baseline we consider would score low in both vocab coverage and uniqueness.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In general, making concrete statements about the causal link between inputs and outputs of sequence-to-sequence models is challenging, even in the text-to-text case, see Alvarez-Melis and Jaakkola (2017).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "High-precision object detectors are gaining popularity in the computer vision community because the training data is easier to annotate, e.g., Krasin et al. (2017). 10 This oracle is unlikely to be achievable, as it assumes 100% precision for the 62 objects considered (which also implies modeling which objects to talk about, a non-trivial task in itself (Berg et al., 2012)).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "Acknowledgements. We would like to thank Maria Antoniak, Nan Ding, Sebastian Goodman, Jean Griffin, Fernando Pereira, Chen Sun, and the anonymous reviewers for their helpful comments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "acknowledgement", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Unsupervised learning from narrated instruction videos", |
| "authors": [ |
| { |
| "first": "Piotr", |
| "middle": [], |
| "last": "Jean-Baptiste Alayrac", |
| "suffix": "" |
| }, |
| { |
| "first": "Nishant", |
| "middle": [], |
| "last": "Bojanowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Josef", |
| "middle": [], |
| "last": "Agrawal", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Sivic", |
| "suffix": "" |
| }, |
| { |
| "first": "Simon", |
| "middle": [], |
| "last": "Laptev", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Lacoste-Julien", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jean-Baptiste Alayrac, Piotr Bojanowski, Nishant Agrawal, Josef Sivic, Ivan Laptev, and Simon Lacoste-Julien. 2016. Unsupervised learning from narrated instruction videos. In CVPR.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "A causal framework for explaining the predictions of black-box sequence-to-sequence models", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Alvarez-Melis", |
| "suffix": "" |
| }, |
| { |
| "first": "Tommi", |
| "middle": [ |
| "S" |
| ], |
| "last": "Jaakkola", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Alvarez-Melis and Tommi S Jaakkola. 2017. A causal framework for explaining the predictions of black-box sequence-to-sequence models. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Look, listen and learn", |
| "authors": [ |
| { |
| "first": "Relja", |
| "middle": [], |
| "last": "Arandjelovic", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Zisserman", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "ICCV", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Relja Arandjelovic and Andrew Zisserman. 2017. Look, listen and learn. In ICCV.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Meteor: An automatic metric for mt evaluation with improved correlation with human judgments", |
| "authors": [ |
| { |
| "first": "Satanjeev", |
| "middle": [], |
| "last": "Banerjee", |
| "suffix": "" |
| }, |
| { |
| "first": "Alon", |
| "middle": [], |
| "last": "Lavie", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "ACL workshop on Evaluation Measures for MT and Summarization", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. Meteor: An automatic metric for mt evaluation with improved correlation with human judgments. In ACL work- shop on Evaluation Measures for MT and Summa- rization.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Understanding and predicting importance in images", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Alexander", |
| "suffix": "" |
| }, |
| { |
| "first": "Tamara", |
| "middle": [ |
| "L" |
| ], |
| "last": "Berg", |
| "suffix": "" |
| }, |
| { |
| "first": "Hal", |
| "middle": [], |
| "last": "Berg", |
| "suffix": "" |
| }, |
| { |
| "first": "Iii", |
| "middle": [], |
| "last": "Daum\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "Jesse", |
| "middle": [], |
| "last": "Dodge", |
| "suffix": "" |
| }, |
| { |
| "first": "Amit", |
| "middle": [], |
| "last": "Goyal", |
| "suffix": "" |
| }, |
| { |
| "first": "Xufeng", |
| "middle": [], |
| "last": "Han", |
| "suffix": "" |
| }, |
| { |
| "first": "Alyssa", |
| "middle": [], |
| "last": "Mensch", |
| "suffix": "" |
| }, |
| { |
| "first": "Margaret", |
| "middle": [], |
| "last": "Mitchell", |
| "suffix": "" |
| }, |
| { |
| "first": "Aneesh", |
| "middle": [], |
| "last": "Sood", |
| "suffix": "" |
| }, |
| { |
| "first": "Karl", |
| "middle": [], |
| "last": "Stratos", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexander C Berg, Tamara L Berg, Hal Daum\u00e9 III, Jesse Dodge, Amit Goyal, Xufeng Han, Alyssa Mensch, Margaret Mitchell, Aneesh Sood, Karl Stratos, et al. 2012. Understanding and predicting importance in images. In CVPR.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Attacking visual language grounding with adversarial examples: A case study on neural image captioning", |
| "authors": [ |
| { |
| "first": "Hongge", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Huan", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Pin-Yu", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Jinfeng", |
| "middle": [], |
| "last": "Yi", |
| "suffix": "" |
| }, |
| { |
| "first": "Cho-Jui", |
| "middle": [], |
| "last": "Hsieh", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hongge Chen, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, and Cho-Jui Hsieh. 2018. Attacking visual language grounding with adversarial examples: A case study on neural image captioning. In ACL.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Seeing and hearing too: Audio representation for video captioning", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Shun-Po", |
| "suffix": "" |
| }, |
| { |
| "first": "Chia-Hung", |
| "middle": [], |
| "last": "Chuang", |
| "suffix": "" |
| }, |
| { |
| "first": "Pang-Chi", |
| "middle": [], |
| "last": "Wan", |
| "suffix": "" |
| }, |
| { |
| "first": "Chi-Yu", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Hung-Yi", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "IEEE Automatic Speech Recognition and Understanding Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shun-Po Chuang, Chia-Hung Wan, Pang-Chi Huang, Chi-Yu Yang, and Hung-Yi Lee. 2017. Seeing and hearing too: Audio representation for video caption- ing. In IEEE Automatic Speech Recognition and Understanding Workshop.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A neural compositional paradigm for image captioning", |
| "authors": [ |
| { |
| "first": "Bo", |
| "middle": [], |
| "last": "Dai", |
| "suffix": "" |
| }, |
| { |
| "first": "Sanja", |
| "middle": [], |
| "last": "Fidler", |
| "suffix": "" |
| }, |
| { |
| "first": "Dahua", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bo Dai, Sanja Fidler, and Dahua Lin. 2018. A neu- ral compositional paradigm for image captioning. In NeurIPS.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Statistical comparisons of classifiers over multiple data sets", |
| "authors": [ |
| { |
| "first": "Janez", |
| "middle": [], |
| "last": "Dem\u0161ar", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Janez Dem\u0161ar. 2006. Statistical comparisons of classi- fiers over multiple data sets. JMLR.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Using closed captions as supervision for video activity recognition", |
| "authors": [ |
| { |
| "first": "Sonal", |
| "middle": [], |
| "last": "Gupta", |
| "suffix": "" |
| }, |
| { |
| "first": "Raymond J", |
| "middle": [], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sonal Gupta and Raymond J Mooney. 2010. Us- ing closed captions as supervision for video activity recognition. In AAAI.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Learning to localize and align fine-grained actions to sparse instructions", |
| "authors": [ |
| { |
| "first": "Meera", |
| "middle": [], |
| "last": "Hahn", |
| "suffix": "" |
| }, |
| { |
| "first": "Nataniel", |
| "middle": [], |
| "last": "Ruiz", |
| "suffix": "" |
| }, |
| { |
| "first": "Jean-Baptiste", |
| "middle": [], |
| "last": "Alayrac", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Laptev", |
| "suffix": "" |
| }, |
| { |
| "first": "James", |
| "middle": [ |
| "M" |
| ], |
| "last": "Rehg", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1809.08381" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Meera Hahn, Nataniel Ruiz, Jean-Baptiste Alayrac, Ivan Laptev, and James M Rehg. 2018. Learning to localize and align fine-grained actions to sparse instructions. arXiv preprint arXiv:1809.08381.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Integrating both visual and audio cues for enhanced video caption", |
| "authors": [ |
| { |
| "first": "Wangli", |
| "middle": [], |
| "last": "Hao", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhaoxiang", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "He", |
| "middle": [], |
| "last": "Guan", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wangli Hao, Zhaoxiang Zhang, and He Guan. 2018. Integrating both visual and audio cues for enhanced video caption. In AAAI.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Deep residual learning for image recognition", |
| "authors": [ |
| { |
| "first": "Kaiming", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiangyu", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Shaoqing", |
| "middle": [], |
| "last": "Ren", |
| "suffix": "" |
| }, |
| { |
| "first": "Jian", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In CVPR.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Localizing moments in video with natural language", |
| "authors": [ |
| { |
| "first": "Lisa", |
| "middle": [ |
| "Anne" |
| ], |
| "last": "Hendricks", |
| "suffix": "" |
| }, |
| { |
| "first": "Oliver", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Eli", |
| "middle": [], |
| "last": "Shechtman", |
| "suffix": "" |
| }, |
| { |
| "first": "Josef", |
| "middle": [], |
| "last": "Sivic", |
| "suffix": "" |
| }, |
| { |
| "first": "Trevor", |
| "middle": [], |
| "last": "Darrell", |
| "suffix": "" |
| }, |
| { |
| "first": "Bryan", |
| "middle": [], |
| "last": "Russell", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "ICCV", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lisa Anne Hendricks, Oliver Wang, Eli Shechtman, Josef Sivic, Trevor Darrell, and Bryan Russell. 2017. Localizing moments in video with natural language. In ICCV.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Attention-based multimodal fusion for video description", |
| "authors": [ |
| { |
| "first": "Chiori", |
| "middle": [], |
| "last": "Hori", |
| "suffix": "" |
| }, |
| { |
| "first": "Takaaki", |
| "middle": [], |
| "last": "Hori", |
| "suffix": "" |
| }, |
| { |
| "first": "Teng-Yok", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Ziming", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Bret", |
| "middle": [], |
| "last": "Harsham", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "John", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [ |
| "K" |
| ], |
| "last": "Hershey", |
| "suffix": "" |
| }, |
| { |
| "first": "Kazuhiko", |
| "middle": [], |
| "last": "Marks", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Sumi", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "ICCV", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chiori Hori, Takaaki Hori, Teng-Yok Lee, Ziming Zhang, Bret Harsham, John R Hershey, Tim K Marks, and Kazuhiko Sumi. 2017. Attention-based multimodal fusion for video description. In ICCV.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Finding it: Weakly-supervised reference-aware visual grounding in instructional videos", |
| "authors": [ |
| { |
| "first": "De-An", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Shyamal", |
| "middle": [], |
| "last": "Buch", |
| "suffix": "" |
| }, |
| { |
| "first": "Lucio", |
| "middle": [], |
| "last": "Dery", |
| "suffix": "" |
| }, |
| { |
| "first": "Animesh", |
| "middle": [], |
| "last": "Garg", |
| "suffix": "" |
| }, |
| { |
| "first": "Li", |
| "middle": [], |
| "last": "Fei-Fei", |
| "suffix": "" |
| }, |
| { |
| "first": "Juan", |
| "middle": [ |
| "Carlos" |
| ], |
| "last": "Niebles", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "De-An Huang, Shyamal Buch, Lucio Dery, Animesh Garg, Li Fei-Fei, and Juan Carlos Niebles. 2018. Finding it: Weakly-supervised reference-aware vi- sual grounding in instructional videos. In CVPR.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Unsupervised visual-linguistic reference resolution in instructional videos", |
| "authors": [ |
| { |
| "first": "De-An", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Joseph", |
| "middle": [ |
| "J" |
| ], |
| "last": "Lim", |
| "suffix": "" |
| }, |
| { |
| "first": "Li", |
| "middle": [], |
| "last": "Fei-Fei", |
| "suffix": "" |
| }, |
| { |
| "first": "Juan", |
| "middle": [ |
| "Carlos" |
| ], |
| "last": "Niebles", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "De-An Huang, Joseph J Lim, Li Fei-Fei, and Juan Car- los Niebles. 2017. Unsupervised visual-linguistic reference resolution in instructional videos. In CVPR.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Deep unordered composition rivals syntactic methods for text classification", |
| "authors": [ |
| { |
| "first": "Mohit", |
| "middle": [], |
| "last": "Iyyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Varun", |
| "middle": [], |
| "last": "Manjunatha", |
| "suffix": "" |
| }, |
| { |
| "first": "Jordan", |
| "middle": [], |
| "last": "Boyd-Graber", |
| "suffix": "" |
| }, |
| { |
| "first": "Hal", |
| "middle": [], |
| "last": "Daum\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "Iii", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mohit Iyyer, Varun Manjunatha, Jordan Boyd-Graber, and Hal Daum\u00e9 III. 2015. Deep unordered compo- sition rivals syntactic methods for text classification. In ACL.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Crowdsourcing step-by-step information extraction to enhance existing how-to videos", |
| "authors": [ |
| { |
| "first": "Juho", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Phu Tran", |
| "middle": [], |
| "last": "Nguyen", |
| "suffix": "" |
| }, |
| { |
| "first": "Sarah", |
| "middle": [], |
| "last": "Weir", |
| "suffix": "" |
| }, |
| { |
| "first": "Philip", |
| "middle": [ |
| "J" |
| ], |
| "last": "Guo", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [ |
| "C" |
| ], |
| "last": "Miller", |
| "suffix": "" |
| }, |
| { |
| "first": "Krzysztof", |
| "middle": [ |
| "Z" |
| ], |
| "last": "Gajos", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "CHI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Juho Kim, Phu Tran Nguyen, Sarah Weir, Philip J Guo, Robert C Miller, and Krzysztof Z Gajos. 2014. Crowdsourcing step-by-step information extraction to enhance existing how-to videos. In CHI.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Adam: A method for stochastic optimization", |
| "authors": [ |
| { |
| "first": "Diederik", |
| "middle": [ |
| "P" |
| ], |
| "last": "Kingma", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Ba", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "ICLR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diederik P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Openimages: A public dataset for large-scale multilabel and multi-class image classification", |
| "authors": [ |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Krasin", |
| "suffix": "" |
| }, |
| { |
| "first": "Tom", |
| "middle": [], |
| "last": "Duerig", |
| "suffix": "" |
| }, |
| { |
| "first": "Neil", |
| "middle": [], |
| "last": "Alldrin", |
| "suffix": "" |
| }, |
| { |
| "first": "Vittorio", |
| "middle": [], |
| "last": "Ferrari", |
| "suffix": "" |
| }, |
| { |
| "first": "Sami", |
| "middle": [], |
| "last": "Abu-El-Haija", |
| "suffix": "" |
| }, |
| { |
| "first": "Alina", |
| "middle": [], |
| "last": "Kuznetsova", |
| "suffix": "" |
| }, |
| { |
| "first": "Hassan", |
| "middle": [], |
| "last": "Rom", |
| "suffix": "" |
| }, |
| { |
| "first": "Jasper", |
| "middle": [], |
| "last": "Uijlings", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Popov", |
| "suffix": "" |
| }, |
| { |
| "first": "Shahab", |
| "middle": [], |
| "last": "Kamali", |
| "suffix": "" |
| }, |
| { |
| "first": "Matteo", |
| "middle": [], |
| "last": "Malloci", |
| "suffix": "" |
| }, |
| { |
| "first": "Jordi", |
| "middle": [], |
| "last": "Pont-Tuset", |
| "suffix": "" |
| }, |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Veit", |
| "suffix": "" |
| }, |
| { |
| "first": "Serge", |
| "middle": [], |
| "last": "Belongie", |
| "suffix": "" |
| }, |
| { |
| "first": "Victor", |
| "middle": [], |
| "last": "Gomes", |
| "suffix": "" |
| }, |
| { |
| "first": "Abhinav", |
| "middle": [], |
| "last": "Gupta", |
| "suffix": "" |
| }, |
| { |
| "first": "Chen", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Gal", |
| "middle": [], |
| "last": "Chechik", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Cai", |
| "suffix": "" |
| }, |
| { |
| "first": "Zheyun", |
| "middle": [], |
| "last": "Feng", |
| "suffix": "" |
| }, |
| { |
| "first": "Dhyanesh", |
| "middle": [], |
| "last": "Narayanan", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Murphy", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ivan Krasin, Tom Duerig, Neil Alldrin, Vittorio Fer- rari, Sami Abu-El-Haija, Alina Kuznetsova, Hassan Rom, Jasper Uijlings, Stefan Popov, Shahab Kamali, Matteo Malloci, Jordi Pont-Tuset, Andreas Veit, Serge Belongie, Victor Gomes, Abhinav Gupta, Chen Sun, Gal Chechik, David Cai, Zheyun Feng, Dhyanesh Narayanan, and Kevin Murphy. 2017. Openimages: A public dataset for large-scale multi- label and multi-class image classification.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Dense-captioning events in videos", |
| "authors": [ |
| { |
| "first": "Ranjay", |
| "middle": [], |
| "last": "Krishna", |
| "suffix": "" |
| }, |
| { |
| "first": "Kenji", |
| "middle": [], |
| "last": "Hata", |
| "suffix": "" |
| }, |
| { |
| "first": "Frederic", |
| "middle": [], |
| "last": "Ren", |
| "suffix": "" |
| }, |
| { |
| "first": "Li", |
| "middle": [], |
| "last": "Fei-Fei", |
| "suffix": "" |
| }, |
| { |
| "first": "Juan", |
| "middle": [ |
| "Carlos" |
| ], |
| "last": "Niebles", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "ICCV", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ranjay Krishna, Kenji Hata, Frederic Ren, Li Fei-Fei, and Juan Carlos Niebles. 2017. Dense-captioning events in videos. In ICCV.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Weakly supervised learning of actions from transcripts", |
| "authors": [ |
| { |
| "first": "Hilde", |
| "middle": [], |
| "last": "Kuehne", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Richard", |
| "suffix": "" |
| }, |
| { |
| "first": "Juergen", |
| "middle": [], |
| "last": "Gall", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Computer Vision and Image Understanding", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hilde Kuehne, Alexander Richard, and Juergen Gall. 2017. Weakly supervised learning of actions from transcripts. Computer Vision and Image Under- standing.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Jointly localizing and describing events for dense video captioning", |
| "authors": [ |
| { |
| "first": "Yehao", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Ting", |
| "middle": [], |
| "last": "Yao", |
| "suffix": "" |
| }, |
| { |
| "first": "Yingwei", |
| "middle": [], |
| "last": "Pan", |
| "suffix": "" |
| }, |
| { |
| "first": "Hongyang", |
| "middle": [], |
| "last": "Chao", |
| "suffix": "" |
| }, |
| { |
| "first": "Tao", |
| "middle": [], |
| "last": "Mei", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yehao Li, Ting Yao, Yingwei Pan, Hongyang Chao, and Tao Mei. 2018. Jointly localizing and describ- ing events for dense video captioning. In CVPR.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Rouge: A package for automatic evaluation of summaries", |
| "authors": [ |
| { |
| "first": "Chin-Yew", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Text Summarization Branches Out", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chin-Yew Lin. 2004. Rouge: A package for auto- matic evaluation of summaries. Text Summarization Branches Out.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "What are the high-level concepts with small semantic gaps", |
| "authors": [ |
| { |
| "first": "Yijuan", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| }, |
| { |
| "first": "Lei", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Qi", |
| "middle": [], |
| "last": "Tian", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei-Ying", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yijuan Lu, Lei Zhang, Qi Tian, and Wei-Ying Ma. 2008. What are the high-level concepts with small semantic gaps? In CVPR.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Exploring the limits of supervised pretraining", |
| "authors": [ |
| { |
| "first": "Dhruv", |
| "middle": [], |
| "last": "Mahajan", |
| "suffix": "" |
| }, |
| { |
| "first": "Ross", |
| "middle": [], |
| "last": "Girshick", |
| "suffix": "" |
| }, |
| { |
| "first": "Vignesh", |
| "middle": [], |
| "last": "Ramanathan", |
| "suffix": "" |
| }, |
| { |
| "first": "Kaiming", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Manohar", |
| "middle": [], |
| "last": "Paluri", |
| "suffix": "" |
| }, |
| { |
| "first": "Yixuan", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "ECCV", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dhruv Mahajan, Ross Girshick, Vignesh Ramanathan, Kaiming He, Manohar Paluri, Yixuan Li, Ashwin Bharambe, and Laurens van der Maaten. 2018. Ex- ploring the limits of supervised pretraining. In ECCV.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "What's cookin'? interpreting cooking videos using text, speech and vision", |
| "authors": [ |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Malmaud", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Vivek", |
| "middle": [], |
| "last": "Rathod", |
| "suffix": "" |
| }, |
| { |
| "first": "Nick", |
| "middle": [], |
| "last": "Johnston", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Rabinovich", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Murphy", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jonathan Malmaud, Jonathan Huang, Vivek Rathod, Nick Johnston, Andrew Rabinovich, and Kevin Murphy. 2015. What's cookin'? interpreting cook- ing videos using text, speech and vision. In NAACL.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Subgoal-labeled instructional material improves performance and transfer in learning to develop mobile applications", |
| "authors": [ |
| { |
| "first": "Lauren", |
| "middle": [ |
| "E" |
| ], |
| "last": "Margulieux", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Guzdial", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Catrambone", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Conference on International Computing Education Research", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lauren E Margulieux, Mark Guzdial, and Richard Catrambone. 2012. Subgoal-labeled instructional material improves performance and transfer in learn- ing to develop mobile applications. In Conference on International Computing Education Research.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "HowTo100M: Learning a Text-Video Embedding by Watching Hundred Million Narrated Video Clips", |
| "authors": [ |
| { |
| "first": "Antoine", |
| "middle": [], |
| "last": "Miech", |
| "suffix": "" |
| }, |
| { |
| "first": "Dimitri", |
| "middle": [], |
| "last": "Zhukov", |
| "suffix": "" |
| }, |
| { |
| "first": "Jean-Baptiste", |
| "middle": [], |
| "last": "Alayrac", |
| "suffix": "" |
| }, |
| { |
| "first": "Makarand", |
| "middle": [], |
| "last": "Tapaswi", |
| "suffix": "" |
| }, |
| { |
| "first": "Ivan", |
| "middle": [], |
| "last": "Laptev", |
| "suffix": "" |
| }, |
| { |
| "first": "Josef", |
| "middle": [], |
| "last": "Sivic", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "ICCV", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Antoine Miech, Dimitri Zhukov, Jean-Baptiste Alayrac, Makarand Tapaswi, Ivan Laptev, and Josef Sivic. 2019. HowTo100M: Learning a Text-Video Embedding by Watching Hundred Million Narrated Video Clips. In ICCV.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Improving video activity recognition using object recognition and text mining", |
| "authors": [ |
| { |
| "first": "Tanvi", |
| "middle": [ |
| "S" |
| ], |
| "last": "Motwani", |
| "suffix": "" |
| }, |
| { |
| "first": "Raymond", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "ECAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tanvi S Motwani and Raymond J Mooney. 2012. Improving video activity recognition using object recognition and text mining. In ECAI.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Inference for the generalization error", |
| "authors": [ |
| { |
| "first": "Claude", |
| "middle": [], |
| "last": "Nadeau", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "NeurIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Claude Nadeau and Yoshua Bengio. 2000. Inference for the generalization error. In NeurIPS.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Discriminative unsupervised alignment of natural language instructions with corresponding video segments", |
| "authors": [ |
| { |
| "first": "Iftekhar", |
| "middle": [], |
| "last": "Naim", |
| "suffix": "" |
| }, |
| { |
| "first": "Young", |
| "middle": [ |
| "C" |
| ], |
| "last": "Song", |
| "suffix": "" |
| }, |
| { |
| "first": "Qiguang", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Liang", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "Henry", |
| "middle": [], |
| "last": "Kautz", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiebo", |
| "middle": [], |
| "last": "Luo", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Gildea", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Iftekhar Naim, Young C Song, Qiguang Liu, Liang Huang, Henry Kautz, Jiebo Luo, and Daniel Gildea. 2015. Discriminative unsupervised alignment of natural language instructions with corresponding video segments. In NAACL.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Unsupervised alignment of natural language instructions with video segments", |
| "authors": [ |
| { |
| "first": "Iftekhar", |
| "middle": [], |
| "last": "Naim", |
| "suffix": "" |
| }, |
| { |
| "first": "Young", |
| "middle": [ |
| "Chol" |
| ], |
| "last": "Song", |
| "suffix": "" |
| }, |
| { |
| "first": "Qiguang", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Henry", |
| "middle": [ |
| "A" |
| ], |
| "last": "Kautz", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiebo", |
| "middle": [], |
| "last": "Luo", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Gildea", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Iftekhar Naim, Young Chol Song, Qiguang Liu, Henry A Kautz, Jiebo Luo, and Daniel Gildea. 2014. Unsupervised alignment of natural language instruc- tions with video segments. In AAAI.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Why you should lean into how-to content", |
| "authors": [ |
| { |
| "first": "Celie", |
| "middle": [], |
| "last": "O'Neil-Hart", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Celie O'Neil-Hart. 2018. Why you should lean into how-to content in 2018. www.thinkwithgoogle.com/ advertising-channels/video/ self-directed-learning-youtube/. Accessed: 2019-09-03.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Multimodal abstractive summarization for how2 videos", |
| "authors": [ |
| { |
| "first": "Shruti", |
| "middle": [], |
| "last": "Palaskar", |
| "suffix": "" |
| }, |
| { |
| "first": "Jindrich", |
| "middle": [], |
| "last": "Libovick\u00fd", |
| "suffix": "" |
| }, |
| { |
| "first": "Spandana", |
| "middle": [], |
| "last": "Gella", |
| "suffix": "" |
| }, |
| { |
| "first": "Florian", |
| "middle": [], |
| "last": "Metze", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shruti Palaskar, Jindrich Libovick\u00fd, Spandana Gella, and Florian Metze. 2019. Multimodal abstractive summarization for how2 videos. In ACL.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Bleu: a method for automatic evaluation of machine translation", |
| "authors": [ |
| { |
| "first": "Kishore", |
| "middle": [], |
| "last": "Papineni", |
| "suffix": "" |
| }, |
| { |
| "first": "Salim", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| }, |
| { |
| "first": "Todd", |
| "middle": [], |
| "last": "Ward", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei-Jing", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In ACL.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Multimodal video description", |
| "authors": [ |
| { |
| "first": "Vasili", |
| "middle": [], |
| "last": "Ramanishka", |
| "suffix": "" |
| }, |
| { |
| "first": "Abir", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "Dong", |
| "middle": [ |
| "Huk" |
| ], |
| "last": "Park", |
| "suffix": "" |
| }, |
| { |
| "first": "Subhashini", |
| "middle": [], |
| "last": "Venugopalan", |
| "suffix": "" |
| }, |
| { |
| "first": "Lisa", |
| "middle": [ |
| "Anne" |
| ], |
| "last": "Hendricks", |
| "suffix": "" |
| }, |
| { |
| "first": "Marcus", |
| "middle": [], |
| "last": "Rohrbach", |
| "suffix": "" |
| }, |
| { |
| "first": "Kate", |
| "middle": [], |
| "last": "Saenko", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "ACM MM", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vasili Ramanishka, Abir Das, Dong Huk Park, Sub- hashini Venugopalan, Lisa Anne Hendricks, Mar- cus Rohrbach, and Kate Saenko. 2016. Multimodal video description. In ACM MM.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "Grounding action descriptions in videos", |
| "authors": [ |
| { |
| "first": "Michaela", |
| "middle": [], |
| "last": "Regneri", |
| "suffix": "" |
| }, |
| { |
| "first": "Marcus", |
| "middle": [], |
| "last": "Rohrbach", |
| "suffix": "" |
| }, |
| { |
| "first": "Dominikus", |
| "middle": [], |
| "last": "Wetzel", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Thater", |
| "suffix": "" |
| }, |
| { |
| "first": "Bernt", |
| "middle": [], |
| "last": "Schiele", |
| "suffix": "" |
| }, |
| { |
| "first": "Manfred", |
| "middle": [], |
| "last": "Pinkal", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "TACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michaela Regneri, Marcus Rohrbach, Dominikus Wet- zel, Stefan Thater, Bernt Schiele, and Manfred Pinkal. 2013. Grounding action descriptions in videos. TACL.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Faster R-CNN: Towards real-time object detection with region proposal networks", |
| "authors": [ |
| { |
| "first": "Shaoqing", |
| "middle": [], |
| "last": "Ren", |
| "suffix": "" |
| }, |
| { |
| "first": "Kaiming", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Ross", |
| "middle": [], |
| "last": "Girshick", |
| "suffix": "" |
| }, |
| { |
| "first": "Jian", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "NeurIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster R-CNN: Towards real-time ob- ject detection with region proposal networks. In NeurIPS.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "How2: a large-scale dataset for multimodal language understanding", |
| "authors": [ |
| { |
| "first": "Ramon", |
| "middle": [], |
| "last": "Sanabria", |
| "suffix": "" |
| }, |
| { |
| "first": "Ozan", |
| "middle": [], |
| "last": "Caglayan", |
| "suffix": "" |
| }, |
| { |
| "first": "Shruti", |
| "middle": [], |
| "last": "Palaskar", |
| "suffix": "" |
| }, |
| { |
| "first": "Desmond", |
| "middle": [], |
| "last": "Elliott", |
| "suffix": "" |
| }, |
| { |
| "first": "Lo\u00efc", |
| "middle": [], |
| "last": "Barrault", |
| "suffix": "" |
| }, |
| { |
| "first": "Lucia", |
| "middle": [], |
| "last": "Specia", |
| "suffix": "" |
| }, |
| { |
| "first": "Florian", |
| "middle": [], |
| "last": "Metze", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the Workshop on Visually Grounded Interaction and Language (ViGIL). NeurIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ramon Sanabria, Ozan Caglayan, Shruti Palaskar, Desmond Elliott, Lo\u00efc Barrault, Lucia Specia, and Florian Metze. 2018. How2: a large-scale dataset for multimodal language understanding. In Pro- ceedings of the Workshop on Visually Grounded In- teraction and Language (ViGIL). NeurIPS.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Unsupervised semantic parsing of video collections", |
| "authors": [ |
| { |
| "first": "Ozan", |
| "middle": [], |
| "last": "Sener", |
| "suffix": "" |
| }, |
| { |
| "first": "Amir", |
| "middle": [ |
| "R" |
| ], |
| "last": "Zamir", |
| "suffix": "" |
| }, |
| { |
| "first": "Silvio", |
| "middle": [], |
| "last": "Savarese", |
| "suffix": "" |
| }, |
| { |
| "first": "Ashutosh", |
| "middle": [], |
| "last": "Saxena", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "ICCV", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ozan Sener, Amir R Zamir, Silvio Savarese, and Ashutosh Saxena. 2015. Unsupervised semantic parsing of video collections. In ICCV.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "Weakly supervised dense video captioning", |
| "authors": [ |
| { |
| "first": "Zhiqiang", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianguo", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhou", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| }, |
| { |
| "first": "Minjun", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Yurong", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Yu-Gang", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiangyang", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhiqiang Shen, Jianguo Li, Zhou Su, Minjun Li, Yurong Chen, Yu-Gang Jiang, and Xiangyang Xue. 2017. Weakly supervised dense video captioning. In CVPR.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "Dense procedure captioning in narrated instructional videos", |
| "authors": [ |
| { |
| "first": "Botian", |
| "middle": [], |
| "last": "Shi", |
| "suffix": "" |
| }, |
| { |
| "first": "Lei", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| }, |
| { |
| "first": "Yaobo", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "Nan", |
| "middle": [], |
| "last": "Duan", |
| "suffix": "" |
| }, |
| { |
| "first": "Peng", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhendong", |
| "middle": [], |
| "last": "Niu", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Botian Shi, Lei Ji, Yaobo Liang, Nan Duan, Peng Chen, Zhendong Niu, and Ming Zhou. 2019. Dense proce- dure captioning in narrated instructional videos. In ACL.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Contrastive bidirectional transformer for temporal representation learning", |
| "authors": [ |
| { |
| "first": "Chen", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Fabien", |
| "middle": [], |
| "last": "Baradel", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Murphy", |
| "suffix": "" |
| }, |
| { |
| "first": "Cordelia", |
| "middle": [], |
| "last": "Schmid", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1906.05743" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chen Sun, Fabien Baradel, Kevin Murphy, and Cordelia Schmid. 2019a. Contrastive bidirectional transformer for temporal representation learning. arXiv preprint arXiv:1906.05743.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "Videobert: A joint model for video and language representation learning", |
| "authors": [ |
| { |
| "first": "Chen", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| }, |
| { |
| "first": "Austin", |
| "middle": [], |
| "last": "Myers", |
| "suffix": "" |
| }, |
| { |
| "first": "Carl", |
| "middle": [], |
| "last": "Vondrick", |
| "suffix": "" |
| }, |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Murphy", |
| "suffix": "" |
| }, |
| { |
| "first": "Cordelia", |
| "middle": [], |
| "last": "Schmid", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "ICCV", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chen Sun, Austin Myers, Carl Vondrick, Kevin Mur- phy, and Cordelia Schmid. 2019b. Videobert: A joint model for video and language representation learning. In ICCV.", |
| "links": null |
| }, |
| "BIBREF47": { |
| "ref_id": "b47", |
| "title": "Audio-visual event localization in unconstrained videos", |
| "authors": [ |
| { |
| "first": "Yapeng", |
| "middle": [], |
| "last": "Tian", |
| "suffix": "" |
| }, |
| { |
| "first": "Jing", |
| "middle": [], |
| "last": "Shi", |
| "suffix": "" |
| }, |
| { |
| "first": "Bochen", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhiyao", |
| "middle": [], |
| "last": "Duan", |
| "suffix": "" |
| }, |
| { |
| "first": "Chenliang", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "ECCV", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yapeng Tian, Jing Shi, Bochen Li, Zhiyao Duan, and Chenliang Xu. 2018. Audio-visual event localiza- tion in unconstrained videos. In ECCV.", |
| "links": null |
| }, |
| "BIBREF48": { |
| "ref_id": "b48", |
| "title": "Tensor2tensor for neural machine translation", |
| "authors": [ |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Vaswani", |
| "suffix": "" |
| }, |
| { |
| "first": "Samy", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Brevdo", |
| "suffix": "" |
| }, |
| { |
| "first": "Francois", |
| "middle": [], |
| "last": "Chollet", |
| "suffix": "" |
| }, |
| { |
| "first": "Aidan", |
| "middle": [ |
| "N" |
| ], |
| "last": "Gomez", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephan", |
| "middle": [], |
| "last": "Gouws", |
| "suffix": "" |
| }, |
| { |
| "first": "Llion", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "\u0141ukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "Nal", |
| "middle": [], |
| "last": "Kalchbrenner", |
| "suffix": "" |
| }, |
| { |
| "first": "Niki", |
| "middle": [], |
| "last": "Parmar", |
| "suffix": "" |
| }, |
| { |
| "first": "Ryan", |
| "middle": [], |
| "last": "Sepassi", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "CoRR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ashish Vaswani, Samy Bengio, Eugene Brevdo, Fran- cois Chollet, Aidan N. Gomez, Stephan Gouws, Llion Jones, \u0141ukasz Kaiser, Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, and Jakob Uszkoreit. 2018. Tensor2tensor for neural machine translation. CoRR.", |
| "links": null |
| }, |
| "BIBREF49": { |
| "ref_id": "b49", |
| "title": "Attention is all you need", |
| "authors": [ |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Vaswani", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Niki", |
| "middle": [], |
| "last": "Parmar", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| }, |
| { |
| "first": "Llion", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "Aidan", |
| "middle": [ |
| "N" |
| ], |
| "last": "Gomez", |
| "suffix": "" |
| }, |
| { |
| "first": "\u0141ukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "Illia", |
| "middle": [], |
| "last": "Polosukhin", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "NeurIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In NeurIPS.", |
| "links": null |
| }, |
| "BIBREF50": { |
| "ref_id": "b50", |
| "title": "Cider: Consensus-based image description evaluation", |
| "authors": [ |
| { |
| "first": "Ramakrishna", |
| "middle": [], |
| "last": "Vedantam", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "Lawrence" |
| ], |
| "last": "Zitnick", |
| "suffix": "" |
| }, |
| { |
| "first": "Devi", |
| "middle": [], |
| "last": "Parikh", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ramakrishna Vedantam, C Lawrence Zitnick, and Devi Parikh. 2015. CIDEr: Consensus-based image description evaluation. In CVPR.", |
| "links": null |
| }, |
| "BIBREF51": { |
| "ref_id": "b51", |
| "title": "Bidirectional attentive fusion with context gating for dense video captioning", |
| "authors": [ |
| { |
| "first": "Jingwen", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Wenhao", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Lin", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yong", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jingwen Wang, Wenhao Jiang, Lin Ma, Wei Liu, and Yong Xu. 2018. Bidirectional attentive fusion with context gating for dense video captioning. In CVPR.", |
| "links": null |
| }, |
| "BIBREF52": { |
| "ref_id": "b52", |
| "title": "Learnersourcing subgoal labels for how-to videos", |
| "authors": [ |
| { |
| "first": "Sarah", |
| "middle": [], |
| "last": "Weir", |
| "suffix": "" |
| }, |
| { |
| "first": "Juho", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Krzysztof", |
| "middle": [ |
| "Z" |
| ], |
| "last": "Gajos", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [ |
| "C" |
| ], |
| "last": "Miller", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "CSCW", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sarah Weir, Juho Kim, Krzysztof Z Gajos, and Robert C Miller. 2015. Learnersourcing subgoal labels for how-to videos. In CSCW.", |
| "links": null |
| }, |
| "BIBREF53": { |
| "ref_id": "b53", |
| "title": "MSR-VTT: A large video description dataset for bridging video and language", |
| "authors": [ |
| { |
| "first": "Jun", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Tao", |
| "middle": [], |
| "last": "Mei", |
| "suffix": "" |
| }, |
| { |
| "first": "Ting", |
| "middle": [], |
| "last": "Yao", |
| "suffix": "" |
| }, |
| { |
| "first": "Yong", |
| "middle": [], |
| "last": "Rui", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jun Xu, Tao Mei, Ting Yao, and Yong Rui. 2016. MSR-VTT: A large video description dataset for bridging video and language. In CVPR.", |
| "links": null |
| }, |
| "BIBREF54": { |
| "ref_id": "b54", |
| "title": "Learning multimodal attention lstm networks for video captioning", |
| "authors": [ |
| { |
| "first": "Jun", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Ting", |
| "middle": [], |
| "last": "Yao", |
| "suffix": "" |
| }, |
| { |
| "first": "Yongdong", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Tao", |
| "middle": [], |
| "last": "Mei", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "ACM MM", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jun Xu, Ting Yao, Yongdong Zhang, and Tao Mei. 2017. Learning multimodal attention lstm networks for video captioning. In ACM MM.", |
| "links": null |
| }, |
| "BIBREF55": { |
| "ref_id": "b55", |
| "title": "Towards an integrated understanding of speaking rate in conversation", |
| "authors": [ |
| { |
| "first": "Jiahong", |
| "middle": [], |
| "last": "Yuan", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Liberman", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Cieri", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "International Conference on Spoken Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jiahong Yuan, Mark Liberman, and Christopher Cieri. 2006. Towards an integrated understanding of speaking rate in conversation. In International Conference on Spoken Language Processing.", |
| "links": null |
| }, |
| "BIBREF56": { |
| "ref_id": "b56", |
| "title": "Weakly-supervised video object grounding from text by loss weighting and object interaction", |
| "authors": [ |
| { |
| "first": "Luowei", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Nathan", |
| "middle": [], |
| "last": "Louis", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [ |
| "J" |
| ], |
| "last": "Corso", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "BMVC", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Luowei Zhou, Nathan Louis, and Jason J Corso. 2018a. Weakly-supervised video object grounding from text by loss weighting and object interaction. In BMVC.", |
| "links": null |
| }, |
| "BIBREF57": { |
| "ref_id": "b57", |
| "title": "Towards automatic learning of procedures from web instructional videos", |
| "authors": [ |
| { |
| "first": "Luowei", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Chenliang", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [ |
| "J" |
| ], |
| "last": "Corso", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Luowei Zhou, Chenliang Xu, and Jason J Corso. 2018b. Towards automatic learning of procedures from web instructional videos. In AAAI.", |
| "links": null |
| }, |
| "BIBREF58": { |
| "ref_id": "b58", |
| "title": "End-to-end dense video captioning with masked transformer", |
| "authors": [ |
| { |
| "first": "Luowei", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Yingbo", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [ |
| "J" |
| ], |
| "last": "Corso", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| }, |
| { |
| "first": "Caiming", |
| "middle": [], |
| "last": "Xiong", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Luowei Zhou, Yingbo Zhou, Jason J Corso, Richard Socher, and Caiming Xiong. 2018c. End-to-end dense video captioning with masked transformer. In CVPR.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "The AT+Video model. Both the encoder and decoder layers perform cross-modal attention." |
| }, |
| "FIGREF1": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "Example generations from AT+Video in cases where it performs well, okay, and poorly." |
| }, |
| "FIGREF2": { |
| "num": null, |
| "type_str": "figure", |
| "uris": null, |
| "text": "The performance of the oracle methods increases as they are given access to an increasing number of object types." |
| }, |
| "TABREF0": { |
| "num": null, |
| "html": null, |
| "type_str": "table", |
| "content": "<table><tr><td>Input:</td><td/></tr><tr><td>Target:</td><td>Cut up ginger and grate into the bowl</td></tr><tr><td>Input:</td><td>...best quality olive oil</td></tr><tr><td/><td>I can find...</td></tr><tr><td>Target:</td><td>Heat some olive oil in a sauce pan</td></tr><tr><td>Input:</td><td>... that's perfection in my book right there,</td></tr><tr><td/><td>that's...</td></tr><tr><td>Target:</td><td>Put the dish on a plate and serve</td></tr></table>", |
| "text": "similarly find that presenting explicit subgoals alongside how-to videos im-...knob of ginger and cut off a little bit and then just zest it..." |
| }, |
| "TABREF2": { |
| "num": null, |
| "html": null, |
| "type_str": "table", |
| "content": "<table/>", |
| "text": "The performance of several state-of-the-art, video-only models, with lower (constant prediction) and upper (human estimate) bounds." |
| }, |
| "TABREF4": { |
| "num": null, |
| "html": null, |
| "type_str": "table", |
| "content": "<table><tr><td>: Caption generation performance: AT+Video is</td></tr><tr><td>a multimodal model that adds visual frame features to</td></tr><tr><td>AT. A bolded value in a column indicates a statistically-</td></tr><tr><td>significant improvement, whereas an underline indi-</td></tr><tr><td>cates a statistical tie for best (p < .01).</td></tr></table>", |
| "text": "" |
| } |
| } |
| } |
| } |