# Method

Consider a video $\mathit{V}$ consisting of $N_v$ frames, described by a sentence $\mathit{S}$. Our Video-to-Commonsense (V2C) framework can be used to generate commonsense descriptions $\mathit{C}$ under two settings. In the first setting (**V2C-Completion**), we use ground-truth captions to guide commonsense-enriched caption generation. This task can be viewed as providing supplementary explanations for the caption. In the second setting (**V2C-Generation**), we first learn to generate captions from videos, $\mathbf{g}(\mathit{V})$, and then use them to generate commonsense descriptions. $$\begin{align} \small \begin{split} \textbf{V2C-Completion} \quad \mathit{C} &= \mathbf{f}(\mathit{V}, \mathit{S}).\\ \small \textbf{V2C-Generation} \quad \mathit{C} &= \mathbf{f}(\mathit{V}, \mathbf{g}(\mathit{V})). \end{split} \end{align}$$
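The two settings differ only in where the caption comes from. A minimal Python sketch, with stub functions standing in for the trained models $\mathbf{f}$ and $\mathbf{g}$ (all names and outputs here are illustrative, not from the actual system):

```python
# Hypothetical sketch of the two V2C settings. f_commonsense and
# g_caption are stubs standing in for the trained decoders f and g;
# their outputs are invented for illustration.
def g_caption(video_frames):
    """Caption generator g(V): maps video frames to a caption (stub)."""
    return "a man plays a video game"

def f_commonsense(video_frames, caption):
    """Commonsense generator f(V, S): conditions on video and caption (stub)."""
    return f"[intention] {caption} -> X wants to relax"

video = [f"frame_{i:03d}" for i in range(16)]  # toy stand-in for N_v frames

# V2C-Completion: a ground-truth caption S guides generation.
c_completion = f_commonsense(video, "a man plays a video game")

# V2C-Generation: the model first predicts the caption g(V) itself.
c_generation = f_commonsense(video, g_caption(video))
```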
*Figure 2: The V2C-Transformer model architecture: (a) a Video Encoder that takes video frames as input and encodes them into frame-wise representations; (b) a Decoder module consisting of a Caption Decoder and a Commonsense Decoder; and (c) a Transformer Decoder module containing a stack of N consecutive transformer blocks (shown inside the dashed area).*
The proposed Video2Commonsense Transformer is a cross-modal model that generates captions and commonsense-enriched descriptions from videos. Our approach (Figure [2](#fig:architecture){reference-type="ref" reference="fig:architecture"}) adopts an "encoder-decoder" design: a video encoder that extracts a global representation of the input video, and a transformer decoder that produces relevant commonsense knowledge along with captions. We obtain per-frame ResNet-152 [@he2016deep] features for video $\mathit{V}$ and process them with an LSTM model [@sundermeyer2012lstm], a standard architecture for modeling long temporal sequences, using the hidden states of the LSTM as the video representations. We concatenate the hidden states from each LSTM step into a final global video encoding $\mathbf{v}$, which provides the model with explicit context through the temporal attention mechanism. The video encoding is used as input to two decoder networks that use a transformer language model [@radford2018improving] to generate a caption and a commonsense description, using an inference mechanism similar to @bosselut2019comet. Our model operates in two stages: it first predicts the current events directly from the video, and then produces the corresponding commonsense descriptions. During training, the caption decoder $\mathbf{D}_{\textsc{CAP}}$ takes the video encoding ($\mathbf{v}$) and the ground-truth caption ($\mathbf{s}$) as input to generate the caption encoding ($\mathbf{\hat{s}}$), while the commonsense decoder $\mathbf{D}_{\textsc{CMS}}$ uses the concatenation of the video and caption encodings to obtain the commonsense description ($\mathbf{c}$), as shown in Figure [1](#fig:pipeline){reference-type="ref" reference="fig:pipeline"} (b). This arrangement enables the attention module in the commonsense decoder to attend to both the video and caption context.
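The encoder stage can be sketched in NumPy, assuming pre-extracted per-frame ResNet-152 features. The dimensions, random initialization, and the single-layer vanilla LSTM below are illustrative simplifications, not the paper's actual hyperparameters:

```python
import numpy as np

# Illustrative video encoder: per-frame features -> LSTM -> concatenated
# hidden states as the global video encoding v. All sizes are toy values.
rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    """One vanilla LSTM step (gate order: input, forget, output, cell)."""
    d = h.shape[0]
    z = W @ x + U @ h + b                                    # (4d,)
    i, f, o = (1.0 / (1.0 + np.exp(-z[k*d:(k+1)*d])) for k in range(3))
    g = np.tanh(z[3*d:])
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

n_frames, feat_dim, hid_dim = 8, 2048, 64
frames = rng.normal(size=(n_frames, feat_dim))   # stand-in ResNet-152 features
W = rng.normal(scale=0.01, size=(4*hid_dim, feat_dim))
U = rng.normal(scale=0.01, size=(4*hid_dim, hid_dim))
b = np.zeros(4*hid_dim)

h, c, states = np.zeros(hid_dim), np.zeros(hid_dim), []
for x in frames:
    h, c = lstm_step(x, h, c, W, U, b)
    states.append(h)

# Stack all hidden states into the global video encoding v, giving the
# decoders explicit per-frame context to attend over.
v = np.stack(states)                              # (n_frames, hid_dim)
```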
$$\begin{equation} \small \mathbf{\hat{s}} = \mathbf{D}_{\textsc{CAP}}(\mathbf{v}, \mathbf{s}), \quad \mathbf{c} = \mathbf{D}_{\textsc{CMS}}(\mathbf{v}, \mathbf{\hat{s}}). \end{equation}$$ **Transformer Decoder** is composed of a stack of transformer blocks (dashed area in Figure [2](#fig:architecture){reference-type="ref" reference="fig:architecture"} (c)), whose main component is a self-attention architecture. It takes as input the sum of the word embeddings and positional encodings, offset by one position, processed through masked multi-head attention, which prevents future words from being seen. In our model, we deploy two such stacked decoder architectures, one for caption decoding and one for commonsense knowledge decoding. The Transformer Block consists of a sequence of transformations: a multi-head attention module (denoted $\mathcal{H}_{\textsc{M-Att}}$), a two-layer feed-forward network ($\mathcal{H}_{\textsc{FFN}}$), a layer normalization operation, and a residual connection. To enable our transformer decoder to generate commonsense descriptions from both the visual and textual content, we modify the multi-head attention module (the basic unit in recent transformer-based language generation models [@radford2018improving; @radford2019language]) into a cross-modal module. $\mathcal{H}_{\textsc{M-Att}}$ takes as input the embeddings of the key (K), value (V), and query (Q). The key and value in a transformer block are the video encoding (caption decoder) or the concatenation of the video and caption encodings (commonsense decoder), while the query is the output of the previous transformer block. In the masked multi-head attention module, K, V, and Q are identical vectors of the input embedding.
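The masking in the masked multi-head attention can be illustrated with a small NumPy sketch: an upper-triangular $-\infty$ mask is added to the attention logits before the softmax, so position $t$ can only attend to positions up to $t$. This is the standard causal-mask construction; the details below are illustrative:

```python
import numpy as np

def causal_mask(seq_len):
    """Upper-triangular -inf mask added to attention logits pre-softmax."""
    return np.triu(np.full((seq_len, seq_len), -np.inf), k=1)

def masked_softmax(logits):
    """Softmax over the last axis after applying the causal mask."""
    logits = logits + causal_mask(logits.shape[-1])
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# With uniform (zero) logits, row t spreads attention evenly over
# positions 0..t and gives exactly zero weight to future positions.
probs = masked_softmax(np.zeros((4, 4)))
```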
For a self-attention block with $h$ heads, $$\begin{equation} \small \mathcal{H}_{\textsc{M-Att}}(\textsc{K}, \textsc{V}, \textsc{Q}) = \mathcal{H}_{\textsc{FFN}}([x_1,\dots, x_h]), \end{equation}$$ where $x_i$ is computed by a scaled dot-product attention operation, for head index $i$, key dimension $d_k$, and transformation parameters $\textsc{w}_i$. $$\begin{equation} \small \begin{split} \textbf{for } \mathbf{D}_{\textsc{CAP}}, &\quad {x_i} = \textsc{Softmax}(\frac{\textsc{w}^\textsc{q}_i \textsc{Q}\cdot \textsc{w}^\textsc{k}_i \textsc{K}^\prime}{\sqrt{d_k}})\textsc{w}^\textsc{v}_i \textsc{V}, \\ \textbf{for } \mathbf{D}_{\textsc{CMS}}, &\quad {x_i} = \textsc{Softmax}(\frac{\textsc{w}^\textsc{q}_i [\mathbf{v}, \mathbf{\hat{s}}]\cdot \textsc{w}^\textsc{k}_i [\mathbf{v}, \mathbf{\hat{s}}]^\prime}{\sqrt{d_k}})\textsc{w}^\textsc{v}_i \textsc{V}. \end{split} \nonumber \end{equation}$$
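A NumPy sketch of a single head $x_i$ in the caption decoder, following the equation above. Projection sizes and random inputs are illustrative; Q stands in for the previous block's output, while K and V stand in for the video encoding:

```python
import numpy as np

# One attention head: x_i = Softmax((Q W_q)(K W_k)' / sqrt(d_k)) (V W_v).
# All dimensions and weights here are toy values, not the paper's.
rng = np.random.default_rng(1)
d_model, d_k = 32, 16
n_q, n_kv = 5, 8                        # query / key-value sequence lengths

Q = rng.normal(size=(n_q, d_model))     # previous-block output
K = rng.normal(size=(n_kv, d_model))    # video encoding (caption decoder)
V = rng.normal(size=(n_kv, d_model))

W_q = rng.normal(scale=0.1, size=(d_model, d_k))
W_k = rng.normal(scale=0.1, size=(d_model, d_k))
W_v = rng.normal(scale=0.1, size=(d_model, d_k))

def softmax(a):
    e = np.exp(a - a.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

scores = (Q @ W_q) @ (K @ W_k).T / np.sqrt(d_k)   # (n_q, n_kv)
x_i = softmax(scores) @ (V @ W_v)                 # (n_q, d_k)
```

For the commonsense decoder, K and V would instead be the concatenation of the video and caption encodings, letting each query attend over both modalities.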
# V2C Dataset

*Figure: The overall three-step pipeline (retrieval from ATOMIC, BERT re-ranking, and human labeling) used to construct our V2C dataset.*
For the V2C task we need video clips annotated with commonsense descriptions about the agents in the video, as shown in Figure [1](#fig:pipeline){reference-type="ref" reference="fig:pipeline"}. While there are video captioning datasets such as MSR-VTT [@xu2016msr], the captions in these datasets describe only the observable objects in the video, and do not describe latent and commonsense aspects. We are the first to curate such a dataset, with annotations describing the intention of the agent to perform an action, the effect of the action, and the attribute of the agent given the action. MSR-VTT contains around 10k videos, each 10 to 30 seconds long, belonging to 20 categories covering a variety of topics such as sports, music, news, and home videos. Each video is accompanied by 20 human-annotated textual descriptions on average. For training and benchmarking on the novel V2C task, we further complement MSR-VTT with event-level commonsense annotations, i.e., event descriptions with intentions, effects, and attributes. We remove captions and videos that do not show clear human activities, because such videos lead to an imbalance in the number of captions per video, making it inappropriate to evaluate caption generation using BLEU scores alone. A[tomic]{.smallcaps} [@sap2018atomic] is an atlas of everyday commonsense knowledge containing 880k triplets about causes and effects of human activities, organized as *if-then* relations and annotated by crowd-sourced workers. This data can be categorized by causal relations into "cause", "effect", and "attribute", e.g., "*if* X wants to relax, *then* he will play a video game." Since the inferential knowledge in A[tomic]{.smallcaps} covers only human activities, we first retain only those captions in Msr-vtt that describe human activities. We then select the three queries from A[tomic]{.smallcaps} most similar to the caption, and extract the commonsense descriptions corresponding to these queries.
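The query-selection step can be sketched as similarity-based retrieval. The text does not specify the similarity measure used at this stage, so the sketch below uses bag-of-words cosine similarity as a stand-in, with made-up ATOMIC-style events:

```python
from collections import Counter
import math

# Hypothetical retrieval sketch: pick the k ATOMIC events most similar
# to a caption. Cosine similarity over word counts is an assumption,
# and the example events below are invented.
def cosine(a, b):
    num = sum(a[w] * b[w] for w in a)
    den = (math.sqrt(sum(v*v for v in a.values()))
           * math.sqrt(sum(v*v for v in b.values())))
    return num / den if den else 0.0

def top_k_events(caption, events, k=3):
    cap = Counter(caption.lower().split())
    return sorted(events,
                  key=lambda e: cosine(cap, Counter(e.lower().split())),
                  reverse=True)[:k]

events = [
    "PersonX plays a video game",
    "PersonX cooks dinner",
    "PersonX sings a song on stage",
    "PersonX watches a video",
]
queries = top_k_events("a man plays a video game", events, k=3)
```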
To select a more reasonable subset of commonsense descriptions, we first train a ranking model. We use the BERT [@devlin2018bert] architecture for the ranking model, trained on the ATOMIC dataset for a binary classification task: predicting the relevance of a candidate commonsense description with respect to the event. We select the top three relevant intentions, effects, and attributes for each caption. This yields a preliminary set of 9 commonsense annotations per video drawn directly from the A[tomic]{.smallcaps} dataset, relevant to the caption, albeit with noise and annotations that are not relevant to the video. Since we do not use the video itself to retrieve commonsense descriptions from ATOMIC, we employ human workers to annotate our dataset. We recruit two sets of human workers to watch the video, read the caption, and select or annotate the relevant commonsense descriptions for each video. The first set is Amazon Mechanical Turk (AMT) workers, who select relevant descriptions. The second set is skilled human annotators, screened from a set of university students proficient in English, who are asked to provide annotations in their own words and to remove or edit irrelevant annotations provided by ATOMIC and the AMT workers. This makes our annotations not only grounded in the video, but also more descriptive, linguistically diverse, and of higher quality (see Figure [3](#fig:v2cdataset){reference-type="ref" reference="fig:v2cdataset"}). The descriptions from ATOMIC, although not always relevant to the video, give our workers an idea of the desired annotation format. The skilled annotators reported that $95\%$ of the captions were relevant, and that $65\%$ of the ATOMIC descriptions were useful in understanding the annotation task.
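Per relation type, the re-ranking step reduces to scoring each candidate against the event and keeping the top three. A sketch with a stub in place of the fine-tuned BERT relevance classifier (the scorer and candidate lists below are invented for illustration):

```python
# Hypothetical re-ranking sketch. stub_scorer stands in for the BERT
# binary classifier's relevance probability P(relevant | event, cand);
# here it is replaced with simple word overlap for illustration.
def rank_descriptions(event, candidates, scorer, k=3):
    """Keep the k candidates the scorer judges most relevant to the event."""
    return sorted(candidates, key=lambda c: scorer(event, c), reverse=True)[:k]

def stub_scorer(event, candidate):
    # Placeholder relevance score: count of shared words.
    return len(set(event.lower().split()) & set(candidate.lower().split()))

event = "PersonX plays a video game"
intentions = [          # invented candidate intentions from retrieval
    "to relax after work",
    "to play and have fun with the game",
    "to buy groceries",
    "to win the video game",
    "to repair a car",
]
top3 = rank_descriptions(event, intentions, stub_scorer)
```

The same ranking is applied separately to effect and attribute candidates, giving 3 descriptions per relation type (9 per caption).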
Through this procedure, we obtain 6819 videos for training and 2906 videos for testing, with a total of 121,651 captions ($\sim$12 captions/video), each caption accompanied by 5 commonsense knowledge annotations (the V2C-Raw set). In our experiments, we use video captioning techniques to perform the V2C-Completion task on the V2C-Raw set. In addition, we instruct human annotators to select one raw phrase and rewrite it into a complete sentence that complements the caption. In total we have 3 complete sentences per video, for intention/effect/attribute respectively, yielding a subset that allows our model to generate complete story-like sentences (the V2C-Clean set). Table [\[tab:atomic_generations\]](#tab:atomic_generations){reference-type="ref" reference="tab:atomic_generations"} shows examples from the newly compiled dataset. We conduct a rigorous human evaluation of the quality of our V2C dataset ("Gold Annotations" in Table [\[tab:humanevaluation\]](#tab:humanevaluation){reference-type="ref" reference="tab:humanevaluation"}). Details about the dataset creation process and quality control mechanisms can be found in the Appendix.