Method
We model each document as a non-projective dependency parse tree by treating inter-token attention scores as edge weights of the tree, and use the Matrix-Tree theorem [@koo2007structured; @tutte1984graph] to compute the marginal probability of each edge efficiently.
As shown in Figure 1, we feed the input tokens $w_i$ to a bi-LSTM encoder to obtain hidden state representations $h_i$. We decompose each hidden state vector into two parts, $d_i$ and $e_i$, which we call the structural part and the semantic part, respectively: $$\begin{align} [e_i,d_i] &= h_i \end{align}$$
For every pair of input tokens, we transform their structural parts $d$ and compute the probability of a parent-child edge between them in the dependency tree. For tokens $j$ and $k$, this is done as: $$\begin{align*}
u_{j} = \tanh(W_{p}d_j);\ \ \ \
u_{k} = \tanh(W_{c}d_k)
\end{align*}$$ where $W_p$ and $W_c$ are learned. Next, we compute an inter-token attention function $f_{jk}$ as follows: $$\begin{align*}
f_{jk} &= u_{k}^{T}W_{a}u_{j}
\end{align*}$$ where $W_a$ is also learned. For a document with $K$ tokens, $f$ is a $K\times K$ matrix representing inter-token attention. We model each token as a node in the dependency tree and define the probability of an edge between tokens at positions $j$ and $k$, $P(z_{jk}=1)$, which is given as, $$\begin{align*}
A_{jk} &= \begin{dcases}
0 & \text{if $j=k$}\\
\exp(f_{jk}) & \text{otherwise}
\end{dcases} \\
L_{jk} &= \begin{dcases}
\sum_{j'=1}^K A_{j'k} & \text{if $j=k$}\\
-A_{jk} & \text{otherwise}
\end{dcases} \\
f_j^r &= W_r d_j \\
\bar{L}_{jk} &= \begin{dcases}
\exp(f_k^r) & \text{if $j=1$}\\
L_{jk} & \text{if $j>1$}
\end{dcases} \\
P(z_{jk}=1) &= (1-\delta(k,1))\,A_{jk}\,[\bar{L}^{-1}]_{kk} - (1-\delta(j,1))\,A_{jk}\,[\bar{L}^{-1}]_{kj}
\end{align*}$$ where $\delta(x,y) = 1$ when $x = y$ and $0$ otherwise. We denote $P(z_{jk}=1)$ by $a_{jk}$ (structural attention). Let $a_{j}^{r}$ be the probability of the $j^{th}$ token being the root: $$\begin{align}
a_j^r &= \exp(f_j^r)\, [\bar{L}^{-1}]_{j1}
\end{align}$$ We use this (soft) dependency tree formulation to compute a structural representation $r_i$ for each encoder token, where $e_{\mathrm{root}}$ is a special embedding for the root node: $$\begin{align*}
s_i &= a_i^r e_{\mathrm{root}} + \sum_{k=1}^n a_{ki}e_k;\ \
c_i = \sum_{k=1}^n a_{ik} e_k \\
r_i &= \tanh(W_r[e_i,s_i,c_i])
\end{align*}$$
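The computation above can be sketched in a few lines of NumPy. This is an illustrative implementation under assumed shapes, not the authors' code: `W_out` stands in for the (overloaded) $W_r$ of the output projection, and `e_root` is the root embedding. A useful sanity check falls out of the Matrix-Tree theorem: for every token, its edge marginals plus its root probability sum to one.

```python
import numpy as np

def structural_attention(d, e, W_p, W_c, W_a, w_r, e_root, W_out):
    """Soft dependency-tree (Matrix-Tree) structural attention sketch.

    d: (K, m) structural parts; e: (K, n) semantic parts.
    W_p, W_c, W_a: (m, m); w_r: (m,); e_root: (n,); W_out: (n, 3n).
    """
    K = d.shape[0]
    u_p = np.tanh(d @ W_p.T)                  # u_j = tanh(W_p d_j), parent role
    u_c = np.tanh(d @ W_c.T)                  # u_k = tanh(W_c d_k), child role
    f = u_p @ W_a.T @ u_c.T                   # f[j, k] = u_k^T W_a u_j
    A = np.exp(f) * (1.0 - np.eye(K))         # edge weights, zero diagonal
    L = np.diag(A.sum(axis=0)) - A            # Laplacian: L_kk = sum_j A_jk
    f_r = d @ w_r                             # root scores f_j^r
    L_bar = L.copy()
    L_bar[0, :] = np.exp(f_r)                 # first row <- exp(f_k^r)
    L_inv = np.linalg.inv(L_bar)

    mask_k = np.ones((1, K)); mask_k[0, 0] = 0.0   # (1 - delta(k, 1))
    mask_j = np.ones((K, 1)); mask_j[0, 0] = 0.0   # (1 - delta(j, 1))
    # marginal edge probabilities a[j, k] = P(z_jk = 1)
    a = mask_k * A * np.diag(L_inv)[None, :] - mask_j * A * L_inv.T
    a_root = np.exp(f_r) * L_inv[:, 0]             # a_j^r

    s = np.outer(a_root, e_root) + a.T @ e         # expected parent context
    c = a @ e                                      # expected child context
    r = np.tanh(np.concatenate([e, s, c], axis=1) @ W_out.T)
    return a, a_root, r
```

Because $\bar{L}$ must be inverted, the whole $K \times K$ marginal computation is differentiable, which is what lets the tree be learned end-to-end.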
Thus, for encoder step $i$, we now obtain a structure-infused hidden representation $r_i$. We then compute the contextual attention for each decoding time step $t$ as, $$\begin{align*}
{e_{\mathrm{struct}}}_{i}^{t} &= v^T \tanh(W_r r_i + W_s s_t + b_{\mathrm{attn}}) \\
a_{\mathrm{struct}}^{t} &= \mathrm{softmax}({e_{\mathrm{struct}}}^t)
\end{align*}$$
Now, using $a_{\mathrm{struct}}^{t}$, we can compute a context vector as a weighted sum of the hidden state vectors, similar to the standard attentional model: $c^{t}_{\mathrm{struct}} = \sum_{i=1}^n {a_{\mathrm{struct}}}_{i}^t h_i$. At every decoder time step, we also compute the basic contextual attention distribution $a^t$ (without structure incorporation), as discussed previously. We use $c^{t}_{\mathrm{struct}}$ to compute $p_{vocab}$ and $p_{gen}$; however, we use the initial attention distribution $a^t$ to compute $p(w)$ in order to facilitate token-level pointing.
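A single decoder step of this structure-aware attention can be sketched as follows; the shapes, the explicit numerically-stable softmax, and the function name are our assumptions:

```python
import numpy as np

def structural_context(r, h, s_t, W_r, W_s, v, b_attn):
    """One decoder step of contextual attention over structure-infused states.

    r: (K, p) structure-infused states; h: (K, q) raw encoder states;
    s_t: (p,) decoder state; W_r, W_s: (p, p); v, b_attn: (p,).
    """
    scores = np.tanh(r @ W_r.T + s_t @ W_s.T + b_attn) @ v  # e_struct_i^t
    a = np.exp(scores - scores.max())
    a /= a.sum()                                            # softmax over i
    c_t = a @ h                                             # context vector
    return a, c_t
```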
The structural attention model, while effective, requires $\mathcal{O}(K^2)$ memory to compute attention, where $K$ is the length of the input sequence, making it memory-intensive for long documents. Moreover, in CQA, answers reflect the differing opinions of individual users, so concatenating answers loses user-specific information. To better model discourse structure in the presence of conflicting (where one answer contradicts another), overlapping or varying opinions, we introduce a hierarchical encoder model based on structural attention (Figure 2). We feed each answer independently to a bi-LSTM encoder and obtain token-level hidden representations $h_{idx,tidx}$, where $idx$ is the document (or answer) index and $tidx$ is the token index. We then transform these representations into structure-infused vectors $r_{idx,tidx}$ as described previously. For each answer in the CQA question thread, we pool the token representations to obtain a composite answer vector $r_{idx}$. We consider three types of pooling -- average, max and sum -- of which sum pooling performed best in initial experiments. We feed these structure-infused answer embeddings to an answer-level bi-LSTM encoder and obtain higher-level encoder hidden states $h_{idx}$, from which we calculate structure-infused embeddings $g_{idx}$. At decoder time step $t$, we calculate contextual attention at both the answer and the token level as follows: $$\begin{align*}
{e_{\mathrm{ans}}}_{idx}^{t} &= v^{T} \tanh(W_{g}g_{idx} + W_{s}s_{t} + b_{\mathrm{attn}}) \\
a_{\mathrm{ans}}^{t} &= \mathrm{softmax}({e_{\mathrm{ans}}}^{t}) \\
{e_{\mathrm{token}}}_{idx,tidx}^{t} &= v^{T} \tanh(W_{h}h_{idx,tidx} + W_{s}s_{t} + b_{\mathrm{attn}}) \\
{a_{\mathrm{token}}}^{t} &= \mathrm{softmax}({e_{\mathrm{token}}}^{t})
\end{align*}$$
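The pooling step that turns token-level vectors $r_{idx,tidx}$ into composite answer vectors $r_{idx}$ can be sketched as below; the function name and the list-of-arrays input format are our assumptions (answers have different lengths, so they cannot share one dense tensor without padding):

```python
import numpy as np

def pool_answers(token_reprs, mode="sum"):
    """Pool token-level structure-infused vectors into one vector per answer.

    token_reprs: list of (T_idx, dim) arrays, one per answer (variable T_idx).
    mode: "sum" (best in the text's initial experiments), "avg", or "max".
    """
    pools = {"sum": np.sum, "avg": np.mean, "max": np.max}
    return np.stack([pools[mode](r, axis=0) for r in token_reprs])
```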
We use the answer-level attention distribution $a_{\mathrm{ans}}^{t}$ to compute the context vector at every decoder time step, which we use to calculate $p_{vocab}$ and $p_{gen}$ as described before.
To enable copying, we use $a_\mathrm{token}$. The final probability of predicting word $w$ is given by, $$\begin{align*} p(w) &= p_{gen} p_{vocab}(w) + (1 - p_{gen}) \sum_{i:w_i=w}{a_\mathrm{token}}_{i}^t \end{align*}$$
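The final mixture of the generation and copy distributions can be sketched as follows; the function name and the flattened `src_ids` representation (the vocabulary id of each source token) are our assumptions:

```python
import numpy as np

def final_distribution(p_vocab, a_token, src_ids, p_gen):
    """Pointer-generator output mixture.

    p_vocab: (V,) generation distribution; a_token: (S,) token-level
    attention over all source tokens; src_ids: (S,) vocab id of each
    source token; p_gen: scalar generation probability.
    """
    p = p_gen * p_vocab
    copy = np.zeros_like(p_vocab)
    # scatter-add handles repeated source tokens: sum over i with w_i = w
    np.add.at(copy, src_ids, (1.0 - p_gen) * a_token)
    return p + copy
```

`np.add.at` is used instead of fancy-indexed assignment so that attention mass from repeated source tokens accumulates rather than overwrites.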
We primarily use two datasets to evaluate performance:
(i) The CNN/Dailymail[^1] dataset [@hermann2015teaching; @nallapati2016abstractive] is a news corpus containing document-summary pairs: the bulleted highlights accompanying CNN and Dailymail news articles online serve as summaries of the corresponding articles. We use the scripts released by [@nallapati2016abstractive] to extract approximately $250k$ training pairs, $13k$ validation pairs and $11.5k$ test pairs from the corpus, and work with the non-anonymized form of the data for fair comparison with the experiments of [@summ2]. It is a factual English corpus with an average document length of $781$ tokens and an average summary length of $56$ tokens. We use two versions of this dataset -- one with 400-word articles and the other with 800-word articles. Most research on this dataset reports results on CNN/Dailymail-400. We also consider the 800-token version because longer articles harbor more intra-document structural dependencies, which allows us to better demonstrate the benefits of structure incorporation; moreover, longer documents better resemble real-world data.

(ii) The CQA dataset[^2] [@chowdhury2018cqasumm] is generated by filtering the Yahoo! Answers L6 corpus for question threads in which the best answer can serve as a summary of the remaining answers. The authors use a series of heuristics to arrive at a set of $100k$ question thread-summary pairs. The summaries are generated by modifying best answers and selecting the most question-relevant sentences from them; the remaining answers serve as candidate documents for summarization, making this a large-scale, diverse and highly abstractive dataset. On average, the corpus has $12$ answers per question thread, with $65$ words per answer. All summaries are truncated at $100$ words. We split the $100k$ dataset into $80k$ training, $10k$ validation and $10k$ test instances.
We additionally extract the upvote count of every answer from the L6 dataset, assuming that upvotes correlate strongly with the relative importance and relevance of an answer. We then rank answers in decreasing order of upvotes before concatenating them, as required by several baselines. Since Yahoo! Answers is an unstructured and unmoderated question-answer repository, this has proven to be a challenging summarization dataset [@chowdhury2018cqasumm]. Additionally, we include analysis on MultiNews [@fabbri2019multi], a news-based MDS corpus, to aid similar studies. It is the first large-scale MDS news dataset, consisting of 56,216 article-summary pairs drawn from a variety of news websites.
We compare the performance of the following SDS and MDS models.
Lead3: It is an extractive baseline where the first 100 tokens of the document (for SDS datasets) or of the concatenated ranked documents (for MDS datasets) are picked to form the summary.
KL-Summ: It is an extractive summarization method introduced by [@haghighi2009exploring] that attempts to minimize KL-Divergence between candidate documents and generated summary.
LexRank: It is an unsupervised extractive summarization method [@erkan2004lexrank]. A graph is built with sentences as vertices, and edge weights are assigned based on sentence similarity.
TextRank: It is an unsupervised extractive summarization method which selects sentences such that the information being disseminated by the summary is as close as possible to the original documents [@mihalcea2004textrank].
Pointer-Generator (PG): It is a supervised abstractive summarization model [@summ2], as discussed earlier. It is a strong and popularly used baseline for summarization.
Pointer-Generator + Structure Infused Copy (PG+SC): Our implementation is similar to one of the methods proposed by [@song2018structure]. We explicitly compute the dependency tree of sentences and encode a structure vector based on features like POS tag, number of incoming edges, depth of the tree, etc. We then concatenate this structural vector for every token to its hidden state representation in pointer-generator networks.
Pointer-Generator+MMR (PG+MMR): It is an abstractive MDS model [@lebanoff2018adapting] trained on the CNN/Dailymail dataset. It combines the Maximal Marginal Relevance (MMR) method with pointer-generator networks, and shows strong performance on the DUC-04 and TAC-11 datasets.
Hi-MAP: It is an abstractive MDS model by [@fabbri2019multi] extending PG and MMR.
Pointer-Generator + Structural Attention (PG+SA): It is the model proposed in this work for SDS and MDS tasks. We incorporate structural attention with pointer generator networks and use multi-level contextual attention to generate summaries.
Pointer-Generator + Hierarchical Structural Attention (PG+HSA): We use multi-level structural attention to additionally induce a document-level non-projective dependency tree to generate more insightful summaries.