Dataset Preview
The full dataset viewer is not available. Only showing a preview of the rows.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    ArrowInvalid
Message:      JSON parse error: Missing a closing quotation mark in string. in row 5
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 145, in _generate_tables
                  dataset = json.load(f)
                File "/usr/local/lib/python3.9/json/__init__.py", line 293, in load
                  return loads(fp.read(),
                File "/usr/local/lib/python3.9/json/__init__.py", line 346, in loads
                  return _default_decoder.decode(s)
                File "/usr/local/lib/python3.9/json/decoder.py", line 340, in decode
                  raise JSONDecodeError("Extra data", s, end)
              json.decoder.JSONDecodeError: Extra data: line 2 column 1 (char 21419)
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1995, in _prepare_split_single
                  for _, table in generator:
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 148, in _generate_tables
                  raise e
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/json/json.py", line 122, in _generate_tables
                  pa_table = paj.read_json(
                File "pyarrow/_json.pyx", line 308, in pyarrow._json.read_json
                File "pyarrow/error.pxi", line 154, in pyarrow.lib.pyarrow_internal_check_status
                File "pyarrow/error.pxi", line 91, in pyarrow.lib.check_status
              pyarrow.lib.ArrowInvalid: JSON parse error: Missing a closing quotation mark in string. in row 5
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1529, in compute_config_parquet_and_info_response
                  parquet_operations = convert_to_parquet(builder)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1154, in convert_to_parquet
                  builder.download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
                  self._download_and_prepare(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
                  self._prepare_split(split_generator, **prepare_split_kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset
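The two failures above are consistent with a JSON Lines data file that contains at least one malformed row (here, a string missing its closing quotation mark in row 5). A quick local check along the following lines can pinpoint the offending rows before re-uploading; the file path below is a placeholder, not the dataset's actual file name.

    import json

    path = "data/train.jsonl"  # placeholder: substitute the dataset's actual data file

    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            line = line.strip()
            if not line:
                continue
            try:
                json.loads(line)
            except json.JSONDecodeError as err:
                # Report each malformed row (e.g. an unterminated string).
                print(f"line {lineno}: {err}")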


Columns: text (string), meta (dict)
\section{Introduction} The proliferation of social media has provided a locus for use, and thereby collection, of figurative and creative language data, including irony~\cite{ghosh2015semeval}. According to the Merriam-Webster online dictionary,~\footnote{\url{https://www.merriam-webster.com/dictionary/irony}.} \textit{irony} refers to ``the use of words to express something other than and especially the opposite of the literal meaning." A complex, controversial, and intriguing linguistic phenomenon, irony has been studied in disciplines such as linguistics, philosophy, and rhetoric. Irony detection also has implications for several NLP tasks such as sentiment analysis, hate speech detection, fake news detection, etc.~\cite{ghosh2015semeval}. Hence, automatic irony detection can potentially improve systems designed for each of these tasks. In this paper, we focus on learning irony. More specifically, we report our work submitted to the FIRE 2019 Arabic irony detection task (IDAT@FIRE2019).~\footnote{\url{https://www.irit.fr/IDAT2019/}} We focus our energy on an important angle of the problem--the small size of training data. Deep learning is most successful under supervised conditions with large amounts of training data (tens to hundreds of thousands of examples). For most real-world tasks, however, labeled data are hard to obtain. Hence, it is highly desirable to eliminate, or at least reduce, dependence on supervision. In NLP, pre-training language models on unlabeled data has emerged as a successful approach for improving model performance. In particular, the pre-trained multilingual \textbf{B}idirectional \textbf{E}ncoder \textbf{R}epresentations from \textbf{T}ransformers (BERT)~\cite{devlin2018bert} model was introduced to learn language regularities from unlabeled data. Multi-task learning (MTL) is another approach that helps achieve inductive transfer between various tasks. More specifically, MTL leverages information from one or more source tasks to improve a target task~\cite{caruana1993,caruana1997multitask}. In this work, we introduce Transformer representations (BERT) in an MTL setting to address the data bottleneck in IDAT@FIRE2019. To show the utility of BERT, we compare it to a simpler model with gated recurrent units (GRU) in a single-task setting. To identify the utility, or lack thereof, of MTL BERT, we compare it to a single-task BERT model. For MTL BERT, we train on a number of tasks simultaneously. The tasks we train on are \textit{sentiment analysis}, \textit{gender detection}, \textit{age detection}, \textit{dialect identification}, and \textit{emotion detection}. Another problem we face is that the BERT model released by Google is trained only on Arabic Wikipedia, which is almost exclusively Modern Standard Arabic (MSA). This introduces a language variety mismatch, since the irony data come from the Twitter domain and involve a number of dialects. To mitigate this issue, we further pre-train BERT on an in-house dialectal Twitter dataset, showing the utility of this measure. To summarize, we make the following contributions: \begin{itemize} \item In the context of the Arabic irony task, we show how the problem of small-sized labeled data can be mitigated by training models in a multi-task learning setup. \item We view different varieties of Arabic as different domains, and hence introduce a simple, yet effective, `in-domain' training measure where we further pre-train BERT on a dataset closer to the task domain (in that it involves dialectal tweet data).
\end{itemize} \section{Methods}\label{methods} \subsection{GRU} For our baseline, we use gated recurrent units (GRU)~\cite{cho2014learning}, a simplification of long short-term memory (LSTM)~\cite{hochreiter1997long}, which in turn is a variation of recurrent neural networks (RNNs). A GRU learns based on the following: \begin{equation} \textbf{\textit{h}}^{(t)} = \left( 1-\textbf{\textit{z}}^{(t)} \right) \textbf{\textit{h}}^{(t-1)} + \textbf{\textit{z}}^{(t)} \textbf{\textit{$\widetilde{h}$}}^{(t)} \\ \end{equation} where the \textit{update gate} $\textbf{\textit{z}}^{(t)}$ decides how much the unit updates its content: \begin{equation} \textbf{\textit{z}}^{(t)} = \sigma \left( W_z \textbf{\textit{x}}^{(t)} + U_z \textbf{\textit{h}}^{(t-1)} \right) \\ \end{equation} where $W$ and $U$ are weight matrices. The candidate activation makes use of a \textit{reset gate} $\textbf{\textit{r}}^{(t)}$: \begin{equation} \textbf{\textit{$\widetilde{h}$}}^{(t)} = \tanh\left( W \textbf{\textit{x}}^{(t)} + \textbf{\textit{r}}^{(t)} \odot \left(U \textbf{\textit{h}}^{(t-1)} \right)\right) \\ \end{equation} where $\odot$ is a Hadamard product (element-wise multiplication). When its value is close to zero, the reset gate allows the unit to \textit{forget} the previously computed state. The reset gate $\textbf{\textit{r}}^{(t)}$ is computed as follows: \begin{equation} \textbf{\textit{r}}^{(t)} = \sigma \left( W_r \textbf{\textit{x}}^{(t)} + U_r \textbf{\textit{h}}^{(t-1)} \right) \\ \end{equation} \subsection{BERT} BERT~\cite{devlin2018bert} is based on the Transformer~\cite{vaswani2017attention}, a network architecture that depends solely on attention mechanisms. The Transformer attention employs a function operating on \textit{queries}, \textit{keys}, and \textit{values}. This attention function maps a query and a set of key-value pairs to an output, where the output is a weighted sum of the values. The \textit{encoder} of the Transformer in~\cite{vaswani2017attention} has 6 attention layers, each of which is composed of two sub-layers: (1) \textit{multi-head attention}, where queries, keys, and values are projected $h$ times with different, learned linear projections and the outputs are ultimately concatenated; and (2) a fully-connected \textit{feed-forward network (FFN)} that is applied to each position separately and identically. The \textit{decoder} of the Transformer also employs 6 identical layers, yet with an extra sub-layer that performs multi-head attention over the output of the encoder stack. The architecture of BERT~\cite{devlin2018bert} is a multi-layer bidirectional Transformer encoder \cite{vaswani2017attention}. It uses masked language modeling to enable pre-training of deep bidirectional representations, in addition to a binary \textit{next sentence prediction} task that captures context (i.e., sentence relationships). More information about BERT can be found in~\cite{devlin2018bert}. \subsection{Multi-task Learning} In multi-task learning (MTL), a learner uses a number of (usually relevant) tasks to improve performance on a target task~\cite{caruana1993,caruana1997multitask}. The MTL setup enables the learner to use cues from various tasks to improve the performance on the target task. MTL also usually helps regularize the model, since the learner needs to find representations that are not specific to a single task, but rather more general. Supervised learning with deep neural networks requires large amounts of labeled data, which is not always available.
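To make the shared-encoder idea behind this MTL setup concrete, a minimal sketch is given below. The PyTorch and Hugging Face \texttt{transformers} implementation, the class name, and the task names are our illustrative assumptions rather than the authors' code; the label counts follow the task descriptions in the Data section below.
\begin{verbatim}
import torch.nn as nn
from transformers import BertModel

class MultiTaskBert(nn.Module):
    """Shared BERT encoder with one classification head per task."""
    def __init__(self, task_num_labels):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-multilingual-cased")
        hidden = self.bert.config.hidden_size
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden, n) for task, n in task_num_labels.items()})

    def forward(self, task, input_ids, attention_mask):
        # The encoder is shared; only the head of the current task is used.
        pooled = self.bert(input_ids, attention_mask=attention_mask).pooler_output
        return self.heads[task](pooled)

# Label counts follow the task descriptions in the Data section below.
model = MultiTaskBert({"irony": 2, "age": 3, "gender": 2,
                       "variety": 15, "emotion": 8, "sentiment": 2})
\end{verbatim}
In a setup like this, mini-batches from the different tasks are interleaved during fine-tuning, so that every optimization step updates the shared encoder together with the head of the sampled task.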
By employing data from additional tasks, MTL thus practically augments training data to alleviate the need for large labeled datasets. Many researchers achieve state-of-the-art results by employing MTL in supervised learning settings~\cite{guo2018soft,liu2019multi}. Specifically, BERT has been successfully used with MTL. Hence, we employ multi-task BERT (following~\cite{liu2019multi}). For our training, we use the same pre-trained BERT-Base Multilingual Cased model as the initial checkpoint. For this MTL fine-tuning of BERT, we use the same parameters as for single-task BERT (listed in Section~\ref{models}). We now describe our data. \section{Data}\label{data} The shared task dataset contains 5,030 tweets related to different political issues and events in the Middle East taking place between 2011 and 2018. Tweets are collected using pre-defined keywords (i.e., targeted political figures or events), and the positive class involves ironic hashtags such as \#sokhria, \#tahakoum, and \#maskhara (Arabic variants for ``irony"). Duplicates, retweets, and non-intelligible tweets are removed by the organizers. Tweets involve both MSA and dialects at various degrees of granularity, such as \textit{Egyptian}, \textit{Gulf}, and \textit{Levantine}. IDAT@FIRE2019 \cite{idat2019} is set up as a binary classification task where tweets are assigned labels from the set \{\textit{ironic}, \textit{non-ironic}\}. A total of 4,024 tweets were released by the organizers as training data. In addition, 1,006 tweets were used by the organizers as test data. Test labels were not released, and teams were expected to submit the predictions produced by their systems on the test split. For our models, we split the 4,024 released training tweets into 90\% TRAIN ($n$=3,621 tweets; `ironic'=1,882 and `non-ironic'=1,739) and 10\% DEV ($n$=403 tweets; `ironic'=209 and `non-ironic'=194). We train our models on TRAIN, and evaluate on DEV. Our multi-task BERT models involve six different Arabic classification tasks. We briefly introduce the data for these tasks here: \begin{itemize} \item \textbf{Author profiling and deception detection in Arabic (APDA).}~\cite{rangel2019ADPA}~\footnote{\url{https://www.autoritas.net/APDA/}}. From APDA, we only use the corpus of author profiling (which includes the three profiling tasks of \textit{age}, \textit{gender}, and \textit{variety}). The organizers of APDA provide 225,000 tweets as training data. Each tweet is labelled with three tags (one for each task). To develop our models, we split the training data into a 90\% \textit{training} set ($n$=202,500 tweets) and a 10\% \textit{development} set ($n$=22,500 tweets). With regard to age, the authors consider three classes: \{\textit{Under 25}, \textit{Between 25 and 34}, and \textit{Above 35}\}. For the Arabic varieties, they consider the following fifteen classes: \{\textit{Algeria}, \textit{Egypt}, \textit{Iraq}, \textit{Kuwait}, \textit{Lebanon-Syria}, \textit{Lybia}, \textit{Morocco}, \textit{Oman}, \textit{Palestine-Jordan}, \textit{Qatar}, \textit{Saudi Arabia}, \textit{Sudan}, \textit{Tunisia}, \textit{UAE}, \textit{Yemen}\}. Gender is labeled as a binary task with \textit{\{male, female\}} tags. \item \textbf{LAMA+DINA Emotion detection.} Alhuzali et al.~\cite{alhuzali-etal-2018-enabling} introduce \textit{LAMA}, a dataset for Arabic emotion detection. They use a first-person seed phrase approach and extend the work of Abdul-Mageed et al.~\cite{mageed2016dina}, expanding emotion data collection from 6 to 8 emotion categories (i.e.,
\textit{anger}, \textit{anticipation}, \textit{disgust}, \textit{fear}, \textit{joy}, \textit{sadness}, \textit{surprise}, and \textit{trust}). We use the combined LAMA+DINA corpus. It is split by the authors into a \textit{training} set of 189,902 tweets, a \textit{development} set of 910 tweets, and a \textit{test} set of 941 tweets. We use only the training set for our MTL experiments. \item \textbf{Sentiment analysis in Arabic tweets.} This dataset comes from a Kaggle shared task by Motaz Saad~\footnote{\url{https://www.kaggle.com/mksaad/arabic-sentiment-twitter-corpus}}. The corpus contains 58,751 Arabic tweets (46,940 \textit{training} and 11,811 \textit{test}). The tweets are annotated with positive and negative labels based on an emoji lexicon. \end{itemize} \section{Models}\label{models} \subsection{GRU}\label{subsec:gru} We train a baseline GRU network on our irony TRAIN data. This network has a single unidirectional GRU layer with 500 units and a linear output layer. The input word tokens are embedded with trainable word vectors initialized from a standard normal distribution with $\mu=0$ and $\sigma=1$, i.e., $W \sim N(0,1)$. We use Adam \cite{kingma2014adam} with a fixed learning rate of $1e-3$ for optimization. For regularization, we use dropout~\cite{srivastava2014dropout} with a rate of 0.5 on the hidden layer. We set the maximum sequence length in our GRU model to 50 words, and use all 22,000 words of the training set as the vocabulary. We employ batch training with a batch size of 64 for this model. We run the network for 20 epochs and save the model at the end of each epoch, choosing the model that performs best on DEV as our best model. We report our best result on DEV in Table~\ref{tab:res}. Our best result is acquired after 12 epochs. As Table~\ref{tab:res} shows, the baseline obtains \textit{$accuracy=73.70\%$} and \textit{$F_1=73.47$}. \subsection{Single-Task BERT}\label{subsec:bert} We use the BERT-Base Multilingual Cased model released by the authors~\cite{devlin2018bert}~\footnote{\url{https://github.com/google-research/bert/blob/master/multilingual.md}.}. The model is trained on 104 languages (including Arabic) with 12 layers, 768 hidden units each, and 12 attention heads. The entire model has 110M parameters. The model has a shared WordPiece vocabulary of 119,547 tokens, and was pre-trained on the entire Wikipedia for each language. For fine-tuning, we use a maximum sequence length of 50 tokens and a batch size of 32. We set the learning rate to $2e-5$ and train for 20 epochs. For single-task learning, we fine-tune BERT on the training set (i.e., TRAIN) of the irony task exclusively. We refer to this model as \textit{BERT-ST}, ST standing for `single task.' As Table~\ref{tab:res} shows, BERT-ST unsurprisingly acquires better performance than the baseline GRU model. On accuracy, BERT-ST is 7.94\% better than the baseline. BERT-ST obtains 81.62 $F_1$, which is 8.15 better than the baseline. \subsection{Multi-Task BERT} \label{subsec:multi-bert} We follow the work of Liu et al.~\cite{liu2019multi} for training an MTL BERT, in that we fine-tune the aforementioned BERT-Base Multilingual Cased model on different tasks jointly. First, we fine-tune with the three tasks of author profiling and the irony task simultaneously. We refer to this model trained on the 4 tasks simply as BERT-MT4. BERT-MT5 refers to the model fine-tuned on the 3 author profiling tasks, the emotion task, and the irony task.
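As a concrete illustration of these fine-tuning settings (maximum sequence length 50, batch size 32, learning rate $2e-5$, 20 epochs), a minimal single-task loop is sketched below; the multi-task variants, including the six-task model defined next, reuse the same optimization settings. The Hugging Face \texttt{transformers} and PyTorch APIs and the data-loader structure are our assumptions for illustration only, since the authors used the original BERT release.
\begin{verbatim}
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-multilingual-cased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)   # ironic vs. non-ironic
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def fine_tune(train_batches, epochs=20):
    """train_batches yields dicts of 32 tweets and their label tensors (assumed)."""
    model.train()
    for _ in range(epochs):
        for batch in train_batches:
            enc = tokenizer(batch["text"], truncation=True, max_length=50,
                            padding="max_length", return_tensors="pt")
            loss = model(**enc, labels=batch["label"]).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
\end{verbatim}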
We also refer to the model fine-tuned on all six tasks (adding the sentiment task mentioned earlier) as BERT-MT6. For MTL BERT, we use the same parameters as the single-task BERT listed in the previous sub-section (i.e., \textit{Single-Task BERT}). In Table~\ref{tab:res}, we present the performance on the DEV set of the irony detection task only.~\footnote{We do not list results acquired on the other tasks, since the focus of this paper is exclusively the IDAT@FIRE2019 shared task.} We note that all the results of multi-task learning with BERT are better than those of the single-task BERT. The model trained on all six tasks obtains the best result, which is 2.23\% accuracy and 2.25\% $F_1$ higher than the single-task BERT model. \begin{table}[] \centering \caption{Model Performance} \label{tab:res} \begin{tabular}{@{}lcc@{}} \toprule \textbf{Model} & \textbf{Acc} & \textbf{F1} \\ \midrule \textbf{GRU} & 0.7370 & 0.7347 \\ \midrule \textbf{BERT-ST} & \textbf{0.8164} & \textbf{0.8162} \\ \midrule \textbf{BERT-MT4} & 0.8189 & 0.8187 \\ \textbf{BERT-MT5} & 0.8362 & 0.8359 \\ \textbf{BERT-MT6} & \textbf{0.8387} & \textbf{0.8387} \\ \midrule \textbf{BERT-1M-MT5} & \textbf{0.8437} & \textbf{0.8434} \\ \textbf{BERT-1M-MT6} & 0.8362 & 0.8360 \\ \bottomrule \end{tabular} \end{table} \subsection{In-Domain Pre-Training} Our irony data involve dialects such as Egyptian, Gulf, and Levantine, as we explained earlier. The BERT-Base Multilingual Cased model we used, however, was trained on Arabic Wikipedia, which is mostly MSA. We believe this dialect mismatch is sub-optimal. As Sun et al.~\cite{sun2019fine} show, further pre-training with domain-specific data can improve the performance of a learner. Viewing dialects as constituting different domains, we turn to dialectal data to further pre-train BERT. Namely, we use 1M tweets randomly sampled from an in-house Twitter dataset to resume pre-training BERT before we fine-tune on the irony data.~\footnote{A nuance is that we require each tweet in the 1M dataset to be $>20$ words long, and so this process is not entirely random.} We use the BERT-Base Multilingual Cased model as an initial checkpoint and pre-train on this 1M dataset with a learning rate of $2e-5$, for 10 epochs. Then, we fine-tune on MT5 (and then on MT6) with the new \textit{further-pre-trained} BERT model. We refer to the new models as BERT-1M-MT5 and BERT-1M-MT6, respectively. As Table~\ref{tab:res} shows, BERT-1M-MT5 performs best: BERT-1M-MT5 obtains 84.37\% accuracy (0.5\% higher than BERT-MT6) and 84.34 $F_1$ (0.47\% higher than BERT-MT6). \subsection{IDAT@FIRE2019 Submission} For the shared task submission, we use the predictions of BERT-1M-MT5 as our first submitted system. Then, we concatenate our DEV and TRAIN data to compose a new training set (thus using all the training data released by the organizers) to re-train BERT-1M-MT5 and BERT-MT6 with the same parameters. We use the predictions of these two models as our second and third submissions. Our second submission obtains 82.4 $F_1$ on the official test set, and ranks 4th in this shared task. \section{Related Work}\label{rel} \textbf{Multi-Task Learning.} MTL has been effectively used to model several NLP problems. These include, for example, syntactic parsing~\cite{luong2015multi}, sequence labeling~\cite{sogaard2016deep,rei2017semi}, and text classification~\cite{liu2016recurrent}. \textbf{Irony in different languages.} Irony detection has been investigated in various languages.
For example, Van Hee et al.~\cite{van2018semeval} propose two irony detection tasks in English tweets. Task A is a binary classification task (\textit{irony} vs. \textit{non-irony}), and Task B is multi-class identification of a specific type of irony from the set \{\textit{verbal, situational, other-irony, non-ironic}\}. They use hashtags to automatically collect tweets that they manually annotate using a fine-grained annotation scheme. Participants in this competition construct models based on logistic regression and support vector machines (SVMs)~\cite{rohanian2018wlv}, XGBoost~\cite{rangwani2018nlprl}, convolutional neural networks (CNNs)~\cite{rangwani2018nlprl}, long short-term memory networks (LSTMs)~\cite{wu2018thu_ngn}, etc. For the Italian language, Cignarella et al. propose the IronITA shared task~\cite{cignarella2018overview}, and the best system~\cite{cimino2018multi} is a combination of bi-directional LSTMs, word $n$-grams, and affective lexicons. For Spanish, Ortega-Bueno et al.~\cite{ortega2019overview} introduce the IroSvA shared task, a binary classification task for tweets and news comments. The best-performing model on the task~\cite{gonzalez2019elirf} employs pre-trained Word2Vec embeddings, a multi-head Transformer encoder, and a global average pooling mechanism. \textbf{Irony in Arabic.} Arabic is a widely spoken collection of languages ($\sim$ 300 million native speakers)~\cite{mageedYouTweet2018,zhang2019no}. A large amount of work on Arabic focuses on other text classification tasks such as sentiment analysis~\cite{abdul2014samar,al2019comprehensive,abdul2017not,abdul2017modeling}, emotion detection~\cite{alhuzali-etal-2018-enabling}, and dialect identification~\cite{zhang2019no,elaraby2018deep,bouamor2018madar,bouamor2019madar}. Karoui et al.~\cite{karoui2017soukhria} created an Arabic irony detection corpus of 5,479 tweets. They use pre-defined hashtags to obtain ironic tweets related to the US and Egyptian presidential elections. IDAT@FIRE2019~\cite{idat2019} aims at augmenting the corpus and enriching the topics, collecting more tweets within a wider region (the Middle East) and over a longer period (between 2011 and 2018). \section{Conclusion}\label{conc} In this paper, we described our submissions to the Irony Detection in Arabic shared task (IDAT@FIRE2019). We presented how we acquire effective models using pre-trained BERT in a multi-task learning setting. We also showed the utility of viewing different varieties of Arabic as different domains by reporting better performance with models pre-trained on dialectal data rather than exclusively on MSA. Our multi-task model with domain-specific BERT ranks 4th in the official IDAT@FIRE2019 evaluation. The model has the advantage of being exclusively based on deep learning. In the future, we will investigate other multi-task learning architectures, and extend our work with semi-supervised methods. \section{Acknowledgement} We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), the Social Sciences and Humanities Research Council of Canada (SSHRC), and Compute Canada (\url{www.computecanada.ca}). \bibliographystyle{splncs04}
{ "timestamp": "2019-11-01T01:09:41", "yymm": "1909", "arxiv_id": "1909.03526", "language": "en", "url": "https://arxiv.org/abs/1909.03526" }
\section{Introduction} Transferring quantum states from one place to another is an essential task in quantum information processing. The ubiquitous noise and device imperfections are however unavoidable and often limit the range for which a quantum state can be transferred with a good fidelity. Devising a scheme to effectively transfer quantum states over a long distance while minimizing the loss in fidelity has therefore been an active study since the last decade. Up to this date, various quantum state transfer (QST) protocols in many different platforms have been proposed, such as via strong coupling with photons \cite{qs1,qs3,qss1} or coherent transfer along a chain of qubits \cite{qs4,qs5,qs6,qs7,qs8,qs9,qs2}. In most cases, QST relies on the time-evolution of a specifically designed Hamiltonian, and as such perfect QST may require a very precise control over some of the system parameters, which may pose some difficulties in its large scale implementation. Ref.~\cite{qs8} first proposed to use adiabatic control to facilitate robust quantum state transfer, but the dynamical phase induced by the adiabatic control field therein needs to be eliminated. In a seemingly separate area, topological phases of matter have emerged as a new paradigm for designing novel devices that are naturally immune to local disorder or imperfection. For example, topological insulators and superconductors possess the so-called edge states at their boundaries whose properties are insensitive to the specific details of the system \cite{Kit,TI,TI2,TI3}. Such inherent robustness of edge states makes them an ideal candidate for storing and processing quantum information. Indeed, the use of edge states for quantum computing has been extensively studied and become an active research area on its own \cite{Kit,ising,Ivanov,tqc,tqc2,tqc3,RG,RG2,RG6}. In addition, the time evolution of symmetry-protected topological edge states often induces trivial dynamical phases (e.g. zero dynamical phase), a feature that can often simplify protocol designs for quantum information processing. In recent years, several proposals to utilize edge states for QST have also emerged \cite{tqs,tqs2,tqs3}. In Ref.~\cite{tqs,tqs2}, chiral edge states of a topological material are used to transfer quantum information stored in one qubit to another distant qubit. The ability to control the coupling between the input and output qubits with the edge states is thus necessary in such proposals, which may not be straightforward to implement and may induce additional errors. An alternative approach to harnessing topological phenomena for QST would be to design a transfer protocol which directly controls the system in which the qubits are encoded. This was first explored in Ref.~\cite{tqs3}. There, logical qubits are encoded at the edges of a superconducting Xmon qubit chain with dimerized coupling. By adiabatically tuning the qubit-qubit couplings in a prescribed manner, a logical qubit located at one end of the chain can be transferred to the other end \cite{tqs3}. Owing to the topological nature of the edge states, both logical qubit encoding and QST aspects of the protocol are inherently robust against common perturbations or disorder \cite{tqs3}. It is thus natural to generalize such a proposal for transferring multiple edge-states-based qubits from one end to the other. 
This possibility has also been explored in Ref.~\cite{tqs3} by using trimerized instead of dimerized qubit-qubit couplings, although its implementation may be a challenge due to the extremely small energy gap in the protocol. In this paper, we present another approach for transferring entangled qubits which are encoded in the edge states of a periodically driven (Floquet) topological system. Our study is further motivated by the capability of Floquet topological phases to exhibit features with no static analogue, such as the existence of flat edge states at quasienergy (the analogue of energy in Floquet systems) $\pi/T$ \cite{RG,RG2,cref2,cref3,DG,RG3,RG4,RG5,LW,LW2,LW3} and anomalous edge states which do not satisfy the usual bulk-edge correspondence \cite{aes}. The former feature is particularly important in our present context, as the coexistence of flat (dispersionless) edge states at both quasienergy zero and $\pi/T$ naturally provides more channels for qubit encoding \cite{RG,RG2,RG6} and QST, as detailed below. Using the same Xmon qubit chain platform as that proposed in Ref.~\cite{tqs3}, periodic driving enables the existence of a pair of edge states at one end of the system, which allows the creation of entangled qubits. Adiabatic manipulation protocol of the qubit-qubit couplings can then be devised to transfer such entangled qubits from one end to the other while maintaining a very good fidelity due to its topological protection. The main difference between our approach and that of Ref.~\cite{tqs3} is twofold. First, entangled qubits can be prepared in a minimal setup with dimerized qubit-qubit couplings in our approach. Second, during the adiabatic manipulation, the edge states remain pinned at quasienergy zero and $\pi/T$. As a result, large quasienergy gaps between the logical qubits and the rest of the qubits are maintained throughout the protocol, which is necessary for adiabaticity to hold. Our proposal thus demonstrates that while Floquet topological phases enable more qubits to be encoded as compared with their static counterpart under the same physical constraints, such qubits can also be transferred from one place to another using an approach similar to that in typical static systems. This paper is structured as follows. In Sec.~\ref{model}, we introduce the model studied in this paper, briefly review the emergence of zero and $\pi$ modes in the model and the topological invariants that characterize them, and set up some notation. In Sec.~\ref{state prep}, we propose a protocol to generate entangled qubits from the ground states of the underlying static model by utilizing the properties of the zero and $\pi$ modes that exist after the periodic driving is turned on. In Sec.~\ref{QST}, we present a protocol to transfer an entangled qubit from one end to another along a Y-shaped chain of periodically driven Xmon qubits. In Sec.~\ref{disc}, we verify the robustness of our QST protocol in the presence of disorder and imperfection, then we also show how our proposal can be adapted to improve previous QST protocol in Ref.~\cite{tqs3}, enabling high fidelity QST even for long qubit chains. Finally, we conclude our paper and present potential future directions in Sec.~\ref{conc}. \section{Model and qubit encoding} \label{model} \begin{figure}[ht] \centering \includegraphics[angle =0,width=0.45\textwidth]{schematic.pdf} \caption{A chain of superconducting qubits with dimerized couplings arranged in a Y-junction geometry. 
In a certain regime of system parameters, a pair of zero and $\pi$ edge modes may emerge at one end of the L and M branches, which in \emph{the ideal case} appear as antisymmetric and symmetric superpositions of the first (or the last) two qubits respectively (see Eq.~(\ref{zpi2})).} \label{fig:Y junction} \end{figure} We consider a chain of superconducting qubits arranged in a $Y$-junction geometry with dimerized nearest-neighbor time-periodic couplings, as depicted in Figure~\ref{fig:Y junction}. Each circle represents a superconducting Xmon qubit and is categorized into sublattice A or B based on how it couples with its two neighbouring qubits. The dashed and solid lines mark two different coupling strengths between two such qubits, which are referred to as intra- and inter-lattice couplings respectively. L, R, and M label the three branches of the system, each of which is described by a Hamiltonian of the following form, \[ H^{(c)}(t) = \begin{cases} H_1^{(c)}(t) & \text{for}\ (m-1)T<t<(m - 1/2)T, \\ H_2^{(c)}(t) & \text{for}\ (m-1/2)T<t<m T. \end{cases} \] \begin{align} H_1^{(c)}(t) &= \sum_{j}\left(-J^{c}_{\mathrm{intra},j}\sigma^{c\dagger}_{B,j} \sigma^{c}_{A,j} - J^{c}_{\mathrm{inter},j} \sigma^{c\dagger}_{B,j} \sigma^{c}_{A,j+1} + h.c.\right),\\ H_2^{(c)}(t) &= \sum_{j}\left(-j^{c}_{\mathrm{intra},j}\sigma^{c\dagger}_{B,j} \sigma^{c}_{A,j} - j^{c}_{\mathrm{inter},j} \sigma^{c\dagger}_{B,j} \sigma^{c}_{A,j+1} + h.c.\right), \label{ham} \end{align} where $c\in\{L,R,M\}$ labels one of the three branches, A and B are the indices of the sublattice site, $\sigma^{\dagger}_{S,j} = \ket{e}\bra{g}_{S,j}$ is the qubit raising operator at sublattice $S$ of unit cell $j$, $|g\rangle_j$ and $|e\rangle_j$ are the ground and excited states of the $j$-th Xmon qubit, and $J_{\mathrm{intra}}$ ($J_{\mathrm{inter}}$) and $j_{\mathrm{intra}}$ ($j_{\mathrm{inter}}$) are the intra-lattice (inter-lattice) coupling strengths in $H_1$ and $H_2$ respectively. Unless otherwise specified, we take $J^{c}_{\mathrm{intra},j} = J_1^{c}=J_1, J^{c}_{\mathrm{inter},j} = J_2^{c}=J_2, j^{c}_{\mathrm{intra},j} = j_1^{c}=j_1$, and $j^{c}_{\mathrm{inter},j} = j_2^{c}=j_2$ in the following. The spectral properties of such a time-periodic system are characterized by quasienergies, defined as the eigenphases of the one-period propagator (Floquet operator \cite{Flo1,Flo2}), \begin{eqnarray} U \ket{\epsilon} &=& \exp(-i\epsilon T) \ket{\epsilon}, \nonumber \\ U &\equiv & \mathcal{T} \exp(\int_{t_0}^{t_0+T} -\frac{\mathrm{i}H(t)}{\hbar} dt), \label{flo} \end{eqnarray} where $\mathcal{T}$ is the time-ordering operator, $\epsilon$ is the quasienergy, and $\ket{\epsilon}$ is the Floquet eigenstate with quasienergy $\epsilon$. By construction, $\epsilon$ is only defined modulo $\frac{2\pi}{T}$. As such, edge states may form not only in the gap around quasienergy zero, as is common in static systems, but also in the gap around quasienergy $\pi/T$. In momentum space, the two Hamiltonians defined in Eq.~(\ref{ham}) take the form \begin{equation} \mathcal{H}_\alpha^{(c)}(k,t) = -(h_a + h_b\cos(k))\tau_x +h_b\sin(k) \tau_y \;, \label{ham2} \end{equation} where $\alpha=1,2$, the $\tau$'s are Pauli matrices acting in the sublattice space, and $(h_a,h_b)$ denote the intra- and inter-lattice coupling strengths of $H_\alpha$, i.e., $(J_1,J_2)$ for $\alpha=1$ and $(j_1,j_2)$ for $\alpha=2$.
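As a quick numerical illustration of the quasienergy spectrum defined in Eq.~(\ref{flo}), the sketch below builds the two Bloch Hamiltonians of Eq.~(\ref{ham2}) and diagonalizes the corresponding one-period propagator for each $k$. The coupling values, and the convention $\hbar=1$ with the period split equally between $H_1$ and $H_2$, are illustrative assumptions rather than the parameter values used later in the text.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

tau_x = np.array([[0, 1], [1, 0]], dtype=complex)
tau_y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def bloch(h_a, h_b, k):
    """Momentum-space Hamiltonian with intra (h_a) and inter (h_b) couplings."""
    return -(h_a + h_b * np.cos(k)) * tau_x + h_b * np.sin(k) * tau_y

# Placeholder (intra, inter) couplings entering H_1 and H_2.
h1, h2 = (1.0, 0.0), (0.0, 2.0)
T = 2.0                                          # driving period, hbar = 1

ks = np.linspace(-np.pi, np.pi, 201)
bands = []
for k in ks:
    # Time-ordered propagator: H_1 acts in the first half period, H_2 in the second.
    U = expm(-1j * bloch(*h2, k) * T / 2) @ expm(-1j * bloch(*h1, k) * T / 2)
    # U|eps> = exp(-i eps T)|eps>, so quasienergies are defined modulo 2*pi/T.
    bands.append(np.sort(-np.angle(np.linalg.eigvals(U)) / T))
bands = np.array(bands)                          # shape (len(ks), 2)
\end{verbatim}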
It then follows that our system possesses a chiral symmetry defined by the operator $\Gamma=\tau_z$, i.e., $\left\lbrace\mathcal{H}_\alpha^{(c)}(k,t),\tau_z\right\rbrace=0$, which pins its edge states at exactly zero and/or $\pi/T$ quasienergies (termed zero and $\pi$ modes respectively), whose numbers are determined by the topological invariants defined in \cite{cref2,cref3}. We can calculate these topological invariants by first writing down our momentum space Floquet operator $U(k)$ in the symmetric time frame (accomplished by taking $t_0=T/2$ in Eq.~(\ref{flo})), \begin{align} \label{sym time} U(k) &= F(k)G(k),\nonumber\\ F(k) &= \exp(-i H_2(k)/2) \times \exp(-i H_1(k)/2),\nonumber\\ G(k) &= \exp(-i H_1(k)/2) \times \exp(-i H_2(k)/2), \end{align} such that $F(k)=\Gamma^\dagger G(k)^\dagger \Gamma$ (we take $\hbar=\frac{T}{2}=1$ from here onwards). $F$ can then be represented by a $2\times2$ matrix in the canonical ($\Gamma=\tau_z$) basis as \begin{align} F(k) = \begin{pmatrix} a(k) & b(k) \\ c(k) & d(k) \end{pmatrix}, \end{align} and the topological invariants \begin{align} &\upsilon_{0} = \frac{1}{2\pi i } \int_{-\pi}^{\pi} dk\left(b^{-1}\frac{d}{dk}b\right),\\ &\upsilon_{\pi} = \frac{1}{2\pi i } \int_{-\pi}^{\pi} dk\left(d^{-1}\frac{d}{dk}d\right), \label{zpi} \end{align} directly count the number of quasienergy zero and $\pi$ modes respectively. In Fig.~\ref{fig:number of zero mode}, we numerically compute $\upsilon_{0}$ and $\upsilon_{\pi}$ under some representative parameter values, which we have also analytically verified in Appendix~\ref{zpical}. \begin{figure}[ht] \centering \includegraphics[angle =0,width=0.5\textwidth]{zeropi.pdf} \caption{The phase diagram of the topological invariants $\upsilon_{0}$ and $\upsilon_{\pi}$ as a function of $J_1$ and $j_2$, where $J_2 = j_1 = 0$.} \label{fig:number of zero mode} \end{figure} In the following, we will mostly be interested in the yellow and purple regime of Fig.~\ref{fig:number of zero mode}, corresponding to the presence and absence of both zero and $\pi$ modes respectively. In particular, we initialize our system such that branches L and M are in the yellow regime, whereas branch R is in the purple regime. For analytical solvability, we will further consider the following parameter values (referred to as \emph{the ideal case}) for the yellow regime: $J_1 = i\pi/2, j_2 = i \pi, J_2 = j_1 = 0$ and for the purple regime: $J_1 = i \pi / 2, j_2 = J_2 = j_1 = 0$, in the presentation of our state preparation and QST protocols, where such imaginary couplings are indeed realizable in superconducting qubit setups \cite{qs2}. We will however show, through some numerical calculations, that such fine tuning is not necessary in the actual implementation of our protocols. In the ideal case, one pair of zero and $\pi$ modes is localized at the first two qubits (the first unit cell) of the L branch, \begin{align} \ket{0}^{(L)} &= \left(\ket{eg}_{1}^{(L)} - \ket{ge}_{1}^{(L)} \right)\otimes G^{\prime},\nonumber\\ \ket{\pi}^{(L)} &= \left(\ket{eg}_{1}^{(L)} + \ket{ge}_{1}^{(L)} \right)\otimes G^{\prime}, \label{zpi2} \end{align} where subscript is the site index and $G^{\prime}=\prod_{i\ne 1} \ket{gg}_i$ denotes the ground state of the other Xmon qubits in the system. For brevity, $G^{\prime}$ is suppressed in the rest of this paper. In a similar fashion, note that there is also another pair of zero and $\pi$ modes in the M branch localized at the last two qubits (see Fig.~\ref{fig:Y junction}), which we will not discuss further in what follows. 
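As a sketch of how the invariants $\upsilon_0$ and $\upsilon_\pi$ can be evaluated numerically, the code below constructs $F(k)$ as in Eq.~(\ref{sym time}), reads off $b(k)$ and $d(k)$, and computes their winding numbers by accumulating phase increments around the Brillouin zone. The coupling values are placeholders rather than the ideal-case parameters, and the discretization is assumed fine enough that the relevant gaps stay open.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

tau_x = np.array([[0, 1], [1, 0]], dtype=complex)
tau_y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def bloch(h_a, h_b, k):
    """Momentum-space Hamiltonian with intra (h_a) and inter (h_b) couplings."""
    return -(h_a + h_b * np.cos(k)) * tau_x + h_b * np.sin(k) * tau_y

def winding(values):
    """Winding number of a closed curve of nonzero complex numbers."""
    phases = np.angle(values)
    jumps = np.diff(np.append(phases, phases[0]))
    jumps = (jumps + np.pi) % (2 * np.pi) - np.pi      # principal branch
    return int(np.rint(jumps.sum() / (2 * np.pi)))

h1, h2 = (1.0, 0.0), (0.0, 2.0)    # placeholder couplings for H_1 and H_2
ks = np.linspace(-np.pi, np.pi, 2001, endpoint=False)
b, d = [], []
for k in ks:
    F = expm(-1j * bloch(*h2, k) / 2) @ expm(-1j * bloch(*h1, k) / 2)
    b.append(F[0, 1])
    d.append(F[1, 1])

nu_0, nu_pi = winding(np.array(b)), winding(np.array(d))
print(nu_0, nu_pi)   # counts of zero and pi modes for these placeholder couplings
\end{verbatim}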
Finally, we note that the zero and $\pi$ edge modes defined above already represent two maximally entangled states between two qubits. Consequently, the task of preparing an entangled state then reduces to the task of preparing a Floquet edge state, the latter of which can be accomplished via a protocol introduced in the next section. \section{Entangled qubits generation} \label{state prep} \begin{figure}[ht] \centering \includegraphics[angle =0,width=0.5 \textwidth]{braiding.pdf} \caption{Schematic of the adiabatic protocol resulting in a $\pi/4$ rotation in the subspace spanned by the zero and $\pi$ modes. Here we focus on the L branch and highlight only the evolution of zero mode (red filled circles), which at the end transforms into a superposition of zero and $\pi$ modes (red and green half-filled circles). Dashed (solid) lines represent the qubit-qubit coupling appearing in $H_1$ ($H_2$). } \label{fig:braiding} \end{figure} Suppose that our system starts in a ground state of a static Hamiltonian $H=H_2^{(L)}+H_2^{(M)}+H_1^{(R)}$, before we switch on the periodic driving at some time $t_0$. Without loss of generality, we may assume that this corresponds to the initial state of $|\psi_0\rangle = |eg\rangle_1^{(L)}\propto |0\rangle^{(L)}+ |\pi\rangle^{(L)}$, which is thus a simple product state. Our objective in this section is to devise a protocol based on a series of adiabatic variations of some qubit-qubit couplings in the system, such that the above initial state evolves to either $|0\rangle^{(L)}$ or $|\pi\rangle^{(L)}$ at the end of the protocol. This can be accomplished by adapting the protocol introduced by two of us in Ref.~\cite{RG} which amounts to transforming zero and $\pi$ modes as $\ket{0}^{(L)}\rightarrow (\ket{0}^{(L)} - \ket{\pi}^{(L)})/\sqrt{2}$ and $\ket{\pi}^{(L)}\rightarrow (\ket{0}^{(L)} + \ket{\pi}^{(L)})/\sqrt{2}$. To this end, it suffices to restrict our attention to branch L, so that we will remove the $(L)$ index in the following. We will now present our protocol in three steps, which is also summarized in Fig.~\ref{fig:braiding}. \textit{In step 1}, we adiabatically deform the Hamiltonian stroboscopically (by slowly varying it at every period) to move the zero and $\pi$ modes from the first to the third unit cell in the $L$ branch. This is accomplished by setting $j_{\mathrm{inter},2} = j_2 \cos\phi$ and at the same time introducing a new coupling $h_2^{(1)} = j_2\sin\phi\ \sigma^{\dagger}_{A,1} \sigma_{B,2} + h.c.$ into $H_2$, where $\phi$ is the adiabatic parameter which is swept from 0 to $\pi/2$. It can be shown that the zero and $\pi$ modes at any stroboscopic time take the form \cite{RG}, \begin{align} \ket{0} &= \cos \phi \left(\ket{eg}_{1} - \ket{ge}_{1} \right) - \sin \phi \left(\ket{eg}_{3} - \ket{ge}_{3} \right),\label{zero_braid}\\ \ket{\pi} &= \cos \phi \left(\ket{eg}_{1} + \ket{ge}_{1} \right) - \sin \phi \left(\ket{eg}_{3} + \ket{ge}_{3} \right). \end{align} At the end of this step, $\ket{0}$ adiabatically changes from $(\ket{eg}_{1} - \ket{ge}_{1}) $ to $\left(-\ket{eg}_{3} + \ket{ge}_{3} \right)$, whereas $\ket{\pi}$ transforms from $(\ket{eg}_{1} + \ket{ge}_{1})$ to $-\left(\ket{eg}_{3} + \ket{ge}_{3} \right)$, i.e., both zero and $\pi$ modes are now shifted to the third unit cell as intended. 
\\ \textit{In step 2}, starting from the end of step 1, we continue to adiabatically deform the system's Hamiltonian by taking $j_{\mathrm{inter},1} = j_2 \cos \phi$ and introducing a new term $h_2^{(2)} = j_2 \sin \phi\ \sigma^{\dagger}_{A,2} \sigma_{A,3} + h.c.$ into $H_2$, where $\phi$ again changes slowly every period, from $0$ at the beginning to $\pi/2$ at the end of this step. We can again show that at any stroboscopic time \cite{RG}, \begin{align} \ket{0} &= -\sin \phi \left(\ket{eg}_{1} + \ket{ge}_{1} \right) + \cos \phi \left(-\ket{eg}_{3} + \ket{ge}_{3} \right),\\ \ket{\pi} &= \sin \phi \left(-\ket{eg}_{1} + \ket{ge}_{1} \right) - \cos \phi \left(\ket{eg}_{3} + \ket{ge}_{3} \right). \end{align} That is, $\ket{0}$ transforms to $-\ket{eg}_{1} - \ket{ge}_{1} $, whereas $\ket{\pi}$ transforms to $\left(-\ket{eg}_{1} + \ket{ge}_{1} \right)$ at the end of this step. \\ \textit{In step 3}, we recover the system's original Hamiltonian by returning $j_{\mathrm{inter},1}$ and $j_{\mathrm{inter},2}$ back to their original values as $j_{\mathrm{inter},1} = j_{\mathrm{inter},2} = j_2 \cos \phi$ and slowly decreasing $h_2^{(1)}$ and $h_2^{(2)}$ to $0$ as $h_2^{(1)} \rightarrow h_2^{(1)} \cos^2\phi$ and $h_2^{(2)} \rightarrow h_2^{(2)} \cos^2\phi$ with $\phi$ being slowly swept from $0$ to $\pi/2$. However, in order to induce a nontrivial rotation in the subspace spanned by zero and $\pi$ modes, we only tune the adiabatic parameter $\phi$ \emph{every other period}. As demonstrated in Ref.~\cite{RG}, this leads to the transformation $\ket{0}\rightarrow (\ket{0} - \ket{\pi})/\sqrt{2}$ and $\ket{\pi}\rightarrow (\ket{0} + \ket{\pi})/\sqrt{2}$ at the end of this step, thus completing our protocol.\\ \begin{figure}[ht] \centering \includegraphics[angle =0,width=0.5 \textwidth]{overlap.pdf} \caption{Time evolution of the overlaps $|\langle 0|\psi(t)\rangle |$ and $|\langle \pi|\psi(t)\rangle |$ for a state initially prepared as an equal superposition of the zero and $\pi$ modes, i.e., $|\psi(0)\rangle = \frac{1}{\sqrt{2}} \left(\ket{0} + \ket{\pi}\right)$. Each step takes 200 periods to complete.} \label{fig:overlap} \end{figure} To explicitly demonstrate how an entangled state can be generated via the protocol above, in Fig.~\ref{fig:overlap} we numerically plot the overlap between the state $|\psi(t)\rangle$, initially prepared in the product state $|eg\rangle_1$, and the zero ($\pi$) mode of the original system, both of which represent maximally entangled two-qubit states (see Eq.~(\ref{zpi2})). In particular, since its overlap with the zero mode becomes unity at the end of the adiabatic protocol, our state transforms into $|0\rangle =|eg\rangle_1 - |ge\rangle_1$. To generate another entangled state $|\pi\rangle = |eg\rangle_1+|ge\rangle_1$, we can simply perform exactly the same protocol two more times. In principle, we can also prepare a more generic entangled state by exploiting the dynamical evolution of the Floquet operator. In particular, since $|0\rangle$ and $|\pi\rangle$ pick up different dynamical phases during their time evolution, reading out our state at a specific time $t^*$ transforms an initially prepared product state $|eg\rangle_1 = (|0\rangle + |\pi\rangle)/\sqrt{2}$ into a desired arbitrary entangled state $\propto (\ket{eg}_1 + \alpha(t^*) \ket{ge}_1)$.
In general, however, this mechanism will lead to a lower fidelity as compared with that based on the protocol described above, and as such a distillation protocol might also need to be employed in the actual implementation of such arbitrary entangled state generation. \section{Quantum state transfer} \label{QST} \begin{figure}[htb] \centering \includegraphics[angle =0,width=0.45 \textwidth]{QST_schematic.pdf} \caption{Schematic diagram of the QST protocol. The green line represents the coupling added in \textit{Phase I} of QST and the blue line represents the coupling added in \textit{Phase II}. Again, dashed lines represent the couplings in $H_1$ and solid lines represent the couplings in $H_2$.} \label{fig:QST_schematic} \end{figure} In the previous section, we have shown how entangled qubits arise as either a zero or $\pi$ mode. In this section, we propose a protocol to transfer these zero and $\pi$ modes (and thus any entangled qubits) from the leftmost end (L branch) to the rightmost end (R branch) of the chain. To this end, the importance of using a Y-junction geometry to facilitate such a QST is now clear. That is, in a strictly one-dimensional chain of such Xmon qubits, both zero and $\pi$ modes necessarily emerge at both of its ends. As a result, transferring a zero or $\pi$ mode from one end to the other necessarily leads to interference with the zero or $\pi$ mode located at the other end, thus destroying the transferred information. By utilizing a Y-junction geometry, we can instead adjust our system such that one pair of zero and $\pi$ modes is located at the end of the L branch, whereas the other is at the end of the M branch. By encoding our information in the zero and $\pi$ modes originally located in the L branch, we can faithfully transfer such information to the other end of the R branch, thus completing our QST procedure. Such a transfer can then be accomplished by performing a series of adiabatic manipulations, which can be divided into the two phases below, summarized in Fig.~\ref{fig:QST_schematic}: \textit{Phase I}: Transferring zero and $\pi$ modes from the end of the L branch to the middle point in $(N_L - 1)/2$ steps, $N_L$ being the number of qubits in the L branch. In the $x^{\text{th}}$ step, a new term is introduced in $H_2$, i.e., $h_x^{(3)} =j_2 \sin \phi_x\ \sigma^{(L)\dagger}_{A,2x-1} \sigma^{(L)}_{B,2x} + h.c.$, and we set the coupling strength $j_{\rm{inter},2x}=j_2 \cos\phi_x$, where $\phi_x$ is the parameter adiabatically increasing from 0 to $\pi/2$. As detailed in Appendix~\ref{ph1}, the zero and $\pi$ modes at any stroboscopic time within the $x$-th step are found as \begin{align} \ket{0}^{(L)} = \frac{1}{\sqrt{2}} \bigg[&\cos{\phi_x} \left(\ket{eg}^{(L)}_{2x-1} - \ket{ge}^{(L)}_{2x-1}\right) +\nonumber\\ &\sin{\phi_x} \left(\ket{eg}^{(L)}_{2x+1} - \ket{ge}^{(L)}_{2x + 1}\right) \bigg]\;,\\ \ket{\pi}^{(L)} = \frac{1}{\sqrt{2}} \bigg[&\cos{\phi_x} \left(\ket{eg}^{(L)}_{2x - 1} + \ket{ge}^{(L)}_{2x-1}\right) + \nonumber\\ &\sin{\phi_x} \left(\ket{eg}^{(L)}_{2x + 1} + \ket{ge}^{(L)}_{2x + 1}\right) \bigg]. \end{align} At the end of this phase, i.e., after completing the $(N_L-1)/2$-th step, both zero and $\pi$ modes are transferred to the middle point of the Y-junction, i.e., \begin{align} \ket{0}^{(L)}: &\ \ket{eg}_{1}^{(L)} - \ket{ge}_{1}^{(L)}\rightarrow \ket{eg}_{N_L}^{(L)} - \ket{ge}_{N_L}^{(L)},\\ \ket{\pi}^{(L)}: &\ \ket{eg}_{1}^{(L)} + \ket{ge}_{1}^{(L)} \rightarrow \ket{eg}_{N_L}^{(L)} + \ket{ge}_{N_L}^{(L)}.
\end{align} \textit{Phase II}: Transferring zero and $\pi$ modes from the middle point to the end of the R branch in $N_R$ steps, $N_R$ being the number of qubits in the R branch. In the $x$-th step, a new term, $h_x^{(4)} =j_2 \sin \phi_x\ \sigma^{(R)\dagger}_{A,x-1} \sigma^{(R)}_{B,x} + h.c.$, is introduced in $H_2$, where $\phi_x$ is the adiabatic parameter swept slowly at every period from $0$ to $\pi/2$, and $\sigma^{(R)}_{A,0} \equiv \sigma^{(L)}_{A,N_L}$. As detailed in Appendix~\ref{ph2}, the zero and $\pi$ modes at any stroboscopic time within the $x$-th step are \begin{align} \ket{0}^{(L)} = \frac{1}{\sqrt{2}} \bigg[&\cos(\frac{\pi}{2} \sin\phi_x) \left(\ket{eg}^{(R)}_{x} - \ket{ge}^{(R)}_{x} \right) +\nonumber\\ &\sin(\frac{\pi}{2}\sin\phi_x)(-\ket{eg}^{(R)}_{x+1} + \ket{ge}^{(R)}_{x+1}) \bigg], \\ \ket{\pi}^{(L)} = \frac{1}{\sqrt{2}} \bigg[&\cos(\frac{\pi}{2} \sin\phi_x) \left(\ket{eg}^{(R)}_{x} + \ket{ge}^{(R)}_{x} \right) +\nonumber\\ & \sin(\frac{\pi}{2}\sin\phi_x)(\ket{eg}^{(R)}_{x+1} + \ket{ge}^{(R)}_{x+1}) \bigg]. \end{align} At the end of this phase, i.e., after completing the $N_R$-th step, the zero and $\pi$ modes are perfectly transferred to the right end of branch R, \begin{align} &\ket{0}^{(L)}: (-1)^{N_L+N_R}(\ket{eg}_{N_R}^{(R)} - \ket{ge}_{N_R}^{(R)})\otimes G^{\prime},\\ &\ket{\pi}^{(L)}: (\ket{eg}_{N_R}^{(R)} + \ket{ge}_{N_R}^{(R)})\otimes G^{\prime}. \end{align} While the above protocol is presented in \emph{the ideal case}, i.e., based on the special parameter values discussed in Sec.~\ref{model}, the actual implementation of our protocol does not rely on such fine tuning. Indeed, as long as the zero and $\pi$ modes in the system remain well-separated in quasienergy from the bulk states during the adiabatic manipulations (see e.g. Fig.~\ref{fig:fidelity T band}(a)), the above QST protocol is still expected to work with good fidelity. We will discuss this aspect further in the next section. \section{Discussion} \label{disc} \begin{figure}[ht] \centering \includegraphics[angle =0,width=0.4 \textwidth]{fidelity_T_Band.pdf} \caption{(a) The quasienergy spectrum of our system during QST. Notice that quasienergy zero and $\pi/T$, which correspond to our zero and $\pi$ modes respectively, remain well-separated from the other quasienergies at all times. (b) Fidelity ($|\langle \psi_i|\psi_d\rangle |$) against the disorder strength. In the 10-qubit system, we take $N_L = 6, N_R = 4, N_M = 8$, whereas in the 40-qubit system, we take $N_L = 22, N_R = 18, N_M = 8$. The parameters used for the nonideal case (the orange dotted line) are $J_1 = 1.5 i , j_2^{L} = j_2^{M} = 3 i ,j_2^{R} = -0.1 i, J_2 = j_1 = 0$. Each step takes 40 (70) periods to complete in the 10 (40)-qubit system, and each data point is averaged over $100$ disorder realizations.} \label{fig:fidelity T band} \end{figure} In practice, perfect modulation of the coupling strengths is impossible. As such, we will now examine the robustness of the QST protocol presented in Sec.~\ref{QST} against coupling disorder, which is implemented by adding each of the following terms to $H_1$ and $H_2$ respectively, \begin{align} \Delta H_1 &= \sum_{c\in C}\sum_{m} \left( \delta_{1,m} J_1~\sigma^{c\dagger}_{B,m}\sigma^{c}_{A,m} +h.c.\right),\nonumber\\ \Delta H_2 &= \bigg[\sum_{c\in \{L,M\}}\sum_{m} ( \delta_{2,m} j_2~\sigma^{c\dagger}_{B,m}\sigma^{c}_{A,m+1} + h.c.)
+ \nonumber \\& \sum_{m^{\prime}}(\delta_{3,m^{\prime}} j_2 \sigma^{L\dagger}_{A,2m^{\prime}-1} \sigma^{L}_{B,2m^{\prime}} + h.c.)+ \nonumber\\ &\sum_{m^{\prime \prime}} (\delta_{4,m^{\prime\prime}} j_2 \sigma^{R\dagger}_{A,m^{\prime\prime}-1} \sigma^{R}_{B,m^{\prime\prime}} + h.c. )\bigg], \end{align} where $\delta_{i,m}$ is a uniform random number drawn from $[-0.5W,0.5W]$ and $W$ is the disorder strength. In addition to disorder introduced in the original system, we further consider the presence of disorder during the numerical implementation of the QST protocol introduced in Sec.~\ref{QST}. This is accomplished by modifying the newly added couplings $h_x^{(3)} =j_2\sin \phi_x (1 + \delta_x^{(3)}) \sigma^{(L)\dagger}_{A,2x-1} \sigma^{(L)}_{B,2x} + h.c.$ and $h_x^{(4)} =j_2\sin \phi_x (1+\delta_x^{(4)}) \sigma^{(R)\dagger}_{A,2x-2} \sigma^{(R)}_{B,2x - 1} + h.c.$. By denoting the transferred state as $\ket{\psi_i}$ in the ideal case and $\ket{\psi_d}$ in the case with disorder, we numerically calculate the fidelity $F = \abs{\braket{\psi_i}{\psi_d}}$ as a function of the disorder strength in Fig.~\ref{fig:fidelity T band}(b). In addition to the robustness of our QST protocol against small to moderate disorder, the orange line of Fig.~\ref{fig:fidelity T band}(b) also demonstrates the good performance of our QST protocol at other system parameters, such as $J_1 = 1.5 i , j_2^{L} = j_2^{M} = 3 i ,j_2^{R} = -0.1 i, J_2 = j_1 = 0$, which deviate rather significantly from \emph{the ideal case} in which our QST protocol is analytically solvable.\\ \begin{figure}[htb] \centering \includegraphics[angle =0,width= 0.4 \textwidth]{compare.pdf} \caption{(a) Comparison of the fidelity against disorder strength between our proposed step-by-step protocol in Sec.~\ref{QST} and the direct-transfer protocol. Both systems have the size $N_L = 30, N_R = 30, N_M = 8$, and the total time for both QST protocols is 6600 periods. (b) Quasienergy spectrum of the direct-transfer protocol during the QST process. Notice that the quasienergy gap vanishes somewhere during the process, leading to the corruption of the transferred information and thus a lower fidelity. The quasienergy spectrum of the step-by-step protocol is shown in Fig.~\ref{fig:fidelity T band}(a).} \label{fig:compare} \end{figure} We also note that Phase I and Phase II of our QST protocol can in principle be sped up by performing the actions in all $(N_L-1)/2$ and $N_R$ steps, respectively, in one go. That is, starting with $|0\rangle^{(L)}$ and $|\pi\rangle^{(L)}$ localized at the left end of the L branch, one can introduce $h^{(3)}=\sum_x h_x^{(3)}$ in $H_2$ and take $j_{\rm inter, 2}=j_{\rm inter, 4}=\cdots = j_{\rm inter, N_L-1}=j_2 \cos\phi$ simultaneously to move $|0\rangle^{(L)}$ and $|\pi\rangle^{(L)}$ to the middle point in Phase I, followed directly by the introduction of $h^{(4)}=\sum_x h_x^{(4)}$ in $H_2$ to further send $|0\rangle^{(L)}$ and $|\pi\rangle^{(L)}$ to the right end of the R branch in Phase II. While this approach works very well for sufficiently small systems, increasing the number of qubits will inevitably cause the transferred state to become more delocalized in the middle of such a direct-transfer protocol, leading to the unavoidable closing of the quasienergy gap, as shown in Fig.~\ref{fig:compare}(b).
By contrast, the step-by-step QST protocol introduced in Sec.~\ref{QST} ensures that the transferred state remains localized at all times, thus maintaining large quasienergy gaps between zero or $\pi$ modes and the bulk quasienergies, even at a very large number of qubits. In such cases, the step-by-step protocol is expected to perform better as compared with the direct-transfer protocol, which we have also verified in Fig.~\ref{fig:compare}(a) for the case of $68$ qubits. \begin{figure}[ht] \centering \includegraphics[angle =0,width=0.4\textwidth]{comparison_ZSL.pdf} \caption{(a) The energy band spectrum of a single qubit transfer along a chain of 21 Xmon qubits of Ref.~\cite{tqs3} by performing the transfer in one go. (b) Same as in panel (a), but we divide the QST process in $20$ steps; (c) Comparison of the fidelity as a function of the disorder strength between the two protocols in panels (a) and (b). The total adiabatic time for both QST protocols is $t_{\rm tot} = \pi/(0.01g).$ } \label{fig:Comparison_ZSL} \end{figure} Inspired from the above analysis, we may also propose an improvement to the QST protocol introduced in \cite{tqs3}. In particular, Ref.~\cite{tqs3} proposes a similar QST protocol by using a chain of Xmon qubits with time-independent coupling. Due to the lack of $\pi$ modes in static systems, however, the use of dimerized coupling in such a chain only enables the transfer of a single qubit from one end to the other, which Ref.~\cite{tqs3} proposed to accomplish in one step by simultaneously modulating all the qubit-qubit couplings. While their results show a good QST fidelity at small number of qubits, the same problem of vanishing energy gap will also arise at larger number of qubits. As such, the idea of breaking down QST process into steps in the spirit of our protocol in Sec.~\ref{QST} can also be adapted to enable high fidelity transfer of one qubit in such a static system scenario. To this end, we recall the static Hamiltonian used in Ref.~\cite{tqs3} describing the dimerized qubit-qubit couplings in a chain of Xmon qubits, \begin{align} \hat{H} = \sum_{j = 1}^{N}\left(J_0^{j}\hat{\sigma}_{A,j}^{\dagger}\hat{\sigma}_{B,j} + J_1^{j}\hat{\sigma}_{B,j}^{\dagger}\hat{\sigma}_{A,j+1}+ h.c. \right). \label{ZSL} \end{align} In Ref.~\cite{tqs3}, QST is accomplished by adiabatically tuning all $J_1^j$ and $J_0^j$ simultaneously as $J_i^{j} = g\left( 1 + (-1)^{i} \cos \theta \right)$, with $\theta$ being the adiabatic parameter swept from $0$ to $\pi$. In our proposed improvement, we may instead break down the QST protocol into $(N-1)$ steps. In the $x$-th step, we take $J_i^{x} = g\left( 1 + (-1)^{i} \cos \theta_x \right)$ while keeping the other coupling strengths constant, with $\theta_{x}$ being the same adiabatic parameter swept from $0$ to $\pi$. This amounts to transferring a qubit from the $x$-th unit cell to the $(x+1)$-th unit cell, so that after the $(N-1)$-th step, the qubit originally at the left end of the lattice is perfectly transferred to the right end. In this improved QST protocol, the gap in the energy spectrum is significantly larger than the original protocol, as illustrated in Fig.~\ref{fig:Comparison_ZSL}(a) and (b). 
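To illustrate the comparison just described, the sketch below builds the single-excitation matrix of the static Hamiltonian above and computes its instantaneous spectrum for the original one-go modulation and for one step of the step-by-step modulation. The chain length, the coupling scale, and the assumption that couplings of completed steps sit at their $\theta=\pi$ values (with later ones at their $\theta=0$ values) are ours, for illustration only.
\begin{verbatim}
import numpy as np

g, N = 1.0, 10   # coupling scale and number of unit cells (placeholders)

def hopping_matrix(J0, J1):
    """Single-excitation block of the dimerized chain: 2N sites, open ends."""
    H = np.zeros((2 * N, 2 * N))
    for j in range(N):
        H[2 * j, 2 * j + 1] = H[2 * j + 1, 2 * j] = J0[j]              # intra
        if j < N - 1:
            H[2 * j + 1, 2 * j + 2] = H[2 * j + 2, 2 * j + 1] = J1[j]  # inter
    return H

thetas = np.linspace(0, np.pi, 101)

# Original protocol: every coupling follows the same adiabatic parameter.
spec_one_go = np.array([np.linalg.eigvalsh(hopping_matrix(
    np.full(N, g * (1 + np.cos(th))), np.full(N - 1, g * (1 - np.cos(th)))))
    for th in thetas])

# Step-by-step variant, x-th step: only the x-th pair of couplings is swept.
def couplings_step(x, th):
    J0 = np.array([0.0] * x + [g * (1 + np.cos(th))] + [2 * g] * (N - x - 1))
    J1 = np.array([2 * g] * x + [g * (1 - np.cos(th))] + [0.0] * (N - x - 2))
    return J0, J1

spec_step = np.array([np.linalg.eigvalsh(hopping_matrix(*couplings_step(3, th)))
                      for th in thetas])
\end{verbatim}
Plotting the eigenvalues in \texttt{spec\_one\_go} and \texttt{spec\_step} against $\theta$ then gives spectra of the kind compared in Fig.~\ref{fig:Comparison_ZSL}(a) and (b).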
To further test the robustness of this improved protocol, we again consider the presence of disorder by adding the following term to the original Hamiltonian of Eq.~(\ref{ZSL}), \begin{align} \hat{H}_{d} = \sum_{j} \left(J_{0}^{j} \delta_{0,j} \hat{\sigma}_{A,j}^{\dagger}\hat{\sigma}_{B,j} + J_{1}^{j} \delta_{1,j} \hat{\sigma}_{B,j}^{\dagger}\hat{\sigma}_{A,j+1} + h.c. \right), \end{align} where $\delta_{0,j}$ and $\delta_{1,j}$ are random numbers drawn uniformly from $[-0.5W,0.5W]$ and $W$ is the disorder strength. The fidelity as a function of the disorder strength is plotted in Fig.~\ref{fig:Comparison_ZSL}(c). For a fair comparison, the total transfer time is the same for the two protocols, $t_{\rm tot} = \pi/(0.01g)$. It is clear that our proposed protocol indeed improves the robustness of such a system during QST. To conclude, breaking down the QST process into steps is one of our main results in this paper, which can be applied to improve the fidelity of adiabatic-based QST for large system sizes, both in time-periodic and static settings. This in turn enables us, at least in principle, to transfer qubits over an arbitrarily large distance. \section{Concluding Remarks} \label{conc} In this paper, we have proposed an innovative scheme to realize high-fidelity and long-distance transfer of an entangled state along a Y-shaped topologically non-trivial qubit chain in the presence of periodic driving. Before the state is transferred, a maximally entangled state is prepared through an adiabatic process, in which a key step is to introduce a nontrivial rotation between zero and $\pi$ modes (both being topological edge states of the qubit chain). In the ideal situation, our QST protocol can perfectly transfer an entangled state from one branch to another branch. In a more realistic situation, where disorder effects are introduced, the transfer fidelity is found to be robust against random noise, due to the inherent robustness of encoding qubits built from Floquet zero and $\pi$ edge modes. Furthermore, one important property of our QST scheme is that the gap between the involved zero and $\pi$ modes and the bulk states in the quasienergy spectrum does not scale down to zero as the size of the qubit chain increases. Thus, our scheme enables us to transfer the entangled qubits over long distances without the loss of topological protection or adiabaticity. Inspired by our QST scheme, we have also improved the QST protocol proposed in \cite{tqs3}. Indeed, one simple modification over the original protocol greatly enhances its robustness against disorder and also makes it possible to realize long-distance QST, but for single-qubit states only. The potential applications of our QST protocol should lie in solid-state-based quantum information processing and quantum computation, where entangled qubits need to be transferred within solid-state devices over a not-necessarily short distance. Given that topological edge modes, especially those of periodically driven systems, have already been found to have great potential in implementing quantum computation protocols \cite{RG,RG2,RG6}, it is encouraging that Floquet topological edge modes can further facilitate entangled-state transfer along solid-state-based qubit chains. \vspace{0.3cm} \noindent {\bf Acknowledgements:} J.G. acknowledges funding support by the Singapore NRF Grant No.~NRF-NRFI2017-04 (WBS No.~R-144-000-378-281) and by the Singapore Ministry of Education Academic Research Fund Tier-3 (Grant No.~MOE2017-T3-1-001 and WBS No.~R-144-000-425-592).
{ "timestamp": "2019-09-10T02:19:56", "yymm": "1909", "arxiv_id": "1909.03646", "language": "en", "url": "https://arxiv.org/abs/1909.03646" }
\section{Introduction}\label{s:s_1} \IEEEPARstart{S}{ingle} image super-resolution (SISR) is a classical but challenging ill-posed inverse problem in low-level computer vision, aiming at restoring a high-resolution (HR) image from a single low-resolution (LR) input image. It is widely used in various areas such as medical imaging, satellite imaging and security imaging~\cite{yang2014single,park2003super}. Early methods for SISR are mainly interpolation-based, including Bicubic interpolation~\cite{keys1981cubic} and Lanczos resampling~\cite{duchon1979lanczos}. Then more powerful reconstruction-based methods often adopt sophisticated prior knowledge to restrict the possible solution space, with the advantage of generating flexible and sharp details~\cite{dai2009softcuts,sun2008image,yan2015single,marquina2008image}. Learning-based methods are now mainstream algorithms for SISR, utilizing substantial data to learn statistical relationships between LR and HR pairs. Markov Random Field (MRF)~\cite{freeman2002example} was firstly adopted by Freeman \emph{et~al.} to exploit the abundant real-world images to synthesize visually pleasing image textures. Neighbor embedding methods~\cite{chang2004super} proposed by Chang \emph{et~al.} took advantage of similar local geometry between LR and HR to restore HR image patches. Inspired by the sparse signal recovery theory, researchers applied sparse coding methods~\cite{yang2010image,zeyde2010single,timofte2013anchored,timofte2014a+,yang2016consistent} to SR. Random forest~\cite{schulter2015fast} has also been used to improve the reconstruction performance. Recently, remarkable performance has been achieved for SR by deep models, especially deep network architectures, which are elaborated for high-level tasks in computer vision. Notably, residual network (ResNet) and densely connected network (DenseNet) are two widely-used architectures, which use skip connections to alleviate gradient problems and degradation phenomena in training. Chen \emph{et~al.}~\cite{chen2017dual} analyzed ResNet and DenseNet in the HORNN framework~\cite{soltani2016higher} and concluded that ResNet enables feature re-usage while DenseNet enables feature exploration, both important to learn powerful representations. Through extensive experiments, \cite{veit2016residual} and~\cite{huang2016deep} implied that ResNet shows an ensemble-like behavior within its structure. Yang \emph{et~al.}~\cite{yang2017deep} showed that ResNet applied in SR would lead to output with a layer-by-layer progressive effect, and Huang \emph{et~al.}~\cite{huang2017densely} argued that this might restrict ResNet from reaching more feasible solutions. Although DenseNet explores as many new features as possible by directly utilizing any former features, its excessive skip connections among intermediate layers increase the number of parameters and burden the hardware during training. In this paper, we propose Linear Compressing Based Skip-Connecting Network (LCSCNet), as a framework for SR, which takes advantages of ResNet's parameter-economic feature re-usage and DenseNet's distinguishing feature exploration, as well as mitigating difficulties of restricted structures of ResNet and parameter burden of DenseNet. As the network depth grows, the features produced by different intermediate layers would be hierarchical with different receptive fields. 
Among deep SR models, DRCN~\cite{kim2016deeply} and MemNet~\cite{Tai-MemNet-2017} used these intermediate features with multi-supervised methods, in which each feature corresponded to a raw SR output, and then fused these intermediate SR outputs by a list of trained scalars. Such a fusion strategy has two flaws: 1) once the weight scalars are determined in training, they will not change with different inputs; 2) using a single scalar to weight each SR output fails to take pixel-wise differences into consideration, i.e., it would be better to weight different regions differently in an adaptive way. To overcome these shortcomings, inspired by the gate units in LSTM~\cite{hochreiter1997long}, we develop an adaptive element-wise fusion strategy in a progressive, constructive way that maintains an element-wise convex weighting, aiming at making better use of hierarchical information with different receptive fields. In the end, we combine the Basic LCSCNet architecture with the adaptive element-wise fusion strategy gracefully for SR. Analysis and experiments in the following sections will illustrate the rationality of the proposed methods. The main contributions of this work are three-fold: 1) We propose an accurate and efficient Linear Compressing Based Skip-Connecting Network (LCSCNet) architecture, which inherits the advantage of DenseNet in treating features of different levels differently while reducing its parameter size by exploiting the parameter-economic strength of ResNet. Moreover, we develop an Enhanced LCSCNet (E-LCSCNet) to further alleviate the difficulties of training large-scale networks. 2) Different from the traditional stationary fusion strategy, we take the input differences as well as the element-wise variation into consideration and propose an adaptive element-wise fusion strategy to further utilize hierarchical information. 3) When compared with the state-of-the-art models trained on the widely-used 291 dataset and those light networks trained on the DIV2K dataset, our proposed framework achieves the state-of-the-art performance. When compared with large models trained on DIV2K, our E-LCSCNet is among the state-of-the-art with notable parameter efficiency. The rest of the paper is organized as follows. Section~\ref{s:s_2} reviews recent related work. Section~\ref{s:s_3} presents a detailed description of the proposed architecture, mainly on the configuration of Basic LCSCNet and the adaptive element-wise fusion algorithm. Section~\ref{s:s_4} illustrates several intriguing properties of LCSCNet, which could explain the rationality of LCSCNet. Section~\ref{s:s_5} conducts ablation studies to further probe into the proposed framework. Section~\ref{s:s_6} presents experimental results in comparison with other relevant methods. Section~\ref{s:s_8} concludes the paper and envisages some future work. \begin{figure*}[htbp] \centering \subfloat[Overall architecture of Basic LCSCNet (E-LCSCNet)]{ \includegraphics[scale=0.5]{./subover.png}} \\ \subfloat[Overall architecture of LCSCNet (E-LCSCNet)]{ \includegraphics[scale=0.5]{./over.png}} \caption{\small The overall architectures of (a) Basic LCSCNet (E-LCSCNet) and (b) LCSCNet (E-LCSCNet). In (b), $\otimes$ means element-wise multiplication; $\{Y_{1},\dots,Y_{N}\}$ are the intermediate HR outputs reconstructed from $\{F_{1},\dots,F_{N}\}$. When E-LCSCNet is employed, the red-lined parts are activated.
For fair comparison, the upsampling and reconstruction part of (Basic) LCSCNet varies with the training dataset: For models trained on the 291 dataset, this part is the traditional deconv layer consisting of ``nearst-neighborhood upsampling + conv-ReLU + conv-ReLU + conv"; for models trained on the DIV2K dataset, we use ESPCN~\cite{shi2016real} instead. To be specific, we only use ESPCN as U\&RNet in Section~\ref{s:s_5::E} and the E-LCSCNet in Table~\ref{chart:big}.} \label{fig:F3} \end{figure*} \section{Related Work}\label{s:s_2} Because our proposed methods include the Basic LCSCNet architecture and the adaptive element-wise fusion strategy, in this section we review related work mainly from the aspects of basic SISR reconstruction and sub-output fusion. \subsection{Basic SISR Reconstruction} Dong \emph{et~al.} pioneeringly proposed a three-layer super-resolution convolutional neural networks (SRCNN) \cite{dong2014learning}, predicting the end-to-end nonlinear mapping between LR and HR spaces. This first trial significantly outperformed other algorithms at that time. To combine the benefits of the natural sparsity of images and deep neural network architectures, Wang \emph{et~al.} proposed the Cascaded Sparse Coding Network (CSCN)~\cite{wang2015deep}, which had a higher visual quality than previous work. After SRCNN, Dong \emph{et~al.} further proposed FSRCNN~\cite{dong2016accelerating} improving SRCNN mainly by leveraging deconvolution layers, which reduced computation significantly by increasing the resolution only at the end of network. In the meantime, the Efficient Sub-Pixel Convolution Neural Network (ESPCN)~\cite{shi2016real} was proposed by Shi \emph{et~al.}, replacing the traditional deconvolution layer by an efficient sub-pixel convolution layer and further reducing computation. Inspired by the success that very deep neural networks with sophisticated architectures and training strategies achieved in some high-level tasks in computer vision~\cite{simonyan2014very}, Kim \emph{et~al.} employed the VGG architecture and high learning rate with gradient clipping to stack a very deep (20 layers) convolutional neural network (VDSR)~\cite{kim2016accurate} and gained a remarkable improvement. Mao \emph{et~al.} proposed a deep fully convolutional auto-encoder network with symmetric skip connections~\cite{mao2016image}. To handle the issue of large numbers of parameters brought by very deep architectures, Kim \emph{et~al.} proposed the Deeply-Recursive Convolutional Network (DRCN)~\cite{kim2016deeply}, which was also 20-layer but with 16 recursions among its intermediate layers. To further exploit the advantages from deepening neural networks, motivated by the success of~\cite{he2016deep}, Tai \emph{et~al.} proposed the Deep Recursive Residual Network (DRRN)~\cite{tai2017image}, a 54-layer convolutional neural network for SR, in which they utilized the residual network architecture (ResNet)~\cite{he2016identity} in both global and local manners. Inspired by the Dense Connected Network (DenseNet) \cite{huang2017densely} proposed by Huang \emph{et~al.}, Tong \emph{et~al.} introduced dense skip connections to their deep architecture~\cite{tong2017image}. Based on the correlations among the HR outputs with different scale factors and a heuristic methodology, Lai \emph{et~al.} proposed the Laplacian Pyramid Super-Resolution Network (LapSRN)~\cite{LapSRN} to progressively reconstruct the sub-band residuals of higher-resolution images, which was especially effective for large scale factors. 
Motivated by explicitly mining persistent memory through an adaptive learning process and further mitigating the difficulties of training deeper networks, Tai \emph{et~al.} proposed an 80-layer network for image restoration, named as Persistent Memory Network (MemNet)~\cite{Tai-MemNet-2017}. Very recently, to further explore the power of example-based SISR with abundant training data, a new dataset DIV2K~\cite{Agustsson_2017_CVPR_Workshops} consisting of 800 2K resolution images was established. Based on this powerful dataset, many new architectures were proposed for performance improvement. Among them, by removing Batch-Normalization (BN)~\cite{ioffe2015batch} and applying residual scaling, Lim \emph{et~al.} proposed the Enhanced Deep Residual Network (EDSR)~\cite{lim2017enhanced}, which significantly improved performance. Then the Deep Back-Projection Network (DBPN)~\cite{haris2018deep} was proposed by Haris \emph{et~al.} to combine the merits of deep neural networks with the back-projection procedure, proven to be very effective for large scale factors. By making full use of local and global information from deep architectures, the Residual Dense Network (RDN)~\cite{zhang2018residual} proposed by Zhang \emph{et~al.} exhibits comparable performance to EDSR, with fewer parameters. \subsection{Sub-output Fusion} Features from different depths with different receptive fields specialize in different patterns in SISR. From the perspective of ensemble learning, a better result can be acquired by adaptively fusing the outputs from different-level features. Based on this concept, several fusion strategies were proposed. Among them, two representative weighted-summation methods were the vectorized weighted fusion strategy~\cite{kim2016deeply,Tai-MemNet-2017} and MSCN~\cite{liu2016learning}. In the vectorized weighted fusion, a trainable positive vector whose $\ell_{1}$ norm is 1 is applied, and each element in this vector controls how much of the current sub-output contributes to the final one. To regularize each sub-output and stabilize training, multi-supervised training is adopted. In MSCN, an extra CNN module takes LR as input and outputs several tensors with the same shape as the HR. These tensors can be viewed as adaptive element-wise weights for raw HR outputs. Then the weight module and the basic SISR module are trained jointly by optimizing the fused results in an end-to-end manner. Both of the two fusing strategies above have shortcomings. The vectorized approach does not take the diversity of input and pixel-wise differences into consideration, while in MSCN the summation of coefficients at each pixel is not normalized, which is incongruous. Therefore, in this paper we aim to propose a normalized adaptive element-wise fusion strategy to overcome the shortcomings of the two previous fusion methods. The above-mentioned deep methods mainly minimized the mean squared error (MSE), which tended to be blurry, over-smoothing and perceptually unsatisfying, especially in the case of large scale factors. Recently, some inspiring deep learning-based works concentrated on the exploration of more effective loss functions for SR. 
In~\cite{bruna2015super} and~\cite{johnson2016perceptual}, the perceptual loss using high-level feature maps of VGG made HR outputs more visually pleasing; \cite{sonderby2016amortised} introduced amortized MAP inference to the loss function to get more plausible results; \cite{ledig2016photo} and~\cite{sajjadi2016enhancenet} used the adversarial loss to produce photo-realistic HR outputs. Although these methods produced high-quality images with rich texture details, the details in their outputs may be quite different from the original images. As we mainly aim to develop efficient deep models with fewer pixel-wise errors, our work does not belong to this group. Readers can refer to~\cite{yang2018deep} for an elaborate survey on deep learning-based SISR. \section{Linear Compressing Based Skip-Connecting Network (LCSCNet and E-LCSCNet)}\label{s:s_3} Our work has two main technical contributions: an (enhanced) linear compressing based skip-connecting structure for developing extremely deep efficient neural networks, and an adaptive fusion strategy for further utilizing intermediate features. In order to better clarify the contribution and function of each of them, here we briefly specify four architectures used in later discussions and ablation studies: \emph{\textbf{Basic LCSCNet:} as shown in Fig.\ref{fig:F3}(a) (without red-line parts), it first extracts features and then sends them to a series of LCSCBlocks, and the final results are obtained from the upsampling and reconstruction part;} \emph{\textbf{Basic E-LCSCNet:} quite similar to Basic LCSCNet except for the replacement of LCSCBlock by E-LCSCBlock (Fig.\ref{fig:F3}(a)) and the extra additive skip connections with initial features;} \emph{\textbf{LCSCNet} and \textbf{E-LCSCNet:} applying the proposed adaptive fusion strategy to Basic LCSCNet and Basic E-LCSCNet respectively, as shown in Fig.\ref{fig:F3}(b).} Because the structures of the above two basic networks are quite simple and we mainly use LCSCNet (E-LCSCNet) to compare with other state-of-the-art works, we will focus on the detailed description of LCSCNet (E-LCSCNet). As shown in Fig.\ref{fig:F3}(b), our LCSCNet and E-LCSCNet both mainly consist of four parts: 1) a preliminary feature extraction net (PFENet), 2) linear compressing based skip-connecting blocks (LCSCBlocks) or enhanced linear compressing based skip-connecting blocks (E-LCSCBlocks) for deep feature exploration, 3) an upsampling and reconstruction net (U\&RNet), and 4) an adaptive element-wise fusion of all intermediate outputs. Many previous works~\cite{kim2016accurate, tai2017image, Tai-MemNet-2017, LapSRN} learned the residue between HR and its bicubic interpolation and argued that this helps stabilize training and improves performance. When we compare LCSCNet with these works, as shown in Fig.\ref{fig:F3}, the input $I_{in}$ is LR, and the output $I_{out}$ is the residue. Meanwhile, many recent works~\cite{lim2017enhanced, zhang2018residual} just learned the mapping between LR and HR. When we compare E-LCSCNet with these methods, we also follow this routine for fairness. Our PFENet uses a single $3\times3$ convolution layer to conduct preliminary feature extraction: \begin{equation} F_{0} = f_{PFE}(I_{in}), \label{con:over_1} \end{equation} where $F_{0}$ denotes the features extracted from the LR input.
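Before detailing each part, the following skeleton (a PyTorch sketch with illustrative channel numbers and stub submodules, not our released implementation) shows how the four parts are wired together; the LCSCUnit itself and the adaptive fusion rule are sketched separately further below.
\begin{verbatim}
# Skeleton sketch (PyTorch) of the four-part pipeline; submodules are stubs only.
import torch
import torch.nn as nn

class LCSCNetSkeleton(nn.Module):
    def __init__(self, n_blocks=4, channels=64,
                 block_factory=None, ur_net=None, fusion=None):
        super().__init__()
        # 1) PFENet: the single 3x3 convolution F_0 = f_PFE(I_in)
        self.pfe = nn.Conv2d(3, channels, kernel_size=3, padding=1)
        # 2) LCSCBlocks / E-LCSCBlocks (identity placeholders unless a factory is given)
        self.blocks = nn.ModuleList(
            [block_factory(channels) if block_factory else nn.Identity()
             for _ in range(n_blocks)])
        # 3) U&RNet shared by all intermediate features (a plain conv stub here)
        self.ur = ur_net if ur_net else nn.Conv2d(channels, 3, kernel_size=3, padding=1)
        # 4) adaptive element-wise fusion of the intermediate outputs (optional)
        self.fusion = fusion

    def forward(self, x):
        f = self.pfe(x)                      # F_0
        outputs = []
        for block in self.blocks:            # F_d = LB_d(F_{d-1})
            f = block(f)
            outputs.append(self.ur(f))       # Y_d = UR(F_d)
        if self.fusion is None:
            return outputs[-1]               # Basic variant: deepest output only
        return self.fusion(outputs)          # I_out = sum_d W_d * Y_d

print(LCSCNetSkeleton()(torch.randn(1, 3, 48, 48)).shape)  # torch.Size([1, 3, 48, 48])
\end{verbatim}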
\begin{figure}[htbp] \centering \includegraphics[scale=0.38]{./new_lcscunit.png} \caption{\small The configuration of LCSCUnit.} \label{fig:F4} \end{figure} \subsection{Configurations of LCSCUnit, LCSCBlock and E-LCSCBlock} \subsubsection{LCSCUnit and LCSCBlock} The features extracted by the PFENet are then transmitted to the second part of the overall network, which uses LCSCBlocks to explore complicated features progressively. An LCSCBlock comprises a fixed number of linear compressing based skip-connecting units (LCSCUnits) with the same configuration. The basic configuration of the LCSCUnit is depicted in Fig.\ref{fig:F4}, where $LU_{i,j}$ denotes the $j$-th unit in the $i$-th LCSCBlock, $Y_{in}$ denotes the input feature maps of this unit and $Y_{out}$ denotes the output feature maps, both maps with $n$ channels. In Fig.\ref{fig:F4}, the upper convolution operator, named the linear compressing (LC) layer, is of size $1\times1$ with $n_{1}$ output channels. We denote the LC layer in $LU_{i,j}$ as $K^{L}_{i,j}$. Motivated by~\cite{he2016identity}, the nonlinear operator in the lower part of Fig.\ref{fig:F4} consists of two parts: ReLU and the convolution operator denoted as $K^{NL}_{i,j}$ of size $3\times3$ with $n_{2}$ output channels. Here the superscripts $^{L}$ and $^{NL}$ denote the convolution kernels for linear and nonlinear transformations, respectively. Then the outputs of $K^{L}_{i,j}$ and $K^{NL}_{i,j}$ are concatenated to form an $n$-channel output feature map. For simplicity, bias is omitted and convolution is replaced by matrix multiplication\footnote{\emph{e.g.} the convolution operation X$\ast$Y is rewritten as XY for simplicity.}, then the whole process of LCSCUnit can be formulated as \begin{equation} Y_{out} = concat \big (K^{L}_{i,j}Y_{in}, \ K^{NL}_{i,j}ReLU(Y_{in}) \big ). \label{eq:lcscunit} \end{equation} Furthermore, features and convolution kernels in LCSCUnits can be separated by their properties. As for features, $Y_{out}$ can be divided into $n_1$-channel $Y_{out}^{L}$ and $n_2$-channel $Y_{out}^{NL}$, where superscripts $^{L}$ and $^{NL}$ in features denote features produced by linear and nonlinear operations, respectively. For convolution kernels, $K_{i,j}^{L}$ can be divided according to the input channel into $K_{i,j}^{L,L}$ and $K_{i,j}^{L,NL}$, where superscript ${}^{L,L}$ means the part of the linear-transforming kernel $K_{i,j}^{L}$ operating on $Y_{in}^{L}$ and ${}^{L,NL}$ means the part operating on $Y_{in}^{NL}$. Notably, although the LC layer with $1\times 1$ convolution resembles the bottleneck layer that is widely used to reduce dimensions of feature maps~\cite{lin2013network,huang2017densely}, the main difference between them is that the bottleneck layer is placed before the nonlinear operator in a cascading manner, while the LC layer parallels the nonlinear operator. Skip connections in a neural network structure create short paths from early layers to later layers, which are considered an effective way to ease the difficulties in training deep neural networks. In all LCSCNets, we implement skip connections mainly by the LC layer in each basic unit. In LCSCUnit, there is a parameter which controls the proportion between the number of linear output channels $n_{1}$ and nonlinear output channels $n_{2}$. This parameter, which can affect the performance of the network, is defined as \begin{equation} \rho=\frac{n_{2}}{n_{1} + n_{2}}.
\end{equation} We find that a fixed $\rho$ for each LCSCUnit throughout the network can already offer quite good performance. Alternatively, we can set up LCSCUnits with different $\rho$, and the LCSCUnits with the same $\rho$ are connected consecutively and can be divided into different LCSCBlocks. For simplicity, we let each LCSCBlock contain the same number of LCSCUnits. Suppose there are $N$ LCSCBlocks stacked to explore deep features, and $M$ LCSCUnits in an LCSCBlock. Let $LB_{d}^{\rho_{d}}$ denote the $d$-th LCSCBlock with specific $\rho_{d}$, $F_{d-1}$ denote its input features and $F_{d}$ its output features. The mappings of the LCSCUnits in this block are denoted by $\{LU_{d,1}(\cdot), LU_{d,2}(\cdot), \dots, LU_{d,M}(\cdot)\}$, then the whole process of this block can be formulated as \begin{equation} F_{d}\!=\!LB_{d}^{\rho_{d}}(F_{d\!-\!1})\!=\!LU_{d,M}(LU_{d,M\!-\!1}(\cdots(LU_{d,1}(F_{d\!-\!1}))\cdots)), \label{con:over_2} \end{equation} and it follows that \begin{equation} F_{N} = LB^{\rho_{N}}_{N}(LB^{\rho_{N-1}}_{N-1}(\cdots(LB^{\rho_{1}}_{1}(F_{0}))\cdots)). \label{con:dfe_1} \end{equation} Furthermore, we investigate how the ordinal position of blocks with different $\rho$ affects the final performance. Detailed discussions and the related comparative experiments will be presented in Section~\ref{s:s_5::C}. \subsubsection{E-LCSCBlock} As mentioned in~\cite{lim2017enhanced}, the simplest way to enhance performance via increasing the number of parameters is to increase the width of deep architectures. However, a deep wide network is extremely hard to train. Inspired by the long-term memory connection in~\cite{Tai-MemNet-2017}, we find that if we further concatenate the input and the output of LCSCBlock and then use a $1 \times 1$ bottleneck layer to maintain the compactness of the output channel, it will alleviate the difficulty of training a large LCSCNet. We denote the LCSCBlock with such a long-term memory connection as E-LCSCBlock. Compared with (\ref{con:over_2}), the mapping of E-LCSCBlock can be written as \begin{equation} \begin{split} ELB_{d}^{\rho_{d}}(F_{d-1})=bottle(concat(F_{d-1}, LB_{d}^{\rho_{d}}(F_{d-1}))), \end{split} \label{con:new_1} \end{equation} where $ELB_{d}^{\rho_{d}}$ denotes the $d$-th E-LCSCBlock with specific $\rho_{d}$, and $bottle(\cdot)$ denotes the $1 \times 1$ bottleneck layer. Moreover, we find that deep models of moderate scales using E-LCSCBlocks also perform slightly better than the ones using LCSCBlocks. Further discussions and ablation studies on E-LCSCBlock will be presented in Section~\ref{s:s_5::E}. \subsection{Upsampling and Reconstruction Net (U\&RNet)} Sajjadi \emph{et~al.}~\cite{sajjadi2016enhancenet} reported that adding convolution layers after the nearest-neighbor upsampling layer can help alleviate artifacts in SR. We follow this practice in our models trained on the 291 dataset, using the nearest-neighbor upsampling layer followed by three $3 \times 3$ convolution layers, each of which (except the last one) is followed by a ReLU. When we develop models aiming to compare with models trained on DIV2K, we use ESPCN as the U\&RNet, as EDSR and RDN did, for fair comparison. In LCSCNet, the deep features $\{F_{1},F_{2},\dots,F_{N}\}$, explored hierarchically in its second part by LCSCBlocks, are then sent to U\&RNet $UR(\cdot)$, which maps feature $F_{d}$ to output $Y_{d}$: \begin{equation} \begin{split} Y_{d}=UR(F_{d}),~1\leq d \leq N.
\end{split} \label{con:over_3} \end{equation} In E-LCSCNet, like EDSR and RDN, even without directly learning the residue between the HR and its bicubic version, the global residual learning is implemented by adding initial features $F_{0}$ to $F_{d}$ before upsampling. That is, the U\&RNet $EUR(\cdot)$ in E-LCSCNet has input and output as \begin{equation} \begin{split} Y_{d}=EUR(F_{d}+F_{0}),~1\leq d \leq N. \end{split} \label{con:eur} \end{equation} \subsection{Adaptive Element-wise Fusion Strategy} \begin{figure*}[htbp] \centering \includegraphics[scale=0.45]{./fusion_new.png} \caption{\small A sketch of the adaptive element-wise fusion strategy, where $N=4$ and $M_{i}$ $(i=1,2,3)$ are the current fused outputs.} \label{fig:F5} \end{figure*} \begin{algorithm}[ht] \caption{Adaptive Element-wise Fusion Strategy.} \begin{algorithmic}[1] \Require Intermediate outputs $\{Y_{1},Y_{2},\dots,Y_{N} \}$. \Ensure The final fused feature maps $M$. \State Initialize $M$ with $Y_{1}$: $M=Y_{1}$; \For{each $i\in [1,N-1]$} \State Concatenate SR output $X=concat(M, Y_{i+1})$ \State Convolve with 1$\times$1 tensor: $\alpha_{i} = C_{i}X$, $C_{i}$ is the $i$-th 1$\times$1 tensor \State Use sigmoid activation: $\alpha_{i} = sigmoid(\alpha_{i})$ \State Update $M=\alpha_{i}M+(I-\alpha_{i})Y_{i+1}$ \EndFor \State return $M$ \end{algorithmic}\label{alg1} \end{algorithm} Feature maps of different receptive fields are sensitive to features of different sizes, which are often fused to enhance the performance in various computer vision tasks. In our case, we develop an adaptive element-wise fusion strategy. With $N$ intermediate results $\{Y_{1},Y_{2},\dots,Y_{N} \}$ mapped from $\{F_{1},F_{2},\dots,F_{N} \}$ through U\&RNet, a list of weight tensors $\{W_{1},W_{2},\dots,W_{N}\}$ with the same size of output are determined by $\{Y_{1},Y_{2},\dots,Y_{N} \}$, which control how much of each raw result contributes to the final fused output. Here the adaptive weight tensors satisfy two traits:\\ Trait 1: Each adaptive tensor is determined by all intermediate outputs together, which can be formulated as \begin{equation} W_{i} = f_{i}(Y_{1},Y_{2},\dots,Y_{N}) , \ i = 1, 2, \dots, N, \label{eq:trait1} \end{equation} where $f_{i}$ is the mapping from $\{Y_{1}, Y_{2}, \dots, Y_{N}\}$ to $W_{i}$;\\ Trait 2: The value of each point in the weight tensor is between 0 and 1, and \begin{equation} \sum_{i=1}^{N}W_{i}=I, \label{eq:trait2} \end{equation} where $I$ is the tensor with all elements being 1. The final fused output $M$ is a convex weighted average of intermediate outputs $\{Y_{1},Y_{2},\dots,Y_{N} \}$: \begin{equation} \begin{split} I_{out}=M=\sum_{d=1}^{N}W_{d}Y_{d}. \end{split} \label{con:over_4} \end{equation} Inspired by the gate unit in LSTM, by adopting a series of $1\times1$ convolution kernels followed by sigmoid activation functions, we develop a heuristic algorithm to construct the fused output $M$, in which weight tensors satisfy the above two traits, as summarized in Algorithm~\ref{alg1}. A sketch for Algorithm~\ref{alg1} is plotted in Fig.\ref{fig:F5}, in which intermediate variable tensor $\alpha_{i} \ (i=1,\ldots,N-1)$ is generated progressively given current SR outputs $\{Y_{1}, \dots, Y_{i}\}$, 1$\times$1 convolution kernel $C_{i}$ and sigmoid activation function. The use of sigmoid activation functions ensures the element-wise value of $\alpha_{i}$ to be between 0 and 1. The updating step (Step 6) ensures the output to be a convex weighted average of current inputs $\{Y_{1}, \dots, Y_{i}\}$. 
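For concreteness, a minimal PyTorch sketch of Algorithm~\ref{alg1} is given below; the number of intermediate outputs, the channel width, and the spatial size are arbitrary illustrative choices rather than the settings used in our experiments, and each $C_{i}$ is an ordinary trainable $1\times1$ convolution.
\begin{verbatim}
# Sketch of the adaptive element-wise fusion of Algorithm 1 (illustrative only).
import torch
import torch.nn as nn

class AdaptiveFusion(nn.Module):
    """Progressively fuses N intermediate outputs Y_1..Y_N into a convex
    element-wise weighted average, using one 1x1 convolution C_i per step."""
    def __init__(self, n_outputs, channels=3):
        super().__init__()
        # C_i acts on the concatenation of the running fusion M and Y_{i+1}
        self.gates = nn.ModuleList(
            [nn.Conv2d(2 * channels, channels, kernel_size=1)
             for _ in range(n_outputs - 1)])

    def forward(self, ys):                       # ys = [Y_1, ..., Y_N]
        m = ys[0]                                # Step 1: initialize M = Y_1
        for gate, y_next in zip(self.gates, ys[1:]):
            alpha = torch.sigmoid(gate(torch.cat([m, y_next], dim=1)))
            m = alpha * m + (1.0 - alpha) * y_next   # convex element-wise update
        return m

# Usage with four dummy intermediate SR outputs of shape (batch, 3, H, W):
ys = [torch.randn(1, 3, 48, 48) for _ in range(4)]
fused = AdaptiveFusion(n_outputs=4)(ys)
print(fused.shape)                               # torch.Size([1, 3, 48, 48])
\end{verbatim}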
From Algorithm~\ref{alg1}, $\{W_{1}, W_{2}, \dots,W_{N}\}$ can be obtained as \begin{equation} \begin{split} W_{k}=\left\{ \begin{array}{lrc} \displaystyle \prod_{i=1}^{N-1}\alpha_{i}, & & {k = 1}; \\ \displaystyle \big (I-\alpha_{k-1} \big ) \big (\prod_{i=k}^{N-1}\alpha_{i} \big ), & & {2 \leq k < N}; \\ \displaystyle I-\alpha_{N-1}, & & {k=N}, \end{array} \right. \end{split} \label{eq:wk} \end{equation} where $\alpha_{N-1}$ contains the information of $\{Y_{1}, Y_{2}, \dots, Y_{N}\}$. As (\ref{eq:wk}) shows that every $W_{k}$ contains $\alpha_{N-1}$, the first trait in (\ref{eq:trait1}) is satisfied; with simple algebra, the second trait in~(\ref{eq:trait2}) is also verified, and hence the rationality of the proposed methods. \subsection{Loss Function for Training} During training, we minimize the $\ell_{1}$ loss $L_{1}(x,y) = |x-y|$ over the training set of $M$ samples. Let $X^{(i)}$ denote the $i$-th ground-truth HR label in the training set and $I_{out}^{(i)}$ denote the corresponding output of network. Then the loss function $l$ is \begin{equation} l(I_{out}, X)=\frac{1}{M}\sum_{i=1}^{M}L_{1}(I_{out}^{(i)},X^{(i)}). \end{equation} When we apply the adaptive element-wise fusion strategy, we use the multi-supervised methods mentioned in~\cite{kim2016deeply,Tai-MemNet-2017} to train our model. The loss function of multi-supervised LCSCNet can be formulated as \begin{equation} L(\Theta)=l(I_{out}, X) + \beta \sum_{d=1}^{N}l(Y_{d}, X), \label{con:n1} \end{equation} where $I_{out}$ and $\{Y_{1},\dots, Y_{N} \}$ are defined in (\ref{con:over_4}), and $\beta$ is a trade-off parameter. \section{Discussions}\label{s:s_4} In this section, we mainly discuss the motivation and characteristics of Basic LCSCNet by showing its connections to DenseNet and its differences from ResNet and DenseNet. \subsection{Basic LCSCNet as an Efficient Variant of DenseNet} \begin{figure*} \centering \subfloat[Original DenseBlock]{ \includegraphics[scale=0.55]{D_1.png}} \label{fig:D_1}\\ \subfloat[DenseBlock with adjacent skip connection, equivalent to (a)]{ \includegraphics[scale=0.38]{D_2.png}} \label{fig:D_2}\hfill \subfloat[DenseBlock with bottleneck (B-DenseBlock)]{ \includegraphics[scale=0.3]{D_3.png}} \label{fig:D_3}\\ \subfloat[Move forward every bottleneck layer]{ \includegraphics[scale=0.3]{D_4.png}} \label{fig:D_4}\hfill \subfloat[Equivalent form of (d), Basic LCSCNet]{ \includegraphics[scale=0.3]{D_5.png}} \label{fig:D_5}\hfill \caption{\small Sketch on how a DenseBlock can be simplified into a Basic LCSCNet. For a better understanding, the channel number of each feature is marked beside the feature, and the kernel size of each convolution kernel is marked beside the kernel in form of ``input\_chanenel $\times$ kernel\_width $\times$ kernel\_height $\times$ output\_channel''.} \label{fig:D} \end{figure*} In this sub-section, we illustrate that Basic LCSCNet can be transformed from DenseNet with small changes on topology: we first show the redundancy in DenseNet and then introduce Basic LCSCNet as a remedy for this redundancy. Skip connection in DenseNet is implemented by directly concatenating all former features to be the input of current layer. For illustration, a 4-layer DenseBlock is depicted in Fig.\ref{fig:D}(a), in which $Y_{0}$ is a $k$-channel input feature, $Y_{i}$ is the newly-explored feature after nonlinear mapping $f_{i}$ ($i=1,2,3$), where $f_{i}$ consists of a ReLU followed by a $3\times 3$ convolution kernel with $k_{0}$ output channels ($k_{0}$ is also called growth rate in DenseNet). 
The last nonlinear mapping $f_{4}$ acts as a transition layer; $C$ means a concatenation operator in the channel dimension. To have a better understanding of DenseBlock in Fig.\ref{fig:D}(a), we can simplify Fig.\ref{fig:D}(a) into its equivalent form in Fig.\ref{fig:D}(b), where $Y_{i}^{'}=concat(Y_{0},\dots,Y_{i})$ $(i=0,1,2,3)$. By denoting the concatenation of former features as $Y_{i}^{'}$, excessive skip connections in Fig.\ref{fig:D}(a) are simplified into concise adjacent skip connections. For simplicity, unless otherwise specified, we take the DenseBlock in the form of Fig.\ref{fig:D}(b) as the basic DenseBlock structure. As shown in Fig.\ref{fig:D}(a) and Fig.\ref{fig:D}(b), when depth increases, the number of parameters of the convolution kernel in DenseNet also increases. To reduce the parameter amount, the authors of DenseNet applied the bottleneck layer\footnote{Many works add nonlinear activation before a $1\times 1$ convolution kernel to make the bottleneck layer; here we take the $1\times 1$ convolution kernel as the bottleneck layer.} before every nonlinear mapping and called it B-DenseNet. Fig.\ref{fig:D}(c) is the bottleneck version of Fig.\ref{fig:D}(b), where $B_{i}$ ($i=1,2,3$) is the $1\times 1$ convolution kernel with $b$ output channels. We can see that the parameter amount of every convolution kernel $f_{i}$ for nonlinear mapping is a constant now, and only the parameter amount of bottleneck layer with fewer parameters increases with depth. Although B-DenseBlock has reduced the parameter amount to a great extent, the parameter amount of each basic unit in B-DenseBlock still increases with depth. To further control the parameter amount, we make the parameter amount of each unit in B-DenseBlock a constant. One simple but effective solution is to move forward the bottleneck layer in each unit, reducing the number of channels of input feature to $b$ by the bottleneck layer before they are sent to the concatenation part, as shown in Fig.\ref{fig:D}(d). We can set $k=b+k_{0}$ to make channels of each feature unchanged. In this case, if we re-depict Fig.\ref{fig:D}(d) by allocating the bottleneck layer to each branch and using a nonlinear mapping $f_{i}^{'}$ to replace $B_{i} \circ f_{i}$, the structure of Basic LCSCNet reemerges, as shown in Fig.\ref{fig:D}(e). From the analysis above we can see that the $N$-layer Basic LCSCNet with an extra transition layer and the $(N+1)$-layer B-DenseNet share a strong relationship. This transition layer can be replaced by subsequent nonlinear operators and omitted. If it is replaced by a compressing layer located at the end of DenseBlock, it becomes BC-DenseNet. Since this compressing layer is to compress features generated by each block, when we simplify B-DenseUnit into LCSCUnit, the output channel of each LCSCUnit is already a constant, it is unnecessary to compress features again. From this perspective, BC-DenseNet can also be transferred to Basic LCSCNet in a similar way. Now look back into Fig.\ref{fig:F4}: the nonlinear output channel is just the growth rate in DenseNet, denoting how many new features are explored, and $1-\rho=\frac{n_{1}}{n_{1}+n_{2}}$ acts as some kind ``compress ratio'' denoting how many former features have flowed to the current stage through skip connections. \subsection{Differences from ResNet and DenseNet} This sub-section aims to illustrate the differences between Basic LCSCNet and ResNet/DenseNet as well as the novelty of our proposed network. 
It is still an open problem to compare different deep architectures. When different ways of skip connections are employed to alleviate training difficulties, the output features explored by nonlinear mapping with different skip connections have different constitutions. We suppose that by comparing different constitutions of the feature maps, we could get some useful information about the properties of different skip-connection architectures. \subsubsection{Feature maps of ResNet} We use the structure in~\cite{he2016identity}. Let $Y_{k}$ and $Y_{k+1}$ denote the input and output of block $k$, respectively, and let $f_{k}^{R}(\cdot)$ denote the nonlinear transformation in block $k$. Then the mathematical formulation of block $k$ is \begin{equation} Y_{k+1}=Y_{k} + f_{k}^{R}(Y_{k}) =Y_{j} + \sum_{i=j}^{k}f_{i}^{R}(Y_{i}),\ 1 \leq j \leq k. \label{con:d2} \end{equation} From (\ref{con:d2}), we can see that in ResNet, skip connection is implemented by element-wise summation between adjacent features. Compared with traditional plain architecture, any former maps $Y_{j} \ (j=1,\ldots,k)$ can be added to the current state $Y_{k+1}$, creating many short paths for more ``smooth'' gradient flow during back-propagation. Moreover, it is extremely concise because no extra parameter is required for this skip connection. \subsubsection{Feature maps of DenseNet} We employ the structure shown in Fig.\ref{fig:D}(b) to illustrate the properties of feature maps in DenseNet. Let $Y_{k}^{'}$ and $Y_{k+1}^{'}$ denote the input and output of unit $k$ in a DenseBlock, respectively, and $f_{k}^{D}(\cdot)$ the nonlinear transformation. Then the formulation of unit $k$ is \begin{equation} \begin{split} Y_{k+1}^{'} = concat(Y_{k}^{'}, f_{k}^{D}(Y_{k}^{'})), \end{split} \label{con:dense_1} \end{equation} and it follows that \begin{equation} \begin{split} Y_{k+1}^{'} = concat(Y_{j}^{'},f_{j}^{D}(Y_{j}^{'}),\dots,f_{k}^{D}(Y_{k}^{'})) ,~1 \leq j \leq k. \end{split} \label{con:dense_2} \end{equation} Like ResNet, all the former features in DenseNet can be fused into the current stage, but instead of summation, all the feature maps are concatenated in the channel dimension. Such a skip connection has both advantages and disadvantages compared with ResNet. One obvious advantage is that when features produced in DenseNet are sent to follow-up convolution kernels to explore new features, the features from different stages use different convolution kernels, while in ResNet the reused parts and newly-explored ones share the same convolution kernel. From this perspective, connecting features by element-wise summation may restrict a network from reaching better solutions in some cases. As for disadvantage, concatenating features need more following convolution kernels. As shown in Fig.\ref{fig:D}(a) and Fig.\ref{fig:D}(b), the parameter amount of DenseUnit increases with depth. When a DenseNet is very deep, even a small growth rate may lead to a large parameter amount. \begin{table*}[htbp] \centering \caption{\small Quantitative comparisons on $\times 3$ SISR among the ResNet, B-DenseNet, BC-DenseNet and Basic LCSCNet of the same depth. {\color{blue}Blue} indicates the least parameters. 
{\color{red}Red} indicates the best quantitative performance.} \label{chart:RDL} \begin{tabular}{c|ccccc} \hline \thead{\textbf{Model}} & \thead{\textbf{Parameters}} & \thead{\textbf{Set5}} & \thead{\textbf{Set14}} & \thead{\textbf{BSD100}} & \thead{\textbf{Urban100}}\\ \hline \thead{ResNet} & \thead{118.1K} & \thead{33.90/0.9233} & \thead{29.84/0.8328} & \thead{28.85/0.7987} & \thead{27.12/0.8303}\\ \hline \thead{B-DenseNet} & \thead{219.8K} & \thead{33.98/0.9241} & \thead{29.87/{\color{red}0.8338}} & \thead{28.87/\color{red}0.7997} & \thead{\color{red}27.25/0.8326}\\ \hline \thead{BC-Dense\_B3\_U10} & \thead{102.7K} & \thead{33.90/0.9234} & \thead{29.90/0.8336} & \thead{{\color{red}28.88}/0.7991} & \thead{27.22/0.8310}\\ \hline \thead{BC-Dense\_B5\_U6} & \thead{90.4K} & \thead{33.92/0.9234} & \thead{{\color{red}29.90}/0.8334} & \thead{28.87/0.7990} & \thead{27.21/0.8307}\\ \hline \thead{Basic LCSCNet} & \thead{\color{blue}68.9K} & \thead{\color{red}33.99/0.9241} & \thead{29.87/0.8337} & \thead{28.87/0.7994} & \thead{27.24/0.8324}\\ \hline \end{tabular} \end{table*} \subsubsection{Feature maps of Basic LCSCNet} From the analysis above, we can conclude that the feature re-usage of ResNet benefits from its concise skip connection between adjacent basic blocks and the new feature exploration of DenseNet mainly benefits from its little relevance between newly-explored feature maps and former ones. We have already seen that in Basic LCSCNet, former features are firstly compressed and then concatenated with the newly-explored features. Now we examine how the former features are combined in the current stage. Let $Y_{k}$ and $Y_{k+1}$ denote the input and output of the $k$-th LCSCUnit, and $K_{k}^{L}$ and $K_{k}^{NL}$ its convolution kernels. From Fig.\ref{fig:F4} and (\ref{eq:lcscunit}), we can derive the formulation of $1 \times 1$ convolution in the LC layer as \begin{align} &Y_{k+1}^{L}(c_{o}) \nonumber \\ &=\sum_{c_{i}=1}^{n}K_{k}^{L}(c_{o},c_{i})Y_{k}(c_{i}) \nonumber \\ &=\sum_{c_{i}=1}^{n_{1}}K_{k}^{L}(c_{o},c_{i})Y_{k}^{L}(c_{i})+ \sum_{c_{i}=n_{1}+1}^{n}K_{k}^{L}(c_{o},c_{i})Y_{k}^{NL}(c_{i}-n_{1}) \nonumber \\ &=\sum_{c_{i}=1}^{n_{1}}K_{k}^{L,L}(c_{o},c_{i})Y_{k}^{L}(c_{i})+ \sum_{c_{i}=1}^{n_{2}}K_{k}^{L,NL}(c_{o},c_{i})Y_{k}^{NL}(c_{i}), \label{con:d4} \end{align} where $c_{i}$ denotes the input channel and $c_{o}$ the output channel. For simplicity, (\ref{con:d4}) can be rewritten as \begin{equation} \begin{split} Y_{k+1}^{L}=K_{k}^{L,L}Y_{k}^{L}+K_{k}^{L,NL}Y_{k}^{NL}. \end{split} \label{con:d5} \end{equation} Applying the same approach to the convolution kernel $K_{k}^{NL}$ in nonlinear transformation, we have \begin{equation} \begin{split} Y_{k+1}^{NL}=K_{k}^{NL,L}ReLU(Y_{k}^{L})+K_{k}^{NL,NL}ReLU(Y_{k}^{NL}), \end{split} \label{con:d6} \end{equation} where $K_{k}^{NL,L}$ is the part of $K_{k}^{NL}$ only operating on $Y_{k}^{L}$ and $K_{k}^{NL,NL}$ only on $Y_{k}^{NL}$. A `global' form of (\ref{con:d5}) is \begin{gather} Y_{k+1}^{L}=P_{k+1}^{L}+P_{k+1}^{NL},\\ P_{k+1}^{L}=(\prod_{i=1}^{k}K_{i}^{L,L})Y_{1}^{L},\\ P_{k+1}^{NL}=\sum_{i=1}^{k-1}(K_{i}^{L,NL}\prod_{j=i+1}^{k}K_{j}^{L,L})Y_{i}^{NL}+K_{k}^{L,NL}Y_{k}^{NL}. \label{con:d7} \end{gather} From (\ref{con:d5}), we can see that $Y_{k+1}^{L}$ restores the information of all former feature maps in the form of weighted summation. From (\ref{con:d6}), we can see that $Y_{k+1}^{NL}$ is the new features explored by new nonlinear transformation. 
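The decomposition in (\ref{con:d5}) is nothing but the block structure of the $1\times1$ LC kernel with respect to the two groups of input channels; the following small numerical check (with arbitrarily chosen channel sizes, and with the $1\times1$ convolution written as a matrix product, as assumed throughout this section) verifies it.
\begin{verbatim}
# Numerical check of Eq. (con:d5): a 1x1 LC convolution acting on the
# concatenation of Y^L and Y^NL equals K^{L,L} Y^L + K^{L,NL} Y^NL.
import numpy as np

n1, n2, n_pixels = 16, 48, 100          # linear / nonlinear channels, flattened pixels
rng = np.random.default_rng(0)

Y_L = rng.standard_normal((n1, n_pixels))     # former (linearly carried) features
Y_NL = rng.standard_normal((n2, n_pixels))    # newly explored features
K_L = rng.standard_normal((n1, n1 + n2))      # full 1x1 LC kernel

left = K_L @ np.concatenate([Y_L, Y_NL], axis=0)       # K^L applied to concat(Y^L, Y^NL)
right = K_L[:, :n1] @ Y_L + K_L[:, n1:] @ Y_NL         # K^{L,L} Y^L + K^{L,NL} Y^NL
print(np.allclose(left, right))                        # True
\end{verbatim}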
Among the deep features produced by deep architectures, newly-explored parts are thought to be more important. In Basic LCSCNet, we concatenate newly-explored features with the former ones like DenseNet, ensuring that features of different kinds can be treated differently. Meanwhile, as former features in the current stage mainly serve to create paths for training deep networks, instead of concatenating each former feature separately, we compress all the former features and then concatenate them with the newly-explored ones, making it quite parameter-economic like ResNet. \section{Ablation Studies}\label{s:s_5} \subsection{Comparison with ResNet and DenseNet}\label{s:s_5::A} In this sub-section, we replace LCSCUnit in our basic LCSCNet by ResBlock or DenseUnit with the bottleneck layer. The three networks for comparison are all 34-layer, where Basic LCSCNet and DenseNet both have 30 units while ResNet has 15 blocks. As discussed before, the growth rate in DenseNet plays a similar role to the channel number of the nonlinear output. To compare fairly, if we set all output feature channels to 64 and $\rho$ of every LCSCUnit to 0.5, then the growth rate of DenseNet is 32 and the output channel of the bottleneck is 64. As for BC-DenseNet, for example, BC-Dense\_B3\_U10 means dividing the network into 3 blocks uniformly and adding a compressing layer, whose output channel is 64, at the end of each block. Here we train the above three models with the 291 dataset for $\times 3$ scale and the results are shown in Table~\ref{chart:RDL}. We use PSNR/SSIM~\cite{wang2004image} to measure reconstruction, and parameter amounts to measure storage efficiency. We can see that Basic LCSCNet has the fewest parameters and performance competitive with DenseNet, with both outperforming ResNet. \subsection{Efficiency Brought by the LC layer} \label{s:s_5::B} \begin{table}[htbp] \centering \setlength{\tabcolsep}{1mm}{ \caption{\small Quantitative comparisons on $\times 3$ SISR between the original Basic LCSCNet and the Basic LCSCNet with $3 \times 3$ LC layers. {\color{red}Red} indicates the best quantitative performance.} \label{chart:lc_layer} \begin{tabular}{c|cccc} \hline \thead{\textbf{Model}} & \thead{\textbf{Set5}} & \thead{\textbf{Set14}} & \thead{\textbf{BSD100}} & \thead{\textbf{Urban100}}\\ \hline \thead{Basic LCSCNet} & \thead{\color{red}33.99/0.9241} & \thead{29.87/\color{red}0.8337} & \thead{\color{red}28.87/0.7994} & \thead{\color{red}27.24/0.8324}\\ \hline \thead{Basic LCSCNet \\ of $3 \times 3$ LC} & \thead{33.94/0.9238} & \thead{{\color{red}29.88}/0.8334} & \thead{28.87/0.7989} & \thead{27.17/0.8320} \\ \hline \end{tabular} } \end{table} Here we discuss the rationale behind implementing the LC layer with $1 \times 1$ convolution and its advantage in parameter efficiency. It is known that increasing receptive fields is essential for exploring deeper features. From Section~\ref{s:s_4} we can see that the LC layer helps transport the previous features and does not produce newly-explored features directly. Hence, we do not need to use $3 \times 3$ convolution to increase receptive fields in the LC layer and $1 \times 1$ convolution is sufficient. To support this view, we apply $3 \times 3$ convolution to the LC layer of the Basic LCSCNet mentioned in Section~\ref{s:s_5::A}. From Table~\ref{chart:lc_layer}, we can see that the LC layer with $3 \times 3$ convolution indeed does not achieve better performance.
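To relate Eq.~(\ref{eq:lcscunit}) to an implementation, a PyTorch sketch of a single LCSCUnit is given below; the channel numbers are illustrative, biases are kept, and the single-convolution residual unit is included only to mirror the per-unit parameter counting used next, rather than being the exact ResBlock of Section~\ref{s:s_5::A}.
\begin{verbatim}
# Sketch (PyTorch) of one LCSCUnit as in Fig. F4 / Eq. (2); illustrative sizes only.
import torch
import torch.nn as nn

class LCSCUnit(nn.Module):
    def __init__(self, channels=64, rho=0.5):
        super().__init__()
        n2 = int(round(rho * channels))      # nonlinear (newly explored) channels
        n1 = channels - n2                   # linear (re-used) channels
        self.lc = nn.Conv2d(channels, n1, kernel_size=1)   # 1x1 LC layer, K^L
        self.nonlinear = nn.Sequential(                    # ReLU + 3x3 conv, K^NL
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, n2, kernel_size=3, padding=1))

    def forward(self, y):
        # Eq. (2): concatenate the linearly carried and the newly explored features
        return torch.cat([self.lc(y), self.nonlinear(y)], dim=1)

class ResUnit(nn.Module):
    """Width-preserving residual unit with a single 3x3 conv, for a rough count."""
    def __init__(self, channels=64):
        super().__init__()
        self.body = nn.Sequential(nn.ReLU(inplace=True),
                                  nn.Conv2d(channels, channels, kernel_size=3, padding=1))

    def forward(self, y):
        return y + self.body(y)

count = lambda m: sum(p.numel() for p in m.parameters())
print("LCSCUnit params:", count(LCSCUnit()))   # roughly 0.56x the ResUnit count (rho=0.5, k=3)
print("ResUnit  params:", count(ResUnit()))
\end{verbatim}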
The use of $1 \times 1$ convolution as the LC layer also makes the proposed architecture more parameter-economic compared with ResNet and DenseNet. First, we compare Basic LCSCNet with ResNet. A basic unit in ResNet with $n_1$ input channels, $n_2$ output channels and a $k\times k$ nonlinear transformation convolution kernel has $n_1 n_2 k^2$ parameters. The number of parameters of a basic unit in Basic LCSCNet, with $n_1$ input channels, $n_2$ output channels, a $k\times k$ nonlinear transformation convolution kernel and parameter $\rho_0$, is $n_1n_2(k^2\rho_0+1-\rho_0)$. The ratio of parameter amounts of these two units with the same $n_1$, $n_2$ and $k$ is \begin{equation} p_{L/R}(n_1,n_2,k)=\rho_0+\frac{1}{k^2}(1-\rho_0). \label{con:L/R} \end{equation} As illustrated before, good performance can be obtained when $\rho_0$ is around 0.5. In practice, the size of a convolution kernel for feature extraction is usually an odd number no smaller than 3. So when $\rho_0$ is 0.5, $p_{L/R}(n_1,n_2,k)<55.7\%$, which means the parameter amount of Basic LCSCNet is only about half of ResNet's. As for DenseNet, the parameter amount of a basic unit increases with depth. We take B-DenseNet as an example: if the nonlinear mappings in the LCSCUnit and the DenseUnit both have $n_{2}$ output channels, the $1\times 1$ compressing layers both have $n_{1}$ output channels, and the nonlinear kernel size is $k\times k$, then the parameter amount of LCSCUnit is always $(n_{1}+n_{2})(k^{2}n_{2}+n_{1})$, while the parameter amount of the $p$-th DenseUnit is $(pn_{2}n_{1}+k^{2}n_{1}n_{2})$. If such Basic LCSCNet and DenseNet both have $L$ nonlinear mapping layers, the ratio of parameter amounts of the two networks with the same $n_{1}$, $n_{2}$ and $k$ is \begin{equation} \begin{split} p_{L/D}(L;n_1,n_2,k)=\frac{2}{2k^{2}+L+1}(\frac{1}{\rho_0}+k^{2}\frac{1}{1-\rho_0}). \end{split} \label{con:L/D} \end{equation} From (\ref{con:L/D}), we can see that the advantage of Basic LCSCNet is more remarkable when the network goes deeper. When we compare Basic LCSCNet with an $L$-layer BC-DenseNet of $N$ blocks, if the transition layer is omitted for simplicity, the ratio can be obtained by replacing $L$ with $\frac{L}{N}$ in (\ref{con:L/D}). \subsection{Investigation into Parameter $\rho$}\label{s:s_5::C} \begin{table*}[htbp] \centering \caption{\small Average $\times3$ PSNR/SSIM for Basic LCSCNets with different $\rho$ on the Set5, Set14, BSD100 and Urban100 datasets, respectively.
{\color{red} Red} color indicates the best performance.} \label{chart:C_1} \begin{tabular}{c|ccccccc} \hline \thead{\boldsymbol{$\rho$}} & \thead{\textbf{0}} & \thead{\textbf{0.25}} & \thead{\textbf{0.375}} & \thead{\textbf{0.5}} & \thead{\textbf{0.625}} & \thead{\textbf{0.75}} & \thead{\textbf{1}} \\ \hline \thead{Set5} & \thead{32.66/0.9103} & \thead{33.86/0.9229} & \thead{33.97/{\color{red} 0.9242}} & \thead{{\color{red} 33.99}/0.9241} & \thead{33.94/0.9241} & \thead{33.92/0.9237} & \thead{31.78/0.8941}\\ \hline \thead{Set14} & \thead{29.27/0.8208} & \thead{29.82/0.8330} & \thead{29.90/0.8337} & \thead{29.87/0.837} & \thead {\color{red} {29.93/0.8340}} &\thead{29.85/0.8333} & \thead{28.57/0.8012}\\ \hline \thead{BSD100} & \thead{28.41/0.7858} & \thead{28.85/0.7984} & \thead{\color{red}{28.88/0.7997}} & \thead{28.87/0.7994} & \thead{28.87/0.7994} & \thead{28.85/0.7990} & \thead{27.92/0.7648}\\ \hline \thead{Urban100} & \thead{26.21/0.8011} & \thead{27.16/0.8296} & \thead{\color{red}{27.25/0.8329}} & \thead{27.24/0.8324} & \thead{27.24/0.8321} & \thead{27.20/0.8312} & \thead{25.50/0.7761}\\ \hline \end{tabular} \end{table*} \subsubsection{Fixed $\rho$ throughout the network} In this situation, we find when $\rho$ is around 0.5, the best performance could be achieved. Table~\ref{chart:C_1} shows 34-layer Basic LCSCNets for $\times$3 scale with different fixed $\rho$. As here we mainly focus on the effect of $\rho$, the experiments are conducted without adaptive element-wise fusion. Firstly, we consider two special cases of $\rho$. When $\rho$ is 0, the feature exploration part is a linear transformation; if the upsampling and reconstruction part is taken into account, the whole network has just two nonlinear convolution layers, whose fitting capacity for complex functions is relatively poor. In contrast, when $\rho$ is 1, Basic LCSCNet becomes the traditional feedforward neural network without skip connections, which is difficult to train. Hence $\rho$ balances the fitting capacity and the training ease of Basic LCSCNet. As Table~\ref{chart:C_1} shows, when $\rho=0.25$, the performance is suboptimal because of the restricted fitting capacity; when $\rho=0.75$, the performance is suboptimal mainly because the LC layers output fewer feature maps. As we discussed before, the output feature maps of LC layers restore the information of former features, insufficiency of which leads to insufficient skip connections and thus training difficulty increase and performance decline. \begin{table}[htbp] \centering \setlength{\tabcolsep}{1mm}{ \caption{\small The effect of ordinal position of block with different $\rho$ on average $\times3$ PSNR/SSIM for the Set5, Set14, BSD100 and Urban100 datasets. Each block has the same number of LCSCUnits. 
} \label{chart:C_2} \begin{tabular}{c|cc|cc} \hline \thead{\boldsymbol{$\rho$} \textbf{list}} & \thead{\textbf{[0.5,0.75]}} & \thead{\textbf{[0.75,0.5]}} & \thead{\textbf{[0.5,0.625,0.75]}} & \thead{\textbf{[0.75,0.625,0.5]}} \\ \hline \thead{Set5} & \thead{33.97/0.9240} & \thead{34.02/0.9244} & \thead{33.89/0.9234} & \thead{33.95/0.9239} \\ \hline \thead{Set14} & \thead{29.91/0.8341} & \thead{29.91/0.8343} & \thead{29.86/0.8332} & \thead{29.86/0.8336} \\ \hline \thead{BSD100} & \thead{28.88/0.7998} & \thead{28.89/0.8001} & \thead{28.86/0.7993} & \thead{28.88/0.7994} \\ \hline \thead{Urban100} & \thead{27.28/0.8336} & \thead{27.31/0.8343} & \thead{27.20/0.8317} & \thead{27.25/0.8323} \\ \hline \end{tabular}} \end{table} \subsubsection{Different $\rho$ throughout the network} In this situation, the LCSCUnits with the same $\rho$ form an LCSCBlock and different LCSCBlocks have different $\rho$s. We find that as depth increases, the $\rho$ of an LCSCBlock should be decreased slightly to improve performance. Table~\ref{chart:C_2} shows the relevant experimental results on the 34-layer Basic LCSCNets with different ordinal positions of $\rho$ list for $\times$3 scale. One possible reason for this phenomenon is that as depth increases, exploring higher-level features becomes harder, so there is less room for newly-explored features. Meanwhile, as information on former feature maps accumulates, more room is needed for reusing former features. \subsection{Ablation Studies on Different Fusion Strategies} \label{s:s_5::D} Table~\ref{chart:fusion_trait} compares properties of the vectorized fusion~\cite{kim2016deeply}, MSCN~\cite{liu2016learning} and our proposed fusion method. We can see that our method incorporates the advantages of the vectorized fusion and MSCN. We also compare these fusion strategies quantitatively. We train 34-layer LCSCNets for $\times$2 scale with $\rho$ list [0.75, 0.6875, 0.625, 0.5625, 0.5], and every six LCSCUnits with the same $\rho$ form a LCSCBlock. In Table~\ref{chart:C_3}, Basic LCSCNet, LCSCNet\_S, LCSCNet\_M and LCSCNet denote the LCSCNet without any fusion, with vectorized fusion, with MSCN and with our proposed method, respectively. Here we note that our implementation of MSCN is slightly different from the original one. In the original MSCN, the input of the weight module is the bicubic of LR, while in our LCSCNet we use the upsampled LR input. This small difference should have little influence on final results. As Table~\ref{chart:C_3} shows, when combined with Basic LCSCNet, our fusion strategy performs better than the other two fusion benchmarks. 
\begin{table}[htbp] \centering \setlength{\tabcolsep}{1mm}{ \caption{\small Brief comparisons among different fusion strategies.} \label{chart:fusion_trait} \begin{tabular}{c|c|c|c} \hline \thead{} & \thead{\footnotesize{Vectorized~fusion\cite{kim2016deeply}}} & \thead{\footnotesize{MSCN\cite{liu2016learning}}} & \thead{\footnotesize{Our~method}} \\ \hline \thead{\footnotesize Adaptiveness} & \thead{$\times$} & \thead{$\surd$} & \thead{$\surd$} \\ \hline \thead{\footnotesize Pixel-wise} & \thead{$\times$} & \thead{$\surd$} & \thead{$\surd$} \\ \hline \thead{\footnotesize Normalization} & \thead{$\surd$} & \thead{$\times$} & \thead{$\surd$} \\ \hline \thead{\footnotesize Multi-supervised \\ training} & \thead{$\surd$} & \thead{$\times$} & \thead{$\surd$} \\ \hline \end{tabular} } \end{table} \begin{table}[htbp] \centering \setlength{\tabcolsep}{1mm}{ \caption{\small Average $\times2$ PSNR/SSIM for LCSCNets with different fusions on the Set5, Set14, BSD100 and Urban100 datasets, respectively. {\color{red} {Red}} indicates the best results. } \label{chart:C_3} \begin{tabular}{c|cccc} \hline \thead{} & \thead{\textbf{Basic LCSCNet}} & \thead{\textbf{LCSCNet\_S}} & \thead{\textbf{LCSCNet\_M}} & \thead{\textbf{LCSCNet}} \\ \hline \thead{Set5} & \thead{37.77/0.0.9558} & \thead{37.80/\color{red} 0.9560} & \thead{37.79/0.9559} & \thead{{\color{red} 37.84}/0.9559} \\ \hline \thead{Set14} & \thead{33.23/0.9140} & \thead{33.26/0.9144} & \thead{33.25/0.9142} & \thead{\color{red} 33.31/0.9144} \\ \hline \thead{BSD100} & \thead{32.06/0.8980} & \thead{32.05/0.8981} & \thead{32.07/0.8981} & \thead{\color{red} 32.08/0.8984} \\ \hline \thead{Urban100} & \thead{31.15/0.9182} & \thead{31.26/0.9197} & \thead{31.23/0.9190} & \thead{\color{red} 31.31/0.9200} \\ \hline \end{tabular}} \end{table} \subsection{Ablation Studies on LCSCBlock and E-LCSCBlock} \label{s:s_5::E} \begin{figure}[htbp] \centering \includegraphics[scale=0.5]{./compare.png} \caption{\small Convergence comparison between deep wide (Basic) LCSCNet and (Basic) E-LCSCNet on the DIV2K validation set for scale $\times 2$.} \label{fig:elcsc} \end{figure} Firstly, we show that when we use LCSCUnits to construct deep models of moderate scales, the advantage of E-LCSCBlock is mild. With the 291 dataset, we train 34-layer, 44-layer and 54-layer Basic LCSCNets for $\times 3$ scale, $\rho$ of each unit is 0.5, and the number of feature channels is 64. For comparison, we use 10 LCSCUnits to constitute an E-LCSCBlock and train 37-layer, 48-layer and 59-layer Basic E-LCSCNets, respectively. As Table~\ref{chart:EandL} shows, every Basic E-LCSCNet performs better than its corresponding Basic LCSCNet. \begin{table*} \centering \caption{\small Average $\times 3$ PSNR/SSIM for Basic LCSCNet and its corresponding Basic E-LCSCNet on Set5, Set14, BSD100 and Urban100. 
All the models are of moderate scale (parameter amount $<$ 150K).} \label{chart:EandL} \begin{tabular}{c|cc|cc|cc} \hline \thead{} & \thead{LC\_34} & \thead{E-LC\_37} & \thead{LC\_44} & \thead{E-LC\_48} & \thead{LC\_54} & \thead{E-LC\_59} \\ \hline \thead{Set5} & \thead{33.99/0.9241} & \thead{34.01/0.9248} & \thead{34.02/0.9244} & \thead{34.05/0.9251} & \thead{34.03/0.9244} & \thead{34.08/0.9248} \\ \hline \thead{Set14} & \thead{29.87/0.8337} & \thead{29.92/0.8349} & \thead{29.85/0.8334} & \thead{29.90/0.8345} & \thead{29.88/0.8340} & \thead{29.89/0.8339} \\ \hline \thead{BSD100} & \thead{28.87/0.7994} & \thead{28.89/0.8002} & \thead{28.87/0.7996} & \thead{28.90/0.8004} & \thead{28.89/0.7998} & \thead{28.89/0.7998} \\ \hline \thead{Urban100} & \thead{27.24/0.8324} & \thead{27.28/0.8340} & \thead{27.23/0.8326} & \thead{27.29/0.8343} & \thead{27.27/0.8330} & \thead{27.32/0.8347} \\ \hline \end{tabular} \end{table*} Then we show that when we aim to develop an extremely deep and wide network, E-LCSCBlock can make up for the deficiencies of LCSCBlock. With the DIV2K dataset, we train a Basic LCSCNet for the $\times 2$ scale with $\rho$ list [0.75, 0.71875, 0.6875, 0.65625, 0.625, 0.59375, 0.5625, 0.53125, 0.5], where every sixteen LCSCUnits with the same $\rho$ form an LCSCBlock and each feature map has 128 channels. Its convergence curve is the blue one in Fig.\ref{fig:elcsc}, which indicates rather poor performance. For comparison, we train the LCSCNet with the same setting, and its performance (the green curve in Fig.\ref{fig:elcsc}) is significantly better than that of the Basic LCSCNet. We attribute this improvement to the extra short paths created by the adaptive fusion strategy, which suggests that more short paths may help further in this case. The experimental results in Fig.\ref{fig:elcsc} support this view: when we evolve (Basic) LCSCNet into (Basic) E-LCSCNet, the performance of the deep architecture improves dramatically. \section{Experimental Results}\label{s:s_6} \subsection{Comparison with State-of-the-Art Models} It is well known that the training set and the parameter amount largely influence the final performance of a model. To compare fairly with various representative models, we divide them into three categories: models trained on the 291 dataset~\cite{yang2010image,martin2001database}, light models (Params $<$ 2M) trained on the DIV2K dataset~\cite{Agustsson_2017_CVPR_Workshops}, and large models (Params $>$ 10M) trained on DIV2K. When comparing with models trained on the 291 dataset, such as VDSR~\cite{kim2016accurate}, DRCN~\cite{kim2016deeply}, LapSRN~\cite{LapSRN}, DRRN~\cite{tai2017image} and MemNet~\cite{Tai-MemNet-2017}, we train a 76-layer LCSCNet with the proposed fusion strategy, denoted by LCSC\_76\_291; its $\rho$ list is also [0.75, 0.71875, 0.6875, 0.65625, 0.625, 0.59375, 0.5625, 0.53125, 0.5], but every eight units with the same $\rho$ form a block. When comparing with light models trained on DIV2K and similarly large datasets, such as SelNet~\cite{choi2017deep}, SRDenseNet~\cite{tong2017image}, CARN~\cite{ahn2018fast} and FALSR-A~\cite{chu2019fast}, our light models are built on the Basic E-LCSCNet because the fusion part is computationally expensive. These light models, denoted by BE-LCSC\_L, share the same $\rho$ list with LCSC\_76\_291, but every six units with the same $\rho$ form a block. When comparing with large models trained on DIV2K, such as EDSR~\cite{lim2017enhanced} and RDN~\cite{zhang2018residual}, the E-LCSCNet mentioned in Section~\ref{s:s_5::E} is adopted.
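As an illustration of how these $\rho$ lists and block sizes translate into per-unit settings, the short Python sketch below expands a block-level $\rho$ list into one $\rho$ value per LCSCUnit; the helper function is purely illustrative and not part of the released code.

\begin{verbatim}
# Expand a block-level rho list into a per-LCSCUnit schedule:
# every `units_per_block` consecutive units share one rho value.
def expand_rho_schedule(rho_list, units_per_block):
    return [rho for rho in rho_list for _ in range(units_per_block)]

rho_list = [0.75, 0.71875, 0.6875, 0.65625, 0.625,
            0.59375, 0.5625, 0.53125, 0.5]

# LCSC_76_291: 9 blocks x 8 units = 72 LCSCUnits
per_unit_rho_291 = expand_rho_schedule(rho_list, units_per_block=8)
assert len(per_unit_rho_291) == 72

# BE-LCSC_L: same rho list, 9 blocks x 6 units = 54 LCSCUnits
per_unit_rho_light = expand_rho_schedule(rho_list, units_per_block=6)
assert len(per_unit_rho_light) == 54
\end{verbatim}

The monotonically decreasing schedule reflects the earlier observation that deeper blocks benefit from slightly smaller $\rho$.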
\begin{table*}[htbp] \centering \footnotesize \caption{\footnotesize Quantitative comparisons among mainstream deep models for SISR. To compare fairly, we divide models into three categories: models trained on 291, light models (Params $<$ 2M) trained on DIV2K, and large models trained on DIV2K. For each scale, we compare the models within the same category, and the best performance is highlighted in {\color{red} Red}. For DRCN, MemNet, LCSC\_76\_291 and E-LCSCNet, the extra Mult\&Adds of the multi-supervised fusion part are listed after the Mult\&Adds of the basic structure.} \label{chart:big} \begin{tabular}{c|cccccccc} \hline \thead{\textbf{Scale}} & \thead{\textbf{Model}} & \thead{\textbf{Training data}} & \thead{\textbf{Params}} & \thead{\textbf{Mult\&Adds}} & \thead{\textbf{Set5}} & \thead{\textbf{Set14}} & \thead{\textbf{BSD100}} & \thead{\textbf{Urban100}} \\ \hline \multirow{12}{*}{\thead{$\times 2$}} & \thead{\thead{VDSR} \\ \thead{DRCN} \\ \thead{LapSRN} \\ \thead{DRRN} \\ \thead{MemNet} \\ \thead{LCSC\_76\_291}} & \thead{\thead{291} \\ \thead{291} \\ \thead{291} \\ \thead{291} \\ \thead{291} \\ \thead{291}} & \thead{\thead{665K} \\ \thead{1774K} \\ \thead{813K} \\ \thead{297K} \\ \thead{667K}\\ \thead{1844K}} & \thead{\thead{612.6G} \\ \thead{9243.0G+8731.3G} \\ \thead{29.9G} \\ \thead{6796.9G} \\ \thead{2261.8G+3.2G} \\ \thead{407.8G+616.3G}} & \thead{\thead{37.53/0.9587} \\ \thead{37.63/0.9588} \\ \thead{37.52/0.9591} \\ \thead{37.74/0.9591} \\ \thead{37.78/0.9597} \\ \thead{\color{red}37.86/0.9600}} & \thead{\thead{33.03/0.9124} \\ \thead{33.04/0.9118} \\ \thead{33.08/0.9130} \\ \thead{33.23/0.9136} \\ \thead{33.28/0.9142} \\ \thead{\color{red}33.34/0.9146}} & \thead{\thead{31.90/0.8960} \\ \thead{31.85/0.8942} \\ \thead{31.80/0.8950} \\ \thead{32.05/0.8973} \\ \thead{32.08/0.8978} \\ \thead{\color{red}32.10/0.8985}} & \thead{\thead{30.76/0.9140} \\ \thead{30.75/0.9133} \\ \thead{30.41/0.9101} \\ \thead{31.23/0.9188} \\ \thead{31.31/0.9195} \\ \thead{\color{red}31.34/0.9204}} \\ \cline{2-9} & \thead{\thead{SelNet} \\ \thead{CARN} \\ \thead{FALSR-A} \\ \thead{BE-LCSC\_L}} & \thead{\thead{DIV2K} \\ \thead{DIV2K} \\ \thead{DIV2K} \\ \thead{DIV2K}} & \thead{\thead{974K} \\ \thead{1592K} \\ \thead{1021K} \\ \thead{1552K}} & \thead{\thead{225.7G} \\ \thead{222.8G} \\ \thead{234.7G} \\ \thead{358.6G}} & \thead{\thead{37.89/0.9598} \\ \thead{37.76/0.9590} \\ \thead{37.82/0.9595} \\ \thead{\color{red}38.01/0.9600}} & \thead{\thead{33.61/0.9160} \\ \thead{33.52/0.9166} \\ \thead{33.55/\color{red}0.9168} \\ \thead{{\color{red}33.67}/0.9160}} & \thead{\thead{32.08/0.8984} \\ \thead{32.09/0.8978} \\ \thead{32.12/0.8987} \\ \thead{\color{red}32.23/0.9002}} & \thead{\thead{-/-} \\ \thead{31.92/0.9256} \\ \thead{31.93/0.9256} \\ \thead{\color{red}32.31/0.9297}} \\ \cline{2-9} & \thead{\thead{EDSR} \\ \thead{D\_DBPN} \\ \thead{RDN} \\ \thead{E-LCSCNet}} & \thead{\thead{DIV2K} \\ \thead{DIV2K+Flickr} \\ \thead{DIV2K} \\ \thead{DIV2K}} & \thead{\thead{40.7M} \\ \thead{5876.3K} \\ \thead{22.1M} \\ \thead{14.2M}} & \thead{\thead{9379.4G} \\ \thead{3429.0G} \\ \thead{5096.2G} \\ \thead{3126.4G+1251.7G}} & \thead{\thead{38.11/0.9602} \\ \thead{38.09/0.9600} \\ \thead{\color{red}38.24/0.9614} \\ \thead{38.23/0.9608}} & \thead{\thead{33.92/0.9195} \\ \thead{33.87/0.9191} \\ \thead{\color{red}34.01/0.9212} \\ \thead{33.85/0.9180}} & \thead{\thead{32.32/0.9013} \\ \thead{32.27/0.9000} \\ \thead{32.34/0.9017} \\ \thead{\color{red}32.36/0.9018}} & \thead{\thead{32.93/0.9351} \\ \thead{32.55/0.9324} \\ 
\thead{32.89/\color{red}{0.9353}} \\ \thead{{\color{red}32.93}/0.9351}} \\ \hline \multirow{12}{*}{\thead{$\times 3$}} & \thead{\thead{VDSR} \\ \thead{DRCN} \\ \thead{LapSRN} \\ \thead{DRRN} \\ \thead{MemNet} \\ \thead{LCSC\_76\_291}} & \thead{\thead{291} \\ \thead{291} \\ \thead{291} \\ \thead{291} \\ \thead{291} \\ \thead{291}} & \thead{\thead{665K} \\ \thead{1774K} \\ \thead{813K} \\ \thead{297K} \\ \thead{667K}\\ \thead{1844K}} & \thead{\thead{612.6G} \\ \thead{9243.0G+8731.3G} \\ \thead{29.9G} \\ \thead{6796.9G} \\ \thead{2261.8G+3.2G} \\ \thead{181.3G+616.3G}} & \thead{\thead{33.66/0.9213} \\ \thead{33.82/0.9226} \\ \thead{33.82/0.9227} \\ \thead{34.03/0.9244} \\ \thead{34.09/0.9248} \\ \thead{\color{red}34.13/0.9254}} & \thead{\thead{29.77/0.8314} \\ \thead{29.76/0.8311} \\ \thead{29.79/0.8320} \\ \thead{29.96/0.8349} \\ \thead{\color{red}30.00/0.8350} \\ \thead{29.95/0.8348}} & \thead{\thead{28.82/0.7976} \\ \thead{28.80/0.7963} \\ \thead{28.82/0.7973} \\ \thead{28.95/0.8004} \\ \thead{28.96/0.8001} \\ \thead{\color{red}28.97/0.8014}} & \thead{\thead{27.14/0.8279} \\ \thead{27.15/0.8276} \\ \thead{27.07/0.8272} \\ \thead{27.53/{\color{red}0.8378}} \\ \thead{{\color{red}27.56}/0.8376} \\ \thead{27.53/0.8377}} \\ \cline{2-9} & \thead{\thead{SelNet} \\ \thead{CARN} \\ \thead{BE-LCSC\_L}} & \thead{\thead{DIV2K} \\ \thead{DIV2K} \\ \thead{DIV2K}} & \thead{\thead{1159K} \\ \thead{1592K} \\ \thead{1736K}} & \thead{\thead{120.0G} \\ \thead{118.8G} \\ \thead{179.1G}} & \thead{\thead{34.27/0.9257} \\ \thead{34.29/0.9255} \\ \thead{\color{red}34.39/0.9265}} & \thead{\thead{30.30/0.8399} \\ \thead{30.29/{\color{red}0.8407}} \\ \thead{{\color{red}30.33}/0.8395}} & \thead{\thead{28.97/0.8025} \\ \thead{29.06/0.8034} \\ \thead{\color{red}29.12/0.8065}} & \thead{\thead{-/-} \\ \thead{28.06/0.8493} \\ \thead{\color{red}28.25/0.8540}} \\ \cline{2-9} & \thead{\thead{EDSR} \\ \thead{D\_DBPN} \\ \thead{RDN} \\ \thead{E-LCSCNet}} & \thead{\thead{DIV2K} \\ \thead{DIV2K+Flickr} \\ \thead{DIV2K} \\ \thead{DIV2K}} & \thead{\thead{43.7M} \\ \thead{-} \\ \thead{22.3M} \\ \thead{14.9M}} & \thead{\thead{4471.8G} \\ \thead{-} \\ \thead{2284.7G} \\ \thead{1389.5G+1251.7G}} & \thead{\thead{34.65/0.9280} \\ \thead{-/-} \\ \thead{34.71/\color{red}0.9296} \\ \thead{{\color{red}34.71}/0.9286}} & \thead{\thead{30.52/0.8462} \\ \thead{-/-} \\ \thead{\color{red}30.57/0.8468} \\ \thead{30.56/0.8460}} & \thead{\thead{29.25/0.8093} \\ \thead{-/-} \\ \thead{29.26/0.8093} \\ \thead{\color{red}29.27/0.8104}} & \thead{\thead{28.80/0.8653} \\ \thead{-/-} \\ \thead{28.80/0.8653} \\ \thead{\color{red}28.83/0.8658}} \\ \hline \multirow{12}{*}{\thead{$\times 4$}} & \thead{\thead{VDSR} \\ \thead{DRCN} \\ \thead{LapSRN} \\ \thead{DRRN} \\ \thead{MemNet} \\ \thead{LCSC\_76\_291}} & \thead{\thead{291} \\ \thead{291} \\ \thead{291} \\ \thead{291} \\ \thead{291} \\ \thead{291}} & \thead{\thead{665K} \\ \thead{1774K} \\ \thead{813K} \\ \thead{297K} \\ \thead{667K}\\ \thead{1844K}} & \thead{\thead{612.6G} \\ \thead{9243.0G+8731.3G} \\ \thead{29.9G} \\ \thead{6796.9G} \\ \thead{2261.8G+3.2G} \\ \thead{110.0G+616.3G}} & \thead{\thead{31.35/0.8838} \\ \thead{31.53/0.8854} \\ \thead{31.54/0.8855} \\ \thead{31.68/0.8888} \\ \thead{31.74/0.8893} \\ \thead{\color{red}31.76/0.8899}} & \thead{\thead{28.01/0.7674} \\ \thead{28.02/0.7670} \\ \thead{28.19/0.7720} \\ \thead{28.21/0.7721} \\ \thead{{\color{red}28.26}/0.7723} \\ \thead{28.20/{\color{red}0.7731}}} & \thead{\thead{27.29/0.7251} \\ \thead{27.23/0.7233} \\ \thead{27.32/0.7280} \\ 
\thead{27.38/0.7284} \\ \thead{{\color{red}27.40}/0.7281} \\ \thead{27.36/{\color{red}0.7293}}} & \thead{\thead{25.18/0.7524} \\ \thead{25.14/0.7510} \\ \thead{25.21/0.7553} \\ \thead{25.44/0.7638} \\ \thead{{\color{red}25.50}/0.7638} \\ \thead{25.38/\color{red}0.7643}} \\ \cline{2-9} & \thead{\thead{SRDenseNet} \\ \thead{SelNet} \\ \thead{CARN} \\ \thead{BE-LCSC\_L}} & \thead{\thead{ImageNet Subset} \\ \thead{DIV2K} \\ \thead{DIV2K} \\ \thead{DIV2K}} & \thead{\thead{2015K} \\ \thead{1417K} \\ \thead{1592K} \\ \thead{1699K}} & \thead{\thead{389.9G} \\ \thead{83.1G} \\ \thead{90.9G} \\ \thead{124.8G}} & \thead{\thead{32.02/0.8934} \\ \thead{32.00/0.8931} \\ \thead{32.13/0.8937} \\ \thead{\color{red}32.20/0.8948}} & \thead{\thead{28.50/0.7782} \\ \thead{28.49/0.7783} \\ \thead{28.60/0.7806} \\ \thead{\color{red}28.66/0.7806}} & \thead{\thead{27.53/0.7337} \\ \thead{27.44/0.7325} \\ \thead{27.58/0.7349} \\ \thead{\color{red}27.62/0.7390}} & \thead{\thead{26.05/0.7819} \\ \thead{-/-} \\ \thead{26.07/0.7837} \\ \thead{\color{red}26.22/0.7908}} \\ \cline{2-9} & \thead{\thead{EDSR} \\ \thead{D\_DBPN} \\ \thead{RDN} \\ \thead{E-LCSCNet}} & \thead{\thead{DIV2K} \\ \thead{DIV2K+Flickr} \\ \thead{DIV2K} \\ \thead{DIV2K}} & \thead{\thead{43.1M} \\ \thead{10.3M} \\ \thead{22.6M} \\ \thead{14.8M}} & \thead{\thead{2890.0G} \\ \thead{5715.4G} \\ \thead{1300.7G} \\ \thead{781.6G+1700.7G}} & \thead{\thead{32.46/0.8968} \\ \thead{32.47/0.8980} \\ \thead{32.47/{\color{red}0.8990}} \\ \thead{{\color{red}32.51}/0.8984}} & \thead{\thead{28.80/{\color{red}0.7876}} \\ \thead{{\color{red}28.82}/0.7860} \\ \thead{28.81/0.7871} \\ \thead{28.81/0.7871}} & \thead{\thead{27.71/0.7420} \\ \thead{27.72/0.7400} \\ \thead{27.72/0.7419} \\ \thead{\color{red}27.73/0.7433}} & \thead{\thead{26.64/0.8033} \\ \thead{26.38/0.7946} \\ \thead{26.61/0.8028} \\ \thead{\color{red}26.64/0.8033}} \\ \hline \end{tabular} \end{table*} \begin{figure}[htbp] \centering \subfloat[\scriptsize{HR}]{ \includegraphics[scale=0.29]{barbara_HR.jpeg}} \label{fig:barbara_HR}\hfill \subfloat[\scriptsize{VDSR}]{ \includegraphics[scale=0.29]{barbara_VDSR.jpeg}} \label{fig:barbara_VDSR}\hfill \subfloat[\scriptsize{DRCN}]{ \includegraphics[scale=0.29]{barbara_DRCN.jpeg}} \label{fig:barbara_DRCN}\hfill \subfloat[\scriptsize{LapSRN}]{ \includegraphics[scale=0.29]{barbara_LapSR.jpeg}} \label{fig:barbara_LapSR} \\ \subfloat[\scriptsize{DRRN}]{ \includegraphics[scale=0.29]{barbara_DRRN.jpeg}} \label{fig:barbara_DRRN}\hfill \subfloat[\scriptsize{MemNet}]{ \includegraphics[scale=0.29]{barbara_memnet.jpeg}} \label{fig:barbara_memnet}\hfill \subfloat[\scriptsize{CARN}]{ \includegraphics[scale=0.29]{barbara_CARN.jpeg}} \label{fig:barbara_carn}\hfill \subfloat[\scriptsize{BE-LCSC\_L}]{ \includegraphics[scale=0.29]{barbara_lcsc.jpeg}} \label{fig:barbara_LCSC}\hfill \caption{\small Results for upscaling factor $\times 3$ on image Set14-barbara} \label{fig:barbara} \end{figure} \begin{figure}[htbp] \centering \subfloat[\scriptsize{HR}]{ \includegraphics[scale=0.29]{ppt3_HR.jpeg}} \label{fig:ppt3_HR}\hfill \subfloat[\scriptsize{VDSR}]{ \includegraphics[scale=0.29]{ppt3_VDSR.jpeg}} \label{fig:ppt3_VDSR}\hfill \subfloat[\scriptsize{DRCN}]{ \includegraphics[scale=0.29]{ppt3_DRCN.jpeg}} \label{fig:ppt3_DRCN}\hfill \subfloat[\scriptsize{LapSRN}]{ \includegraphics[scale=0.29]{ppt3_LapSR.jpeg}} \label{fig:ppt3_LapSR} \\ \subfloat[\scriptsize{DRRN}]{ \includegraphics[scale=0.29]{ppt3_DRRN.jpeg}} \label{fig:ppt3_DRRN}\hfill \subfloat[\scriptsize{MemNet}]{ 
\includegraphics[scale=0.29]{ppt3_memnet.jpeg}} \label{fig:ppt3_memnet}\hfill \subfloat[\scriptsize{CARN}]{ \includegraphics[scale=0.29]{ppt3_CARN.jpeg}} \label{fig:ppt3_CARN}\hfill \subfloat[\scriptsize{BE-LCSC\_L}]{ \includegraphics[scale=0.29]{ppt3_lcsc.jpeg}} \label{fig:ppt3_LCSC}\hfill \caption{\small Results for upscaling factor $\times 3$ on image Set14-ppt} \label{fig:ppt3} \end{figure} \begin{figure} \centering \subfloat[\scriptsize{HR}]{ \includegraphics[scale=0.181]{HR_img19.jpeg}} \label{fig:img19_HR}\hfill \subfloat[\scriptsize{EDSR}]{ \includegraphics[scale=0.181]{edsr_img19.jpeg}} \label{fig:img19_EDSR} \\ \subfloat[\scriptsize{RDN}]{ \includegraphics[scale=0.181]{rdn_img19.jpeg}} \label{fig:img19_RDN}\hfill \subfloat[\scriptsize{E-LCSCNet}]{ \includegraphics[scale=0.181]{elcsc_img19.jpeg}} \label{fig:img19_ELCSC}\\ \caption{\small Results of large models for upscaling factor $\times 3$ on Urban100-img019} \label{fig:img19} \end{figure} Quantitative comparisons are listed in Table~\ref{chart:big}. Because the operations in neural networks for SISR are mainly multiplications accompanied by additions, we adopt the number of composite multiply-accumulate operations used in CARN, denoted by Mult\&Adds, to measure computational efficiency; we also assume that the HR image is of size $1280\times720$. From Table~\ref{chart:big}, we can see that among the models trained on 291, LCSC\_76\_291 achieves better accuracy than MemNet. As for efficiency, MemNet has fewer parameters due to its recursive structure, but LCSC\_76\_291 is more computationally efficient than MemNet. Compared with SelNet and CARN, BE-LCSC\_L consumes moderately more computation but achieves a clear improvement. Among the large models trained on DIV2K, our E-LCSCNet has the fewest Params for every scale. For the $\times 2$ scale, our E-LCSCNet performs on par with RDN while holding a clear advantage in Mult\&Adds. For the $\times 3$ and $\times 4$ scales, our E-LCSCNet performs better than EDSR and RDN, although it is somewhat more computationally expensive than RDN due to its fusion part. Representative qualitative comparisons are shown in Figs.\ref{fig:barbara}-\ref{fig:img19}. In Fig.\ref{fig:barbara}, our model restores the grid structure more precisely and with fewer artifacts than the other models. In Fig.\ref{fig:ppt3}, compared with the blurry characters generated by the other models, our result has sharper edges. In Fig.\ref{fig:img19}, compared with EDSR and RDN, E-LCSCNet recovers the lines with the least blur. \subsection{Implementation Details} For training LCSCNet, we augment the data with $90^{\circ}$, $180^{\circ}$ and $270^{\circ}$ rotations, and downsample the HR images by the desired scaling factor to generate the LR inputs. Like many methods trained on 291, we only take the luminance component for training. The ground truth for training is the residual between the bicubically upsampled LR image and the original HR image, and all inputs are scaled into [-1, 1]. When training on 291, images are split into LR/HR patch pairs of sizes $18^2/36^2$, $12^2/36^2$ and $21^2/84^2$ for scales $\times 2$, $\times 3$ and $\times 4$, respectively. We initialize all the convolution kernels as suggested by \cite{he2015delving}. All intermediate feature maps have 64 channels. For optimization, we use Adam~\cite{kingma2014adam} with its default settings. The learning rate is initialized to $10^{-4}$ and divided by 10 every 15 epochs over the whole augmented dataset; training is stopped after 60 epochs. For training, we use Keras~\cite{chollet2015keras}; for testing, we use MatConvNet~\cite{vedaldi2015matconvnet}. 
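The step schedule above can be expressed compactly; the sketch below shows one way it could be wired up as a Keras learning-rate callback. The callback wiring and function name are illustrative assumptions, not the exact training script.

\begin{verbatim}
from tensorflow import keras

def step_decay(epoch, lr=None):
    # 1e-4 initially, divided by 10 every 15 epochs (60 epochs in total)
    return 1e-4 * (0.1 ** (epoch // 15))

lr_callback = keras.callbacks.LearningRateScheduler(step_decay)
# model.fit(x, y, epochs=60, callbacks=[lr_callback])
\end{verbatim}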
The training of BE-LCSC\_L and E-LCSCNet is based on the PyTorch~\cite{paszke2017pytorch} version of EDSR, with the same settings as EDSR except that the batch size is 32 and training is terminated after 650 epochs. The code is available at \url{https://github.com/XuechenZhang123/LCSC}. \section{Conclusion and Future Work}\label{s:s_8} In this paper, we propose the linear compressing based skip-connecting network (LCSCNet) for image SR, which combines the merits of the parameter-economic form of ResNet and the effective feature exploration of DenseNet. Linear compressing layers are adopted to implement skip connections, carrying former features forward while separating them from the newly-explored features. Compared with previous deep models with skip connections, our LCSCNet can explore relatively more new features at lower computational cost. Based on LCSCNet, the Enhanced LCSCNet is developed to improve the performance of extremely deep and wide networks. An adaptive element-wise fusion strategy is also proposed, not only to further exploit hierarchical information from diverse levels of deep models, but also to stabilize the training of deep models by adding extra paths for gradient flow. Comprehensive experiments and discussions demonstrate the rationality and superiority of the proposed methods. Future work will mainly focus on two aspects: 1) applying LCSCNet and E-LCSCNet, or their basic units, to other computer vision tasks; and 2) further improving the efficiency of the adaptive fusion part, whose computational cost, as the Mult\&Adds in Table~\ref{chart:big} show, is still somewhat high even though we have managed to control its complexity. \section*{Acknowledgment} We would like to thank the authors of~\cite{kim2016accurate, kim2016deeply, LapSRN, tai2017image, Tai-MemNet-2017,ahn2018fast,lim2017enhanced,zhang2018residual} for releasing their source code and models for comparison. We would also like to thank the Associate Editor and the anonymous reviewers for their dedication and constructive suggestions. \bibliographystyle{IEEEtran}
{ "timestamp": "2019-09-10T02:17:09", "yymm": "1909", "arxiv_id": "1909.03573", "language": "en", "url": "https://arxiv.org/abs/1909.03573" }
"\\section*{Introduction}\r\n\r\n\r\n\r\nIt is well-known that the bihamiltonian property proved t(...TRUNCATED)
{"timestamp":"2019-09-11T02:10:41","yymm":"1909","arxiv_id":"1909.03561","language":"en","url":"http(...TRUNCATED)
"\\section*{Introduction}\nMetamaterials offer the possibility of creating a broad array of behavior(...TRUNCATED)
{"timestamp":"2019-09-10T02:15:45","yymm":"1909","arxiv_id":"1909.03528","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\r\nIn this paper, we introduce the {\\it alternating multizeta vaues in pos(...TRUNCATED)
{"timestamp":"2019-09-10T02:25:25","yymm":"1909","arxiv_id":"1909.03849","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\n\nBinarity is widely present in all kinds of stars \\citep[25\\% of low-ma(...TRUNCATED)
{"timestamp":"2019-09-10T02:21:07","yymm":"1909","arxiv_id":"1909.03692","language":"en","url":"http(...TRUNCATED)
"\\section{\\label{sec:1} Introduction}\r\n\r\nThe progress in nanotechnology of magnetic materials (...TRUNCATED)
{"timestamp":"2019-09-10T02:20:27","yymm":"1909","arxiv_id":"1909.03671","language":"en","url":"http(...TRUNCATED)
"\\section{Introdução}\n\nNa informática, a segurança é uma área em constante evolução, de i(...TRUNCATED)
{"timestamp":"2019-09-10T02:22:40","yymm":"1909","arxiv_id":"1909.03741","language":"pt","url":"http(...TRUNCATED)
"\\section{Introduction}\n\n\n\n\\noindent A central notion of mathematical relativity, frequently u(...TRUNCATED)
{"timestamp":"2020-07-28T02:11:14","yymm":"1909","arxiv_id":"1909.03797","language":"en","url":"http(...TRUNCATED)
End of preview.
