\section{Introduction} Unsupervised representation learning is the area of research that aims to extract units from unlabelled speech that are consistent with the phonemic transcription \cite{clsp,zs15,zs17}. As opposed to text, speech is subject to large variability: two speech sequences with the same transcription can have significantly different raw speech signals. In order to work on speech sequences in an unsupervised way, there is a need for robust acoustic representations. To address that challenge, recent methods use {\em speech embeddings}, i.e.~fixed-size representations of variable-length speech sequences \cite{herman_cae,nils,settle,riad,emb2,emb3,emb5,cae}.\footnote{A speech sequence is a non-silent part of the speech signal (not necessarily a word). It can be transcribed into a phoneme $n$-gram.} Speech embeddings can be used in many applications, such as keyword spotting \cite{query,query2,query3}, spoken term discovery \cite{utd,utd2,utd3}, and segmentation of speech into words \cite{goldwater,seg1,seg2}. It is convenient to evaluate the reliability of speech embeddings without being tied to a particular downstream task. One way to do so is to compute the intrinsic quality of speech embeddings. The basic idea is that a reliable speech embedding should maximise the information relevant to its type and minimise irrelevant token-specific information. Two popular metrics have been used: the mean average precision (MAP) \cite{map} and the ABX discrimination score \cite{abx}. ABX and MAP are mathematically distinct, yet they are expected to correlate well with each other as they both evaluate the discriminability of speech embeddings in terms of their transcription. However, \cite{nils} revealed a surprising result: the best model according to ABX is also the worst one according to MAP. Following up on the results of \cite{nils}, we observed that this kind of discrepancy is much more common than we had expected. 
If a model performs well according to the MAP and badly according to the ABX, which metric should be trusted? For research in this field to move forward, there is a need to quantify the correlation between these two metrics. In this paper, we go further and check whether MAP and ABX can also predict performance on a downstream task. Such tasks are numerous, but one of them has not yet received enough interest: \textit{unsupervised frequency estimation}. We define the frequency of a speech sequence as the number of times the phonetic transcription of this sequence appears in the corpus. When dealing with text corpora, frequencies can be computed exactly with a lookup table and are used in many NLP applications. In the absence of labels, deriving the frequency of a speech sequence becomes a problem of density estimation. Estimated frequencies can be useful in representation learning by enabling efficient sampling of tokens in a speech database \cite{riad}. Frequencies could also be used for unsupervised word segmentation, using algorithms similar to those used on text \cite{goldwater}. In Section~\ref{embeddings}, we present the range of embedding models, which can be grouped into five categories of increasing expected reliability: hand-crafted, unsupervised, self-supervised, and supervised models, plus a top-line embedding. In Section~\ref{tasks}, we present the MAP and ABX metrics and introduce our frequency estimation task. In Section~\ref{results}, we present results on the five speech datasets from the ZeroSpeech Challenge \cite{zs15,zs17}. From these results, we draw guidelines for future improvements in the field of acoustic speech embeddings. \section{Embedding Methods}\label{embeddings} \subsection{Acoustic features} Neural networks learn representations on top of input features. We therefore used two types of acoustic features: the log-MEL filterbanks (Mel-F) \cite{melf} and the Perceptual Linear Prediction (PLP) features \cite{plp}. 
These two features can be considered as two levels of phonetic abstraction: a high-level one (PLP) and a low-level one (Mel-F). Formally, let us define a speech sequence $s_t$ by $x_1$,$x_2$,...,$x_T$, where $x_i \in \mathbb{R}^n$ is called a frame of the acoustic features. $T$ is the number of frames in the sequence $s_t$. In our setting, these frames are spaced out every 10 ms, each representing a 25 ms span of the raw signal. \subsection{Hand-crafted model: Gaussian downsampling} Holzenberger et al. \cite{nils} described a method to create fixed-size embedding vectors that requires no training of neural networks: Gaussian down-sampling (GD). Given a sequence $s_t$, $l$ equidistant frames are sampled and a Gaussian average is computed around each sample. It returns an embedding vector $e_t$ of size $l \times n$ for any input sequence length $T$. Therefore, given our two acoustic features, two baseline models are derived: the Gaussian-down-sampling-PLP (GD-PLP) and the Gaussian-down-sampling-Mel-F (GD-Mel-F). Similarly, we derived a simple top-line model. Instead of using hand-crafted features, we can use the transcription of a given random segment. Each frame $x_i$ in a sequence $s_t$ is assigned a 1-hot vector referring directly to the phoneme being uttered. This model goes through the same Gaussian averaging process to form the Gaussian-down-sampling-1hot (GD-1hot) model. This model is almost equivalent to the true labels, apart from the information loss due to compression. \subsection{Unsupervised model: RNNAE} A more elaborate way to create speech embeddings is to learn them on top of acoustic features using neural networks. Specifically, recurrent neural networks (RNN) can be trained with back-propagation on an auto-encoding (AE) objective: the RNNAE \cite{nils,emb3}. Formally, the model is composed of an encoder network, a decoder network and a speaker encoder network. The encoder maps $s_t$ to $e_t$, a fixed-size vector. 
The speaker encoder maps the speaker identity to a fixed-size vector $spk_t$. Then, the decoder concatenates $e_t$ and $spk_t$ and maps them to $\hat{s}_t$, a reconstruction of $s_t$. The three networks are trained jointly to minimise the \textit{Mean Square Error} between $\hat{s}_t$ and $s_t$. \subsection{Self-supervised and supervised models: CAE, Siamese and CAE-Siamese} \subsubsection{Advanced training objectives} We consider two popular embedding models. They are also encoder-decoders, but they use additional side information. One is trained according to the Siamese objective \cite{riad,siamese,settle}; the other according to a correspondence auto-encoder (CAE) objective \cite{cae}. Both models assume a set of pairs of sequences from the training corpus. Positive pairs are assumed to have the same transcription; negative pairs, different transcriptions. Let $p_t=(s_t,s_{t'},y)$ where $(s_t,s_{t'})$ is a pair of sequences of lengths $T$ and $T'$. A binary value $y$ indicates the positive or negative nature of the pair. We will see how to find such pairs in the next sub-section. The CAE objective uses only positive pairs. The auto-encoder is asked to encode $s_t$ into $e_t$ and decode it into $\hat{s}_t$. The speaker encoder network is used in the same way as for the RNNAE. To satisfy the CAE objective, $\hat{s}_t$ has to minimise the \textit{Mean Square Error} between $\hat{s}_t$ and $s_{t'}$. This forces the auto-encoder to learn a common representation for $s_t$ and $s_{t'}$. The Siamese objective does not need the decoder network. It encodes both $s_t$ and $s_{t'}$ and forces the encoder to learn a similar or different representation depending on whether the pair is positive or negative. $$L_s(e_{t},e_{t'},y)=y \cos(e_t,e_{t'})\, -\, (1-y) \max(0,\cos(e_t,e_{t'})-\gamma)$$ where $\cos$ is the cosine similarity and $\gamma$ is a margin. The margin accounts for negative pairs whose transcriptions have phonemes in common. 
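As a concrete sketch (not the authors' implementation), the Siamese objective above can be written as a small function. The value of $\gamma$ is an illustrative choice; in training, one would maximise this quantity (equivalently, minimise its negative):

```python
import numpy as np

def siamese_objective(e_t, e_tp, y, gamma=0.15):
    """Siamese objective as stated in the text: for a positive pair
    (y=1) the value is the cosine similarity of the two embeddings;
    for a negative pair (y=0) any similarity above the margin gamma
    is penalised.  gamma=0.15 is an illustrative choice, not the
    paper's setting."""
    cos = float(np.dot(e_t, e_tp) /
                (np.linalg.norm(e_t) * np.linalg.norm(e_tp)))
    return y * cos - (1 - y) * max(0.0, cos - gamma)
```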
These pairs should not have embeddings `too' far away from each other. The CAE and Siamese objectives can also be combined into a CAE-Siamese loss through a weighted sum of their respective loss functions \cite{caesiamese}. \subsubsection{Finding and choosing pairs of speech embeddings} Finding positive pairs of speech sequences is an area of research called \textit{unsupervised term discovery} (UTD) \cite{utd,utd2,utd3,thual}. Such UTD systems can be based on DTW alignment \cite{utd} or involve a $k$-Nearest-Neighbours search \cite{thual}. We opted for the latter, as it is both scalable and among the state-of-the-art methods. It exhaustively encodes all possible speech sequences with an embedding model and uses an optimised $k$-NN search \cite{faiss} to retrieve acoustically similar pairs of speech sequences (see the details in \cite{thual}). In our experiments, we used the pairs retrieved by $k$-NN on \textit{GD-PLP} encoded sequences to train our self-supervised models (CAE, Siamese, CAE-Siamese). As a supervised alternative, it is possible to sample `gold' pairs, i.e.~pairs of elements that have the exact same transcription. These `gold' pairs are given to the CAE, Siamese and CAE-Siamese to train supervised models. These supervised models indicate how good the self-supervised models could be if we enhanced the UTD system. \section{Evaluation metrics and frequency estimation}\label{tasks} \subsection{Intrinsic quality metrics: ABX and MAP} The intrinsic quality of an acoustic speech embedding can be measured using two types of discrimination tasks: the MAP (also called same-different) \cite{map} and ABX tasks \cite{abx}. Let us consider a set of $n$ acoustic speech embeddings: $((e_1,t_1),(e_2,t_2),...,(e_n,t_n))$ where $e_i$ are the embeddings and $t_i$ the transcriptions. The ABX task creates all possible triplets ($e_a$,$e_b$,$e_x$) such that $t_a = t_x$ and $t_b \neq t_x$. The model is asked to predict 1 or 0 to indicate whether $e_x$ is of type $t_a$ or $t_b$. 
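A minimal brute-force sketch of the ABX procedure just described (for brevity, this averages over all triplets rather than first by phonetic contrast, which the full metric requires):

```python
import numpy as np
from itertools import permutations

def abx_error(embeddings, types, dist=None):
    """ABX error rate over all triplets (a, b, x) with t_a == t_x
    and t_b != t_x: the fraction of triplets for which x is closer
    to b than to a.  Brute-force sketch; averaging is over all
    triplets, not first by contrast."""
    if dist is None:
        dist = lambda u, v: np.linalg.norm(np.asarray(u) - np.asarray(v))
    errors, total = 0, 0
    for a, b, x in permutations(range(len(embeddings)), 3):
        if types[a] == types[x] and types[b] != types[x]:
            total += 1
            if dist(embeddings[a], embeddings[x]) > dist(embeddings[b], embeddings[x]):
                errors += 1
    return errors / total
```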
Such triplets are instances of the phonetic contrast between $t_a$ and $t_b$. Formally, for a given triplet, the task is to predict: $$y(e_x,e_a,e_b)=\mathds{1}_{d(e_a,e_x) \leq d(e_b,e_x)}$$ The error rate on this classification task is the ABX score. It is first averaged by type of contrast (all triplets having the same $t_a$ and $t_b$), then averaged over all contrasts.\\ The MAP task forms a list of all possible pairs of embeddings ($e_a$,$e_x$). The model is asked to predict 1 or 0 to indicate whether $e_x$ and $e_a$ have the same type, i.e.~the same transcription, or not. Formally, for a given pair, the model predicts: $$y(e_a,e_x,\theta) =\mathds{1}_{d(e_a,e_x) \leq \theta}$$ The precision and recall on this classification task are computed for various values of $\theta$. The final score of the MAP task is obtained by integrating over the precision-recall curve. \subsection{Downstream task: unsupervised frequency estimation} \subsubsection{The R$^2$ metric} Here, we introduce the novel task of frequency estimation as the assignment, to each speech sequence, of a positive real value that correlates with how frequent the transcription of this sequence is in a given reference corpus\footnote{This estimation could be up to a scaling coefficient; the task of finding exact count estimates is a harder task, not tackled in this paper.}. To evaluate the quality of frequency estimates, we use the coefficient of determination $R^2$ between estimated and true frequencies. We compute this number in log space, to take into account the power-law distribution of frequencies in natural languages \cite{zipf}. This coefficient is between 0 and 1 and tells what percentage of the variance in the true frequencies can be explained by the estimated frequencies. \subsubsection{$k$-NN and density estimation} We propose to estimate frequencies using density estimation, also called the Parzen-Rosenblatt window method \cite{parzen}. Let $N$ be the number of speech sequence embeddings. 
First, these $N$ embeddings are indexed into a $k$-NN graph, noted $G$, where all distances between embeddings are computed. Then, for each embedding, we search for the $k$ closest embeddings in $G$. Formally, given an embedding $e_t$ from the $k$-NN graph $G$, we compute its $k$ distances to its $k$ closest neighbours ($d_{n_1}$,...,$d_{n_k}$). The frequency estimate is a density estimation function $\kappa$ of the $k$-NN graph $G$ that has three parameters: a Gaussian kernel parameter $\beta$, the number of neighbours $k$ and the embedding $e_t$. $$\kappa_G(e_t,\beta,k)= \sum_{i=1}^{k} e^{-\beta d_{n_i}^2}$$ This density estimation yields a real number in $[1,k]$, which we take as our frequency estimate. We set $k$ to $2000$, the maximal frequency that should be predicted according to the transcription of our training corpus (the Buckeye, see Section~4.1). Then, we must tune $\beta$, which accounts for the dilation of the space of a given embedding model. For each model, we choose $\beta$ such that it maximises the variance of the estimated log frequencies, thereby covering the whole spectrum of possible log frequencies, in our case $[0,\log(k)]$, which is beneficial for power-law types of distribution. Note that the kernel width cannot be too large (resp.~too small), as the estimator would then predict only high (resp.~low) values. \subsubsection{Density estimation versus clustering} \begin{table}[H] \centering\small \begin{tabular}{lrrr} \toprule Models/methods & K-means & HC-K-means & $k$-NN \\ \midrule GD-1hot & 0.67 & 0.73 & \textbf{0.74} \\ RNNAE Mel-F & 0.30 & 0.35 & \textbf{0.41} \\ CAE Siamese Mel-F & 0.26 & 0.37 & \textbf{0.43} \\ \bottomrule \end{tabular} \newline \newline \caption{Frequency estimations using K-means, HC-K-means and $k$-NN density estimation on a subset of the Buckeye corpus}\label{3} \vspace{-2em} \end{table} We compared density estimation with an alternative method: the clustering of speech embeddings. 
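Before turning to the comparison with clustering, here is a minimal sketch of the estimator $\kappa_G$ described above, with brute-force neighbour search standing in for the FAISS index used in the paper (function and parameter names are ours):

```python
import numpy as np

def knn_frequency_estimate(indexed, queries, k, beta):
    """k-NN kernel density estimate kappa_G(e_t, beta, k): for each
    query embedding, sum a Gaussian kernel exp(-beta * d^2) over its
    k nearest indexed embeddings.  Brute-force search for clarity;
    an approximate index (e.g. FAISS) is needed at scale."""
    indexed = np.asarray(indexed, dtype=float)
    out = []
    for q in np.asarray(queries, dtype=float):
        d = np.linalg.norm(indexed - q, axis=1)  # distances to all indexed points
        d_k = np.sort(d)[:k]                     # k nearest distances
        out.append(float(np.exp(-beta * d_k ** 2).sum()))  # value in [0, k]
    return out
```

Queries lying in a dense region of embedding space accumulate many near-unit kernel terms and thus receive a high estimate; isolated queries receive an estimate close to one.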
Jansen et al.~\cite{jansen} did a thorough benchmark of clustering methods on the task of clustering speech embeddings. Across all their metrics, the model that performs best is Hierarchical-K-means (HC-K-means), an improved version of K-means with a higher computational cost. In particular, HC-K-means performs better than GMM. HC-K-means is not scalable to our data sets, so we extracted 1\% of the Buckeye corpus in order to compare it with our method. A corpus of similar size is used by Jansen et al.~\cite{jansen}. We applied $k$-NN, K-means and HC-K-means from the python library scikit-learn \cite{scikit-learn} to three of our models on this subset. For K-means and HC-K-means, we used the hyper-parameters that gave the best scores in \cite{jansen}, namely k-means++ initialisation and the average linkage function for HC-K-means. On our subset, the ground truth number of clusters is $K=33000$. Yet, we did a grid-search for the value of $K$ that maximises the $R^2$ score for frequency estimation. We found that K-means and HC-K-means perform better for $K=20000$. This shows that these algorithms are not tuned to handle data distributed according to Zipf's law. Indeed, K-means is subject to the so-called `uniform effect' and tends to find clusters of uniform sizes \cite{uniform}. Table \ref{3} shows that even after optimising the number of clusters $K$, the $k$-NN method outperforms K-means and HC-K-means. \section{Experiments}\label{results} \subsection{Data sets} Five data sets were at our disposal from the ZeroSpeech challenge: Conversational English (a sample of the Buckeye corpus \cite{pitt2005}), English, French, Xitsonga and Mandarin \cite{zs15,zs17}. These are multi-speaker, non-overlapping (i.e.~one speaker per file) recordings of speech. All silences were removed using \textit{voice activity detection} and corrected manually. Each corpus was split into all possible segmentations to produce random speech sequences, as described in \cite{thual}. 
Random speech sequences span from $70$ ms to $1$ s. Sequences shorter than $70$ ms may contain less than one phoneme or ill-pronounced phonemes. Therefore, we removed very short sequences to avoid issues that are out of the scope of this study. The Buckeye sample corpus contains 12 speakers and 5 hours of speech. The French and English corpora being much larger, we reduced their numbers of speech sequences and speakers to the size of the Buckeye. Mandarin and Xitsonga are smaller data sets and were left untouched. \begin{figure*}[t!] \centering \includegraphics[width=\linewidth]{LaTeX/results.PNG} \caption{Values of the metrics and the downstream task across models and corpora. The average column is the average score over all corpora} \label{1} \end{figure*} \subsection{Training and hyperparameters} Our encoder-decoder network is a specific use of a three-layer bi-directional LSTM as described by Holzenberger et al. \cite{nils}, with hyper-parameters selected to minimise the ABX error on the Buckeye corpus. The speaker embedding network is a single fully connected layer with fifteen neurons. Our UTD system \cite{thual} uses the embeddings of the GD-PLP model. A set of speech pairs is returned, sorted by cosine similarity. We selected the pairs that have a cosine similarity above $0.85$, as this seemed to be optimal on the Buckeye corpus according to the ABX metric. In comparison, we trained our supervised models with `gold' pairs, i.e.~pairs with the exact same transcription. Each corpus was randomly split into train (90\%), dev (5\%) and test (5\%) sets. Neural networks were trained on the train set, early stopping was done using the development set, and metrics were computed on the test set. Specifically, we trained each model on the five training sets using the Buckeye's hyper-parameters. MAP and ABX were computed on the test sets. Frequency estimation was computed by indexing the five training sets and building $k$-NN graphs. 
For each element of a given test set, we searched for neighbours and estimated frequencies using the $k$-NN graphs. We used the FAISS \cite{faiss} library, which provides an optimised $k$-NN implementation. \subsection{Results} \subsubsection{Across models} The results of the two metrics and the downstream task are shown in Figure \ref{1}, and the following broad trends can be observed. \begin{itemize}[leftmargin=*] \item Supervised models yield substantially lower performance than the ground truth 1-hot encodings, on all metrics and all languages. These supervised models have a margin for improvement, as they do not learn optimal embeddings despite having access to ground truth labels. \item Supervised models outperform their corresponding self-supervised models, in almost all metrics and for all languages. It means that self-supervision also has a margin for improvement, given better pairs from the UTD systems. \item Among self-supervised and supervised models, the CAE-Siamese Mel-F takes pole position. This model seems able to combine the advantages of both training objectives, a result already reported in \cite{caesiamese}. \item (Self-)supervised neural networks trained on low-level acoustic features (Mel-F) perform as well as or better than those trained on high-level acoustic features (PLP). This shows that neural networks can learn their own high-level acoustic features from low-level information. \item Self-supervised models are expected to outperform unsupervised models because they use side information. Yet many configurations do not show this consistently. Only the Buckeye data set seems consistent, but this data set is the one on which pairs were selected through a grid-search to minimise the ABX error. This may be due to the variable quality of the pairs found by UTD; better UTD is therefore needed to help self-supervised models. \item Unsupervised models are supposed to be better than hand-crafted models because they can adjust by learning from the data set. 
Yet, this is not consistently found. Hand-crafted models are worse than unsupervised models for ABX and frequency estimation, but not for MAP. \item In detail, which model is best in a particular language depends on the metric. \end{itemize} \subsubsection{Across metrics and frequency estimation} In Table \ref{2}, we quantified the discrepancies that we have just discussed. We computed the correlation $R^2$ across the three `average' columns. Cross-correlation scores range from $R^2=0.33$ to $0.53$; the top-line model is not included when computing these scores. \begin{table}[H] \centering\small \begin{tabular}{lrrr} \toprule R$^2$ & Frequency est. & MAP & ABX \\ \midrule Frequency est. & 1.0 & 0.34 & 0.53 \\ MAP & 0.34 & 1.0 & 0.45 \\ ABX & 0.53 & 0.45 & 1.0 \\ \bottomrule \end{tabular} \newline \newline \caption{Correlation $R^2$ across the `average' columns of MAP, ABX and frequency estimation}\label{2} \vspace{-2em} \end{table} These correlations are low enough to permit sizeable discrepancies across metrics and the downstream task. One of our models, the RNNAE Mel-F, epitomises the problem. This model is comparatively bad according to the MAP but good according to ABX and frequency estimation. It means that MAP and ABX reveal different aspects of the reliability of embedding models. Therefore, only a large improvement according to one metric ensures an improvement according to another metric. This shows the limits of the intrinsic evaluation of speech embeddings. Moderate variations on an intrinsic metric cannot guarantee progress on a given downstream task. ABX and MAP scores are averages over multiple phonetic contrasts. These contrasts could be clustered based on their phonetic frequencies, average lengths or number of phonemes in common. Such fine-grained analyses can sometimes help in understanding divergences across metrics. However, we have been unable to find a categorisation of results that makes sense of Figure~\ref{1} as a whole. 
There are currently no fully reliable metrics to assess the intrinsic quality of speech embeddings. \section{Conclusion} We quantified the correlation across two intrinsic metrics (MAP and ABX) and a novel downstream task: frequency estimation. Although MAP and ABX agree on general categories (like supervised versus unsupervised embeddings), we also found large discrepancies when it comes to selecting a particular model, highlighting the limits of these intrinsic quality metrics. However convenient intrinsic metrics may be, they only show partial views of the overall reliability of a model. We showed, using frequency estimation, that variations on intrinsic quality metrics should not be taken as guaranteed progress on downstream tasks. More attention should be brought to downstream tasks, which have the merit of addressing practical problems. \\ \vspace{-1em} \section{Acknowledgements} We thank Matthijs Douze for useful comments on density estimation. We also thank IDRIS from CNRS for offering GPU resources on the supercomputer Jean Zay. This work was funded in part by the Agence Nationale pour la Recherche (ANR-17-EURE-0017 Frontcog, ANR-10-IDEX-0001-02 PSL*, ANR-19-P3IA-0001 PRAIRIE 3IA Institute), CIFAR, and a research gift by Facebook. \bibliographystyle{IEEEtran}
\section{Introduction} We have previously written about the scale-invariance paradox and shown how it may be resolved by the introduction of filtered-partitioned forms of the transfer spectra \cite{McComb08}, \cite{McComb14a}. In the present paper we carry on this work to show how the underlying symmetries of the triadic interactions in wavenumber space also have implications for any more general study of the Lin equation. We have remarked elsewhere that to treat the Lin equation as purely a local energy balance equation is to be in danger of failing to realize that it is actually a highly non-local equation which couples all modes together. It is in fact the basis of the cascade picture of turbulent energy transfer, and it is important to always bear in mind that the transfer spectrum can be written as an integral over all wavenumbers of a term containing the triple moment. In the present work we will argue that it is desirable to extend this scrutiny to the filtered-partitioned forms of the transfer spectrum in order to achieve a fuller understanding of the basic energy transfer processes. This paper is organized as follows. We begin by stating the Lin equation and making some observations about the conventional interpretation of its role as an energy balance in wavenumber. Next we remind ourselves about the scale-invariance paradox and how it may be resolved. Then we move on to discussing the ways in which the Lin equation can be modified in order to clarify its role. \section{The Lin equation} We begin with the (by now) well-known spectral energy balance equation in its most familiar form, thus: \begin{equation} \left( \frac{\partial}{\partial t} + 2 \nu k^2 \right) E(k,t) = T(k,t), \label{enbalt} \end{equation} where $E(k,t)$ is the energy spectrum, $T(k,t)$ is the energy transfer spectrum and $\nu$ is the kinematic viscosity. A full derivation and discussion will be found in the book \cite{McComb14a}. 
We will also follow the growing practice of referring to it as the Lin equation. Now let us integrate each term of (\ref{enbalt}) with respect to wavenumber, from zero up to some arbitrarily chosen wavenumber $\kappa$: \begin{equation} \frac{\partial}{\partial t}\int_{0}^{\kappa} dk\, E(k,t) = \int^{\kappa}_{0} dk\, T(k,t) -2 \nu\int_{0}^{\kappa} dk\, k^2 E(k,t). \label{fluxbalt1} \end{equation} The energy transfer spectrum may be written as \begin{equation} T(k,t) = \int^{\infty}_{0} dj\, S(k,j;t), \label{ts} \end{equation} where, as is well known, $S(k,j;t)$ can be expressed in terms of the triple moment. Its antisymmetry under interchange of $k$ and $j$ guarantees energy conservation in the form: \begin{equation} \int^{\infty}_{0} dk\, T(k,t) =0. \label{encon} \end{equation} With some use of the antisymmetry of $S$, along with equation (\ref{encon}), equation (\ref{fluxbalt1}) may be written as \begin{equation} \frac{\partial}{\partial t}\int_{0}^{\kappa} dk\, E(k,t) = - \int^{\infty}_{\kappa} dk\,\int^{\kappa}_{0} dj\, S(k,j;t) -2 \nu\int_{0}^{\kappa} dk\, k^2 E(k,t). \label{fluxbalt2} \end{equation} In this familiar form, the integral of the transfer term is readily interpreted as the net flux of energy from wavenumbers less than $\kappa$ to those greater than $\kappa$, at any time $t$. This is the well-known basis for the energy cascade. It is usual to introduce a specific symbol $\Pi$ for this energy flux, thus: \begin{equation} \Pi (\kappa,t) = \int^{\infty}_{\kappa} dk\, T(k,t) =-\int^{\kappa}_{0} dk\, T(k,t), \label{tp} \end{equation} where the second equality follows from (\ref{encon}). In order to consider the stationary case, we may introduce an input spectrum $W(k)$. It is also convenient to introduce the dissipation spectrum $D(k,t)$ such that: \begin{equation} D(k,t) = 2\nu k^2 E(k,t). 
\end{equation} With these introductions, and some rearrangement, we may write the energy balance equation as: \begin{equation} \frac{\partial E(k,t)}{\partial t} = W(k) + T(k,t) - D(k,t). \label{enbalt2} \end{equation} Figure (\ref{fig1}) illustrates the general form of the energy transfers involved. \begin{figure} \begin{center} \includegraphics[width=0.65\textwidth, trim=0px 200px 0px 200px,clip]{figs/fig1.pdf} \end{center} \caption{\small A schematic view of the energy transfer in isotropic turbulence. The input spectrum $I(k)$ can represent either the work spectrum $W(k)$ or $-\partial E(k,t)/\partial t$; or the combined effects of both terms. All the other symbols have their usual meaning as defined in the text.} \label{fig1} \end{figure} It should be noted that this general schematic form applies both to the stationary case and the case of free decay, with the input term $I(k)$ being interpreted as appropriate to each case. \section{The paradox and its resolution} The inertial range of wavenumbers is defined as being where the time derivative (or input term) and the viscous term are negligible. Hence, from equation (\ref{enbalt}), it follows that the criterion for an inertial range of wavenumbers can be taken as the vanishing of the transfer spectrum; and, from equation (\ref{tp}), the constancy of the flux. In other words, for wavenumbers $\kappa$ \emph{in the inertial range} we might expect to have: \begin{equation} T(\kappa,t)=0 \qquad \mbox{and} \qquad \Pi(\kappa,t) = \varepsilon. \label{conflux} \end{equation} Scale invariance can be summed up as the observation that the energy spectrum takes the form of a power law (which is in itself scale-free) and that there is a constant rate of energy transfer over a range of wavenumbers, which must necessarily be equal to the rate of energy dissipation. In practice, the second criterion of equation (\ref{conflux}) is widely used to identify the inertial range. 
This criterion was first put forward in 1941 by Obukhov \cite{Obukhov41} and first used to derive the famous $-5/3$ spectrum using dimensional analysis by Onsager in 1945 \cite{Onsager45}. More recently, the books by Leslie \cite{Leslie73} and McComb \cite{McComb90a},\cite{McComb14a} all follow Kraichnan \cite{Kraichnan59b}, and cite the criterion $\Pi=\varepsilon$; as does work by, for instance, Bowman \cite{Bowman96}, Thacker \cite{Thacker97}, and Falkovich \cite{Falkovich06}. However, the first criterion given in equation (\ref{conflux}) only holds for a single wavenumber, and this fact is the scale-invariance paradox. There are two inertial-range criteria in (\ref{conflux}); and, by elementary calculus, they seem to be equivalent. This point is illustrated in Fig. (\ref{fig2}). It shows an extended region where the flux is constant and the transfer spectrum is also zero. This makes an appealingly simple picture of spectral energy transfers, but unfortunately it is wrong. The transfer spectrum always passes through zero at a single point, as illustrated in Fig. (\ref{fig1}). \begin{figure} \begin{center} \includegraphics[width=0.65\textwidth, trim=0px 200px 0px 200px,clip]{figs/fig2.pdf} \end{center} \caption{\small The expected behaviour of $T(k)$, on the basis of elementary calculus, to correspond to the scale invariance of $\Pi(k)$. The fact that $T(k)$ does not behave like that is the scale-invariance paradox.} \label{fig2} \end{figure} This property of $T(k)$ was first discovered in 1963 by Uberoi \cite{Uberoi63} and, later, extensive investigations confirmed that the transfer spectrum always has a single zero-crossing \cite{Bradshaw67,Helland77}, while pragmatic, approximate procedures were introduced to allow the inertial range to be identified from the behaviour of the transfer spectrum \cite{Lumley64}. For a discussion of this topic, see \cite{McComb92}. 
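For completeness, we record here the antisymmetry step that takes equation (\ref{fluxbalt1}) to equation (\ref{fluxbalt2}). Splitting the internal integration over $j$ at the cut-off $\kappa$:

```latex
\begin{eqnarray}
\int^{\kappa}_{0} dk\, T(k,t) & = & \int^{\kappa}_{0} dk \int^{\kappa}_{0} dj\, S(k,j;t)
+ \int^{\kappa}_{0} dk \int^{\infty}_{\kappa} dj\, S(k,j;t) \nonumber \\
& = & -\int^{\infty}_{\kappa} dk \int^{\kappa}_{0} dj\, S(k,j;t),
\end{eqnarray}
```

where the first double integral vanishes by the antisymmetry of $S(k,j;t)$ under interchange of $k$ and $j$, and the second term takes the stated form on relabelling the dummy variables and using $S(j,k;t)=-S(k,j;t)$.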
So, let us consider again equation (\ref{fluxbalt2}) for the transfer of energy from low wavenumbers to high. Now we wish to draw attention to the fact that, although the first term on the right hand side correctly represents the integral over wavenumber $k$ of the transfer spectrum from zero up to $\kappa$, nevertheless the integrand is not actually $T(k)$ (from now on, we shall suppress time arguments in the interests of conciseness). In fact the integrand represents \emph{some part of} $T(k)$, because the internal integration with respect to the dummy variable $j$ has been truncated at $j=\kappa$. In order to clarify this situation, it will be found helpful to introduce low- and high-pass filtering operations, based on a cut-off wavenumber $k=\kappa$, on the Fourier components of the velocity field. These operations are used for the study of spectral mode elimination in the context of large-eddy simulation and its associated subgrid modelling: see, for example, \cite{McComb01a} and references therein. We are thus led to introduce transfer spectra which have been filtered with respect to $k$ and which have had their integration over $j$ partitioned at the filter cut-off, i.e. $j=\kappa$. Beginning with the Heaviside unit step function, defined by: \begin{eqnarray} H(x) & = & 1 \qquad \mbox{for} \qquad x > 0; \\ & = & 0 \qquad \mbox {for} \qquad x < 0. \end{eqnarray} we may define low-pass and high-pass filter functions, thus: \begin{equation} \theta^{-}(x) = 1 - H(x), \end{equation} and \begin{equation} \theta^{+}(x) = H(x). 
\end{equation} We may then decompose the transfer spectrum, as given by (\ref{ts}), into four constituent parts, \begin{equation} T^{--}(k|\kappa) = \theta^{-}(k-\kappa)\int^{\kappa}_{0}dj\, S(k,j); \label{tmm} \end{equation} \begin{equation} T^{-+}(k|\kappa) = \theta^{-}(k-\kappa)\int^{\infty}_{\kappa}dj\, S(k,j); \label{tmp} \end{equation} \begin{equation} T^{+-}(k|\kappa) = \theta^{+}(k-\kappa)\int^{\kappa}_{0}dj\, S(k,j); \label{tpm} \end{equation} and \begin{equation} T^{++}(k|\kappa) = \theta^{+}(k-\kappa)\int^{\infty}_{\kappa}dj\, S(k,j), \label{tpp} \end{equation} such that the overall requirement of energy conservation is satisfied: \begin{equation} \int^{\infty}_{0}dk\left[T^{--}(k|\kappa) + T^{-+}(k|\kappa) + T^{+-}(k|\kappa) + T^{++}(k|\kappa)\right] = 0. \end{equation} It is readily verified that the individual filtered/partitioned transfer spectra have the following properties: \begin{equation} \int^{\kappa}_{0}dk\, T^{--}(k|\kappa) = 0; \label{mm} \end{equation} \begin{equation} \int^{\kappa}_{0}dk\, T^{-+}(k|\kappa) = -\Pi(\kappa); \label{mp} \end{equation} \begin{equation} \int^{\infty}_{\kappa}dk\, T^{+-}(k|\kappa) = \Pi(\kappa); \label{pm} \end{equation} and \begin{equation} \int^{\infty}_{\kappa}dk\, T^{++}(k|\kappa) = 0. \label{pp} \end{equation} Equation (\ref{fluxbalt1}) may be rewritten in terms of the filtered/partitioned transfer spectrum as: \begin{equation} \frac{d}{dt}\int^{\kappa}_{0}dk\, E(k,t) = -\int^{\infty}_{\kappa}dk\, T^{+-}(k|\kappa) -2\nu_{0}\int^{\kappa}_{0}dk\, k^{2}E(k,t). \label{fluxbaltmod} \end{equation} We note from equation (\ref{mm}) that $T^{--}(k|\kappa)$ is conservative on the interval $[0,\kappa]$, and hence does not appear in (\ref{fluxbaltmod}), while $T^{-+}(k|\kappa)$ has been replaced by $-T^{+-}(k|\kappa)$, using (\ref{mp}) and (\ref{pm}). Filtered and partitioned transfer spectra have been measured, using DNS, in the context of spectral large-eddy simulation. 
In particular, Zhou and Vahala \cite{Zhou93a} found that the resolvable-scales energy transfer spectrum $T^{<<}(k)$ (i.e. $T^{--}(k|\kappa)$ in our notation) is conservative on the interval $0\leq k \leq \kappa$, in agreement with our equation (\ref{mm}); while the resolvable-subgrid transfer spectrum (i.e. our $T^{-+}(k|\kappa)$) is zero over a range of wavenumbers. Similar behaviour has also been found in the more detailed investigation by McComb and Young \cite{McComb98}. \begin{figure} \begin{center} \includegraphics[width=0.65\textwidth, trim=0px 200px 0px 0px,clip]{figs/fig3.pdf} \end{center} \caption{\small The behaviour of the filtered-partitioned transfer spectra: the paradox resolved!} \label{fig3} \end{figure} As we have previously pointed out in \cite{McComb08}, experimentalists, who do not have access to partitioned versions of the transfer spectrum, will still find pragmatic procedures, such as the Lumley criterion for the inertial range \cite{Lumley64}, useful. However, those working with DNS or analytical theory can avoid the paradox by changing their definition of energy fluxes, from those given by (\ref{tp}), to the forms\footnote{We should mention that these forms are exactly equivalent to Kraichnan's original definition of what he called the \emph{transport power} \cite{Kraichnan59b}. In later work \cite{Kraichnan64b}, his definition of the transport power was equivalent to equation (\ref{tp}) in the present paper.}: \begin{equation} \Pi (\kappa,t) = \int^{\infty}_{\kappa} dk\, T^{+-}(k|\kappa,t) =-\int^{\kappa}_{0} dk\, T^{-+}(k|\kappa,t), \label{tpmod} \end{equation} where $T^{+-}(k|\kappa,t)$ is defined by (\ref{tpm}) and $T^{-+}(k|\kappa,t)$ by (\ref{tmp}). This is equivalent to (\ref{tp}); but, unlike it, avoids the paradox. This resolution of the paradox is shown schematically in Fig. (\ref{fig3}). 
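The four integral properties (\ref{mm})--(\ref{pp}) follow directly from the antisymmetry $S(k,j) = -S(j,k)$, and they can be checked numerically. The sketch below uses an illustrative antisymmetric model for $S(k,j)$ of our own devising (not a form appearing in the paper):

```python
import math

# Illustrative antisymmetric model: S(k,j) = -S(j,k), positive for k > j,
# so that energy flows from low wavenumbers to high (a toy stand-in only).
def S(k, j):
    return k**2 * j**2 * (k**2 - j**2) * math.exp(-k - j)

def box_integral(k_lo, k_hi, j_lo, j_hi, n=300):
    """Midpoint-rule double integral of S over [k_lo,k_hi] x [j_lo,j_hi]."""
    hk = (k_hi - k_lo) / n
    hj = (j_hi - j_lo) / n
    total = 0.0
    for a in range(n):
        k = k_lo + (a + 0.5) * hk
        for b in range(n):
            total += S(k, j_lo + (b + 0.5) * hj)
    return total * hk * hj

kappa, kmax = 2.0, 25.0          # kmax truncates the infinite upper limits
mm = box_integral(0.0, kappa, 0.0, kappa)    # integral of T^{--}:  zero
mp = box_integral(0.0, kappa, kappa, kmax)   # integral of T^{-+}: -Pi(kappa)
pm = box_integral(kappa, kmax, 0.0, kappa)   # integral of T^{+-}: +Pi(kappa)
pp = box_integral(kappa, kmax, kappa, kmax)  # integral of T^{++}:  zero
```

The conservative pieces vanish on their own intervals, while the two cross terms give equal and opposite contributions: precisely the content of the modified flux definition (\ref{tpmod}).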
\section{Modifications to the Lin equation} In view of the above discussion, the obvious step now is to filter the energy spectrum in the same way as we have done for the transfer spectrum, and consider low-$k$ and high-$k$ forms of the Lin equation. In order to do this we make the decomposition: \begin{equation} E(k,t) = E^-(k|\kappa,t) + E^+(k|\kappa,t), \label{decompE} \end{equation} where $E^-$ is defined for $k\leq \kappa$ and $E^+$ is defined for $k\geq \kappa$. Trivially, we can also do this for the input spectrum $W(k)$ and dissipation spectrum $D(k,t)$, and equation (\ref{enbalt2}) can be written in low-$k$ and high-$k$ forms respectively, as: \begin{equation} \frac{\partial E^-(k|\kappa,t)}{\partial t} = W^-(k|\kappa) + T^{--}(k|\kappa,t) + T^{-+}(k|\kappa,t) - D^-(k|\kappa,t), \quad \mbox{for}\quad k \leq \kappa; \label{enbalt_low} \end{equation} and \begin{equation} \frac{\partial E^+(k|\kappa,t)}{\partial t} = W^+(k|\kappa) + T^{+-}(k|\kappa,t) + T^{++}(k|\kappa,t) - D^+(k|\kappa,t), \quad \mbox{for}\quad k \geq \kappa. \label{enbalt_high} \end{equation} For this decomposition to be meaningful, the Reynolds number must be large enough for the inertial flux to be equal to the dissipation, in accordance with the second criterion of equation (\ref{conflux}). As we increase the Reynolds number beyond this critical value, we have an increasing range of wavenumbers $k$ which satisfy that criterion, and this is the \emph{inertial range}. We shall denote this range by \[ k_{\mbox{\scriptsize bot}} \leq k \leq k_{\mbox{\scriptsize top}} \quad \equiv \quad \mbox{the inertial range of wavenumbers,} \] where we now have to define $k_{\mbox{\scriptsize bot}}$ and $k_{\mbox{\scriptsize top}}$. For the sake of simplicity, we will consider stationary turbulence and omit the time variables. First, we need to consider the nature of the forcing spectrum $W(k)$. 
In formulating the turbulence problem according to the tenets of statistical physics, this is normally taken to arise from the introduction of random stirring forces, which are assumed to be of \emph{white noise} form. In particular, the forcing spectrum is taken to be peaked near the origin in wavenumber space, so that the turbulence that results from it is due to the Navier-Stokes equation, and not specifically related to the forcing. We should note that a different view was taken from the late 1970s onwards, in connection with the application of renormalization group methods to the Navier-Stokes equation. See either of the books \cite{McComb90a} or \cite{McComb14a} for a general discussion of this point. Accordingly, for theoretical approaches to the statistical closure problem, and also for direct numerical simulation, we should choose a form of forcing spectrum $W(k)$ which satisfies the conditions: \begin{equation} \int_0^\infty dk W(k) = \varepsilon_W \simeq \int_0^{k_{\mbox{\scriptsize bot}}} dk W(k), \label{kbot} \end{equation} where the equality defines $\varepsilon_W$, while the approximate equality defines $k_{\mbox{\scriptsize bot}}$, which we take to be the lower limit of the inertial range. In general, we would require $k_{\mbox{\scriptsize bot}}$ to be very much smaller than the Kolmogorov dissipation wavenumber $k_d$, which is generally taken as being an indicator of the dissipation range of wavenumbers. Experimenters have usually taken the upper limit of the inertial range to lie between $0.1 k_d$ and $0.2 k_d$. In fact we will define $k_{\mbox{\scriptsize top}}$ by another approximate equality, thus: \begin{equation} \int_0^\infty dk D(k) = \varepsilon \simeq \int_{k_{\mbox{\scriptsize top}}}^\infty dk D(k), \label{ktop} \end{equation} where the equality is the conventional definition of the dissipation rate, and the approximate equality defines the upper limit of the inertial range $k_{\mbox{\scriptsize top}}$. 
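These two defining conditions are easily made concrete. In the sketch below, the model spectra, the placement of the dissipation peak at a model $k_d = 100$, and the 99\% tolerance used to read off the approximate equalities are all our own assumptions for illustration:

```python
import math

# Illustrative model spectra (assumptions for this sketch, not from data):
# forcing peaked near the origin, dissipation the same shape rescaled to
# peak near a model dissipation wavenumber k_d = 100.
def W(k):
    return k**4 * math.exp(-2.0 * k**2)

def D(k):
    return (k / 100.0)**4 * math.exp(-2.0 * (k / 100.0)**2)

dk = 0.01
ks = [i * dk for i in range(1, 50001)]       # wavenumber grid up to k = 500

wcum, dcum, w, d = [], [], 0.0, 0.0
for k in ks:                                  # cumulative rectangle-rule sums
    w += W(k) * dk
    d += D(k) * dk
    wcum.append(w)
    dcum.append(d)
eps_W, eps = wcum[-1], dcum[-1]               # total input and dissipation

# k_bot: smallest kappa with 99% of the input below it (cf. eq. for k_bot);
# k_top: largest kappa with 99% of the dissipation above it (cf. eq. for k_top).
k_bot = next(k for k, s in zip(ks, wcum) if s >= 0.99 * eps_W)
k_top = max(k for k, s in zip(ks, dcum) if eps - s >= 0.99 * eps)
```

With these model shapes, $k_{bot}$ comes out of order a few and $k_{top}$ of order tens, leaving a wide intermediate range of wavenumbers that carries neither significant input nor significant dissipation.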
With these points in mind, we may simplify the low-wavenumber and high-wavenumber forms of the Lin equation, respectively (\ref{enbalt_low}) and (\ref{enbalt_high}), to: \begin{equation} \frac{\partial E^-(k|\kappa,t)}{\partial t} = W(k) + T^{--}(k|\kappa,t) + T^{-+}(k|\kappa,t), \quad \mbox{for}\quad k \leq \kappa; \label{Lin-low} \end{equation} and \begin{equation} \frac{\partial E^+(k|\kappa,t)}{\partial t} = T^{+-}(k|\kappa,t) + T^{++}(k|\kappa,t) - D(k,t), \quad \mbox{for}\quad k \geq \kappa. \label{Lin-high} \end{equation} That is, for sufficiently high Reynolds numbers, and an appropriate choice of stirring forces, we may simplify matters by treating the input spectrum as being confined to the low-wavenumber region and the dissipation spectrum as being confined to the high-wavenumber region. Deriving the flux balance equations from (\ref{Lin-low}) and (\ref{Lin-high}), and invoking equations (\ref{kbot}) and (\ref{ktop}), we obtain the final flux balances as: \begin{equation} \varepsilon_W - \Pi(\kappa) = 0 \quad \mbox{for}\quad k \leq \kappa; \end{equation} and \begin{equation} \Pi(\kappa) - \varepsilon = 0 \quad \mbox{for}\quad k \geq \kappa. \end{equation} Reminding ourselves that the transfer spectrum has its single zero crossing at $k=k_\ast$, we may define the maximum value of the inertial flux as \begin{equation} \Pi_{\mbox{max}} = \Pi(k_\ast) = \varepsilon_T, \end{equation} and at the same time introduce the useful symbol $\varepsilon_T$ for the maximum flux. Since $k_\ast$ must lie within the inertial range, we can write the general criterion for the existence of the inertial range as: \begin{equation} \Pi(\kappa) = \varepsilon_T = \varepsilon_W = \varepsilon. \end{equation} For completeness it should be noted that this analysis is readily extended to the case of free decay, if we replace $\varepsilon_W$ by the energy decay rate $\varepsilon_D$. Further details may be found in \cite{McComb14a}. 
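For clarity, the step leading to the low-wavenumber flux balance may be spelled out. Integrating (\ref{Lin-low}) over $0 \leq k \leq \kappa$ for stationary turbulence, with $\kappa \geq k_{\mbox{\scriptsize bot}}$, gives:

```latex
% Low-wavenumber flux balance: integrate (Lin-low) over [0, kappa]
% in the stationary case, with kappa >= k_bot:
\begin{equation*}
0 = \int^{\kappa}_{0}dk\, W(k)
  + \int^{\kappa}_{0}dk\, T^{--}(k|\kappa)
  + \int^{\kappa}_{0}dk\, T^{-+}(k|\kappa)
  \simeq \varepsilon_W + 0 - \Pi(\kappa),
\end{equation*}
```

where the first integral is $\varepsilon_W$ by (\ref{kbot}), the second vanishes by (\ref{mm}), and the third is $-\Pi(\kappa)$ by (\ref{mp}). The high-wavenumber balance $\Pi(\kappa) - \varepsilon = 0$ follows in exactly the same way from (\ref{Lin-high}), using (\ref{ktop}), (\ref{pm}) and (\ref{pp}).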
\section{Conclusion} Provided we are faced with the ideal situation, where the input and the output (\emph{i.e.} dissipation) are well separated in wavenumber space, equations (\ref{Lin-low}) and (\ref{Lin-high}) may provide a new, and one might hope, productive basis for the study of the energy transfers in isotropic turbulence. The corresponding partitioned-filtered Navier-Stokes equations are readily deduced and may be studied by direct numerical simulation as a four-component composite dynamical system, where the four components correspond to the four filtered-partitioned transfer spectra. Also, there is a growing use of hybrid approaches in fluid dynamics problems, and the closure problem could be approached in such a way by using different methods to tackle the different filtered-partitioned transfer spectra. For instance, in the low-$k$ system, we might use the local energy transfer theory \cite{McComb17a} for $T^{--}(k)$, and renormalization group methods \cite{McComb06} for $T^{-+}(k)$; or, conceivably, the other way round! It would require investigation. For the ideal situation just discussed, where we have the input and output (or, production and dissipation) ranges of wavenumber well separated, we need to choose the input spectrum $W(k)$ to be peaked near the origin; and also we need the Reynolds number to be reasonably high. If, for some reason, we cannot satisfy these conditions, then we must resort to equations (\ref{enbalt_low}) and (\ref{enbalt_high}). However, even so, we must still have the Reynolds number large enough for the condition for the existence of an inertial range to be satisfied. Lastly, I should emphasise that Fig. (\ref{fig3}) is very much a schematic indication of how this graph should look, based on the small amount of information available to us. 
The behaviour of these filtered-partitioned transfer spectra was studied in the 1990s in the context of subgrid modelling and renormalization group methods: see \cite{McComb08} for references. Computing power has advanced greatly since then, so we end with a plea for this field of study to be revived in the light of later work. An informal introduction to this topic may be found in the post of 23 July on the following weblog: blogs.ed.ac.uk/physics-of-turbulence/. \section*{Acknowledgements} I wish to thank John Morgan, who worked on this topic with me as part of his MPhys research project in the academic year 2018/19. It was John's idea to plot Fig. (\ref{fig3}) in order to make the resolution of the scale-invariance paradox clearer, and he also prepared the figures.
\section{Introduction} \label{sec:intro} \input{src/sections/introduction} \section{State-of-the-Art in \ac{ML}-based RF Signal Classification} \label{sec:related} \input{src/sections/related} \section{\ac{RAT} Characterisation} \label{sec:classifier} \input{src/sections/classifier.tex} \section{Dataset Generation} \label{sec:implementation} \input{src/sections/implementation.tex} \section{Performance Evaluation} \label{sec:validation} \input{src/sections/validation} \section{Conclusion} \label{sec:conclusion} \input{src/sections/conclusion} \section*{Acknowledgements} The research leading to this work received funding from the European Horizon 2020 Program under the grant agreement No. 732174 (ORCA project). In addition, this work was partly funded by Science Foundation Ireland (SFI) and the National Natural Science Foundation of China (NSFC) under the SFI-NSFC Partnership Programme Grant Number 17/NSFC/5224. \balance \bibliographystyle{./templates/IEEEtran} \section*{Acronyms} \begin{acronym}[IMT-Advanced] \acro{NR-U}{New Radio Unlicensed} \acro{IoU}{intersection over union} \acro{mAP}{mean Average Precision} \acro{LSTM}{long short term memory} \acro{FNN}{fully connected neural network} \acro{RForest}{Random Forest} \acro{3GPP}{3rd Generation Partnership Project} \acro{ABS}{Almost Blank Subframes} \acro{Adam}{Adaptive Moment Optimisation} \acro{ADC}{Analogue-to-Digital Converter} \acro{AMPS}{Advanced Mobile Phone System} \acro{AoA}{Angle of Arrival} \acro{AoD}{Angle of Departure} \acro{AP-CNN}{Amplitude and Phase shift \ac{CNN}} \acro{ASP}{Antenna Scan Period} \acro{BBU}{Baseband Unit} \acro{BER}{Bit Error Rate} \acro{BLER}{Block Error Rate} \acro{BPSK}{Binary \ac{PSK}} \acro{BS}{Base Station} \acro{bw}{bandwidth} \acro{CBRS}{Citizens Broadband Radio Service} \acro{CDMA}{Code Division Multiple Access} \acro{CDM}{Code Division Multiplexing} \acro{CFO}{Carrier \acl{FO}} \acro{C-MTC}{Mission-Critical \acs{MTC}} \acro{CN}{Core Network} \acro{CNN}{Convolutional 
Neural Network} \acro{CP}{Cyclic Prefix} \acro{C-RAN}{Cloud-\ac{RAN}} \acro{CR}{Cognitive Radio} \acro{CriC}{Critical Communication} \acro{CSAT}{Carrier Sense Adaptive Transmission} \acro{CS}{Cyclic Suffix} \acro{CSMF}{Communication Service Management Function} \acro{CV}{Computer Vision} \acro{DAC}{Digital-to-Analogue Converter} \acro{D-AMPS}{Digital \acs{AMPS}} \acro{DC}{Direct Current} \acrodefplural{RAT}[RATs]{Radio Access Technologies} \acrodefplural{SDS}[SDSs]{Software-defined Switches} \acro{DEQUE}{Double-Ended Queue} \acro{DL}{Deep Learning} \acro{DNN}{Deep Neural Network} \acro{DSA}{Dynamic Spectrum Access} \acro{DS-CDMA}{Direct Sequence \acs{CDMA}} \acro{E2E}{end-to-end} \acro{ECC}{Electronic Communications Committee} \acro{EDGE}{Enhanced Data rates for \acs{GSM} Evolution} \acro{eMBB}{Enhanced \acl{MBB}} \acro{eV2X}{Enhanced \ac{V2X}} \acro{EV-DO}{Evolution-Data Optimized} \acro{FCC}{Federal Communications Commission} \acro{FD}{Frame Duration} \acro{FDMA}{Frequency Division Multiple Access} \acro{FDM}{Frequency Division Multiplexing} \acro{FHSS}{Frequency-hopping spread spectrum} \acro{FI}{Frame Interval} \acro{FM}{Frequency Modulation} \acro{FO}{Frequency Offset} \acro{FPGA}{Field-Programmable Gate Array} \acro{FS}{Flow Space} \acro{FSK}{Frequency-Shift Keying} \acro{GLDB}{Geo-Location Database} \acro{GP-KNN}{Genetic Programming with K-Nearest Neighbors} \acro{GPP}{general purpose process} \acro{GPRS}{General packet radio service} \acro{GPU}{Graphics Processing Unit} \acro{GRC}{Global Radio Coordinator} \acro{GSM}{Global System for Mobile communication} \acro{H2H}{Human-to-Human} \acro{HetNet}{Heterogeneous Network} \acro{HSDPA}{High Speed Downlink Packet Access} \acro{HSPA}{High Speed Packet Access} \acro{HSUPA}{High-Speed Uplink Packet Access} \acro{iDEN}{Integrated Digital Enhanced Network} \acro{IFD}{Inter-Frame Duration} \acro{IMT-2000}{International Mobile Telecommunications-2000} \acro{IMT-Advanced}{International Mobile 
Telecommunications-Advanced} \acro{INR}{Interference-to-Noise Ratio} \acro{IoT}{Internet of Things} \acro{IPM}{Intra-Pulse Modulation} \acro{ISM}{Industrial, Scientific and Medical} \acro{ITU}{International Telecommunication Union} \acro{LBT}{Listen Before Talk} \acro{LFM}{Linear Frequency Modulation} \acro{LRC}{Local Radio Controller} \acro{LTE-A}{\acs{LTE}-Advanced} \acro{LTE}{Long-Term Evolution} \acro{LTE-U}{\ac{LTE} in unlicensed spectrum} \acro{LWA}{\acs{LTE}-\acs{WLAN} aggregation} \acro{MCD}{Measurement Capable Device} \acro{mIoT}{Massive \ac{IoT}} \acro{ML}{Machine Learning} \acro{M-MTC}{Massive \acs{MTC}} \acro{MNO}{Mobile Network Operator} \acro{MTC}{Machine Type Communication} \acro{MVNO}{Mobile Virtual Network Operator} \acro{NMT}{Nordic Mobile Telephone} \acro{NSMF}{Network Slice Management Function} \acro{NS}{Network Slice} \acro{NSSMF}{Network Slice Subnet Management Function} \acro{OFDMA}{Orthogonal Frequency Division Multiple Access} \acro{OFDM}{Orthogonal Frequency Division Multiplexing} \acro{ONF}{Open Network Foundation} \acro{OS}{Operational System} \acro{OTT}{Over-The-Top} \acro{PDC}{Personal Digital Cellular} \acro{PER}{Packet Error Rate} \acro{PLC}{Process Logic Controller} \acro{POCSAG}{Post Office Code Standardization Advisory Group} \acro{PRI}{Pulse Repetition Interval} \acro{PSD}{Power Spectral Density} \acro{PSK}{Phase-Shift Keying} \acro{PW}{Pulse Width} \acro{QoE}{Quality of Experience} \acro{QoS}{Quality of Service} \acro{RAN}{Radio Access Network} \acro{RAT}{Radio Access Technology} \acro{ReLU}{Rectified Linear Unit} \acro{REM}{Radio Environment Map} \acro{RF}{Radio Frequency} \acro{RMSE}{Root Mean Squared Error} \acro{RMSProp}{Root Mean Square Propagation} \acro{RSSI}{Received Signal Strength Indicator} \acro{Rx}{receiver} \acro{S/A}{Sensors/Actuators} \acro{S-CNN}{Spectrogram \ac{CNN}} \acro{SDN}{Software Defined Network} \acro{SDR}{Software-Defined Radio} \acro{SDS}{Software-defined Switch} 
\acro{SER}{Symbol Error Rate} \acro{SFI}{Science Foundation Ireland} \acro{SIMD}{Single Instruction Multiple Data} \acro{SINR}{Signal-to-Interference-plus-Noise Ratio} \acro{SNR}{Signal-to-Noise Ratio} \acro{SRO}{Symbol Rate Offset} \acro{SU}{Secondary User} \acro{SVM}{Support Vector Machine} \acro{TACS}{Total Access Communications System} \acro{TDMA}{Time Division Multiple Access} \acro{TDM}{Time Division Multiplexing} \acro{TVWS}{TV White Space} \acro{Tx}{transmitter} \acro{UE}{User Equipment} \acro{UHD}{\acs{USRP} Hardware Driver} \acro{UMTS}{Universal Mobile Telecommunications System} \acro{URLLC}{Ultra-Reliable Low Latency Communication} \acro{USRP}{Universal Software Radio Peripheral} \acro{V2X}{Vehicular-to-Everything} \acro{VM}{Virtual Machine} \acro{VOC}{Visual Object Classes} \acro{WCDMA}{Wideband Direct Sequence \acs{CDMA}} \acro{WiMax}{Worldwide Interoperability for Microwave Access} \acro{WLAN}{Wireless Local Area Network} \acro{WMN}{Wireless Mesh Network} \acro{WMWG}{Wireless and Mobile Working Group} \acro{WNV}{Wireless Network Virtualization} \acro{WSN}{Wireless Sensor Network} \acro{YOLO}{You Only Look Once} \acro{5G}{fifth generation of wireless technology} \end{acronym} \subsection{Image-based \ac{RAT} Classifier} We developed a \ac{CNN}-based classifier for recognising different \acp{RAT} coexisting in shared spectrum. Our classifier can identify multiple \acp{RAT} by directly applying object detection to spectrograms. The \ac{CNN} must be trained and validated against target objects. Depending on the size of the neural network and the computing platform available, the training and validation of the \ac{CNN} from scratch may take between hours and days. One way to reduce this time is by applying transfer learning, which relies on the partial reuse of a previously trained model (trained on a different set of tasks) for addressing a new task. 
This implies retraining an existing network, typically by fine-tuning the weights of the hidden layers close to the output layer, to make the network more suitable to the new task. As such, the first layers, which are typically good at extracting basic features such as edge detection in computer vision tasks, are reused for the new task as well. Transfer learning significantly decreases the amount of data required for the training process and, consequently, the duration of the training process. The application of transfer learning requires the choice of a previously trained network as a starting point. A broad range of pre-trained networks already exists; these are suitable for different problems, e.g., predictive text, speech recognition, and image object detection. For the spectrum sharing scenario, where it is necessary to dynamically assess how the spectrum is being occupied, we need a model that can provide acceptable classification accuracy in real-time. We also require a solution that can provide not just the classification of the object, but also its localisation in the image (as discussed later, we rely on this localisation information for feature extraction). We employ the well-known object detection model \ac{YOLO} \cite{yolo} as the starting point for our \ac{RAT} classifier. \ac{YOLO} is one of the most efficient solutions in the literature for real-time implementation of object detection. This model outputs both the class of the detected objects and their position in the input image. Using weights and architecture from \ac{YOLO} pre-trained on ImageNet \cite{imagenet}, we modify the Softmax layer, which corresponds to the last layer before the output of the model. During the training process, the Softmax layer is explicitly optimised for the classification of \ac{LTE} and WiFi waveforms. The architecture we adopted is presented in \cite{yolo2}; it has 19 convolution layers and 5 max-pooling layers. 
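The freeze-and-retrain idea behind transfer learning can be illustrated in miniature. The sketch below is not our YOLO pipeline: it stands in a fixed random feature extractor for the reused early layers and trains only a new output layer; the data, dimensions, and hyperparameter values are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" layer: frozen weights standing in for the reused early layers.
W_frozen = rng.standard_normal((2, 16))

def features(x):
    return np.tanh(x @ W_frozen)      # frozen feature extractor (never updated)

# Toy binary task standing in for LTE-vs-WiFi classification.
X = rng.standard_normal((200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Transfer-learning step: train only the new output layer on top of the
# frozen features (logistic regression by full-batch gradient descent).
w, b, lr = np.zeros(16), 0.0, 0.5
F = features(X)
for _ in range(300):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))   # sigmoid output layer
    grad = p - y                              # cross-entropy gradient
    w -= lr * F.T @ grad / len(X)
    b -= lr * grad.mean()

accuracy = float(((F @ w + b > 0) == (y > 0.5)).mean())
```

Only `w` and `b` are updated; `W_frozen` keeps its "pretrained" values, which is why far less data and training time are needed than when training from scratch.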
Moreover, our model can easily be extended to support more \acp{RAT}, by retraining it with datasets that include new waveforms. \begin{figure}[t] \centering \begin{subfigure}[b]{0.49\columnwidth} \includegraphics[width=\linewidth]{lte.png} \caption{\ac{LTE} detection.} \label{fig:1} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\columnwidth} \includegraphics[width=\linewidth]{wifi.png} \caption{WiFi detection.} \label{fig:2} \end{subfigure} \caption{Spectrogram with the bounding boxes created by our \ac{ML}-based signal classifier. The positions of the bounding boxes represent the detection of the frame, and the colour represents the classification, blue for LTE and white for WiFi.}\label{fig:spec} \end{figure} The training itself requires the fine-tuning of the hyperparameters: parameters chosen before the training process, such as the learning rate, the optimiser, and the number of epochs, which govern the convergence of the classification. We detail our choices for the hyperparameters below: \begin{itemize} \item {Learning Rate}: is the amount by which the weights in an ML model are updated. We set it to $10^{-5}$; with this value, the model did not overfit and was able to learn the objects' characteristics. \item {Epoch}: is one iteration of the training process, in which the model is presented with every element of the training dataset. If a model is trained with too many epochs, it can overfit to the training data, while if a model uses too few epochs, it might not learn the necessary features to perform the classification. After testing several values, we set the number of epochs to 50,000. \item {Mini-batch}: is the part of the dataset used for each update of the network's weights. The first approaches in \ac{ML} used the entire dataset to update the weights in the network; however, the work of~\cite{masters2018revisiting} argues that this update should use a smaller part of the dataset, called a mini-batch. 
The mini-batch approach can increase model performance when it uses batch sizes between 2 and 32 \cite{masters2018revisiting}. In the early stages of the design process of our solution, we observed good performance when setting the mini-batch size to 32. \item{Optimiser}: is the function that modifies the weights of each neuron with the purpose of minimising the loss function. The loss function indicates how close the output of the model is to the expected result. The main objective of the learning process is to optimise the loss function, making the predicted output closer to the expected one without over-fitting to the training data. We chose the \ac{Adam} optimiser because it accelerates the search for the minimum value of the loss function and reduces oscillations. \end{itemize} After training, our model produces the identification of the \ac{RAT} (i.e., the result of the classification) and the coordinates of each frame detected in the spectrogram image. Figure~\ref{fig:spec} shows examples of LTE and WiFi frames detected, surrounded by bounding boxes: blue for LTE, white for WiFi. The four coordinates of each of these bounding boxes are used by the feature extraction component, discussed next. Once our model is trained and validated, it can provide results on the fly, making it suitable for real-time applications. Our classifier analyses images in batches of three, providing three outputs at the same time; this allows us to parallelise the classification task and use multiple cores in parallel. A trade-off that is important to consider is the implication of this design choice for real-time detection and \ac{RAT} classification: the number of images analysed simultaneously cannot be too large, otherwise the model will not operate in real-time. In our implementation, we evaluated the classification speed using a computer with an Intel Core i7-6820HK processor and a GeForce GTX 1070 Mobile. 
With this commercial off-the-shelf \ac{GPU}, we are able to analyse three images in around 0.1ms with 2 classes and trained with a commercial transmission dataset (described later). \begin{comment} \begin{table}[] \centering \caption{Summary of the hyperparameters for the classifier model.} { \begin{tabular}{|l|l|} \hline Parameters & Value \\ \hline Learning rate & $10^{-5}$ \\ Epochs & 50.000 \\ Batch & 32 \\ Optimiser & Adam \\\hline Training dataset & 400 spectograms\\\hline \end{tabular} } \label{tab:hyper} \end{table} \end{comment} \subsection{Post-processing Feature Extraction} Once the classification of the \ac{RAT} is completed, the feature extraction component allows us to obtain additional information about the \acp{RAT} present in a given channel. The spectrogram corresponds to a band of frequencies [$f_1$, $f_2$], collected during a time interval [$t_1$, $t_2$]. Then, we calculate the granularity that each pixel in the image represents in the time and frequency domains, as an increment value in time ($I_T$) and frequency ($I_F$), respectively. This mapping depends on the size of the spectrogram ($[X_{min},X_{max}], [Y_{min},Y_{max}]$)\footnote{Note that uppercase $X$ and $Y$ refer to the spectrogram, and lowercase $x$ and $y$ refer to the bounding box around a frame.}. The trained model provides the corners of a rectangle that encloses a transmission frame, denoted by coordinates $x_{min}, x_{max}, y_{min}, y_{max}$. Given the coordinates of this rectangle, i.e., the bounding box, as well as the values of each time and frequency increment, we can localise the signals in the spectrum and in time. In order to calculate the bandwidth of the signal ($b_w$) and its centre frequency ($f_c$), we use the horizontal coordinates of the corners of the bounding box, translating them into their respective value in frequency. The \ac{FD} of the signal is calculated in a similar manner, but now using the vertical coordinates of the corners of the bounding box. 
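The pixel-to-physical mapping just described can be written down directly. The helper below is a hypothetical illustration (the function name, the 512x512 image size, and the example bounding box are our inventions), following the increments $I_f$ and $I_t$ and the formulas for $b_w$, $f_c$ and FD:

```python
# Hypothetical helper illustrating the pixel-to-physical mapping described
# above; variable names (I_f, I_t, b_w, f_c, FD) follow the paper, while
# the example numbers below are made up for illustration.
def extract_features(box, spec_px, freq_band, time_span):
    x_min, x_max, y_min, y_max = box            # bounding box (pixels)
    (X_min, X_max), (Y_min, Y_max) = spec_px    # spectrogram size (pixels)
    f1, f2 = freq_band                          # band edges (Hz)
    t1, t2 = time_span                          # time interval (s)
    I_f = (f2 - f1) / (X_max - X_min)           # Hz per pixel
    I_t = (t2 - t1) / (Y_max - Y_min)           # seconds per pixel
    b_w = (x_max - x_min) * I_f                 # signal bandwidth
    f_c = f1 + I_f * x_min + b_w / 2.0          # centre frequency
    FD = (y_max - y_min) * I_t                  # frame duration
    return b_w, f_c, FD

# A 20 MHz band over 50 ms rendered as a 512x512 spectrogram, with a
# detected frame spanning pixels x in [100, 356] and y in [0, 64]:
bw, fc, fd = extract_features((100, 356, 0, 64), ((0, 512), (0, 512)),
                              (0.0, 20e6), (0.0, 0.05))
```

For this example box, the frame occupies half the band (10 MHz wide, centred at about 8.9 MHz above the lower band edge) and lasts one eighth of the spectrogram's time span.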
To calculate the average \ac{FI}, we must first calculate the time the channel stays without a transmission (CWT), i.e., the total time represented in a spectrogram minus the time occupied by frame transmissions. Then, the average \ac{FI} is given by CWT divided by the number of frames detected on the spectrogram, $n_f$. We summarise the formulas we use for extracting the features of different \acp{RAT} in Table~\ref{tab:formulas}, and illustrate the representation of the relevant values on a spectrogram in Figure \ref{fig:post}. \begin{table}[] \centering \caption{The mapping between the image position and the parameters of interest in time and frequency domains; $n_f$ denotes the number of detected frames and $FD_{av}$ their average duration.} \resizebox{0.49\textwidth}{!} { \begin{tabular}{|l|l|} \hline Parameters Time/Frequency & Position Mapping \\ \hline $I_t$ & $(t_2-t_1)/(Y_{max}-Y_{min})$ \\ $I_f$ & $(f_2-f_1)/(X_{max}-X_{min})$ \\ $b_w$ & $(x_{max} - x_{min}) * I_f$ \\ $f_c$ & $f_1+(I_f*x_{min})+(b_{w}/2)$ \\ FD & $(y_{max} - y_{min})*I_t$ \\ CWT & $(t_2-t_1) - (n_f * FD_{av})$ \\ FI & $CWT/n_f$ \\ \hline \end{tabular} } \label{tab:formulas} \end{table} \begin{figure} \includegraphics[width=\columnwidth]{images/post2.png} \caption{Parameters representation in a spectrogram.} \label{fig:post} \vspace{-1em} \end{figure} \subsection{Tree of tasks} \begin{comment} \begin{figure*}[t!] 
\centering \begin{subfigure}[t]{0.33\textwidth} \includegraphics[width=\columnwidth]{Picture6.png} \caption{Probability of detection.} \label{fig:specs} \end{subfigure}\hfill \begin{subfigure}[t]{0.33\textwidth} \includegraphics[width=\columnwidth]{Picture7.png} \caption{\ac{RMSE} of the \ac{SNR} estimator.} \label{fig:graph2} \end{subfigure}\hfill \begin{subfigure}[t]{0.33\textwidth} \includegraphics[width=\columnwidth]{Picture8.png} \caption{\ac{RMSE} of the \ac{CFO} estimator.} \label{fig:graph3} \end{subfigure} \caption{Evaluation of the preamble with a length of 1031 samples, used to synchronise the \acp{SDR} in the dataset \ac{RF} generator.}\label{fig:preamble} \end{figure*} \end{comment} Our dataset generator allows the generation of datasets with different: (i) waveforms, e.g., WiFi, \ac{LTE}, and \ac{PSK} signals; (ii) waveform-specific features, e.g., modulation order and frame length, and DSP transformations, e.g., \ac{FO}, soft gains, shape filtering, and multipath emulation; (iii) \ac{RF} parameters, e.g., centre frequency, hardware gains. Each permutation of parameters and waveform types is translated into IQ signals that are transmitted over the air between \acp{SDR}. Then, the received IQ signals and the associated parameters are stored in data files for later access. We developed a pipeline-based approach for generating traces of \ac{RF} waveforms with different characteristics. The process is implemented as a graph of individual tasks, e.g., producing a waveform, setting the frame duration, and setting the transmission gain. Each task can be configured and run independently. Each of the task's parameters can be a list of different values, and the task generates respective output files for all the input values. The subsequent task receives a set of different input files from the previous task and performs its operation on all of them. 
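The task pipeline described above can be sketched as follows. Task names and parameter values are invented for illustration, and each "file" is reduced to a dict of labels:

```python
# Each task expands every input item over its list of parameter values and
# hands all resulting outputs to the next task, mimicking the pipeline
# (produce waveform -> set frame duration -> set transmission gain).
def run_task(inputs, param_name, values):
    return [{**item, param_name: v} for item in inputs for v in values]

stage = [{}]                                   # the pipeline's seed input
stage = run_task(stage, "waveform", ["wifi", "lte"])
stage = run_task(stage, "frame_duration_ms", [10, 20])
stage = run_task(stage, "tx_gain_db", [0, 10, 20])
# 2 waveforms x 2 frame durations x 3 gains = 12 labelled outputs
```

Because each task consumes only the outputs of the previous one, individual stages can be reconfigured, rerun, or parallelised independently, which is what makes resuming from intermediate points straightforward.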
Such a pipeline-based approach facilitates the extension and inclusion of new tasks, the parallelisation of tasks, and resuming from intermediate points. \subsection{Synchronisation and Channel Estimation} A compelling aspect of our dataset generator is the automatic labelling it provides, which is essential for the process of training an \ac{ML} model. The labelling is created in different formats, including the \ac{VOC} format that is used in object detection approaches. To provide automatic labelling, it is essential to keep the \ac{SDR} transmitter and receiver synchronised so that the labels of their transmitted and received samples remain consistent. We accomplish the synchronisation and channel estimation through the periodic transmission of preambles. The preamble used during dataset generation must be strongly robust to noise, so that samples can be collected at the low \ac{SNR} levels that are generally required in \ac{RF} signal/waveform classification use cases. We chose a preamble structure composed of several short Zadoff-Chu sequences, phase-shifted by an m-sequence, for coarse frequency and time offset estimation and disambiguation, followed by a long Zadoff-Chu sequence for precise frequency offset estimation. For this study, we selected a preamble length of 1031 samples to guarantee robust synchronisation, with a probability of preamble detection close to 1 even for \ac{SNR} values lower than -5 dB. Whenever preamble synchronisation fails, the generator triggers a retransmission. \subsection{Detection and Classification Performance} In this section, we evaluate the detection performance and classification accuracy of our model, and demonstrate its robustness in detecting and classifying RF waveforms under different SNR conditions and interference levels. 
We used the dataset generator described in the previous section to compose a dataset of images, i.e., spectrograms and labels, for two radio access technology classes, LTE and WiFi. This scenario resembles real-world use cases of coexistence in unlicensed spectrum~\cite{wifilte}. Moreover, our model can be extended, for instance, by increasing the diversity of the RATs included in the training dataset. Extending the training dataset might be useful in a scenario where a technology operating in the unlicensed spectrum shares it with Bluetooth or Zigbee, for example. \begin{comment} \begin{figure}[t] \includegraphics[width=\columnwidth]{demo.jpg} \caption{Experimental setup with three Ettus USRP B210s.} \label{fig:procedure} \vspace{-1em} \end{figure} \end{comment} \subsubsection{Performance of the Classifier Under Different SNRs} In this analysis, we evaluate the detection and classification performance of our solution under different SNR conditions. For this evaluation, we generated a dataset with different levels of transmission power, measuring the SNR at the receiver side. We used 400 images to train the model and adopted the configuration described in Section \ref{sec:classifier}, which empirically produced satisfactory results. As explained in Section \ref{sec:implementation}, our dataset generator has a minimum SNR threshold for over-the-air preamble synchronisation. The measurements start at an SNR of -13 dB and go up to 35 dB. Each spectrogram represents a 50 ms time interval and a 20 MHz band. \begin{figure} \includegraphics[width=\linewidth]{images/snr-75.png} \caption{Percentage of correctly detected objects and precision as a function of SNR.} \label{fig:snr} \end{figure} First, we are interested in assessing the ability of our model to detect the transmitted frames correctly. The top curve in Figure \ref{fig:snr} shows the percentage of correctly detected frames as a function of SNR.
Detection is around 98\% for all SNR values tested, except -13 dB: at that SNR, the edges of the transmitted frames are not as sharp, as illustrated in Figure~\ref{fig:snrs}, resulting in a lower probability of detection. Next, we are interested in assessing our model's ability to classify the detected frames. The precision metric is commonly used in classification problems \cite{metrics}; it represents the percentage of all detected frames that are correctly classified. The precision is shown in Figure~\ref{fig:snr} and varies from 86\% at an SNR of -13 dB to 98\% at SNRs between -3 and 32 dB. For the highest SNRs, 32 and 35 dB, we obtained an accuracy of 96\%. It is worth mentioning that when the SNR is very high, the leakage in the transmission also increases, as illustrated in Figure~\ref{fig:snrs}, which in our evaluation reduced classification accuracy by 2\%. Figure~\ref{fig:snr} shows that when the SNR is low, both the ability to detect a frame and the ability to classify it correctly are impaired. Although the higher leakage does not influence the ability to detect the frames, it slightly affects the classification performance. \begin{figure}[t] \centering \begin{subfigure}[b]{0.3\columnwidth} \includegraphics[width=\linewidth]{images/wifi-02.png} \caption{WiFi detection for SNR of -13 dB.} \label{fig:wifi-02} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\columnwidth} \includegraphics[width=\linewidth]{images/wifi-05.png} \caption{WiFi detection for SNR of 12 dB.} \label{fig:wifi-05} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\columnwidth} \includegraphics[width=\linewidth]{images/wifi-099.png} \caption{WiFi detection for SNR of 35 dB.} \label{fig:wifi-099} \end{subfigure} \caption{Illustration of WiFi signals under different SNRs.}\label{fig:snrs} \end{figure} \subsubsection{Interfering Transmissions Under Different SNRs} In this analysis, we evaluate the ability of our model to detect and classify frames under the effect of cross-technology interference.
We consider two signals with the same bandwidth: the desired signal is an LTE transmission, and the interfering signal is a WiFi transmission. The desired signal is transmitted with an SNR of 29 dB, while the \ac{SNR} of the interfering signal varies from 3 dB to 35 dB; both use the same centre frequency and 20 MHz of bandwidth. The spectrograms have the same characteristics mentioned in the previous section. Figure~\ref{fig:over} shows the results of our experiment. The model detected the \ac{LTE} frames 97\% of the time, with this accuracy declining slightly as the \ac{SNR} of the interfering WiFi transmission increases. The curve representing the precision of the model shows that classification improves as the SNR of the WiFi signal increases. This happens because, when the interfering WiFi frames had lower \ac{SNR}, the model had difficulty clearly classifying the transmissions as either \ac{LTE} or WiFi. However, once the SNR of the WiFi signal is higher than the \ac{SNR} of the LTE transmissions, the model is more successful at classifying them, achieving 86\% accuracy. \begin{figure} \includegraphics[width=\linewidth]{images/overlapping-75.png} \caption{Correct object detection and precision per SNR of the interference signal.} \label{fig:over} \end{figure} These results show that even in a scenario of strong cross-technology interference, our model is capable of detecting the frames and classifying different RATs, providing a reasonable characterisation of the environment. To the best of our knowledge, this is the first work to assess the performance of an ML model for RAT classification under the effect of interference with overlapping transmissions.
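The two curves reported in Figures~\ref{fig:snr} and~\ref{fig:over} can be read as the following sketch, which computes the detection rate and precision from hypothetical detections (the IoU threshold, box format, and example boxes are illustrative assumptions, not the paper's evaluation code):

```python
def iou(a, b):
    # boxes in spectrogram pixels: (x1, y1, x2, y2)
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def evaluate(detections, ground_truth, iou_thr=0.5):
    # detections / ground_truth: lists of (box, label) pairs
    detected = correct = 0
    for gt_box, gt_label in ground_truth:
        matches = [(box, lab) for box, lab in detections
                   if iou(box, gt_box) >= iou_thr]
        if matches:
            detected += 1                      # frame found at all
            if any(lab == gt_label for _, lab in matches):
                correct += 1                   # found AND labelled correctly
    detection_rate = detected / len(ground_truth)
    precision = correct / detected if detected else 0.0
    return detection_rate, precision

gt  = [((0, 0, 10, 10), "LTE"), ((20, 0, 30, 10), "WiFi")]
det = [((1, 0, 10, 10), "LTE"), ((50, 0, 60, 10), "WiFi")]
print(evaluate(det, gt))  # (0.5, 1.0)
```

The distinction matters for the discussion above: detection rate is computed over all transmitted frames, while precision is computed only over the frames that were detected, so strong interference can lower one without lowering the other.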
\subsection{Feature Extraction}\label{feature} \begin{figure*} \begin{subfigure}[h]{0.5\linewidth} \includegraphics[width=\columnwidth]{images/feature-band.png} \caption{Bandwidth deviation.} \label{fig:acband} \end{subfigure} \hfill \begin{subfigure}[h]{0.5\linewidth} \includegraphics[width=\columnwidth]{images/feature-freq.png} \caption{Centre frequency deviation.} \label{fig:acfreq} \end{subfigure}% \hfill \begin{subfigure}[h]{0.5\linewidth} \includegraphics[width=\columnwidth]{images/feature-frame.png} \caption{Frame duration deviation.} \label{fig:acframe} \end{subfigure}% \hfill \begin{subfigure}[h]{0.5\linewidth} \includegraphics[width=\columnwidth]{images/feature-inter.png} \caption{Inter-frame duration deviation.} \label{fig:acinter} \end{subfigure}% \hfill \caption{Feature extraction deviation evaluation in time and frequency domain.} \label{fig:feature} \end{figure*} To evaluate the capabilities of our feature extraction component, we generated several datasets using different combinations of transmission bandwidth, frame duration, inter-frame duration, and centre frequency. The average SNR of the transmissions in this evaluation is 29 dB. Figure~\ref{fig:feature} illustrates the feature extraction accuracy for different transmission characteristics. In our experiments, the value of $I_f$ is 192.307 kHz, which means that each pixel in the spectrograms accounts for a variation of 192.307 kHz in the frequency domain. For example, if the calculated centre frequency is off by a single pixel, the computed value will deviate by 192.307 kHz from the correct centre frequency. The same applies in the time domain, where each pixel accounts for a variation of $I_t=519\mu$s. Figures~\ref{fig:acband} and~\ref{fig:acfreq} illustrate the accuracy in the extraction of frequency-domain features. For all cases tested, the median deviation from the ground truth is at most 2\%.
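The pixel-to-feature conversion implied by $I_f$ and $I_t$ can be sketched as follows (the bounding-box coordinates and the absolute frequency origin are illustrative assumptions, not the actual extraction code):

```python
I_F_HZ = 192_307   # frequency resolution per spectrogram pixel (I_f)
I_T_US = 519       # time resolution per spectrogram pixel (I_t)

def box_to_features(f1, t1, f2, t2, f0_hz):
    # bounding box in pixels: f = frequency axis, t = time axis;
    # f0_hz is the absolute frequency of pixel 0 (illustrative)
    bandwidth_hz   = (f2 - f1) * I_F_HZ
    centre_freq_hz = f0_hz + (f1 + f2) / 2 * I_F_HZ
    frame_dur_us   = (t2 - t1) * I_T_US
    return bandwidth_hz, centre_freq_hz, frame_dur_us

# a box 104 pixels wide and 96 pixels tall corresponds to a ~20 MHz
# transmission lasting ~50 ms; a one-pixel error in any coordinate
# shifts the estimate by exactly I_f (or I_t)
bw, fc, dur = box_to_features(10, 0, 114, 96, f0_hz=2.402e9)
print(bw, dur)  # 19999928 49824
```

This makes explicit why higher-resolution spectrograms (smaller $I_f$ and $I_t$) directly translate into higher feature-extraction precision.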
The results of the extraction of time-domain features are shown in Figures~\ref{fig:acframe} and~\ref{fig:acinter}. For these, the median deviation from the ground truth is at most 4\%. Figure~\ref{fig:acframe} illustrates that when the frame duration to be detected is smaller, the solution tends to have a higher average error than when the frame has a longer duration. This happens because it is harder to identify the precise size of smaller objects. As depicted in Figure~\ref{fig:acinter}, the extraction of inter-frame duration shows similar accuracy. Our model detects a high percentage of the transmitted frames. Whenever the model fails to detect a frame, it assumes that the spectrum is empty for that period, increasing the extracted inter-frame duration. Even in those cases, however, our model achieves a median deviation of less than 2\% in all cases. Considering the results discussed in this subsection, we can conclude that our model is capable of extracting the signal features with high precision. Moreover, if necessary for specific applications, higher precision can be achieved by using higher-resolution spectrograms, i.e., smaller $I_f$ and $I_t$ values. \subsection{Performance Comparison Using Public Datasets} In this section, we evaluate our model using a publicly available dataset of commercial LTE and WiFi transmissions collected in Belgium. This evaluation is crucial because it shows that our model can work in real-world scenarios. First, we investigate the accuracy of our model as a function of the number of spectrograms in the training dataset. Then, to demonstrate the ability of our object detection model to classify commercial transmissions accurately, we compare our solution to the ones proposed in \cite{imecpaper}, which used the same publicly available dataset.
\begin{figure} \includegraphics[width=\columnwidth]{images/n-spec.png} \caption{Number of spectrograms used in the training phase versus model accuracy.} \label{fig:specresult} \end{figure} We start by analysing how the number of samples (spectrogram images) affects the performance of the proposed model. The volume of training data can limit the application of \ac{ML}, because \ac{ML} techniques usually require a considerable amount of data to learn. For example, the work of \cite{imecpaper} used more than 12 thousand images to train its spectrogram-based CNN solution. In this section, we assess the performance of our model as a function of the volume of training data. We repeated the training in an identical setup while only adjusting the number of spectrograms used: 2, 10, 20, 30, 40, 50, 100, 200, and 400. The training samples equally represent the LTE and WiFi classes. Figure~\ref{fig:specresult} illustrates how accuracy depends on the number of spectrograms used in training the model. The best accuracy achieved was 96\% with 400 spectrograms. Hence, we limited the size of our training dataset to 400 images, as this volume of training data is sufficient for our model to achieve accuracy comparable to the CNN image-based solution presented in \cite{imecpaper}, while using a considerably lower number of training images (only 3.23\% of the dataset size used in \cite{imecpaper}). \begin{figure} \includegraphics[width=\columnwidth]{images/comp_real.png} \caption{Classification accuracy of different ML solutions.} \label{fig:comparison} \end{figure} We then compared the object detection-based classification solution presented in this paper against other \ac{RAT} classification solutions in \cite{imecpaper}. These solutions include a \ac{FNN}, a \ac{RForest} \cite{random}, a \ac{CNN} solution based on \ac{RSSI}, a \ac{CNN} solution based on IQ samples, and a CNN solution based on spectrograms.
The results of this comparison are shown in Figure \ref{fig:comparison}. The CNN-based solutions, including the solution presented in this paper, correctly identify the RAT with accuracy above 95\%. The CNN solutions based on IQ samples and on images achieve marginally better accuracy than our proposed solution. However, our solution provides additional information regarding spectrum usage that can enhance the efficient use of the spectrum. \iffalse \subsection{Evaluation of the bounding boxes: publicly available and generated datasets} In this section, we evaluate how precisely our bounding boxes are generated. In the field of object detection, the evaluation of bounding box accuracy relies on a metric named \ac{mAP}. This metric was introduced in the PASCAL VOC 2012 \cite{pascal} competition and has since been used to calculate the precision of object detection models. The first step in the calculation of the \ac{mAP} is the calculation of the AP for every class of each model. The ground truth of the object position is necessary to perform this evaluation. To calculate the AP, it is necessary to plot the precision-recall curve, which reports the precision and the recall of our model. The recall is the ratio of the number of frames that are correctly classified to the number of transmitted frames. The area under the curve is then used to calculate the AP value. The \ac{mAP} value is the mean of all APs: in our case we have 2 classes, so the \ac{mAP} is the mean of 2 APs. Using the \ac{mAP} metric, we compared the performance of our model trained with the publicly available dataset to that of our model trained with the generated dataset. Figure~\ref{fig:ap-real} illustrates the AP values obtained by the model trained with the dataset collected in Belgium. Figure~\ref{fig:ap-lab} shows the AP values for both classes obtained by the model trained with the generated dataset.
The model trained with the data collected in Belgium shows inferior performance compared to the one trained with the dataset generated in the laboratory. The model used with commercial data cannot find more than 75\% of all the transmitted LTE frames and no more than 92\% of the WiFi frames. However, it maintains the precision of the detected objects above 90\% for LTE and close to 99\% for WiFi. The model performs better for WiFi transmissions because the distance between the LTE BSs can vary by a few kilometres, which influences the SNR and the impairments of the collected data. The WiFi data, in contrast, was collected in a more stable scenario, where the distance does not vary as much. In the AP graph of the model trained with the dataset generated in the laboratory, Figure~\ref{fig:ap-lab}, our model detects more than 97\% of the LTE frames and 94\% of the WiFi frames with high precision, achieving approximately 99\% detection of the objects. The slight difference between the RATs exists because the WiFi transmissions have a shorter frame duration than the LTE transmissions, which makes it slightly harder to find all the frames. We believe that the model trained with the data generated by us performs better due to the automatic labelling, which is more precise than manual labelling approaches. There is also the fact that the spectrograms in the public dataset were collected by different \acp{BS} and under different circumstances, which may have influenced the results, whereas the generated data was collected in a controlled environment. The \ac{mAP} of our model trained with the public dataset is 83.04\%, and the \ac{mAP} of the same model trained with the generated data is 96.17\%. To the best of our knowledge, this is the first work that evaluates the mAP of an object detection model for \ac{RAT} classification. It is worth mentioning that when YOLOv2 is used on the VOC 2007 dataset \cite{yolo2}, it achieves 78.6 mAP for images with a resolution of 544x544.
\begin{figure} \centering \begin{subfigure}[h]{0.9\linewidth} \includegraphics[width=\linewidth]{images/AP_lte-real.png} \caption{\ac{LTE} AP from our model trained with the dataset collected in Belgium.} \label{fig:aplte-real} \end{subfigure} \hfill \begin{subfigure}[h]{0.9\linewidth} \includegraphics[width=\linewidth]{images/AP_wifi-real.png} \caption{WiFi AP from our model trained with the dataset collected in Belgium.} \label{fig:apwifi-real} \end{subfigure}% \caption{AP of the \ac{ML}-based signal classifier trained with the dataset collected in Belgium.} \hfill \label{fig:ap-real} \end{figure} \begin{figure} \centering \begin{subfigure}[h]{0.9\linewidth} \includegraphics[width=\linewidth]{images/AP_lte-lab.png} \caption{\ac{LTE} AP from our model trained with the generated dataset.} \label{fig:aplte-lab} \end{subfigure} \hfill \begin{subfigure}[h]{0.9\linewidth} \includegraphics[width=\linewidth]{images/AP_wifi-lab.png} \caption{WiFi AP from our model trained with the generated data.} \label{fig:apwifi-lab} \end{subfigure}% \caption{AP of the \ac{ML}-based signal classifier trained with the generated dataset.}\label{fig:ap-lab} \hfill \end{figure} \begin{comment} However, in object detection, a model can detect any number of false objects in an image, which means there is an infinite number of possible incorrect detections. To tackle this issue, in object detection, the metric accuracy is used to express how reliable the predictions from a model are, i.e., it illustrates the percentage of the predictions' correctness, as shown in Equation~\ref{eq:precision}. Furthermore, for estimating the number of misclassifications, we can simply calculate $1 - Accuracy$. \end{comment} \fi
\section{Introduction} Data compression, and the associated techniques coming from Information Theory, has a long and very influential history in the storage and mining of biological data \cite{giancarloCompression09}. In recent years, it has received increasing attention via the proposal of novel specialized compressors, due to the facts that (a) storage costs have become quite significant, given the massive amounts of data produced by HTS technologies (see \cite{Fritz11} for an enlightening analysis, which is still valid \cite{pavlichin2018}); and (b) generic compressors, even of the latest generation, e.g., LZ4 \cite{Lz4} and BZIP2 \cite{Bzip2}, are inadequate for the task of biological data compression. Good analytic reviews of the state of the art are provided in \cite{giancarlo2014compressive,Numanagic2016}, although no clear winning compressor has emerged. It is worth mentioning that data compression is now also regarded as a convenient means to speed up the processing of Bioinformatics pipelines. Indeed, following the idea of computing on compressed data, developed in Computer Science under the term Succinct Data Structures \cite{navarro2016}, the concept of Compressive Genomics has been proposed, with some highly specialized proofs of principle \cite{loh2012}. For the same reasons of massive data production, Big Data technologies for Genomics and the Life Sciences have been indicated as a direction to be actively pursued \cite{kahn2011future}, with MapReduce \cite{dean2008mapreduce}, Hadoop \cite{HadoopGuide} and Spark \cite{SparkGuide} being the preferred ones \cite{Cattaneo19}. This is not merely an instance of the \vir{Big Data trend} that has proved successful in other fields of Science, since Bioinformatics solutions based on those techniques can be more effective than classic HPC ones, thanks to their scalability with the available hardware \cite{KCH} and to their ease of use.
For later reference, it is worth pointing out that those technologies have \vir{compression capabilities} via built-in generic data compressors, e.g., BZIP2 \cite{Bzip2}. The corresponding software is referred to by the technical term Codec, where compression is coding and decompression is decoding. Moreover, given sufficient knowledge of those technologies, it is possible to add other compressors to Hadoop, i.e., additional Codecs. It must be added that not all data compressors are amenable to a profitable incorporation, due to the requirement of {\em splittable} compression: a file is divided into (un)compressed data blocks that can be compressed and decompressed separately, while in any case preserving the integrity of the entire file. Indeed, processing files compressed using a non-splittable format is still possible under Hadoop, but at the cost of very long decompression times (data not shown but available upon request). Further discussion of those topics is in Section \ref{sec:split}. Making a compressor splittable, when its standard version is not, requires major code reorganization and rewriting. In what follows, the term {\em standard} denotes a compressor that executes on a sequential machine, i.e., a PC. Given the above discussion about data compression, it is rather surprising that the deployment of specialized compressors for biological data in Big Data technologies is episodic, in particular for the FASTA/Q file formats, e.g., \cite{shi2016}, which host a substantial part of genomic data. \subsection{Methodological Contributions} We provide two contributions for the deployment of standard specialised compressors for FASTA/Q files within MapReduce-Hadoop, together with the corresponding software. \begin{itemize} \item {\bf Splittable Compressor Meta-Codec.} When a standard compressor is splittable, we provide a method that facilitates its incorporation in Hadoop.
Use of the software library associated with the method offers substantial savings in programming time for a rather complicated task. Intuitively, the {\bf Splittable Compressor Meta-Codec} transforms a standard splittable compressor into a Hadoop splittable Codec for that compressor. \item {\bf Universal Compressor Meta-Codec.} Whether a compressor is splittable or not, as long as it satisfies some mild assumptions regarding input/output handling, we provide a method to incorporate it in Hadoop, making it splittable. It is worth pointing out that the vast majority of standard specialized FASTA/Q compressors are not splittable. Again, intuitively, the {\bf Universal Compressor Meta-Codec} transforms a standard compressor into a Hadoop splittable Codec for that compressor. \end{itemize} A few comments are in order. The {\bf Splittable Compressor Meta-Codec} provides a template useful for accelerating and simplifying the development of specialized Hadoop Codecs. The {\bf Universal Compressor Meta-Codec} makes it possible to support in Hadoop any standard compressor with no programming at all, provided that it is usable as a command-line application. The first option is to be preferred when the best possible performance is sought, at the cost of analyzing the internal format employed by files processed with that compressor and writing the required integration code. The second option makes it possible to support almost instantaneously any command-line compressor, but at the cost of a possible performance penalty, which we have measured to be negligible with respect to the direct use of the {\bf Splittable Compressor Meta-Codec}. Both methods also work for Spark, when it uses the Hadoop File System. Finally, given the pace at which new standard specialized compressors are implemented, our methods can readily support the deployment of those future implementations in Hadoop.
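The core idea behind both Meta-Codecs, i.e., framing independently compressed data blocks so that each can be located and decompressed in isolation, can be sketched as follows (zlib stands in for an arbitrary standard compressor, which in the Universal Compressor Meta-Codec would instead be invoked as a command-line application on each block; the block size and header layout are illustrative, not the actual Codec format):

```python
import struct
import zlib

BLOCK = 64 * 1024  # illustrative uncompressed block size

def compress_splittable(data: bytes) -> bytes:
    # cut the input into fixed-size blocks, compress each independently,
    # and prefix every compressed block with its length so it can be
    # located without decompressing anything else
    out = bytearray()
    for i in range(0, len(data), BLOCK):
        cb = zlib.compress(data[i:i + BLOCK])
        out += struct.pack(">I", len(cb)) + cb
    return bytes(out)

def decompress_block(blob: bytes, index: int) -> bytes:
    # skip `index` framed blocks by reading only the length headers,
    # then decompress just the wanted block -- the splittability property
    pos = 0
    for _ in range(index):
        (n,) = struct.unpack_from(">I", blob, pos)
        pos += 4 + n
    (n,) = struct.unpack_from(">I", blob, pos)
    return zlib.decompress(blob[pos + 4: pos + 4 + n])

data = b"ACGT" * 100_000
blob = compress_splittable(data)
print(decompress_block(blob, 1) == data[BLOCK:2 * BLOCK])  # True
```

In the Hadoop setting, each map task would apply the equivalent of `decompress_block` only to the compressed data blocks falling inside its own HDFS data block, which is what makes the format usable without shipping the whole file to every node.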
For later use, we refer to the version of a standard compressor with the prefix HS when its incorporation in Hadoop has been made by using the {\bf Splittable Compressor Meta-Codec} or when a Hadoop splittable Codec is already available; e.g., LZ4 becomes HS\_LZ4. Analogously, we use the prefix HU when the {\bf Universal Compressor Meta-Codec} has been used. \subsection{Practical Contributions} We provide experimental evidence that our methods are a major advance in dealing with massive data production in genomics within one of the Big Data technologies of choice. Indeed, for the {\bf Universal Compressor Meta-Codec}, we show the following via an experimental comparative analysis involving a selection of specialized FASTA/Q compressors vs the generic compression Codecs already available in Hadoop. \begin{itemize} \item{\bf Disk space savings.} The size of the FASTA/Q files is significantly reduced with the use of specialized HU Codecs vs the generic HS Codecs available in Hadoop. Consequently, the cost of the hardware required to store them in the Hadoop File System is reduced. \item{\bf Reading time savings.} When using a specialized HU Codec, the additional time required to decompress a FASTA/Q file in memory is counterbalanced by the much smaller amount of time required to load that file from the Hadoop File System. This results in a significant reduction of the overall reading time. \item{\bf Network communication time overhead savings.} The number of concurrent tasks required to process, in a distributed way, a FASTA/Q file compressed via an HU Codec is greatly reduced, thus allowing for a significant reduction of the network communication time overhead required for the recombination of their outputs. \end{itemize} As for the {\bf Splittable Compressor Meta-Codec}, we reach the same conclusions as above, but the experimentation is somewhat limited: the only standard specialized compressor for FASTA/Q files featuring a splittable format is DSRC \cite{roguski2014dsrc}.
Finally, disk space and reading time savings also apply to the Apache Spark framework, when used to process FASTA/Q files stored on the Hadoop File System. \section{Methodologies} This section is organized as follows. Section \ref{sec:split} introduces some basic notions about Hadoop, useful for the presentation of our methods. Section \ref{subsec:guidelines} outlines some technical problems regarding the design of a splittable Codec for Hadoop, proposing our solutions. The last two sections are dedicated to the description of our two Meta-Codecs. \subsection{Preliminaries}\label{sec:split} MapReduce is a programming paradigm for the development of algorithms able to process Big Data on a distributed system in an efficient and scalable way. It is based on the definition of a sequence of {\em map} and {\em reduce} functions that are executed, as {\em tasks}, on the nodes of a distributed system. Data communication between consecutive tasks is automatically handled by the underlying distributed computing framework, including the {\em shuffle} operation, required to move data from one node to another of the distributed system. In Section 1 of the Supplementary Material\xspace we provide more information about this topic, including Hadoop, one of the most popular MapReduce implementations. Here we limit ourselves to describing how files are stored in the Hadoop File System, i.e., HDFS. When uploading a large file to HDFS (by default, larger than $128$MB), it is automatically partitioned into several parts of equal size, where each part is called an {\em HDFS data block} and is physically assigned to a Datanode, i.e., one of the nodes of the distributed system that execute map and reduce tasks. For fault-tolerance reasons, HDFS data blocks can be replicated on several Datanodes according to a user-defined {\em replication factor}. This makes it possible to process an HDFS data block even if the Datanode originally containing it becomes unavailable.
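To make the partitioning concrete, the following minimal Python sketch (illustrative only; HDFS itself is implemented in Java, and the function name is ours) computes the HDFS data blocks of a file of a given size:

```python
# Sketch (not the HDFS implementation): partitioning a file of a given
# size into fixed-size data blocks, as HDFS does on upload.
# The 128 MB default matches the value mentioned in the text.

BLOCK_SIZE = 128 * 1024 * 1024  # default HDFS data block size (128 MB)

def hdfs_blocks(file_size, block_size=BLOCK_SIZE):
    """Return (offset, length) pairs for the HDFS data blocks of a file."""
    blocks = []
    offset = 0
    while offset < file_size:
        length = min(block_size, file_size - offset)
        blocks.append((offset, length))
        offset += length
    return blocks
```

For instance, a 16 GB file is partitioned into 128 HDFS data blocks, which is consistent with the block counts for the uncompressed 16G dataset reported in the experimental section.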
By default, Hadoop assumes that each map task processes only the content of one particular HDFS data block. However, it may happen that, because of the aforementioned partitioning, a record to be analyzed by one map task is cut into two parts located in two different HDFS data blocks. We refer to these cases as {\em disalignments}. This circumstance is managed by HDFS through the introduction of the {\em input split} concept, or {\em split} for short. It can be used, at the application level, to logically redefine the range of data to be processed by each map task, thus allowing a map task to process data found on HDFS data blocks different from the one it has been assigned. \subsubsection{Hadoop Support for the Input of Compressed Files} Currently, Hadoop supports two types of Codecs: \begin{itemize} \item \emph{Stream-oriented.} Codecs in this class require that the whole file be available to each map task prior to decompressing it. For this reason, when a map task starts its execution, a request is issued to the other nodes of the cluster. As a result, all the parts of the file to be processed are collected from these nodes and merged into a single local file. This type of Codec can be developed by creating a new Java class implementing the standard Hadoop \texttt{CompressionCodec} interface. \item \emph{Block-oriented.} Codecs in this class allow each map task to decompress only a portion of the input file, without requiring the remaining parts of it. They assume the compressed file to be logically split into data blocks, here referred to as {\em compressed data blocks}, each of which can be decompressed independently of the others. Assuming it is possible to know the boundaries of each compressed data block, a map task can autonomously extract and decompress all the compressed data blocks existing in its HDFS data blocks.
This type of Codec can be developed by creating a new Java class implementing the standard Hadoop \texttt{SplittableCompressionCodec} interface. It is worth noting that the stream-oriented approach implies a significant computational overhead, as the same file is decompressed as many times as the number of map tasks processing it. It also implies a significant communication overhead, because the same file has to be replicated on each computational node running at least one map task. Finally, it may prevent a job from running at all, because map tasks may not have enough memory to handle the decompression of the input file (e.g., when handling large files). For these reasons, in this research, we focus on block-oriented Codecs, i.e., \emph{splittable} Codecs. \end{itemize} \label{sec:Methods} \subsection{General Guidelines for the Design of a Hadoop Splittable Codec} \label{subsec:guidelines} Here we consider some problems that a programmer must face in order to obtain a Hadoop splittable Codec, offering solutions. We concentrate on genomic files, although the guidelines apply to any lossless textual compressor. There are two problems to face when extracting genomic sequences from a splittable compressed file. The first concerns inferring the logical internal organization of the compressed file, so as to determine the relative position of the compressed data blocks. The second concerns the management of the possible disalignments existing between the physical partitioning of the file, as determined by HDFS, and the internal logical organization of the compressed file into compressed data blocks. In Section \ref{subsec:inferring} and in Section \ref{subsec:disalignments}, respectively, these problems are described in detail and the solutions we propose are presented. \begin{figure}[ht] \centering \includegraphics[scale=.25]{img/layout2.png} \caption{The layout of a block-oriented compressed data file when uploaded to HDFS.
In the figure, (a) the original file includes a header, a footer and $8$ compressed data blocks. (b) When uploaded to HDFS, it is partitioned into $4$ HDFS data blocks. (c) As a result of the partitioning, the compressed data block labeled $CB_{5}$ is divided into two parts assigned to two different HDFS data blocks. Using the {\em Compressed Block Split} strategy, each compressed data block is modeled as a distinct split. (d) Using the {\em Enhanced Split} strategy, several compressed data blocks are grouped into fewer input splits.} \label{fig:layout2} \end{figure} \subsubsection{Determining the Internal Structure of a Compressed File} \label{subsec:inferring} A map task can extract and decompress the compressed data blocks existing in the HDFS data block it is analyzing only if it knows their sizes and relative positions. However, this information could be stored elsewhere (e.g., in the footer of the compressed file) or it could be encoded implicitly. In the following, we provide a solution for efficiently dealing with the most frequent scenario, i.e., the one where the list of compressed data blocks is made explicitly available. We refer the interested reader to \cite{Bzip2} for an example of a solution encoding this list implicitly. \paragraph{\em Explicit Representation.} An explicit list of all the compressed data blocks existing in a compressed file is maintained in an auxiliary {\em index} data structure. The latter may be located either at the beginning or at the end of the file (e.g., DSRC \cite{roguski2014dsrc}), or it can be saved in multiple copies along the file. In some other cases, this data structure is saved in an external file complementing the compressed file. In all cases, the solution we propose is to have one process retrieve the index before processing the compressed file and send a copy of it to all nodes of the distributed system using the standard Hadoop {\tt Configuration} class.
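As an illustration, the following Python sketch (hypothetical helper names, not part of our library or of the Hadoop API) shows how an explicit index of compressed data block sizes translates into absolute positions, and how a map task could select the blocks starting inside its HDFS data block:

```python
# Sketch: turning an explicit index (the list of compressed data block
# sizes, e.g. read from the footer of a DSRC-like file) into absolute
# offsets, so a map task can locate the blocks falling inside the byte
# range of its HDFS data block. Illustrative code only.

def block_offsets(header_size, block_sizes):
    """Return the absolute (offset, size) of each compressed data block."""
    offsets = []
    position = header_size  # compressed data start after the file header
    for size in block_sizes:
        offsets.append((position, size))
        position += size
    return offsets

def blocks_in_range(offsets, start, end):
    """Compressed data blocks whose first byte lies in [start, end)."""
    return [(o, s) for (o, s) in offsets if start <= o < end]
```

A block whose first byte lies in the range but whose last byte does not is exactly the disalignment case handled in the next subsection.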
Then, each computing node makes this information available to the map tasks that it runs, thus allowing them to determine the list and the relative positions of the compressed data blocks in their HDFS data blocks. \subsubsection{Managing Disalignments between Compressed Data Blocks and HDFS Data Blocks} \label{subsec:disalignments} When uploading a large compressed splittable file to HDFS, it is likely that several of its compressed data blocks will be broken into parts located on different HDFS data blocks, because of the partitioning strategy used by the distributed file system. An example of such a case is discussed in Figure \ref{fig:layout2}. The file is initially stored as a whole on a local file system (Figure \ref{fig:layout2}(a)). If uploaded without specifying any splitting strategy, it would be partitioned into separate parts independently of the compressed data blocks, as pictured in Figure \ref{fig:layout2}(b). This would imply a severe performance overhead when reading the content of compressed data blocks spanning different parts. Here, a first possible solution, denoted as the {\em Compressed Block Split} strategy, would be to model as input splits all the compressed data blocks existing in a compressed file (see Figure \ref{fig:layout2}(c)). However, this strategy may imply a performance overhead, because the typical size of a compressed data block is usually orders of magnitude smaller than that of an HDFS data block. Thus, the number of input splits would be much larger than the number of HDFS data blocks. A more efficient solution, here denoted as the {\em Enhanced Split} strategy, is to fit several compressed data blocks into the same Hadoop input split and, then, have each map task query a local index listing the offsets of all the compressed data blocks existing in a split (see Figure \ref{fig:layout2}(d)).
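A minimal sketch of the Enhanced Split strategy, assuming the index of compressed data block sizes is available (illustrative Python, not the actual Codec code):

```python
# Sketch of the Enhanced Split strategy: greedily packing consecutive
# compressed data blocks into input splits whose size approaches a
# target (e.g. the HDFS data block size), so that the number of splits
# stays close to the number of HDFS data blocks rather than to the much
# larger number of compressed data blocks.

def enhanced_splits(block_sizes, target_split_size):
    """Group consecutive compressed data blocks (by index) into input splits."""
    splits, current, current_size = [], [], 0
    for i, size in enumerate(block_sizes):
        current.append(i)
        current_size += size
        if current_size >= target_split_size:  # split is full: close it
            splits.append(current)
            current, current_size = [], 0
    if current:                                # last, possibly partial, split
        splits.append(current)
    return splits
```

Under the Compressed Block Split strategy each block would instead be its own split, i.e., as many splits as blocks.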
At this point, when processing compressed data blocks in a split, two cases may occur: \begin{itemize} \item{\bf standard case:} the compressed data block is entirely contained in a single HDFS data block. In such a circumstance, it is retrieved using the information contained in the index and, then, decompressed using the considered Codec. \item{\bf exceptional case:} the compressed data block is physically divided by HDFS into two parts, $p_{1}$ and $p_{2}$. These parts are located on two HDFS data blocks but are assigned to the same input split. In such a case, a copy of $p_{2}$ is automatically pulled from the Datanode holding it. Then, $p_{1}$ and $p_{2}$ are properly concatenated to obtain the original compressed data block $p$, which is decompressed using the Codec decompression function. \end{itemize} \subsection{The architecture of the Splittable Compressor Meta-Codec} \label{subsec:codec} This Meta-Codec consists of a library of abstract Java classes and interfaces implementing a standard Hadoop splittable Codec for the compression of FASTA/Q files, but without any compression/decompression routine. \label{subsubsec:customcodec} Its architecture is based on a specialization of the generic compressor and decompressor interfaces coming with Hadoop and targeting block-based Codecs. It offers the possibility of automatically assembling a compressed file as a set of compressed data blocks while maintaining their index using an explicit representation, as described in Section \ref{subsec:inferring}. In addition, the compressed data blocks are organized according to the Enhanced Split strategy (see Section \ref{subsec:disalignments}). The creation of the compressed data block index is likewise managed automatically by our Meta-Codec, which also provides the ability to share the content of the index with all nodes of a Hadoop distributed system, so that each node knows the exact boundaries of the compressed data blocks it has to process.
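The exceptional case above, i.e., reassembling a compressed data block divided into $p_{1}$ and $p_{2}$, can be sketched as follows (illustrative Python; \texttt{read\_remote} stands in for pulling bytes from the remote Datanode and is purely hypothetical):

```python
# Sketch of the exceptional case: a compressed data block spills past
# the end of the local HDFS data block. The part available locally (p1)
# is concatenated with the part pulled from the other Datanode (p2)
# before handing the block to the decompression routine.

def reassemble_block(local_bytes, block_offset, block_size, local_end, read_remote):
    """Rebuild a compressed data block that may spill past the local HDFS block."""
    end = min(block_offset + block_size, local_end)
    local_part = local_bytes[block_offset:end]     # p1: bytes available locally
    missing = block_size - len(local_part)
    if missing == 0:                               # standard case: fully local
        return local_part
    remote_part = read_remote(local_end, missing)  # p2: pulled from the other Datanode
    return local_part + remote_part                # p = p1 + p2
```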
Additional details regarding the architecture of this Meta-Codec are given in Figure 1 of the Supplementary Material\xspace. Here we limit ourselves to mentioning that it includes the following Java classes. \begin{itemize} \item{\texttt{CodecInputFormat.}} It fetches the list of compressed data blocks existing in a compressed file and sends it to all the nodes of a Hadoop cluster, together with the instructions required for their decompression. Then, it defines the input splits as containers of compressed data blocks. These operations are compressor-dependent and require the implementation of several abstract methods, like \texttt{extractMetadata}, to extract the metadata from the input file, and \texttt{getDataPosition}, to point to the starting address of the first compressed data block. \item{\texttt{NativeSplittableCodec.}} Assuming the compression/decompression routines for a particular Codec are available as a standard library installed on the underlying operating system, it simplifies their integration in the Codec under development. \item{\texttt{CodecInputStream.}} It reads the compressed data blocks existing in an HDFS data block, according to the input split strategy defined by the \texttt{CodecInputFormat}. The compressed data blocks are decompressed on-the-fly by invoking the decompression function of the considered compressor and returned to the main application. Some of these operations are compressor-dependent and require the implementation of the \texttt{setParameters} abstract method. This method is used to pass to the Codec the command-line parameters required by the compressor, e.g., execution flags, in order to correctly decompress the compressed data blocks. \item{\texttt{CodecDecompressor.}} It decompresses the compressed data blocks given by the \texttt{CodecInputStream}. It requires the implementation of the \texttt{decompress} abstract method.
\item{\texttt{NativeCodecDecompressor.}} It decompresses the compressed data blocks given by the \texttt{CodecInputStream}. It requires the implementation of the \texttt{decompress} method through the native interface. \end{itemize} \subsection{The architecture of the Universal Compressor Meta-Codec} \label{subsec:UC} This Meta-Codec is a software component able to automatically expose as an HU splittable Codec the compression/decompression routines offered by a given standard compressor. As opposed to the {\bf Splittable Compressor Meta-Codec}, which requires some programming, it works as a ready-to-use black box, since the only information it needs is the set of command lines to be used for compressing and decompressing an input file by means of a standard compressor. Given an input file to compress in a splittable way, this method works by splitting the file into uncompressed data blocks and, then, compressing each uncompressed data block using an external compression application, according to the command line given at configuration time. As for the {\bf Splittable Compressor Meta-Codec}, compressed data blocks are organized following the Enhanced Split strategy (see Section \ref{subsec:disalignments}). The resulting file uses an index for the explicit representation of the compressed data blocks existing therein (see Section \ref{subsec:inferring}), based on the following format. \begin{itemize} \item {\bf compression\_format}: a unique id number identifying the Codec format used for this file. \item {\bf compressed\_data\_blocks\_number}: the number of compressed data blocks existing in the file. \item {\bf blocks\_sizes\_list}: the list of the sizes of all the compressed data blocks included in the file. \item {\bf uncompressed\_block\_size}: the size of the data structure used for decompressing the compressed data blocks. \end{itemize} The decompression is achieved by exploiting the information contained in the aforementioned index.
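For concreteness, a possible serialization of the four index fields above can be sketched as follows (the fixed-width little-endian layout is our assumption; only the field names come from the format description):

```python
# Sketch of one plausible on-disk encoding of the index used by the
# Universal Compressor Meta-Codec: three 32-bit header fields followed
# by one 32-bit size per compressed data block. The actual encoding in
# our library may differ; this is an illustration of the round-trip.
import struct

def pack_index(fmt_id, block_sizes, uncompressed_block_size):
    """Serialize (compression_format, blocks number, block sizes, unc. size)."""
    header = struct.pack("<III", fmt_id, len(block_sizes), uncompressed_block_size)
    sizes = struct.pack("<%dI" % len(block_sizes), *block_sizes)
    return header + sizes

def unpack_index(raw):
    """Inverse of pack_index: recover the three fields from raw bytes."""
    fmt_id, n, uncompressed = struct.unpack_from("<III", raw, 0)
    sizes = list(struct.unpack_from("<%dI" % n, raw, 12))
    return fmt_id, sizes, uncompressed
```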
The usage of this Meta-Codec assumes that the content of the (un)compressed data blocks to be processed can be temporarily stored as files on a local device. For efficiency reasons, these are saved on the local RAM disk, a virtual device usable as a disk but with performance comparable to that of main memory. The Java classes for this Meta-Codec, shown in Figure 2 of the Supplementary Material\xspace, are the following. \begin{itemize} \item \texttt{Algo}. Contains the command-line instructions of a particular compressor, defined through the configuration file. \item \texttt{UniversalCodec}. Contains fields and methods for managing data compression and decompression. \item \texttt{UniversalInputFormat}. Extends the \texttt{CodecInputFormat} class, implementing the methods according to the compressed file structure. \item \texttt{UniversalDecompressor}. Extends the \texttt{CodecDecompressor} class, implementing the method \texttt{decompress} according to the command-line commands of the \texttt{Algo} object. \end{itemize} \section{Results and Discussion} \label{sec:experiments} In order to quantify the advantages of deploying FASTA/Q Codecs in Hadoop via our methods, we perform the following experiments. \begin{itemize} \item{\bf Experiment 1: An assessment of disk space savings}. The aim here is to determine the possible disk space savings achievable thanks to the adoption of a specialized HU or HS Codec when storing FASTA/Q files on the Hadoop HDFS distributed file system, with respect to the usage of the general-purpose HS Codecs available in Hadoop. \item {\bf Experiment 2: An assessment of the possible performance loss due to the usage of an HU Codec against an HS Codec}. The aim here is to evaluate the potential performance loss experienced when processing a compressed file using a compressor obtained by means of our {\bf Universal Compressor Meta-Codec} rather than using a compressor obtained via the {\bf Splittable Compressor Meta-Codec}.
This experiment is implemented by comparing HU\_DSRC, obtained via the former Meta-Codec, vs HS\_DSRC, obtained via the latter. \item{\bf Experiment 3: An assessment of reading-time savings}. The aim here is to determine whether the trade-off between the cost to be paid for reading and unpacking FASTA/Q files compressed with an HU Codec and the time saved thanks to the smaller amount of data to read from HDFS is favorable. Following the methodology used in \cite{fastdoop}, this experiment is implemented by benchmarking a very simple Hadoop application. It runs only map tasks, whose goal is to count the number of occurrences of the letters $\{A,C,G,T,N\}$ in the input sequences, without producing any output. That is, the application spends most of its time reading data from HDFS. \item {\bf Experiment 4: An assessment of network communication time overhead savings}. The aim here is to establish whether the smaller amount of network traffic, due to the reduced number of map tasks needed to process a FASTA/Q file compressed with an HU Codec, has a beneficial effect on the overall shuffle time of an application, compared to the case where the input file is uncompressed. This experiment is implemented by benchmarking an application where each map task counts the number of occurrences of the letters $\{A,C,G,T,N\}$ in each of the sequences read from an input file. Once finished, the map task emits, as output, the overall count for each of the considered sequences. The reduce tasks gather and aggregate the output of all map tasks, and print the overall number of occurrences of each distinct letter. That is, the execution of this experiment requires a communication activity between map and reduce tasks that is proportional to the number of map tasks being used.
\end{itemize} \subsection{Experimental Setting} \subsubsection{Choice of Compression Codecs: Standard Specialized or Available in Hadoop} \label{subsec:compressors} For our experiments, all the standard splittable general-purpose compression Codecs available with Hadoop have been considered: BZIP2 \cite{Bzip2}, LZ4 \cite{Lz4} and ZSTD \cite{Zstd}. As for the specialized FASTA/Q file compressors, we have developed a set of compression Codecs based on SPRING \cite{spring}, DSRC \cite{roguski2014dsrc}, Fqzcomp \cite{bonfield2013compression} and MFCompress \cite{pinho2013mfcompress}. These have been chosen, with independent experiments, as they cover the range of possibilities in terms of the trade-off between compression ratio and time. A list of all these Codecs is reported in Table \ref{tab:encoders}, with the features relevant for this research. We recall from the Introduction, for the convenience of the reader, the terminology we use for denoting these compressors: we use the prefix HS when referring to compressors that are already present in Hadoop or that have been incorporated in it using our {\bf Splittable Compressor Meta-Codec}, and the prefix HU when the incorporation has been made with our {\bf Universal Compressor Meta-Codec}. It is to be remarked that, while the general-purpose compressors have been designed to compress well and be fast in compression/decompression, the specialized ones are not so uniform with respect to these design criteria. For instance, HU\_SPRING compresses very well but is very slow in compression/decompression, while HU\_DSRC offers a good balance of those aspects. To put all compressors on an equal footing, we use their default settings.
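The letter-counting task at the core of the benchmarks in Experiments 3 and 4 can be sketched, outside Hadoop, as a plain Python function (illustrative only; in the experiments it runs as a Hadoop map task over decompressed FASTA/Q records):

```python
# Sketch of the benchmarking map task: counting the occurrences of the
# letters {A, C, G, T, N} over a batch of input sequences. Any other
# character is ignored, and case is normalized.
from collections import Counter

ALPHABET = set("ACGTN")

def count_letters(sequences):
    """Count occurrences of A, C, G, T, N over an iterable of sequences."""
    counts = Counter()
    for seq in sequences:
        counts.update(ch for ch in seq.upper() if ch in ALPHABET)
    return dict(counts)
```

In Experiment 3 the per-task counts are simply discarded, so the run time is dominated by reading from HDFS; in Experiment 4 they are emitted to the reduce tasks, so shuffle traffic grows with the number of map tasks.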
\begin{table}[t] \centering \begin{tabular}{lcc} \textbf{Compressor} & \textbf{Input Format} & \textbf{Implementation} \\ \hline BZIP2 \cite{Bzip2} & Any file & HS\\ LZ4 \cite{Lz4} & Any file & HS\\ ZSTD \cite{Zstd} & Any file & HS\\ DSRC \cite{roguski2014dsrc} & FASTQ files & HS/HU \\ Fqzcomp \cite{bonfield2013compression} & FASTQ files & HU \\ MFCompress \cite{pinho2013mfcompress} & FASTA files & HU\\ SPRING \cite{spring} & FASTA/Q files & HU\\ \end{tabular} \caption{List of splittable Codecs considered in our experiments. For each splittable Codec it is reported: 1) the originating compressor; 2) the input format it supports; 3) whether it has been developed using our {\bf Splittable Compressor Meta-Codec} (HS), developed using our {\bf Universal Compressor Meta-Codec} (HU), or directly supported by Hadoop (HS). } \label{tab:encoders} \end{table} \subsubsection{Datasets} We have used for our experiments a collection of FASTQ and FASTA files of different sizes. The FASTQ files contain a set of reads extracted from a collection of genomic sequences coming from the Pinus Taeda genome \cite{PinusTaeda2013}, while the FASTA files contain a set of reads extracted from a collection of genomic sequences coming from the Human genome \cite{Human2008}. We have chosen these datasets because they are large enough to represent a relevant benchmark for the type of experiments we were interested in (see Section 4 of the Supplementary Material\xspace). Moreover, the choice of using collections of reads is meant to consider files that are the end product of HTS technologies.
\begin{comment} \begin{table}[t] \centering \begin{tabular}{r|r|r|r|r|r} \textbf{Dataset} & \textbf{HS\_BZIP2} & \textbf{HS\_LZ4} & \textbf{HS\_ZSTD} & \textbf{HU\_MFCompress} & \textbf{HU\_SPRING}\\ \hline 16GB & 3.01GB & 7.21GB & 3.85GB & 2.22GB & 2.01GB\\ 32GB & 6.02GB & 14.42GB & 7.69GB & 4.43GB & 3.82GB\\ 64GB & 11.98GB & 28.71GB & 15.32GB & 9.26GB & 6.68GB\\ 96GB & 18.06GB & 43.25GB & 23.08GB & ?GB & ?GB\\ \end{tabular} \caption{Size of the FASTA input datasets when compressed with HS\_BZIP2, HS\_LZ4, HS\_ZSTD, HU\_MFCompress and HU\_SPRING compressors.} \label{tab:FAdatasetsFast} \end{table} \begin{table}[t] \centering \begin{tabular}{r|r|r|r|r|r} \textbf{Dataset} & \textbf{BZ2} & \textbf{LZ4} & \textbf{ZSTD} & \textbf{MFCompress} & \textbf{SPRING}\\ \hline 16GB & 2.93GB & ?GB & ?GB & ?GB & 2.01GB\\ 32GB & 5.86GB & 9.18GB & 6.93GB & 4.66GB & 3.82GB\\ 64GB & 11.65GB & 18.27GB & 13.81GB & 9.26GB & 6.68GB\\ 96GB & 17.58GB & ?GB & ?GB & ?GB & ?GB\\ \end{tabular} \caption{Size of the FASTA input datasets when compressed with HS\_BZIP2, HS\_LZ4, HS\_ZSTD, HU\_MFCompress and HU\_SPRING compressors.} \label{tab:FAdatasetsSlow} \end{table} \begin{table}[t] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{r|r|r|r|r|r|r} \textbf{Dataset} & \textbf{BZ2} & \textbf{LZ4} & \textbf{ZSTD} & \textbf{DSRC} & \textbf{Fqzcomp} & \textbf{SPRING}\\ \hline 16GB & 3.11GB & 7.42GB & 4GB & 2.45GB & 2.29GB & 1.89GB\\ 32GB & 6.2GB & 14.42GB & 7.69GB & 4.9GB & 4.57GB & 3.69GB\\ 64GB & 12.6GB & 28.71GB & 15.32GB & 9.98GB & 9.31GB & ?GB\\ 96GB & 19.09GB & 45.41GB & 24.45GB & 15.16GB & 14.13GB & ?GB\\ \end{tabular} } \caption{Size of the FASTQ input datasets when compressed with BZ2, LZ4, ZSTD, DSRC, Fqzcomp and SPRING encoders using, when possible, compression speed set to maximum} \label{tab:FQdatasetsFast} \end{table} \begin{table}[t] \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{r|r|r|r|r|r|r} \textbf{Dataset} & \textbf{BZ2} & \textbf{LZ4} & \textbf{ZSTD} & 
\textbf{DSRC} & \textbf{Fqzcomp} & \textbf{SPRING}\\ \hline 16GB & 3.04GB & 4.67GB & 3.61GB & 2.26GB & 2.27GB & 1.89GB\\ 32GB & 6.06GB & 9.32GB & 7.2GB & 4.51GB & 4.53GB & 3.69GB\\ 64GB & 12.31GB & 18.9GB & 14.6GB & 9.19GB & 9.22GB & ?GB\\ 96GB & 18.65GB & 28.57GB & 22.03GB & 13.96GB & 14.00GB & ?GB\\ \end{tabular} } \caption{Size of the FASTQ input datasets when compressed with BZ2, LZ4, ZSTD, DSRC, Fqzcomp and SPRING encoders using, when possible, compression ratio set to maximum} \label{tab:FQdatasetsSlow} \end{table} \end{comment} \subsubsection{Hardware} The testing platform used for our experiments is a $9$-node Linux-based Hadoop cluster, with one node acting as \textit{resource manager} and the remaining nodes being used as workers. Each node of this cluster is equipped with two 8-core Intel Xeon E3-12@2.70 GHz processors and 32GB of RAM. Moreover, each node has a 200 GB virtual disk reserved for HDFS, for an overall capacity of about 1.6 TB. All the experiments have been performed using the Hadoop 3.1.1 software distribution. \subsection{Analysis of the experiments} \subsubsection{Experiment 1: Specialized compression yields significant disk space savings on Hadoop.} The results of this experiment, reported in Tables \ref{tab:bs_fasta}-\ref{tab:bs_fastq}, confirm the ability of the specialized HU and HS Codecs, i.e., the ones that have been imported in Hadoop using our methods, to reach a compression ratio much higher than that of the generic HS Codecs already available in Hadoop. This is witnessed by the much smaller number of HDFS data blocks needed to store a distributed compressed representation of each file, with respect to its uncompressed counterpart.
\begin{table} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c|c|c|c|c|c|c} Dataset & NoCompress & HS\_BZIP2 & HS\_LZ4 & HS\_ZSTD & HU\_SPRING & HU\_MFCompress \\ \hline 16G & 128 & 24 & 58 & 31 & 18 & 19 \\ 32G & 256 & 47 & 116 & 62 & 35 & 38 \\ 64G & 512 & 94 & 231 & 124 & 69 & 76 \\ 96G & 768 & 141 & 346 & 185 & 104 & 113 \\ \end{tabular} } \caption{Size of the FASTA input datasets, in terms of HDFS data blocks, when compressed with general-purpose and FASTA specialized compression Codecs. The size of each HDFS data block is 128 MB.} \label{tab:bs_fasta} \end{table} \begin{table} \centering \resizebox{\textwidth}{!}{ \begin{tabular}{c|c|c|c|c|c|c|c|c} Dataset & NoCompress & HU\_DSRC & HS\_DSRC & HU\_Fqzcomp & HS\_BZIP2 & HS\_LZ4 & HS\_ZSTD & HU\_SPRING \\ \hline 16G & 128 & 22 & 20 & 20 & 25 & 60 & 33 & 20 \\ 32G & 256 & 44 & 40 & 40 & 49 & 119 & 64 & 40 \\ 64G & 512 & 90 & 80 & 82 & 99 & 241 & 130 & 81 \\ 96G & 768 & 122 & 122 & 110 & 150 & 364 & 196 & 103 \\ \end{tabular} } \caption{Size of the FASTQ input datasets, in terms of HDFS data blocks, when compressed with general-purpose and FASTQ specialized compression Codecs. The size of each HDFS data block is 128 MB.} \label{tab:bs_fastq} \end{table} \subsubsection{Experiment 2: The performance overhead of our {\bf Universal Compressor Meta-Codec} with respect to our {\bf Splittable Compressor Meta-Codec} is negligible.} The decompression time performance guaranteed by our {\bf Universal Compressor Meta-Codec } when executing a particular compressor is very similar to that of a specialized implementation of the same compressor by means of our {\bf Splittable Compressor Meta-Codec}. This is clearly visible in Figures \ref{fig:task1FQGARR} and \ref{fig:task2FQGARR}, where we report the performance of HS\_DSRC and HU\_DSRC. 
Indeed, the two Codecs exhibit very similar performance, but the one based on our {\bf Universal Compressor Meta-Codec} took a few minutes to develop, while the specialized one required non-trivial programming skills as well as several days of work. \subsubsection{Experiment 3: a careful use of compression yields significant reading-time savings on Hadoop.} Space savings may turn into an I/O time slow-down when the decompression procedure is slow. Such a trade-off is well known for generic standard compressors. Here we study it with regard to HU specialized Codecs. Indeed, such a trade-off is clearly visible when comparing, e.g., the performance of HU\_DSRC with that of HU\_SPRING. As reported in Tables \ref{tab:bs_fasta} and \ref{tab:bs_fastq}, FASTQ files compressed with HU\_SPRING require a smaller number of HDFS data blocks to process than those compressed with HU\_DSRC. Despite this, the performance of HU\_DSRC when used in the first benchmarking task is much better than that of HU\_SPRING, because of its much faster decompression routines. In detail, the best performance is achieved by HS\_DSRC, HU\_DSRC and HS\_ZSTD, but for different reasons: the first two because of their more efficient compression algorithm, the third because of its faster decompression routines. We also observe that the speed-up achieved by HS\_DSRC increases with the input file size. To explain this, consider that when managing the 16G input file, HS\_DSRC returns a number of HDFS data blocks to process that is smaller than the number of available processing cores. So, not all the available processing capability of the cluster is exploited. When the size of the input increases to 32G, the number of HDFS data blocks gets larger and allows the use of all the available processing cores, thus resulting in an improved overall efficiency.
This speed-up keeps growing because, as the input size increases, the number of HDFS data blocks to process per core increases as well, giving Hadoop the opportunity to reschedule tasks over the cores having a smaller workload. At the other end, HS\_BZIP2 and HU\_SPRING are the ones exhibiting the worst performance, because of their very slow decompression routines. \subsubsection{Experiment 4: a careful use of compression may yield significant network-overhead savings on Hadoop.} The smaller number of HDFS data blocks required to store a compressed file yields a beneficial effect also on the network overhead required by Hadoop to recombine the output of the map tasks and, consequently, on the overall execution time. This is visible in Figures \ref{fig:task2FQGARR} and \ref{fig:task2FAGARR}, where we observe that the usage of compression allows for a significant speed-up, even when running more complex applications than the one considered in Experiment 3. Interestingly, here the benchmarking task run using HU\_DSRC and HS\_DSRC is faster than the one run using HS\_ZSTD (see Figure \ref{fig:task2FQGARR}). The reason is that the smaller number of compressed data blocks produced by the DSRC algorithm implies a smaller number of Hadoop map tasks to be concurrently run for analyzing the input dataset, thus reducing, in turn, the network overhead required for feeding the reduce tasks with the output of the map tasks. The smaller network overhead achievable by using either HS\_DSRC or HU\_DSRC rather than HS\_ZSTD is witnessed by the reduced shuffle time, as observable in Figure \ref{fig:shuffleFQGARR}.
\begin{figure} \centering \includegraphics[scale=.4]{img/garr_countMap_fastq.png} \caption{Execution time speedup measured while running the first benchmarking task when considering FASTQ compressed datasets of increasing size and different compressors, with respect to the execution on the equivalent uncompressed datasets.} \label{fig:task1FQGARR} \end{figure} \begin{figure} \centering \includegraphics[scale=.4]{img/garr_countMapReduce_fastq.png} \caption{Execution time speedup measured while running the second benchmarking task on FASTQ format data when considering compressed datasets of increasing size and different compressors, with respect to the execution on the equivalent uncompressed datasets.} \label{fig:task2FQGARR} \end{figure} \begin{figure} \centering \includegraphics[scale=.4]{img/garr_shuffle_fastq.png} \caption{Time spent running the shuffle phase during the second benchmarking task when considering compressed FASTQ-format datasets of
increasing size and different compressors, compared to the execution of the same task on uncompressed datasets.} \label{fig:shuffleFQGARR} \end{figure} \begin{figure} \centering \includegraphics[scale=.4]{img/garr_countMap_fasta.png} \caption{Execution time speedup measured while running the first benchmarking task when considering FASTA compressed datasets of increasing size and different compressors, with respect to the execution on the equivalent uncompressed datasets.} \label{fig:task1FAGARR} \end{figure} \begin{figure} \centering \includegraphics[scale=.4]{img/garr_countMapReduce_fasta.png} \caption{Execution time speedup measured while running the second benchmarking task on FASTA format data when considering compressed datasets of increasing size and different compressors, with respect to the execution on the equivalent uncompressed datasets.} \label{fig:task2FAGARR} \end{figure} \begin{figure} \centering \includegraphics[scale=.4]{img/garr_shuffle_fasta.png} \caption{Time spent running the shuffle phase during the second benchmarking task when considering compressed FASTA-format datasets of increasing size and different compressors, compared to the execution of the same task on uncompressed datasets.} \label{fig:shuffleFAGARR} \end{figure} \section{Conclusions} We have provided two general methods that can be used to transform standard FASTA/Q data compression programs into Hadoop splittable data compression Codecs. Since the methods are general, they can also be applied to specialized compression programs that will be developed in the future. Another main characteristic of our methods is that they require very little programming and knowledge of Hadoop, if any, to carry out a rather complex task. Our methods also apply to the Apache Spark framework, when used to process FASTA/Q files stored on the Hadoop File System.
We have also shown that the use of specialized FASTA/Q Hadoop Codecs, not available before this work, is advantageous in terms of space and time savings. That is, we provide effective and readily usable tools that have a non-negligible effect on saving costs in genomic data storage and processing within Big Data Technologies. \section*{Acknowledgements} All authors would like to thank GARR for making available the computing time on a cutting-edge OpenStack Virtual Datacenter used for this research. Discussions with Simona Ester Rombo in the early stages of this research have been helpful. \section*{Funding} G.C., R.G. and U.F.P. are partially supported by GNCS Project 2019 \vir{Innovative methods for the solution of medical and biological big data}. R.G. is additionally supported by MIUR-PRIN project \vir{Multicriteria Data Structures and Algorithms: from compressed to learned indexes, and beyond} grant n. 2017WR7SHH. U.F.P. and F.P. are partially supported by Universit\`{a} di Roma - La Sapienza Research Project 2018 \vir{Analisi, sviluppo e sperimentazione di algoritmi praticamente efficienti}. \bibliographystyle{abbrv} \section{The MapReduce\xspace Programming Paradigm and Hadoop} \label{sec:mr-hadoop} \subsection{The Paradigm} \label{sec:mr} MapReduce\xspace \cite{dean2008mapreduce} is a paradigm for the processing of large amounts of data on a distributed computing infrastructure. Assuming the input data is organized as a set of \KV{key}{value} pairs, it is based on the definition of two functions. The {\em map} function processes an input \KV{key}{value} pair and returns a (possibly empty) intermediate set of \KV{key}{value} pairs. The {\em reduce} function merges all the intermediate values sharing the same \emph{key} to form a (possibly smaller) set of values. These functions are run, as tasks, on the nodes of a distributed computing framework.
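The two functions just described can be illustrated with a minimal, in-memory Python sketch. Here the grouping dictionary stands in for the shuffle machinery that Hadoop provides transparently, and the toy nucleotide-count job is only an illustrative example:

```python
from collections import defaultdict

def run_mapreduce(records, map_fn, reduce_fn):
    # Map phase: each input (key, value) pair yields intermediate pairs.
    intermediate = defaultdict(list)
    for key, value in records:
        for k, v in map_fn(key, value):
            intermediate[k].append(v)
    # The grouping above plays the role of the shuffle;
    # reduce then merges all values sharing the same key.
    return {k: reduce_fn(k, vs) for k, vs in intermediate.items()}

# Toy job: count nucleotide occurrences across two short reads.
reads = [("r1", "ACGT"), ("r2", "AACG")]
counts = run_mapreduce(
    reads,
    map_fn=lambda _rid, seq: [(base, 1) for base in seq],
    reduce_fn=lambda _base, ones: sum(ones),
)
print(counts)  # {'A': 3, 'C': 2, 'G': 2, 'T': 1}
```

In Hadoop, the same two callables would run as distributed tasks, with the framework handling partitioning, grouping and fault tolerance.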
All the activities related to the management of the lifecycle of these tasks, as well as the collection of the map function results and their transmission to the reduce functions, are transparently handled by the underlying framework (\emph{implicit parallelism}), with no burden on the programmer. \subsection{Apache Hadoop} \label{sec:hadoop} Apache Hadoop is the most popular framework supporting the MapReduce\xspace paradigm. It allows for the execution of distributed computations thanks to the interplay of two architectural components: YARN (\emph{Yet Another Resource Negotiator}) \cite{vavilapalli2013apache} and HDFS (\emph{Hadoop Distributed File System}) \cite{HDFS}. YARN manages the lifecycle of a distributed application by keeping track of the resources available on a computing cluster and allocating them for the execution of application tasks modeled after one of the supported computing paradigms. HDFS is a distributed and block-structured file system designed to run on commodity hardware and able to provide fault tolerance through replication of data. A basic Hadoop cluster is composed of a single \emph{master node} and multiple \emph{worker nodes}. The master node arbitrates the assignment of computational resources to applications to be run on the cluster and maintains an index of all the directories and the files stored in the HDFS distributed file system. Moreover, it tracks the worker nodes physically storing the HDFS data blocks making up these files. The worker nodes host a set of \emph{worker}s (also called \emph{Containers}), in charge of running the map and reduce tasks of a MapReduce\xspace application, and use their local storage to maintain a subset of the HDFS data blocks. One of the main characteristics of Hadoop is its ability to exploit \emph{data-local} computing. By this term, we mean the possibility to move applications closer to the data (rather than vice versa).
This greatly reduces network congestion and increases the overall throughput of the system when processing large amounts of data. Moreover, in order to reliably maintain files and to properly balance the load between different nodes of a cluster, large files are automatically split into smaller HDFS data blocks, replicated and spread across different nodes. \section{Specialized Compressors Supported by means of our {\bf Splittable Compressor Meta-Codec}} \label{sec:DSRC} Among the many compression algorithms specialized for genomic data \cite{Numanagic2016}, DSRC is the only one featuring a splittable Codec among the data compression tools that achieve the best benchmarked performance when dealing with FASTA/Q files. It represents a robust testbed for our solution because its original implementation is developed in C++, and its integration within a Java Codec is not trivial to realize. A DSRC standard compressed file is organized in three parts. \begin{itemize} \item {\bf Body.} It contains a set of compressed data blocks. Each of these is compressed and can be decompressed independently from the others. The default size of each compressed data block is 10MB. \item {\bf Header.} It reports the number of compressed data blocks existing in that file, the size of the footer and its relative position inside the file. \item {\bf Footer.} It reports the size of each compressed data block and the flags used for its compression. \end{itemize} \subsection{Implementation details} \label{subsec:specialpurpose} The special-purpose Codec supporting DSRC, HS\_DSRC, has been obtained following our {\bf Splittable Compressor Meta-Codec}, as described in Section 2.3 of the Main Manuscript\xspace. It required the development of two Java classes: \texttt{DSRCInputFormat} and \texttt{DSRCCodec}. In particular, \texttt{DSRCCodec} uses the JNI framework \cite{Jni} to load in memory and instantiate the dynamic library containing the DSRC native implementation.
Then, it uses the \texttt{DSRCInputFormat} class to extract the information regarding the DSRC parameters and the list of compressed data blocks, according to the DSRC format. In addition, this class initializes the \texttt{CodecInputStream} object, pointing to the file to be decompressed during the execution of a job. Finally, it runs the \texttt{NativeCodecDecompressor decompress} method on each compressed data block to obtain its decompressed version. \section{Specialized Compressors Supported by means of our {\bf Universal Compressor Meta-Codec}} In this Section we provide details about the work done to incorporate into Hadoop the specialized compressors reported in Section 3.1.1 of the Main Manuscript\xspace, using our {\bf Universal Compressor Meta-Codec}. For each compressor, the only step required to support it is the definition of a set of properties stating the supported input file types and the command lines required for compressing and decompressing a generic input file. Let X be the unique name denoting the compressor to be supported and F the file being processed; the following command-line properties are available for its integration: \begin{itemize} \item{\texttt{uc.X.compress.cmd}}: the command line to be used for compressing F using X. \item{\texttt{uc.X.decompress.cmd}}: the command line to be used for decompressing F using X. \item{\texttt{uc.X.io.input.flag}}: the command line flag used to specify the input filename. \item{\texttt{uc.X.io.output.flag}}: the command line flag used to specify the output filename. \item{\texttt{uc.X.compress.ext}}: the extension used by X for saving a compressed copy of F. \item{\texttt{uc.X.decompress.ext}}: the extension used by X for saving a decompressed copy of F ("fastq" by default). \item{\texttt{uc.X.io.reverse}}: set to \emph{true} if X requires the output file name to be specified before the input file name; \emph{false}, otherwise.
\end{itemize} In Table \ref{tab:my_label}, the command lines used for integrating the target specialized compressors using our {\bf Universal Compressor Meta-Codec} are reported. \begin{table} \centering \resizebox{\columnwidth}{!}{ \begin{tabular}{|l|l|l|l|l|l|} \hline \multicolumn{1}{|c|}{Properties} & \multicolumn{5}{c|}{Compressors}\\\cline{2-6} & SPRING (for FASTQ) & SPRING (for FASTA) & DSRC & FqzComp & MFCompress\\ \hline uc.X.compress.cmd & spring -c & spring -c --fasta-input & dsrc c -t8 & fqz\_comp & MFCompressC -t 8 -p 8\\ uc.X.decompress.cmd & spring -d & spring -d & dsrc d -t8 & fqz\_comp -d & MFCompressD -t 8\\ uc.X.io.input.flag & -i & -i & & & \\ uc.X.io.output.flag & -o & -o & & & -o \\ uc.X.compress.ext & .spring & .spring & .dsrc & .fqz & .mfc \\ uc.X.decompress.ext & & .fasta & & & .fasta \\ uc.X.io.reverse & & & & & true \\ \hline \end{tabular} } \caption{Command line properties required for supporting several specialized compressors using our {\bf Universal Compressor Meta-Codec}} \label{tab:my_label} \end{table} \section{Datasets} \label{sec:datasets} The FASTQ files used in our experiments contain a set of reads extracted uniformly at random from a collection of genomic sequences coming from the Pinus Taeda genome \cite{PinusTaeda2013}. The FASTA files used in our experiments contain a set of reads extracted uniformly at random from a collection of genomic sequences coming from the Human genome \cite{Human2008}. Details about the files included in these datasets are reported in Table \ref{tab:FAdataset} and Table \ref{tab:FQdataset}. \begin{table}[!ht] \centering \begin{tabular}{|l|r|c|} \hline Name & \# of reads & Avg. 
read length \\ \hline 16GB & 96,407,378 & 100 \\ 32GB & 192,653,438 & 100 \\ 64GB & 385,306,876 & 100 \\ 96GB & 577,960,314 & 100 \\ \hline \end{tabular} \caption{Files included in the FASTA dataset used in our experiments} \label{tab:FAdataset} \end{table} \begin{table}[!ht] \centering \begin{tabular}{|l|r|c|} \hline Name & \# of reads & Avg. read length \\ \hline 16GB & 44,681,859 & 151 \\ 32GB & 89,363,718 & 151 \\ 64GB & 178,727,437 & 151 \\ 96GB & 268,091,154 & 151 \\ \hline \end{tabular} \caption{Files included in the FASTQ dataset used in our experiments} \label{tab:FQdataset} \end{table} \begin{figure}[!ht] \centering \includegraphics[scale=.45]{img/generic-class-diagram} \caption{UML class diagram of our \textbf{Splittable Compressor Meta-Codec}} \label{fig:classGeneric} \end{figure} \begin{figure}[!ht] \centering \includegraphics[scale=.35]{img/universal-class-diagram} \caption{UML class diagram of our \textbf{Universal Compressor Meta-Codec}} \label{fig:classUniversal} \end{figure} \newpage \begin{figure}[!ht] \centering \includegraphics[scale=.45]{img/dsrc-class-diagram} \caption{UML class diagram of HS\_DSRC } \label{fig:classDSRC} \end{figure} \clearpage \bibliographystyle{abbrv}