\section{Conventional DSR System} \label{sec:conventional_system} \subsection{Acoustic Beamforming} \label{sec:beamforming} Let us assume that a microphone array with $M$ sensors captures a sound wave propagating from a source position and denote the frequency-domain snapshot as $\mbox{$\mathbf{X}$} (t,\omega_k) =[X_1 (t,\omega_k),\cdots,X_{M} (t,\omega_k)]^T$ for an angular frequency $\omega_k$ at frame $t$. With the complex weight vector of an array geometry type $g$ for source position $\mbox{$\mathbf{p}$}$ \vspace{-0.75em} \begin{equation} \mbox{$\mathbf{w}$}_g (t,\omega_k,\bp) = [ w_{g,1} (t,\omega_k,\bp), \cdots, w_{g,M} (t,\omega_k,\bp) ] , \label{eq:bfw} \vspace{-0.75em} \end{equation} the beamforming operation is formulated as \vspace{-0.75em} \begin{equation} Y_g (t,\omega_k,\bp) = \mbox{$\mathbf{w}$}_g^H (t,\omega_k,\bp) \mbox{$\mathbf{X}$} (t,\omega_k), \label{eq:bfo} \vspace{-0.75em} \end{equation} where $H$ is the Hermitian (conjugate transpose) operator. The complex vector multiplication~\erf{eq:bfo} can also be expressed as a real-valued matrix multiplication: \vspace{-0.75em} \begin{align} \begin{bmatrix} \operatorname{Re} (Y_g) \\ \operatorname{Im} (Y_g) \\ \end{bmatrix} &= \begin{bmatrix} \operatorname{Re} (w_{g,1}) & \operatorname{Im} (w_{g,1}) \\ -\operatorname{Im} (w_{g,1}) & \operatorname{Re} (w_{g,1}) \\ \vdots & \vdots \\ \operatorname{Re} (w_{g,M}) & \operatorname{Im} (w_{g,M}) \\ -\operatorname{Im} (w_{g,M}) & \operatorname{Re} (w_{g,M})\\ \end{bmatrix}^T \begin{bmatrix} \operatorname{Re} (X_{1}) \\ \operatorname{Im} (X_{1}) \\ \vdots \\ \operatorname{Re} (X_{M}) \\ \operatorname{Im} (X_{M}) \\ \end{bmatrix}, \label{eq:cat2} \vspace{-0.75em} \end{align} where $(t,\omega_k,\bp)$ is omitted for the sake of simplicity. 
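As a quick sanity check, the equivalence between the complex beamforming operation and its stacked real-valued rearrangement can be sketched in NumPy. The array size and values below are arbitrary assumptions; the sign pattern in the $2 \times 2M$ matrix is written to match the Hermitian form $Y = \mathbf{w}^H \mathbf{X}$.

```python
import numpy as np

# Hypothetical sizes: M = 4 microphones, a single frequency bin.
M = 4
rng = np.random.default_rng(0)
w = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # beamformer weights
x = rng.standard_normal(M) + 1j * rng.standard_normal(M)  # snapshot X(t, omega_k)

# Complex form: Y = w^H x (np.vdot conjugates its first argument).
y_complex = np.vdot(w, x)

# Real-valued form: stack Re/Im parts and multiply by a 2 x 2M matrix
# whose signs implement the conjugation of w.
A = np.zeros((2, 2 * M))
for m in range(M):
    A[0, 2 * m]     =  w[m].real
    A[0, 2 * m + 1] =  w[m].imag
    A[1, 2 * m]     = -w[m].imag
    A[1, 2 * m + 1] =  w[m].real
x_ri = np.empty(2 * M)
x_ri[0::2] = x.real
x_ri[1::2] = x.imag
y_ri = A @ x_ri  # [Re(Y), Im(Y)]
```

Because the mapping is exact, either form can be dropped into a DNN layer and trained with ordinary real-valued back propagation.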
It is clear from~\erf{eq:cat2} that beamforming can be implemented for an array configuration by generating $K$ sets of $2 \times 2M$ matrices, where $K$ is the number of frequency bins. Thus, we can readily incorporate this beamforming framework into the DNN in either the complex or real-valued form. Notice that since our ASR task is classification of acoustic units, the real and imaginary parts can be treated as two real-valued feature inputs. In a similar manner, the hidden layer output can be treated as two separate entities. In that case, the DNN weights can be computed with the real-valued form of the back-propagation algorithm~\cite{MinhuaICASSP2019}. A popular method in the field of ASR is super-directive (SD) beamforming, which uses the \emph{spherically isotropic noise} (diffuse) field~\cite{DocloM07,HimawanMS11}~\cite[S13.3.8]{Wolfel2009}. Let us first define the $(m,n)$-th component of the spherically isotropic noise coherence matrix for an array configuration~$g$ as \begin{equation} \mbox{$\Sigma_{\mathbf{N}}$}_{g,m,n} (\omega_k) = \text{sinc} \left( \omega_k d_{g,m,n} / c \right) \label{eq:NCM} \end{equation} where $d_{g,m,n}$ is the distance between the~$m$-th~and~$n$-th sensors for the array shape~$g$ and $c$ is the speed of sound. This represents the spatial correlation coefficient between the $m$-th and $n$-th sensor inputs in the diffuse field. The weight vector of the SD beamformer for the array geometry $g$ can be expressed as \vspace{-0.75em} \begin{equation} \mbox{$\mathbf{w}$}^H_{\text{SD},g} = \left[ \mbox{$\mathbf{v}$}^H_g \mbox{$\bSigma_{\mathbf{N}_g}^{-1}$} \mbox{$\mathbf{v}$}_g \right]^{-1} \mbox{$\mathbf{v}$}^H_g \mbox{$\bSigma_{\mathbf{N}_g}^{-1}$} \label{eq:SD1} \vspace{-0.75em} \end{equation} where \( (\omega_k,\bp) \) are omitted and $\mbox{$\mathbf{v}$}_g$ represents the array manifold vector of the array geometry $g$ for time delay compensation. 
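A minimal sketch of this SD weight design is given below, assuming a hypothetical three-sensor linear array, a 1~kHz bin, an arbitrary look direction and an arbitrary diagonal loading level; these choices are illustrative assumptions, not the paper's settings.

```python
import numpy as np

c = 343.0                                    # speed of sound (m/s)
pos = np.array([0.0, 0.036, 0.072])          # assumed sensor positions on a line (m)
M = len(pos)
omega = 2 * np.pi * 1000.0                   # angular frequency of a 1 kHz bin

# Diffuse-noise coherence matrix: sinc(omega * d / c) with sinc(x) = sin(x)/x.
# np.sinc is the normalized sinc sin(pi x)/(pi x), hence the division by pi.
d = np.abs(pos[:, None] - pos[None, :])      # pairwise sensor distances d_{m,n}
Sigma_N = np.sinc(omega * d / c / np.pi)
Sigma_N += 1e-2 * np.eye(M)                  # diagonal loading (white-noise-gain control)

# Array manifold (steering) vector for an assumed plane-wave direction of 60 degrees.
tau = pos * np.cos(np.deg2rad(60.0)) / c     # per-sensor propagation delays
v = np.exp(-1j * omega * tau)

# SD (MVDR-style) weights: w^H = [v^H Sigma^-1 v]^-1 v^H Sigma^-1.
Sinv_v = np.linalg.solve(Sigma_N, v)
w_H = Sinv_v.conj() / np.real(np.vdot(v, Sinv_v))
```

The distortionless property of the design can be checked directly: the weights pass the look direction with unit gain, i.e. $\mathbf{w}^H \mathbf{v} = 1$.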
In order to control white noise gain, diagonal loading is normally adjusted~\cite[S13.3.8]{Wolfel2009}. Although speaker tracking has the potential to provide better performance~\cite[\S10]{Wolfel2009}, the simplest solution would be selecting a beamformer based on normalized energy from multiple instances with various look directions~\cite{HimawanMS11}. In our preliminary experiments, we found that competitive speech recognition accuracy was achievable by selecting a fixed beamformer with the highest total energy followed by trajectory smoothing over frames. Notice that highest-energy-based beamformer selection can be mimicked with a max-pooling layer as described in section~\ref{sec:MCDNN}. \subsection{Acoustic Model with Signal Processing Front-End} \label{sec:baseline} As shown in figure~\ref{fig:dnn_baseline}, the baseline DSR system consists of audio signal processing, speech feature extraction and classification NN components. The audio front-end transforms a time-discrete signal into the frequency domain and selects the output from one of multiple beamformers based on the energy criterion. After that, the time-domain signal is reconstructed and fed into the feature extractor. The feature extraction step involves LFBE feature computation as well as causal and global mean-variance normalization~\cite{King2017}. The NN used here consists of multiple LSTM layers, affine transform and softmax layers. The network is trained with the normalized LFBE features in order to classify senones associated with the HMM state. In the conventional DSR system, the audio front-end can be separately tuned based on empirical knowledge. However, it may not be straightforward to jointly optimize the signal processing front-end and classification network~\cite{Heymann2018}, which results in a suboptimal solution for the senone classification task. 
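The highest-energy selection rule is simple enough to sketch in a few lines; shapes and values below are assumptions for illustration only.

```python
import numpy as np

# Hypothetical beamformer outputs: D look directions, T frames, K frequency bins.
rng = np.random.default_rng(1)
D, T, K = 12, 50, 127
Y = rng.standard_normal((D, T, K)) + 1j * rng.standard_normal((D, T, K))

# Total output energy per look direction, accumulated over frames and bins.
energy = np.sum(np.abs(Y) ** 2, axis=(1, 2))

# Pick the fixed beamformer with the highest total energy; its output is the
# ASR input. Applied per frame, the same operation is a max-pooling layer
# across look directions.
best = int(np.argmax(energy))
Y_selected = Y[best]
```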
\begin{figure}[t] \addtolength{\belowcaptionskip}{-1.5em} \begin{minipage}[t]{0.45\linewidth} \includegraphics[width=0.9\linewidth]{Baseline.pdf} \vspace{-1.0em} \caption{Conventional system} \label{fig:dnn_baseline} \end{minipage} \begin{minipage}[t]{0.05\linewidth} \includegraphics[width=\linewidth]{white.pdf} \end{minipage} \begin{minipage}[t]{0.45\linewidth} \includegraphics[width=0.9\linewidth]{MCDNN.pdf} \vspace{-1.0em} \caption{Fully-learnable system} \label{fig:dnn_main} \end{minipage} \end{figure} \section{Conclusion} \label{sec:conclusion} We have proposed new spatial acoustic modeling methods. The ASR experiment results on real far-field data have revealed that, even when the array geometry is mismatched to the training condition, the two-channel model can provide better recognition accuracy than the LFBE model with 7-channel beamforming. Furthermore, we have shown that training the MC DNN under multiple array geometry conditions can improve robustness against microphone placement mismatch. Moreover, we have demonstrated that our proposed method can provide a consistent improvement for multiple array configurations. We plan to combine multi-conditional training and unsupervised training~\cite{Hari2019,Mosner2019}. \section{ASR Experiment} \label{sec:ex1} We perform a series of DSR experiments using over 1,150 hours of unique speech utterances from our in-house dataset. The training and test data amount to approximately 1,100 and 50 hours, respectively. The training data also contains the playback condition where music is played through an internal loudspeaker. The device-directed speech data from several thousand anonymized users was captured using 7-microphone circular-array devices placed in real acoustic environments. The test data contains real speech interactions between the users and devices under unconstrained conditions. Thus, the users may move while speaking to the device. 
Speakers in the test set were excluded from the training set. As a baseline beamforming method, we use robust SD beamforming with diagonal loading adjusted based on~\cite{DocloM07}. Therefore, the microphone array is well calibrated. The array geometry used here is an equi-spaced six-channel circular microphone array with a diameter of approximately 72 millimeters (mm) and one microphone at the center. For SD beamforming, we used all seven microphones. Multiple beamformers are built in the frequency domain toward different directions of interest, and the one with the maximum output energy is selected as the ASR input. It may be worth noting that conventional adaptive beamforming~\cite[S6,S7]{VanTrees2002} degraded recognition accuracy in our preliminary experiments due to insufficient voice activity detection or speaker localization performance on the real data. Thus, we omit results of adaptive beamforming here. For the experiments with the MC DNN, we pick 2 or 4 microphones out of the 7 sensors. As illustrated in figure~\ref{fig:AG}, we made three sets of training and test data with different microphone spacings, 73~mm, 63~mm and 36~mm, for the two-channel experiments. The test datasets are split into the matched and mismatched array geometry conditions. In the mismatched geometry condition, the test array geometry is not seen in training. Each WER is calculated over the combined conditions. For the experiments with four-channel input, we created four sets of training and test data with different relative microphone locations. In the four-channel experiment, we report the WER with respect to the number of sensor locations mismatched to the training array geometry. The number of look directions for the multi-channel layer is set to 12 in all the experiments. The baseline ASR system used a 64-dimensional LFBE feature with online causal mean subtraction~\cite{King2017}. 
For our MC ASR system, we used 127-dimensional complex DFT coefficients, removing the direct-current and Nyquist frequency components (bins 0 and 128). The LFBE and DFT features were extracted every 10ms with a window size of 25ms and 12.5ms, respectively. Both features were normalized with the global mean and variances precomputed from the training data. The classification LSTM for both features has the same architecture: 5 LSTM layers with 768 cells followed by an affine transform with 3101 outputs. All the networks were trained with the cross-entropy objective using our DNN toolkit~\cite{Strom15}. The Adam optimizer was used in all the experiments. For building the DFT model, we initialized the classification layers with the LFBE model. Results of all the experiments are shown as relative word error rate reduction (WERR) with respect to the performance of the LFBE baseline system with a single array channel. The baseline system is powerful enough to achieve a single-digit WER in a high-SNR condition. A larger WERR value indicates a bigger improvement in recognition accuracy. The LFBE LSTM model for the baseline system was trained and evaluated on the center microphone data. We also present the WERR relative to the LFBE with robust SD beamforming. Table~\ref{tab:Tab_devl} shows the relative WERRs of the LFBE LSTM with the conventional 7-channel beamformer, the elastic SF (ESF) network trained with the single and multiple array geometry data, and the weight-tied SF (WTSF) network trained under the multiple array geometry conditions. Each number enclosed in parentheses indicates the WERR relative to the LFBE LSTM with 7-channel robust beamforming. Table~\ref{tab:Tab_devl} also shows how much recognition accuracy degrades with respect to the number of mismatched sensor locations, indicated in the third column of table~\ref{tab:Tab_devl}. Here, the WERR results are split by the estimated signal-to-noise ratio (SNR) of the utterances. 
The SNR was estimated by aligning the utterances to the transcriptions with an ASR model and subsequently calculating the accumulated power of speech and noise frames over an entire utterance. It is clear from table~\ref{tab:Tab_devl} that the recognition accuracy can be improved by multiple microphone systems, both conventional beamforming and fully learnable MC models. It is also clear from table~\ref{tab:Tab_devl} that the unified acoustic models with two channels outperform conventional beamforming with seven channels even if one sensor location is mismatched to the training condition. It is also apparent from table~\ref{tab:Tab_devl} that the use of 4 channels for the unified AM further improves recognition accuracy in the matched geometry condition but degrades performance in the mismatched array configuration condition. Moreover, we can see that the WTSF architecture trained under the multiple array geometry conditions provides slightly better recognition accuracy than the ESF. Notice that the CNN and max-pooling layers of the WTSF network can reduce the number of parameters compared to the fully connected ESF network architecture. Another advantage of multi-geometry spatial acoustic modeling is that multiple array configurations can be encoded in a single model. Figure~\ref{fig:ESF_wrt_AG} shows the relative WERRs of the WTSF networks trained with the single and multi-geometry data under all the SNR conditions. Here, all the models are trained with four-channel data. For generating the WERs of figure~\ref{fig:ESF_wrt_AG}, we build the single geometry WTSF network with the reference array configuration data only while training the multi-geometry model with four types of array geometry data so as to cover all the test array configurations. 
In figure~\ref{fig:ESF_wrt_AG}, the WERRs are plotted with respect to the dissimilarity measure from the reference array geometry; the dissimilarity index is calculated as the sum of the differences between the relative sensor distances of the reference and test arrays over four channels, and is given in the parentheses of the x-axis label. The x-axis label of figure~\ref{fig:ESF_wrt_AG} also shows the microphone index numbers used for each condition. It is clear from figure~\ref{fig:ESF_wrt_AG} that the recognition accuracy of the single-geometry model degrades as the array configuration of the test condition becomes more different from that of the training condition. It is also clear from figure~\ref{fig:ESF_wrt_AG} that the multi-geometry model can maintain the improvement for different array configurations. In fact, this is a new capability of the multi-geometry acoustic model in contrast to conventional multi-channel techniques. \section{Introduction} \label{sec:intro} A complete system for distant speech recognition (DSR) typically consists of distinct components such as a voice activity detector, speaker localizer, dereverberator, beamformer and acoustic model~\cite{Pearson96,Omolog2001,Wolfel2009,KumataniAYMRST12,KinoshitaDGHHKL16}. While it is tempting to isolate and optimize each component individually, experience has proven that such an approach cannot lead to optimal performance without joint optimization of multiple components~\cite{McDonough2008,Seltzer2008,VirtanenBook2012}. Conventional microphone array processing also requires meticulous microphone calibration to maintain signal enhancement performance~\cite[\S5.53]{Tashev2009}. The relative microphone placement mismatch between filter design and test conditions can degrade ASR accuracy~\cite{HimawanSM08}. Such a problem can be alleviated with self-calibration~\cite{HimawanSM08,McCowanLH08} or microphone selection~\cite{WolfN14,Kumatani11channelselection,GuerreroTO18}. 
Reliable self-calibration typically requires a supervised signal such as time-stretched pulses~\cite{Habets2007} or an accurate noise field assumption~\cite{McCowanLH08}. Accurate microphone calibration may not be necessary for DSR if we can build an acoustic model that encodes various relative microphone locations. It has been shown in~\cite{SainathASRU15,OchiaiICML17} that the dependency on specific microphone spacing can be reduced by training the deep neural network (DNN) with multi-channel (MC) input under multiple microphone spacing conditions in a unified manner. It is also straightforward to jointly optimize the unified MC DNN so as to achieve better discriminative performance on acoustic units from the MC signal~\cite{SainathASRU15,OchiaiICML17,Xiao16,MinhuaICASSP2019}. Moreover, the trained MC DNN can process streaming data in real time without the accumulation of signal statistics, in contrast to batch processing methods such as maximum likelihood beamforming~\cite{Seltzer2004,Rauch2008}, source separation techniques~\cite{VirtanenBook2012,Bhiksha2010} and blind DNN clustering approaches. Another approach is the use of MC speech features such as log energy-based features~\cite{SwietojanskiSPL14,Braun2018} or the LFBE supplemented with a time delay feature~\cite{KimInterspeech16}. By doing so, the improvement with multiple sensors can still be maintained in the mismatched array geometry condition. However, the performance of those methods would be limited due to the lack of a proper sound wave propagation model~\cite{MinhuaICASSP2019}. As will become clear in section~\ref{sec:MCDNN}, the DNN can subsume multiple beamformers with various array configurations. Moreover, the feature extraction components described in~\cite{Xiao16,SwietojanskiSPL14,Braun2018,KimInterspeech16} are not fully learnable. In this paper, we propose two MC network architectures that can model multiple array configurations. 
We initialize the MC input layer with beamformers' weights designed for multiple types of array geometry. This spatial filtering (SF) layer thus subsumes beamformers with various look directions and array configurations. It is implemented in the frequency domain for the sake of computational efficiency~\cite{Haykin2001}. The first network architecture proposed here combines the SF layer's output in a fully connected manner. In the second MC network, we combine the SF output of multiple look directions with weights tied across all frequencies, followed by maximum energy selection. All the networks are optimized based on the ASR criterion in a stage-wise manner~\cite{MinhuaICASSP2019}. It is also worth noting that our method requires neither a bi-directional pass nor the accumulation of signal statistics, unlike DNN mask-based beamforming~\cite{OchiaiICML17,Heymann2018,Higuchi2018}. We demonstrate the effectiveness of the multi-geometry acoustic models through DSR experiments on real-world far-field data spoken by thousands of real users, collected in various acoustic environments. The test data contains challenging conditions where speakers interact with the ASR system without any restriction in reverberant and noisy environments. This paper is organized as follows. In section~\ref{sec:conventional_system}, we review the relationship between beamforming and neural networks. In section~\ref{sec:MCDNN}, we describe our deep MC model architectures robust against array geometry mismatch. In section~\ref{sec:ex1}, we analyze ASR results on the real-world data. Section~\ref{sec:conclusion} concludes this work. 
\section{Frequency Domain Multi-channel Network} \label{sec:MCDNN} \begin{figure}[t] \addtolength{\belowcaptionskip}{-1em} \begin{minipage}[t]{0.05\linewidth} \includegraphics[width=\linewidth]{white.pdf} \end{minipage} \begin{minipage}[t]{0.4\linewidth} \includegraphics[width=\linewidth]{MCDNN_BF3.pdf} \vspace{-0.8em} \centering \text{(a) Elastic SF} \label{fig:mcdnn1} \end{minipage} \begin{minipage}[t]{0.05\linewidth} \includegraphics[width=\linewidth]{white.pdf} \end{minipage} \begin{minipage}[t]{0.4\linewidth} \includegraphics[width=\linewidth]{MCDNN_BF4.pdf} \vspace{-0.8em} \centering \text{(b) Weight-tied SF} \label{fig:mcdnn3} \end{minipage} \caption{Multi-geometry spatial filtering (SF) network} \vspace{-1.5em} \label{fig:all_mcdnn} \end{figure} \begin{figure}[t] \addtolength{\belowcaptionskip}{-3em} \centering \includegraphics[width=0.75\linewidth]{WTSF.pdf} \vspace{-1.5em} \caption{Weight-tied SF output combination} \label{fig:WTSF_CNN} \vspace{-1.5em} \end{figure} Figure~\ref{fig:dnn_main} shows our whole DSR system with the fully-learnable neural network. As shown in figure~\ref{fig:dnn_main}, our DSR system consists of four functional blocks: signal pre-processing, the MC DNN, the feature extraction (FE) DNN and the classification LSTM. First, a block of each channel signal is transformed into the frequency domain through the FFT. In the frequency domain, DFT coefficients are normalized with global mean and variance estimates. The normalized DFT features are concatenated and passed to the MC DNN that models different array geometries. Our FE DNN contains an affine transform initialized with mel-filter bank values, a rectified linear unit (ReLU) and a log component. Notice that the initial FE DNN generates an LFBE-like feature. The output of the FE DNN is then input to the same classification network architecture as the LFBE system: LSTM layers followed by affine transform and softmax layers. 
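The FE block above can be sketched as follows. This is a minimal illustration assuming a standard triangular mel filter bank construction and illustrative sizes (127 bins, 64 mel channels, 16~kHz); the exact filter-bank design used to initialize the affine transform is not specified in the text.

```python
import numpy as np

K, n_mel, sr = 127, 64, 16000  # assumed: frequency bins, mel channels, sample rate

def mel_matrix(n_mel, K, sr):
    """Triangular mel filters evaluated on K linear frequency bins (simplified)."""
    def hz_to_mel(f):
        return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m):
        return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    edges = mel_to_hz(np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mel + 2))
    bins = np.linspace(0.0, sr / 2, K)
    W = np.zeros((n_mel, K))
    for i in range(n_mel):
        lo, ctr, hi = edges[i], edges[i + 1], edges[i + 2]
        rising = (bins - lo) / (ctr - lo)
        falling = (hi - bins) / (hi - ctr)
        W[i] = np.maximum(0.0, np.minimum(rising, falling))
    return W

# Affine transform initialized with mel filter-bank values, then a ReLU-style
# floor (which also keeps the log finite) and a log, yielding an LFBE-like feature.
W_mel = mel_matrix(n_mel, K, sr)
b = np.zeros(n_mel)
power = np.random.default_rng(2).random(K)          # stand-in beamformed power spectrum
lfbe_like = np.log(np.maximum(W_mel @ power + b, 1e-7))
```

Since `W_mel` and `b` are ordinary layer parameters after initialization, the whole block can be updated by back propagation together with the rest of the network.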
The DNN weights are trained in a stage-wise manner~\cite{MinhuaICASSP2019,Kumatani2017}; we first build the classification LSTM with the single-channel LFBE feature, then train the cascade network of the FE and classification layers with the single-channel DFT feature, and finally perform joint optimization on the whole network with MC DFT input. In this work, we use training data captured with different array configurations. The proposed method can learn the spatial filters for different array geometries as well as feature extraction parameters solely from the observed data. This fully learnable network requires neither microphone self-calibration, clean speech signal reconstruction, nor perceptually-motivated filter banks~\cite{RichardSN13}. Figure~\ref{fig:all_mcdnn} shows the new MC network architectures with multi-geometry affine transforms. The multi-geometry affine transforms correspond to beamformers with different look directions and array shapes. Figure~\ref{fig:all_mcdnn}~(a) depicts an elastic MC network architecture that combines the output of the SF layer with a fully connected network. This elastic MC DNN includes a block of affine transforms initialized with beamformers' weights, a signal power component, an affine transform layer and a ReLU. For initialization of the block affine transforms, we use SD beamformers' weights designed for various look directions and multiple array configurations. Let us denote the number of array geometry types as $G$ and the number of beamformer look directions as $D$. 
The output power of the initial SF layer is expressed with $G \times D \times K$ blocks of frequency independent affine transforms as \vspace{-0.3em} \begin{eqnarray} \begin{bmatrix} Y_{1,1} (\omega_1) \\ \vdots \\ Y_{1,D} (\omega_1) \\ \vdots \\ Y_{g,d} (\omega_k) \\ \vdots \\ Y_{G,D} (\omega_K) \\ \end{bmatrix} = \text{pow} \left ( \begin{bmatrix} \mbox{$\mathbf{w}$}^H _{\text{SD},1} (\omega_1, \mbox{$\mathbf{p}$}_1) \mbox{$\mathbf{X}$} (\omega_1) \\ \vdots \\ \mbox{$\mathbf{w}$}^H _{\text{SD},1} (\omega_1, \mbox{$\mathbf{p}$}_D) \mbox{$\mathbf{X}$} (\omega_1) \\ \vdots \\ \mbox{$\mathbf{w}$}^H _{\text{SD},g} (\omega_k, \mbox{$\mathbf{p}$}_d) \mbox{$\mathbf{X}$} (\omega_k) \\ \vdots \\ \mbox{$\mathbf{w}$}^H _{\text{SD},G} (\omega_K, \mbox{$\mathbf{p}$}_D) \mbox{$\mathbf{X}$} (\omega_K) \\ \end{bmatrix} + \mbox{$\mathbf{b}$} \right), \label{eq:esf1} \vspace{-1.2em} \end{eqnarray} where $\text{pow}()$ is the sum of squares of real and imaginary values and $\mbox{$\mathbf{b}$}$ is a bias vector. As demonstrated in our prior work~\cite{MinhuaICASSP2019}, initializing the first layer with beamformers' weights leads to much more efficient optimization in comparison to random initialization. The output of the SF layer is combined with fully connected weights. Accordingly, this can mix different frequency components. Figure~\ref{fig:all_mcdnn}~(b) illustrates another MC network architecture proposed in this paper. The second MC network also applies the block of affine transforms associated with each array configuration independently. The weights of the block affine transforms are initialized with SD beamformers' weights in the same manner as the elastic SF network. We then apply weights tied across all frequencies in order to combine the multiple beamformers. Such a combination process is depicted in figure~\ref{fig:WTSF_CNN}, where each element of the matrix is computed in the same manner as~\erf{eq:esf1}. 
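The SF layer's output power above can be sketched compactly with an einsum over assumed shapes ($G$ geometries, $D$ directions, $K$ bins, $M$ channels); the weights here are random stand-ins for the SD initialization.

```python
import numpy as np

rng = np.random.default_rng(3)
G, D, K, M = 2, 12, 127, 2
# Stand-ins for the SD beamformer weights w_{SD,g}(omega_k, p_d) and snapshot X(omega_k).
W = rng.standard_normal((G, D, K, M)) + 1j * rng.standard_normal((G, D, K, M))
X = rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))
b = np.zeros((G, D, K))  # bias vector, reshaped per (g, d, k) block

# Y[g, d, k] = w^H X for each geometry, look direction and frequency bin;
# pow() is the sum of squares of the real and imaginary parts.
Y = np.einsum('gdkm,km->gdk', W.conj(), X) + b
P = Y.real ** 2 + Y.imag ** 2

sf_out = P.reshape(-1)  # stacked G*D*K block output fed to the next layer
```

Note that each $(g, d, k)$ block touches only the $M$ channel values of its own frequency bin, which is what keeps the input layer frequency independent.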
As indicated in figure~\ref{fig:WTSF_CNN}, the SF layer output is convolved with $1 \times D$ filters with a stride of $D$ in width and one in height. This 2D convolution process can avoid the permutation problem known in blind source separation, in which different look directions are taken inconsistently at different frequencies. Finally, the SF layer output is selected with a max-pooling layer that corresponds to maximum energy selection. In contrast to the elastic SF network, this network can efficiently reduce the dimension with the max-pooling layer. We hypothesize that the SF layer combination has a similar effect to noise cancellation, subtracting one beamformer's output from another. This would be done with a large amount of training data rather than in a sample-by-sample adaptive way. Moreover, our network considers not only multiple look directions but also different array geometries. All the network parameters are updated based on the cross-entropy criterion in training. Both architectures maintain frequency independent processing at the input layer, which can reduce the number of parameters significantly. In this paper, the MC network architectures of (a) and (b) are referred to as the multi-geometry elastic SF (ESF) and weight-tied SF (WTSF) network, respectively. The WTSF network has a stronger constraint than the ESF net since the same weights for combining spatial layer output are shared across all the frequencies. This weight-sharing structure maintains the consistent SF output combination over frequencies. However, it may lack flexibility, such as smoothing over different frequencies.
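The weight-tied combination can be sketched as below, under the assumption that the SF power output is arranged as a $(K, G \cdot D)$ matrix with one row per frequency and that a single $1 \times D$ filter is used; applying the filter with a stride of $D$ yields one combined output per geometry, and a max-pool across those outputs mimics highest-energy selection.

```python
import numpy as np

rng = np.random.default_rng(4)
K, G, D = 127, 2, 12
P = rng.random((K, G * D))     # SF layer power: one row per frequency bin
filt = rng.standard_normal(D)  # one 1 x D filter, tied across all frequencies

# Stride-D 1-D convolution along the direction axis: because the stride equals
# the filter width, each geometry's D look directions are combined exactly once,
# and every frequency row is combined with the same weights (no permutation).
combined = np.stack(
    [P[:, g * D:(g + 1) * D] @ filt for g in range(G)], axis=1)

pooled = combined.max(axis=1)  # max-pooling over the G combined outputs, per bin
```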
\section{Introduction} One of the most promising ideas developed in Service Computing is the automatic generation of services' compositions~\cite{Papazoglou03service-orientedcomputing,SOCValery,Bartalos}. Multiple views and problem formulations have been proposed for this purpose~\cite{Rao04asurvey}. As a point of differentiation between these works, we consider the {\it functional structure} of the composition to be built. By this term, we mean the representation of the functioning of the composition as a collaboration among small abstractions that can each be realized by known services. Typically, oriented graphs, business processes and workflows have been used for such descriptions~\cite{Cardoso2004281,Weske2007,GoldmanNgoko}. Based on the functional structure, we distinguish between two dominant classes of services' composition problems. In the first one, the general inputs of the composition problem consist of: (1) a basis of services whose behavior is described in the public interface; (2) a set of user constraints and goals, defining and framing the finality of the composition. One must infer from these data an interaction among services meeting the constraints and goals~\cite{Bartalos,Sirin:2004:HPW:1741306.1741331,Jiang}. We put these formulations in the class of {\it structurally-unfixed} problems. Their particularity is that the functional structure of the composition is not an input datum of the automation problem. It is an input, however, in the second class of formulations, which we refer to as {\it structurally-fixed} problems. There, the challenge is reduced to a binding problem, in which concrete implementations must be associated with the abstractions of the functional structure so as to guarantee a minimal Quality of Service (QoS) while meeting some service level agreements (SLAs)~\cite{Alrifai,BenMokhtar,Yu,ZengMiddleware,Zheng,Ardagna,JISA,cpe3015}. This binding problem is also referred to as the service selection problem. 
It seems obvious that structurally-unfixed formulations include more automation in the design of services' compositions. Indeed, we can decompose the challenge in these cases into two parts: (1) find a functional structure that meets the constraints and goals; (2) solve a structurally-fixed problem. However, let us remark that in practice it is hard to address structurally-unfixed problems without providing a formal description of the composition's behavior. The OWL-S language~\cite{Sirin:2004:HPW:1741306.1741331} and pre/post-condition formalisms~\cite{Bartalos,Oh05acomparative} are some examples utilized in this context. The existence of these additional inputs in practice tempers the high level of automation of structurally-unfixed formulations. In this work, we consider the service selection problem (a structurally-fixed formulation). The functional structure in our work is given by a Hierarchical Services Graph (HSG)~\cite{GoldmanNgoko}. This modelling defines a composition as a graph with three layers: the machine, service and operations layers. The service composition logic is captured in the operations layer. The logic consists of BPMN~\cite{Weske2007} interactions among a set of operations. These operations are abstract in the sense that they can be implemented by different services located in the layer underneath. Given such a graph, we are interested in finding the {\it best} implementation for the abstract operations while fulfilling the SLA constraints. We restrict the SLA definition to two QoS dimensions: the Service Response Time (SRT) and the Energy Consumption (EC). Although we use a particular representation of services' compositions, the core problem that we address is not new~\cite{Alrifai,BenMokhtar,Zheng,Ardagna,JISA}. However, our study has two main features. Firstly, we are interested in finding optimal solutions. This choice has a weakness: the NP-hardness of the service selection problem. 
However, we believe that by making an {\it intelligent search}, one can provide, within an acceptable runtime, exact solutions for {\it small or medium} services' compositions (around $20$ nodes for the service composition). It is important to notice that if we consider that services' compositions implement business processes or workflows, then there are many practical examples that correspond to small or medium compositions. One can look, for instance, at the examples provided in~\cite{Omg,Freund}. The second feature of our work is that we adopt a view of the problem that, to the best of our knowledge, has not been studied before. This is clearly stated by our main contribution, which consists of mapping the service selection problem onto the Constraint Satisfaction Problem (CSP). This mapping opens a large potential in the resolution of the service selection problem. As we will see, it also captures another facet of service selection: the feasibility problem~\cite{Ardagna,JISA}. Among the existing techniques for solving the CSP, we choose to investigate backtracking~\cite{Baker95intelligentbacktracking}. We complete our contribution by proposing various backtracking-based algorithms for the service selection problem. The proposed variants are based on the notion of reduction order, introduced in our prior work~\cite{GoldmanNgoko} on QoS prediction. They are also inspired by two existing heuristics\footnote{These heuristics provide an exact solution, but their runtime can differ significantly from one instance to another.} for the CSP: the max-degree and min-domain heuristics~\cite{Baker95intelligentbacktracking}. Finally, we conducted an experimental evaluation in which we demonstrate the runtime gain of the backtracking-based algorithms over two classical solutions for the service selection problem: exhaustive search and integer linear programming. 
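The CSP view of service selection can be sketched as follows: each abstract operation is a variable, its candidate services form the domain, and the SLA bounds are the constraints; backtracking assigns operations one at a time and prunes any partial assignment that already violates a bound. All names, QoS values and the additive sequential-composition model below are illustrative assumptions, not the paper's HSG model.

```python
# operation -> list of (service, SRT in ms, EC in joules); values are made up.
candidates = {
    "op1": [("s1a", 40, 3.0), ("s1b", 25, 5.0)],
    "op2": [("s2a", 30, 2.0), ("s2b", 60, 1.0)],
    "op3": [("s3a", 50, 4.0), ("s3b", 20, 6.0)],
}
SRT_MAX, EC_MAX = 120, 12.0  # assumed global SLA bounds

def backtrack(ops, assignment, srt, ec):
    """Depth-first search that prunes partial assignments violating the SLA."""
    if srt > SRT_MAX or ec > EC_MAX:
        return None                  # prune: a constraint is already violated
    if not ops:
        return dict(assignment)      # every operation is bound: feasible solution
    op, rest = ops[0], ops[1:]
    for service, s, e in candidates[op]:
        assignment[op] = service
        found = backtrack(rest, assignment, srt + s, ec + e)
        if found is not None:
            return found
        del assignment[op]           # undo and try the next candidate
    return None                      # no candidate works: backtrack further

solution = backtrack(list(candidates), {}, 0, 0.0)
```

The order in which operations and candidates are tried is exactly where reduction orders and the max-degree/min-domain heuristics plug in; this sketch simply uses the dictionary order.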
The experiments also give us interesting insights about the size of the compositions for which we can expect results in real time. The remainder of this paper is organized as follows. Section~\ref{Related} presents the related work. In Section~\ref{SSP}, we define the service selection problem and connect it to the CSP. Naive and backtracking algorithms for the problem are discussed in Sections~\ref{exhaustiveSearch} and~\ref{backtrackingSearch}. Section~\ref{ExperimentalEvaluation} gives an experimental evaluation of our work. In Section~\ref{Discussion}, we discuss the potential in the development of service selection algorithms raised by our CSP mapping. We conclude in Section~\ref{Conclusion}. \section{Related work} \label{Related} We stated that services' composition problems can be structurally-unfixed or structurally-fixed. Our work focuses only on the latter case. However, interesting proposals for the former case can be found in the work of Oh et al.~\cite{Oh05acomparative}, Sirin et al.~\cite{Sirin:2004:HPW:1741306.1741331} or in the survey of Rao and Su~\cite{Rao04asurvey}. A key idea that these works share is the usage of AI planning techniques for the resolution of the services' composition problem. Concomitantly, Integer Linear Programming (ILP) is one of the most widely used techniques for tackling service selection problems. One of the pioneering papers using this technique is by Lee~\cite{Lee}. Though it was not its main purpose, his work demonstrated that SLAs can, in practice, be formulated as linear equations. This suggests that in many cases, the service selection problem can be solved by integer linear programming. The work of Lee focused only on two QoS dimensions: the price and the service response time. Similar modellings were proposed including other QoS dimensions such as price, duration, reputation, reliability and availability~\cite{ZengMiddleware,Yu,Ardagna}. 
In particular, the work of Zeng et al.~\cite{ZengMiddleware} shows that, in their natural formulation, constraints related to availability result in non-linear equations. They then propose a method for transforming such equations into linear ones. In most papers based on linear programming, the services' composition is viewed as a collaboration among multiple services within a single business process. In practice, however, collaborations among multiple processes exist. For these latter cases, the works of Ngoko et al.~\cite{JISA,mgc2012} showed how to use linear programming for the service selection problem. They focused on energy and service response time minimization. Many papers established, in different contexts, the NP-hardness of the service selection problem~\cite{Lee,Yu}. This means that exact solutions obtained by ILP are computed in exponential runtime. For obtaining fast results, heuristics have been considered. Yu et al.~\cite{Yu,Yu:2004:SSA:1018413.1019049} proposed a branch and bound algorithm (BBLP) and a dynamic programming algorithm for service selection. These ideas are also discussed in~\cite{Lee}. In both cases the service selection problem is reduced to the multichoice knapsack problem. The branch and bound algorithm exploits the Lagrangian relaxation of the ILP modelling of this problem. The dynamic programming solution adapts existing solutions for the multichoice knapsack problem. BBLP and dynamic programming improve on the runtime of the naive resolution of the service selection problem. In this work, we will also propose an exact resolution (like the branch and bound solution of Yu et al.) that improves on the naive one. Ben Mokhtar et al.~\cite{BenMokhtar} proposed a two-phase heuristic for service selection. In the first phase, the heuristic classifies services regarding their {\it utility}. A search approach based on this classification completes the selection process in the second phase. 
The advantage of this approach is to propose near-exact solutions within a {\it short} runtime. Ngoko et al.~\cite{JISA} and Zeng et al.~\cite{ZengMiddleware} proposed sub-optimal approaches, as well as exact algorithms for special cases, for the resolution of the service selection problem. As they stated, these solutions are efficient only on a small class of services' compositions. Yu et al.~\cite{Yu:2004:SSA:1018413.1019049} proposed {\it improvement heuristics} for finding near-optimal solutions. The first step of the heuristic consists of finding a feasible solution. The solution is then improved by considering other potential combinations of services. As they showed, the proposed heuristic can improve on the runtime required by {\it BBLP}. Let us however recall that the solutions found are not necessarily optimal. Alrifai et al.~\cite{Alrifai} also proposed a heuristic for the service selection problem. The main idea is to decompose the global service selection problem into subproblems that can be solved by local search. Doing so, they show that they can drastically improve on the runtime of the heuristic proposed by Yu et al.~\cite{Yu}. Est\'evez-Ayres et al.~\cite{DBLP:journals/concurrency/Estevez-AyresGBD11} proposed a heuristic for finding good approximations for the service selection problem under a runtime bound. One of the main ideas that they describe will be used in our work: it consists of making partial evaluations on a sub-composition of services. In our proposal, we take the idea further; we focus on the ordering that can be used for the set of partial evaluations to be made. Genetic programming approaches were also proposed~\cite{Jaeger,Canfora,Cao,cpe3015}, viewing a services' composition as a string where each character is a service operation. 
Finally, let us remark that, as suggested by Cardoso et al.~\cite{Cardoso2004281} and developed in other works~\cite{DBLP:journals/concurrency/Estevez-AyresGBD11,Yu:2004:SSA:1018413.1019049}, heuristics for service selection can be obtained from those solving the QoS prediction problem. We will come back to this point later. For now, let us point out that many QoS prediction algorithms, such as the SWR algorithm~\cite{Cardoso2004281} or other graph reduction algorithms~\cite{GoldmanNgoko,Zheng2}, can serve for designing heuristics for the service selection problem. A large literature exists about the service selection problem. Existing works established connections between the service selection problem and the SAT problem~\cite{Oh05acomparative}, the multichoice knapsack problem~\cite{Lee}, the multidimension multichoice knapsack problem and the multiconstrained optimal path problem~\cite{Yu:2004:SSA:1018413.1019049}. However, we did not find any work that proposes to study this problem as a variant of the CSP. The solution that we propose is inspired by a CSP view of service selection. It also has some similarities with the work done by Est\'evez et al.~\cite{DBLP:journals/concurrency/Estevez-AyresGBD11}, the branch and bound algorithm of Yu et al.~\cite{Yu} and our prior work~\cite{JISA}. For circumscribing our originality, let us also notice that our prior work was based on mixed integer linear programming. Though interesting, this solution is tuned to a particular QoS modelling. For instance, it does not work with the probabilistic modelling of Hwang et al.~\cite{Hwang20075484}. Like Est\'evez et al.~\cite{DBLP:journals/concurrency/Estevez-AyresGBD11}, we propose to improve the runtime of the service selection algorithm by reducing the set of candidate solutions that are explored. We differ from their work by the usage of the backtracking technique. 
Similarly to Yu et al.~\cite{Yu}, we adopt a branching technique for reducing the set of candidate solutions during the resolution. But instead of using the Lagrangian relaxation, we propose a novel estimation. Finally, let us observe that while most works focused on heuristic approaches for the service selection problem, we are interested in exact resolution. The NP-hardness of the service selection problem is the main weakness of this choice. However, we believe that by using an appropriate search algorithm, one can obtain, within a short runtime, optimal solutions for {\it small and medium services' compositions}. Our work will demonstrate this point by considering services' compositions implementing well-known business processes. Moreover, our proposal can serve as a solid basis for the development of approximate solutions to the service selection problem. \section{The service selection problem} \label{SSP} \subsection{Structure of a services' composition} We model a services' composition as a Hierarchical Services Graph (HSG)~\cite{GoldmanNgoko,mgc2012,JISA}. In this representation, a services' composition is a three-layer graph, each layer encapsulating a particular abstraction of a service composition; these are the business processes, the services and the physical machines layers. An example of such a graph is given in Figure~\ref{ExampleHSG}. \begin{figure}[htbp] \centering \fbox{ \includegraphics[width=0.68\linewidth,height=2.1in]{./Figures/HSG.eps}} \caption{Example of Hierarchical Services Graph}\label{ExampleHSG} \end{figure} \begin{figure}[htbp] \centering \fbox{ \includegraphics[width=0.64\linewidth,height=2.4in]{./Figures/Pattern.eps}} \caption{Set of generic subgraph patterns in the operations layer of a HSG. Each $G_i$ is again a subgraph obtained from these patterns or an operation}\label{flows} \end{figure} A HSG comprises three layers organized as follows. 
The first layer, which we will also refer to as the \textbf{operations graph}, describes the functioning of the services' composition as a business process interaction among abstract operations. For the sake of simplicity, we will reduce these interactions to the ones obtained by composing the subgraph patterns of Figure~\ref{flows}. Abstract operations are implemented by the services that are in the layer underneath. Finally, the last layer states the machines where services are deployed. An operation can be implemented by multiple services. This means that given an abstract operation, we have, at the services layer, many concrete operations that can implement its behavior. The same service can be deployed on various machines. This captures the possible migrations that can occur during the execution of a services' composition. However, for the sake of simplicity, we will assume that each service is deployed on a unique machine. Finally, let us remark that the HSG modelling we consider corresponds to the implementation of a single business process. Representations for multiple business processes also exist~\cite{mgc2012,JISA}, but this is out of the scope of our study. The service selection problem is based on predefined relationships between abstract operations and services. The problem consists in choosing the best concrete operations for abstract ones so as to minimize the service response time and the energy consumption. Below, we provide a formal definition. \subsection{Problem formulation} Here, we consider the formulation introduced in~\cite{JISA,cpe3015}. {\bf Problem's inputs: } Let us consider a HSG whose set of abstract operations is $O$. For each operation $u \in O$, there is a set of concrete implementations $Co(u) = \{u_1, \dots, u_{m_u}\}$. For each concrete implementation $u_v$, we have the mean response time $S(u_v)$ and the energy consumption $E(u_v)$. 
We also assume two upper bounds issued from SLA constraints: the bound $MaxS$ for the service response time and $MaxE$ for the energy consumption. Finally, we have a tuning parameter $\lambda \in [0,1]$ used for giving more priority to the service response time (SRT) or the energy consumption (EC) in the problem optimization goal.\\ {\bf Problem objective: } We are looking for an assignment of concrete operations for $O$ that fulfills the following constraints: \begin{enumerate} \item[$C_1$:] each operation must be associated with a unique concrete implementation; \item[$C_2$:] the QoS of the resulting composition must not exceed $MaxS$ in response time and $MaxE$ in energy consumption; \item[$C_3$:] if $S$ is the service response time of the resulting composition and $E$ its energy consumption, then the assignment must minimize the global penalty $\lambda S + (1-\lambda)E$. \end{enumerate} In this formulation, the constraint $C_2$ defines users' SLAs for the response time and energy consumption. The composition has a penalty defined by the constraint $C_3$. To complete this formulation, it is important to explain how $S$ and $E$ are computed given a binding of abstract services to concrete ones. We address this issue in the following subsection by associating an execution semantics to HSGs. \subsection{Execution semantics} We divide the semantics in two parts. The first one states how we represent the QoS of a concrete operation; the second one determines how we compute the QoS of a request that {\it traverses} multiple concrete operations. \subsubsection{QoS of a concrete operation.} We use a deterministic modelling for operation QoS. The mean QoS of a concrete operation in a given dimension (response time, energy consumption) is expressed as a real value. Though criticizable, this modelling has been considered in multiple other works~\cite{Cardoso2004281,ZengMiddleware,GoldmanNgoko}. 
Moreover, the conclusions of our current work can be extended to other modellings, such as the probabilistic one of Hwang et al.~\cite{Hwang20075484}. Given the QoS of each concrete operation, we will now show how to aggregate them in order to capture the mean QoS of a request that is processed in a HSG. \subsubsection{QoS of a subgraph.} We compute the QoS of a service composition depending on the structure of the operations graph (the upper-layer graph of the HSG). The idea is to aggregate the operation QoS by considering all possible execution cases of a request processed in a HSG. The aggregation rules are depicted in Table~\ref{tabAggRules}. They state how to compute the response time and the energy consumption expected from a request that is processed in a HSG whose structure is matched to the patterns of Figure~\ref{flows}. $E(P)$ refers to the energy consumption of a request processed in the subgraph $P$; $S(P)$ is its response time. In an {\it exclusive choice}, $p_i$ gives the probability for a request to be routed towards the subgraph $P_i$. For the sake of simplicity, in an {\it inclusive choice} (see Figure~\ref{flows}), we assume that the request can only be routed towards the subgraph $P_1$, the subgraph $P_2$ or, simultaneously, both. Each routing occurrence has a known probability, respectively $p_{or1}$, $p_{or2}$ and $p_{or||}$. Our solutions can however be generalized to the case where we have more routing occurrences. Finally, for any loop subgraph, we assume that we know the mean number $m$ of times a request loops on it. 
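As an illustration, the aggregation rules of Table~\ref{tabAggRules} can be sketched in a few lines of Python (a minimal sketch for this paper's setting; each subgraph is summarized by a pair $(S, E)$ of response time and energy consumption, and all function names are ours):

```python
# Minimal sketch of the aggregation rules. Each subgraph is a pair
# (S, E): mean response time and mean energy consumption.

def sequence(*parts):
    # Sequence: response times and energies both add up.
    return (sum(s for s, _ in parts), sum(e for _, e in parts))

def fork(*parts):
    # Fork: the slowest parallel branch fixes the response time,
    # while every branch consumes energy.
    return (max(s for s, _ in parts), sum(e for _, e in parts))

def loop(part, m):
    # Loop: a request traverses the subgraph m times on average.
    s, e = part
    return (m * s, m * e)

def exclusive_choice(parts, probs):
    # Exclusive choice: a request is routed to branch P_i with
    # probability p_i; both dimensions are expectations.
    return (sum(p * s for (s, _), p in zip(parts, probs)),
            sum(p * e for (_, e), p in zip(parts, probs)))

def inclusive_choice(p1, p2, p_or1, p_or2, p_par):
    # Inclusive choice restricted to two branches: P1 alone, P2
    # alone, or both in parallel (p_par stands for p_{or||}).
    s = p_or1 * p1[0] + p_or2 * p2[0] + p_par * max(p1[0], p2[0])
    e = p_or1 * p1[1] + p_or2 * p2[1] + p_par * (p1[1] + p2[1])
    return (s, e)
```

For example, a sequence of an operation $(2, 1)$ and a loop over $(3, 2)$ with $m = 2$ aggregates, per the rules, to $(8, 5)$: `sequence((2, 1), loop((3, 2), m=2))`.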
\begin{table}[htbp] \centering \begin{tabular}{|p{4cm}|p{4cm}|p{2cm}|} \hline \small \textbf{Sequence} & \small \textbf{Fork} & \small \textbf{Loop} \\\hline $S(P_1) + S(P_2)$ & $\max\{ S(P_1),\dots, S(P_n) \}$ & $m.S(P_1)$ \\ $E(P_1) + E(P_2)$ & $ E(P_1)+ \dots+ E(P_n)$ & $ m.E(P_1)$ \\\hline \small \textbf{Exclusive choice} & \multicolumn{2}{c|}{\small \textbf{Inclusive choice}} \\\hline $\sum_{i=1}^n p_i.S(P_i)$ & \multicolumn{2}{c|}{$p_{or1}.S(P_1)+ p_{or2}.S(P_2)$ $ + p_{or||}.\max \{S(P_1), S(P_2)\}$} \\ $\sum_{i=1}^n p_i.E(P_i)$ & \multicolumn{2}{c|}{$p_{or1}.E(P_1)+ p_{or2}.E(P_2) + p_{or||}.(E(P_1)+ E(P_2))$} \\\hline \end{tabular} \caption{Aggregation rules on subgraph patterns}\label{tabAggRules} \end{table} \normalsize In these formulas, we have almost the same aggregation rules for energy consumption and response time. The difference between the two dimensions lies in how we interpret parallelism: from an energy viewpoint, all paths of execution, {\it even parallel ones}, induce an energy consumption. For additional explanations about these formulas, we refer the reader to~\cite{JISA}. From Lee's result~\cite{Lee}, it is easy to establish that the described service selection problem under our execution semantics can be reduced to a multichoice knapsack problem; this proves its NP-hardness. Below, we will show that we can use the constraint satisfaction problem for solving the service selection problem. \subsection{Service selection as a constraint satisfaction problem} \label{Decomposition} The Constraint Satisfaction Problem (CSP) is a classical problem in artificial intelligence and combinatorial optimization. 
A CSP is defined as a tuple $(V, D, C)$ where: \begin{itemize} \item $V = \{ v_1,\dots, v_n \}$ is a set of variables; \item $D = \{ D(v_1),\dots, D(v_n)\}$ is the set of variables' domains; \item $C = \{C_1,\dots, C_m\}$ is the set of constraints; each $C_i$ imposes a restriction on the possible values that can be assigned to a subset of variables. \end{itemize} There are two classical objectives in this problem. In the {\it one-solution} objective, we are looking for an assignment of values to variables that does not violate any constraint. In the {\it all-solutions} objective, we are looking for all assignments that do not violate the constraints. Regarding the objective function, we will show that the {\it one-solution} objective captures the feasibility problem in service selection~\cite{Ardagna,JISA} while the {\it all-solutions} objective captures the service selection problem. We recall that in the feasibility problem, the interest is in finding an assignment of concrete services that meets the SLAs. Firstly, let us assume the {\it all-solutions} objective. Let us also assume that we have to solve the service selection problem for an arbitrary HSG. We propose to map the problem onto a CSP through the following rules: \begin{enumerate} \item we consider that variables correspond to the operations of the HSG; \item the domain of a variable is the set of possible concrete operations that implement the abstract one to which it refers; \item the constraints of the problem are the SLAs of the service selection problem. This means that we are looking for all assignments $f \in D(v_1)\times \dots \times D(v_n)$ such that $E(f) \leq MaxE$ and $S(f) \leq MaxS$; here $E(f)$ and $S(f)$ are the energy consumption and the response time. \end{enumerate} The resolution of this problem will return all candidate solutions for the service selection problem. Let us suppose that it gives us $\omega$ assignments $f_0, \dots, f_{\omega -1}$. 
For solving the service selection problem, we select the assignment $f_{opt}$ such that $\displaystyle opt = \arg \min_{0\leq u \leq \omega-1} \left\{ \lambda S(f_{u}) + (1-\lambda) E(f_u) \right\}$. CSPs are often classified according to the formulation of their constraints~\cite{Baker95intelligentbacktracking}. In binary CSPs for instance, each constraint is defined as a set of pairs of values that cannot be simultaneously assigned to two specific variables; this generalizes to $k$-ary CSPs, where constraints are defined as tuples of $k$ values that cannot be simultaneously assigned to $k$ distinct variables. In nonlinear CSPs, constraints are formulated as nonlinear inequalities on variables. With the proposed mapping, the service selection problem is a nonlinear-like CSP. It is straightforward to notice that by applying the given mapping with the {\it one-solution} objective, we obtain the feasibility problem in service selection. The proposed mapping can be extended to many other formulations of the service selection problem. For instance, one can include other SLA constraints on reputation, price, or availability. One of its main benefits is to suggest that the service selection problem can be solved by adopting CSP algorithms. For this, we need to provide an {\it evaluation algorithm} that states how, given an assignment $f_u$, we compute $E(f_u)$ and $S(f_u)$. In the following text, we describe this algorithm and a first solution for the service selection problem. \section{Evaluation algorithm and exhaustive search} \label{exhaustiveSearch} \subsection{Evaluation algorithm} \label{evaluationAlgorithm} We propose to use our prior QoS prediction algorithm~\cite{GoldmanNgoko}. We recall below some key points of this evaluation algorithm. In the algorithm proposed in~\cite{GoldmanNgoko}, the QoS values are computed with the graph reduction technique. We consider as input a HSG whose operations graph is obtained by composing the patterns of Figure~\ref{flows}. 
We will say that such a graph is {\it decomposable}. The algorithm successively seeks, in the operations graph, subgraphs whose structure is defined as in Figure~\ref{flows}, but with the $P_i$s corresponding here to operations. We will use the term \textit{elementary subgraphs} for qualifying them. As soon as an elementary subgraph is found, it is reduced. This means that its QoS values are computed and the subgraph is replaced by a single node with the same QoS. The execution then continues until the reduction of the operations graph reaches a single node. For optimizing the algorithm's runtime, a reduction order is computed at the beginning. The reduction order is a stack of subgraphs such that: (1) the top subgraph is elementary, (2) as soon as the top subgraph is reduced, the new top one is also elementary. The reduction is done according to this order. Goldman and Ngoko~\cite{GoldmanNgoko} showed that elementary subgraphs can be characterized by two frontier nodes: a root and a leaf. This fact eases the subgraphs' representation in the reduction order. In Figure~\ref{Reduction-w-order}, an illustration of the reduction process is provided. Initially, we have the graph of Figure~\ref{Reduction-w-order}(1). The first phase of the algorithm generates the reduction order (or the reduction stack) for the graph. We represent it as a stack in which subgraphs are given by a root and a leaf node. The second phase begins with the unstacking of the top element in the order and then the reduction of the corresponding elementary subgraph. This leads to the graph of Figure~\ref{Reduction-w-order}(2). The algorithm continues in the same way until the reduction stack is empty. At this step, the operations graph is reduced to a unique node. 
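The stack-driven reduction phase can be sketched as follows (a toy illustration, not the full algorithm of~\cite{GoldmanNgoko}: the graph is hypothetically represented as a dictionary of named nodes with $(S, E)$ pairs, the stack entries name pairs of nodes to merge, and only elementary sequences are handled):

```python
# Toy sketch of the reduction phase. 'graph' maps node names to
# (S, E) pairs; 'order' is a stack of pairs whose top entry is
# always an elementary sequence, as in reduction (B, CD) of the
# example figure.

def reduce_sequence(graph, x, y):
    # Reduce the elementary sequence x -> y: compute its QoS and
    # replace the two nodes by a single one carrying both names.
    sx, ex = graph.pop(x)
    sy, ey = graph.pop(y)
    graph[x + y] = (sx + sy, ex + ey)
    return graph

def run_reduction(graph, order):
    # Unstack and reduce until the stack is empty; the operations
    # graph is then reduced to a unique node.
    while order:
        x, y = order.pop()
        graph = reduce_sequence(graph, x, y)
    return graph

# Example: reduce the sequence A -> B -> C to a single node.
g = run_reduction({"A": (2, 1), "B": (3, 2), "C": (1, 1)},
                  [("AB", "C"), ("A", "B")])
```

In this run the stack first yields $(A, B)$, producing the merged node $AB$ with QoS $(5, 3)$, and then $(AB, C)$, leaving the single node $ABC$ with QoS $(6, 4)$.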
\begin{figure}[htbp] \centering \fbox{ \includegraphics[width=0.8\linewidth,height=1.7in]{./Figures/Reduction_with_order.eps}} \caption{Example of graph reduction.}\label{Reduction-w-order} \end{figure} As stated in the introduction, our resolution of the service selection problem is based on the notion of reduction order. We recall here some important details about its representation. The reduction order is made of pairs $(x,y)$, each of which defines a subgraph to reduce. Regarding the composition of each pair, four cases can be distinguished: a) $x$ and $y$ are operations; in this case, the referred reduction is an elementary sequence with $x$ and $y$; in Figure~\ref{Reduction-w-order}(2) for example, we have the reduction $(B, CD)$; b) in the second case, $x$ is a split connector and $y$ is a join connector (for instance $(g_3, g_4)$ in Figure~\ref{Reduction-w-order}(1)); then, the reduction refers to an elementary split/join subgraph; c) in the third case, $x$ is a split connector and $y$ is an operation; then, the reduction refers to the subgraph whose root is $x$ and leaf is $y$; in Figure~\ref{Reduction-w-order}(1), we have the reduction $(g_1, F)$ that refers to the subgraph comprising $g_1, B, g_3, C, D, g_4, E, g_2, F$; d) in the last case, $x$ is an operation and $y$ a split connector; then, the reduction refers to the subgraph whose root is $x$ and leaf is {\it the leaf of $y$}. For instance, $(B, g_3)$ comprises $B, g_3, C, D, g_4$. Now that the evaluation algorithm has been detailed, we can derive a service selection algorithm by stating how we solve the CSP. One could envision at this stage using a generic CSP solver. But let us observe that with our CSP mapping (see Section~\ref{Decomposition}), we cannot easily map the description of the SLA constraints ($E(f) \leq MaxE$ and $S(f) \leq MaxS$) to the classical options used in CSP solvers (e.g., sets of unauthorized values, linear equations). 
This is why, in the sequel, we will consider our own resolution. \subsection{Exhaustive search algorithm} We propose to consider the exhaustive search algorithm for the CSP~\cite{Baker95intelligentbacktracking,DBLP:journals/concurrency/Estevez-AyresGBD11}. Given a CSP $(V, D, C)$, the principle of this algorithm is to successively generate assignments of values taken in $D$ to the variables $V$. Each time an assignment $f$ is generated, one evaluates whether or not it fulfills the constraints $C$. If it is the case, we return $f$ as a solution; otherwise, we generate another assignment. Using this algorithm with our mapping of the service selection problem (see Section~\ref{Decomposition}), we obtain Algorithm~\ref{alg:Exhaustive} for the service selection problem. The proposed scheme is based on the following notations: \begin{itemize} \item $|f|$ is the number of abstract operations. \item With each abstract operation, we associate a distinct integer (the abstract operations are numbered $0, \dots, |f|-1$). \item For defining assignments, we use the array $f$; $f[i]$ denotes the concrete operation associated with the abstract operation $i$. \item $E(f)$ and $S(f)$ are the energy and the service response time of the partial or complete assignment made in $f$. \item The overloaded notation $Co(index)$ refers to the set of concrete services that can be assigned to the abstract operation $index$. \end{itemize} Though we deduced the exhaustive search algorithm from the know-how in CSP resolution, let us observe that this proposal can be found in other works~\cite{DBLP:journals/concurrency/Estevez-AyresGBD11,Yu:2004:SSA:1018413.1019049}. The main difference between our solution and theirs is the evaluation algorithm. The exhaustive search provides an exact solution for the service selection problem. However, considering the way we solve the CSP, this solution is not necessarily the best from the runtime viewpoint. 
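The exhaustive scheme described above can be sketched in runnable form as follows (a minimal sketch for the toy case of a purely sequential composition, so that QoS values simply add up; the dictionary `Co`, the bounds and all names are illustrative, not the paper's implementation):

```python
import itertools

# Toy exhaustive search: Co maps each abstract operation to its
# candidate (S, E) pairs; the composition is assumed to be a plain
# sequence, so the evaluation is a pair of sums.

def ss_exh(Co, max_s, max_e, lam):
    best, best_penalty = None, float("inf")
    ops = list(Co)
    # Enumerate every complete assignment of concrete operations.
    for choice in itertools.product(*(Co[u] for u in ops)):
        S = sum(s for s, _ in choice)          # sequence aggregation
        E = sum(e for _, e in choice)
        if S <= max_s and E <= max_e:          # SLA check (C2)
            penalty = lam * S + (1 - lam) * E  # objective (C3)
            if penalty < best_penalty:
                best, best_penalty = dict(zip(ops, choice)), penalty
    return best, best_penalty

best, p = ss_exh({"A": [(2, 3), (4, 1)], "B": [(5, 2), (3, 4)]},
                 max_s=8, max_e=6, lam=0.5)
```

On this toy instance, two of the four assignments violate an SLA bound, and the search returns one of the two feasible assignments with penalty $6.0$.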
In what follows, we propose a faster algorithm that draws more deeply on the know-how of CSP resolution. \scriptsize \begin{algorithm}[H] \begin{algorithmic}[1] \scriptsize \Function{Main}{} \State $OptPenalty = +\infty$; $index = 0$; \State Create an uninitialized array $f$ of values for abstract operations; \State Call exhaustive($f$, $H$, $OptPenalty$, $index$); \State Return the best assignment and $OptPenalty$; \EndFunction \Function{exhaustive}{$f$, $H$, $OptPenalty$, $index$} \If{$index = |f|$} \State Compute $E(f)$ and $S(f)$ from the evaluation algorithm with $H$ and $Q$; \If{ $S(f) \leq MaxS$ and $E(f) \leq MaxE$ } \If{$\lambda.S(f) + (1-\lambda).E(f) < OptPenalty$} \State Save $f$ as the best assignment; \State $OptPenalty = \lambda.S(f) + (1-\lambda).E(f)$; \EndIf \EndIf \State Return; \EndIf \For{ all concrete operations $u \in Co(index)$} \State $f[index] =$ $u$; \State Call exhaustive($f$, $H$, $OptPenalty$, $index+1$); \EndFor \EndFunction \normalsize \end{algorithmic} \caption{\scriptsize SS-Exh (Exhaustive search for service selection). \\ {\bf INPUT:} a HSG $H$ and a QoS matrix $Q$ giving the energy consumption and service response time of each concrete operation; \\ {\bf OUTPUT:} An assignment of concrete operations to abstract ones } \label{alg:Exhaustive} \end{algorithm} \normalsize \section{A Backtracking search for the Service Selection Problem} \label{backtrackingSearch} Our objective is to improve the runtime of the exhaustive search algorithm. Our conviction is that this algorithm performs an amount of {\it useless work} that can be avoided. This section is organized in two parts. In the first one, we discuss useless work in the exhaustive search. Then, we propose an algorithm for avoiding it. \subsection{Useless work in exhaustive search} Our prior work~\cite{JISA} highlights a critical situation that happens in the service selection problem: {\it the infeasibility problem}. 
Indeed, given a HSG $H$, it might be impossible to respect the constraints defined in the SLAs because, for a sub-HSG $H' \subset H$, the service selection problem does not have any solution. As an illustration, let us consider the service selection problem with the operations graph of Figure~\ref{noSolution}. If the SLA constraints state that the service response time must be lower than $13$ms, then the infeasibility of the problem can be established by considering the possible assignments for the subgraph $H' = (g_1, g_2)$. The exhaustive search here is not optimal regarding the amount of work. Indeed, a better search would have consisted of exploring the possible assignments that can be made for the abstract operations in $H'$ and then checking each time whether or not these assignments respect the SLAs. Doing so, the infeasibility could have been established by exploring only a part of the search space. \begin{figure}[htbp] \centering \fbox{ \includegraphics[width=0.9\linewidth,height=1.7in]{./Figures/uselessWork.eps}} \caption{Example of operations graph with related concrete operations. $D$ is implemented by $D_1$ and $D_2$ and $E$ is implemented by $E_1, E_2, E_3$}\label{noSolution} \end{figure} The second instance of useless work is similar to the first one. We suppose now that the problem is feasible, but there are multiple assignments that do not respect the SLAs. If the constraints violation were already identified in a sub-HSG $H' \subset H$, a part of the useless work could have been avoided. As an illustration, if we consider in Figure~\ref{noSolution} that the response time must be lower than or equal to $14$ms, then there is only one assignment to the subgraph $(g_1, g_2)$ that can lead to a feasible solution. Only this assignment must be joined with the other possibilities for $A$ and $B$. The last situation of useless work is related to the quality of a partial assignment. 
Let us assume that we already have a correct assignment whose penalty is $p$. It might happen in the exhaustive search that we make an assignment to a sub-HSG $H'$ whose total penalty already exceeds the value of $p$. In this case, we must not try to {\it complete} this assignment (to all operations of $H$) since we already have a better one. Let us observe that this analysis is often done in branch and bound algorithms. In the context of service selection, a discussion can also be found in the work of Yu et al.~\cite{Yu}. Summarizing, we have useless work in the exhaustive search if we can find a sub-HSG for which multiple assignments are infeasible or already dominated by an existing solution. For avoiding these situations, we propose to use a backtracking search that we discuss in the following. \subsection{Backtracking algorithms} We consider the CSP resolution with the backtracking technique applied with a static initial ordering. Given a CSP tuple $(V, D, C)$, the technique starts by defining an ordering of the variables $V$. Let us assume that the resulting order is $v_1, \dots, v_n$. Then, one successively assigns values to the variables according to the ordering and their domain definition. In the processing, we can reach a situation where values are assigned to $v_1, \dots, v_i$. Then, one checks whether this partial assignment violates a constraint or is already dominated by another one. If it is not the case, one assigns a value to $v_{i+1}$; otherwise, one assigns another value to $v_i$ (one backtracks). The backtracking technique can reduce the useless work that we identified before. In exhaustive search, we evaluate all possible assignments to the variables. In backtracking, this is not the case. If, for instance, there is no assignment for $v_1$ that satisfies the constraints, then backtracking will consider only $|D(v_1)|$ assignments instead of $|D(v_1)|\times \dots \times |D(v_n)|$ for the exhaustive search. 
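The backtracking scheme just described can be sketched as follows (a minimal sketch, again for the toy case of a purely sequential composition, where partial sums of response time and energy are valid lower bounds; all names and the instance are illustrative):

```python
# Toy backtracking search: a partial assignment that already violates
# an SLA bound, or whose penalty already reaches the incumbent's, is
# pruned (one backtracks). Co maps abstract operations to candidate
# (S, E) pairs; the composition is assumed sequential.

def ss_backtrack(Co, max_s, max_e, lam):
    ops = list(Co)
    best = {"f": None, "penalty": float("inf")}

    def extend(i, S, E, partial):
        if S > max_s or E > max_e:              # SLA violated: prune
            return
        if lam * S + (1 - lam) * E >= best["penalty"]:
            return                              # dominated: prune
        if i == len(ops):                       # complete assignment
            best["f"] = dict(partial)
            best["penalty"] = lam * S + (1 - lam) * E
            return
        for s, e in Co[ops[i]]:                 # try each candidate
            partial[ops[i]] = (s, e)
            extend(i + 1, S + s, E + e, partial)
            del partial[ops[i]]                 # backtrack

    extend(0, 0.0, 0.0, {})
    return best["f"], best["penalty"]

best, p = ss_backtrack({"A": [(2, 3), (4, 1)], "B": [(5, 2), (3, 4)]},
                       max_s=8, max_e=6, lam=0.5)
```

The pruning tests play the role of the checks on partial assignments described above; with the reduction-order machinery of the next subsections, the partial sums would be replaced by correct partial evaluations.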
To demonstrate the gain expected from backtracking, we illustrate in Figure~\ref{backvsExh} the search spaces that we explore. This is the case where, for the graph of Figure~\ref{noSolution}, an SLA constraint states that the service response time must be lower than $13$ms. \begin{figure}[htbp] \centering \fbox{ \includegraphics[width=0.9\linewidth,height=1.7in]{./Figures/SearchSpace.eps}} \caption{Exhaustive search (in a) vs backtracking search (in b). The dark sub-trees correspond to assignments made to $A, B, F, G, H$. With backtracking, these assignments will not be explored.}\label{backvsExh} \end{figure} For applying the backtracking technique, we will now discuss two points. Firstly, we present the basic principles that we use for ordering variables. Then we discuss the implementation of these principles in a backtracking algorithm for service selection. \subsubsection{Ordering principles} The ordering of variables is important in backtracking. The literature~\cite{Baker95intelligentbacktracking} proposes multiple static orderings. We propose to consider two of the most popular: the min-domain and the max-degree orderings. In the min-domain ordering, the abstract operations whose sets of concrete operations are smaller are considered first in the assignments. In the max-degree ordering, the abstract operations that are the most connected to the other operations are considered first. We map these orderings onto the resolution of the service selection problem by means of two principles that we introduce further. Let us first consider the following definitions. \begin{definition}[Correct partial evaluation] Given a decomposable HSG $H$ whose abstract operations are bound to concrete ones, we define a correct partial evaluation of QoS as the vector of QoS values (energy consumption, response time) of a decomposable subgraph of $H$. 
\end{definition} For instance, given the operations graph of Figure~\ref{noSolution}, if $D$ is assigned to $D_1$ and $E$ to $E_1$, then a correct partial evaluation is the vector $(22, 1.6)$ for the response time and energy consumption of the subgraph $(g_1, g_2)$. We only compute partial evaluations on decomposable graphs. Let us recall that such graphs are obtained by composing the regular patterns that we consider in the semantics of the operations graph. The objective in making partial evaluations is twofold: (1) the partial evaluation bounds are compared to the SLA bounds to check if there is a violation; (2) these bounds are compared to the local optimum found for the service selection problem to see whether it is already dominated. It is important to notice that in the comparisons, the partial evaluation bounds must in some cases be weighted. In Figure~\ref{noSolution}, let us consider the QoS vector (SRT, EC) issued from the reduction of $(g_3, g_4)$. Since this subgraph is included in the xor subgraph $(g_5, g_6)$, we cannot directly compare the response time of this vector to the SLA bound on response time. For taking the semantics into account, we need to multiply this value by the probability for a request to be routed towards $g_3$. In our prior work~\cite{mgc2012}, for such situations, we introduced the notion of {\it reachability probabilities}. In a simplified manner, for each subgraph, this gives the probability for a request to be routed to it. We will use these probabilities for weighting the comparison of QoS vectors with SLA bounds. The correct partial evaluations capture a facet of our backtracking algorithms: we will regularly evaluate sub-assignments of concrete services to abstract ones so as to detect whether or not we must continue in this exploration path. As stated before, the order of assignments of concrete services to abstract ones can have a great influence on the run.
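The weighted comparison can be sketched as follows; the reachability probability $0.4$ and the SLA bounds $(13, 10)$ are hypothetical numbers chosen for illustration, not values from our experiments.

```python
def violates_sla(partial_srt, partial_ec, pa, max_srt, max_ec):
    """SLA check on a correct partial evaluation: the partial QoS values
    are weighted by the reachability probability `pa` of the reduced
    subgraph before being compared to the SLA bounds."""
    return pa * partial_srt > max_srt or pa * partial_ec > max_ec

# Hypothetical partial evaluation (22, 1.6) of a subgraph nested in a
# xor branch reached with probability 0.4, under SLA bounds (13, 10).
unweighted = violates_sla(22, 1.6, pa=1.0, max_srt=13, max_ec=10)
weighted = violates_sla(22, 1.6, pa=0.4, max_srt=13, max_ec=10)
```

Without weighting, the check raises a false alarm ($22 > 13$); with the reachability probability applied ($0.4 \times 22 = 8.8 \leq 13$), no violation is reported.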
We introduce below the partial evaluation precision for characterizing the possible orderings. \begin{definition}[Partial Evaluation Precision] Given a decomposable HSG $H$ and an ordered list of abstract operations $L = [ u_1, \dots, u_k ]$, we define the precision of the partial evaluation of $L$ as the difference $\displaystyle pep(L) = |L| - neo(L)$. In this formula, $neo(L)$ is the maximal number of $L$'s operations from which one can compute a correct partial evaluation of QoS from $H$. \end{definition} This definition relies on the fact that a partial assignment does not necessarily lead to a bound on SRT and EC that includes all assigned abstract operations. At the beginning of the backtracking algorithm, we must define an ordered list $L$ of abstract operations. This list is such that $L[1]$ is the first abstract operation that will be assigned, then follows $L[2]$ and so on. At some point in the algorithm, we could have assigned a concrete operation to $L[1], L[2], \dots, L[i]$; but it might happen that the assigned operations do not constitute a decomposable graph (see Section~\ref{evaluationAlgorithm} for decomposable graphs). In these cases, a correct partial evaluation will be obtained only from a subset of the assignments ($neo(L)$). In Figure~\ref{noSolution} for instance, if $L = [D, A]$ then $pep(L) = 2$; if $L = [D, E]$, then $pep(L) = 0$. Indeed, $[D, A]$ does not {\it shape} a decomposable graph. We can generalize the notion of precision. Let us assume that $L[1..i]$ defines the sublist having the first $i$ elements of $L$. \begin{definition}[Total Evaluation Precision] Given a decomposable HSG $H$ and an ordered list $L = [ u_1, \dots, u_k ]$ of its abstract operations, we define the precision of the total evaluation of $L$ as $\displaystyle \sigma(L) = \sum_{i = 2}^{|L|-1} pep(L[1..i])$.
\end{definition} $\sigma(L)$ captures the distance between two numbers of operations: those to which a concrete service is assigned and those from which a correct evaluation can be made. The following result is straightforward. \begin{property} Let us consider a decomposable HSG for which all abstract operations are assigned to concrete ones. Let us also consider an ordered list $L = [ u_1, \dots, u_k ]$ of its abstract operations. Then, $pep(L[1..i]) \geq 0$, $pep(L[1..k]) = 0$ and $\displaystyle \sigma(L) \geq 0$. \end{property} Based on these definitions, we can then define the first principle that we will use for ordering abstract operations. \begin{principle}[Partial Evaluation of QoS First (PEQF)] Let us consider a decomposable HSG $H$ for which all abstract operations are assigned to concrete ones. The generated ordering list $L$ for $H$ must minimize the precision of the total evaluation of $L$. \end{principle} With this principle, our objective is to maintain, at every step of the search, an updated correct partial evaluation that we can use for checking whether or not SLAs are violated. As one can remark, this is not the case with large values of $\sigma(L)$. In Figure~\ref{noSolution} for instance, with the ordering $L = [B, A, E, D]$, we have $\displaystyle \sigma(L) = 5$ ($[B, A, E]$ does not describe a decomposable graph). In this case, the backtracking will not improve on the exhaustive search. This is because we must wait until a concrete operation is assigned to each abstract one before a QoS evaluation can be expected. In choosing however the ordering $L = [E, D, A, B]$, we have $\displaystyle \sigma(L) = 0$. As one can observe, we can quickly obtain here a correct partial evaluation that can then be used for checking SLA violations. Regarding the implementation of the PEQF principle, it is important to consider the following result.
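As a concrete illustration of the two precision notions, the following sketch computes $pep$ and $\sigma$ against a toy decomposability predicate standing in for Figure~\ref{noSolution}; the predicate (only $\{D,E\}$ and $\{D,E,A\}$ reduce) is an assumption of ours for the example, not derived from the actual graph.

```python
from itertools import combinations

def neo(prefix, is_decomposable):
    """Maximal number of the prefix's operations forming a decomposable
    subgraph, i.e. from which a correct partial evaluation is possible."""
    for size in range(len(prefix), 0, -1):
        if any(is_decomposable(set(c)) for c in combinations(prefix, size)):
            return size
    return 0

def pep(prefix, is_decomposable):
    return len(prefix) - neo(prefix, is_decomposable)

def sigma(L, is_decomposable):
    # sum of pep over the proper prefixes L[1..i], for i = 2 .. |L| - 1
    return sum(pep(L[:i], is_decomposable) for i in range(2, len(L)))

# Toy stand-in: we assume only {D, E} and {D, E, A} reduce.
decomposable = lambda s: s in ({"D", "E"}, {"D", "E", "A"})
```

Under this stand-in, $pep([D,A]) = 2$, $pep([D,E]) = 0$, $\sigma([B,A,E,D]) = 5$ and $\sigma([E,D,A,B]) = 0$, matching the values discussed above.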
\begin{property} We can find a decomposable HSG $H'$ for which there exist two ordering lists $L_1$ and $L_2$ of abstract operations for which $\sigma(L_1) = \sigma(L_2) = 0$. \end{property} This is the case in Figure~\ref{noSolution} with the lists: $L_1 = [D, E, A, B]$ and $L_2 = [D, E, B, A]$. The question in these settings is how to choose among the two lists. We adopt for this the following principle. \begin{principle}[Min Domain First (MDF)] The ordering of abstract operations must consider in priority the correct partial evaluations with the smallest number of concrete operations. A random ordering must be adopted if we have multiple options. \end{principle} This principle is inspired by the min-domain heuristic in constraint satisfaction. Let us indeed assume that $B$ has fewer concrete operations than $A$ (i.e. $|C_o(B)| < |C_o(A)|$). Then, the ordering to choose in Figure~\ref{noSolution} is $L_2 = [D, E, B, A]$. The objective is to detect invalid assignments quickly by considering small domains first. One can criticize the choice of min-domain because in CSP resolution, the max-degree heuristic also performs well for detecting invalid assignments. However, let us observe that the idea of max-degree (to start with the most connected variable) is partially included in the first principle (PEQF). Indeed, we will see that for finding decomposable graphs quickly, we must consider nested operations in priority. To summarize, we modelled our ordering goals as principles; below, we consider their implementation for deriving backtracking algorithms. \subsubsection{Implementation of the PEQF principle} For implementing the PEQF principle, we propose to use the ordering of abstract operations suggested by the reduction order of the evaluation algorithm. For instance, in Figure~\ref{noSolution}, from the reduction order $(g_1, g_2); (g_1, B); (A, g_1)$, we deduce the possible ordering $E, D, B, A$. What is challenging is to derive such an ordering systematically.
For this, we will introduce two data structures. We will also manipulate the deepness concept, introduced in prior work~\cite{GoldmanNgoko}. Below, we recall its definition. \begin{definition}[Deepness~\cite{GoldmanNgoko}] Given an operation graph $G_o$, let us suppose that for a node $u$ (operation or connector), we have $n$ paths $Pt_1, \dots, Pt_n$ leading to it. In each path $Pt_i$, we have $\alpha_i$ split connectors and $\beta_i$ join connectors. The deepness of $u$ is defined as $deep(u) = \underset{1 \leq i \leq n}{\max} \{\alpha_i - \beta_i\}$. \end{definition} For example in Figure~\ref{dataStructure}, $deep(A) = deep(g_1) = 0$, $deep(B) = deep(E) = 1$. The first data structure that we consider for the implementation of the PEQF principle is the {\it nested list for subgraphs' nodes (NeLS)}. For split/join subgraphs, the list gives the operation nodes whose deepness is equal to that of the split node plus $1$. In Figure~\ref{noSolution}, the NeLS will have an entry $(g_1, g_2)$ pointing towards a list with the operations $D$ and $E$. This is because we have a unique split/join graph that comprises these operations. For the operations graph of Figure~\ref{dataStructure}, the NeLS has two entries. The first entry ($(g_3, g_4)$) points towards a list with $C, D$. The second entry ($(g_1, g_2)$) points towards $B, E$. Let us remark that we did not include $C$ here because $deep(C) = deep(g_1)+2$. Given a NeLS $h_s$, we will use the term $h_s(x,y)$ for referring to the list that the entry $(x,y)$ points to. For instance, if $h_s$ is the name of the NeLS of Figure~\ref{dataStructure}, then $h_s(g_3, g_4)$ points towards $C$ and $D$. \begin{figure}[htbp] \centering \fbox{ \includegraphics[width=0.8\linewidth,height=2.5in]{./Figures/DataStructure.eps}} \caption{Data structures for the PEQF principle}\label{dataStructure} \end{figure} The second data structure is the {\it nested list for operations' ordering (NeLO)}.
The main entries of a NeLO consist of a list of ordered operations. Each entry points towards a list of subgraphs, defined with their roots and leaves. The idea is that once a value is assigned to the abstract operation in one entry, the pointed subgraphs can be reduced. In our algorithms, while the NeLO will be used for storing the ordering of abstract operations, we will use the NeLS for the generation of this order. In particular, {\it the NeLS will serve us for detecting when to assign a value to an abstract operation that is not included in the reduction order}. Figure~\ref{dataStructure} shows a NeLO related to an operations graph. This NeLO describes the ordering in which abstract operations must be considered in backtracking assignments. The first operation to which a concrete one must be assigned is $C$. This entry does not point towards any list. Therefore, after this assignment, no reduction is done. Then, we must consider $D$. The assignment of a concrete operation to $D$ implies that we can reduce the subgraph $(g_3, g_4)$. Then, we must continue with $B, E,$ and so on. Let us notice that it is easy to redefine the notion of {\it total evaluation precision} for computing the value of $\sigma(h_o)$ given the NeLO $h_o$. This is because the entries of a NeLO constitute an ordered list of abstract operations. With the defined data structures, we will now state how we implement the PEQF principle. We view the implementation of the PEQF principle as the computation of a NeLO for which the total precision is minimal. The computation of this NeLO is done from an operations graph and a NeLS. The process is the following. Firstly, from the operations graph, we generate a reduction order and a NeLS. Then, we pick the top element of the reduction order. This element corresponds to a pair $(x,y)$ defining the frontiers of a subgraph. We process this element for generating new entries in the NeLO being built. The rules of this processing are given in Figure~\ref{Cases}.
Once $(x,y)$ is processed, we consider the next element of the reduction order. We continue in the same way until processing the last element of the reduction order. In Figure~\ref{dataStructureExample}, we illustrate the application of this process. \begin{figure}[!htbp] \begin{center} \begin{tabular}{|p{13cm}|} \hline \small [We have a pair $(x,y)$ in the reduction order and a NeLS denoted $h_s$]\\ \small case \#1 [$x$ and $y$ are operations]: we create two new entries referring to $x$ and $y$ in the NeLO. We chain the last entry with a list pointing towards $(x,y)$; this is for stating that this subgraph can be reduced after assigning a value to $x$ and $y$. We then remove $x$ and $y$ from all lists of $h_s$.\\ \small case \#2 [$x$ is an operation and $y$ is a split connector]: we create a unique entry $x$ and chain it with a list towards $(x,y)$. We remove $x$ from all lists of the $h_s$. \\ \small case \#3 [$x$ is a split connector and $y$ is an operation]: we create a unique entry $y$ and chain it towards $(x,y)$. We remove $y$ from all lists of the $h_s$. \\ \small case \#4 [$x$ is a split connector and $y$ is a join connector]: If the list $h_s(x,y)$ is not null, then we create an entry in the NeLO for all elements of $h_s(x,y)$. We chain the list of the last element of the NeLO with $(x,y)$. Then, we delete $h_s(x,y)$. \\ \hline \end{tabular} \caption{Processing of an element $(x,y)$ for the generation of the NeLO.} \label{Cases} \end{center} \end{figure} \begin{figure}[htbp] \centering \fbox{ \includegraphics[width=1\linewidth,height=2.6in]{./Figures/PEQF.eps}} \caption{Computation of the ordering}\label{dataStructureExample} \end{figure} One can understand the role of the NeLS in this computation as follows. The objective in the NeLO is to have a list of abstract operations pointing towards reductions to be done. For obtaining partial evaluations quickly, we generate the NeLO based on the reduction order.
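The four cases of Figure~\ref{Cases} can be sketched as follows; the encoding is hypothetical and ours (a `kind` map tagging each node as operation, split or join; the NeLS as a dictionary; the NeLO as a list of (operation, reductions) pairs), not part of the formal construction.

```python
def process_pair(x, y, kind, nels, nelo):
    """Process one reduction-order pair (x, y), appending NeLO entries."""
    if kind[x] == "op" and kind[y] == "op":              # case 1
        nelo.append((x, []))
        nelo.append((y, [(x, y)]))
        removed = {x, y}
    elif kind[x] == "op":                                # case 2: y is a split
        nelo.append((x, [(x, y)]))
        removed = {x}
    elif kind[y] == "op":                                # case 3: x is a split
        nelo.append((y, [(x, y)]))
        removed = {y}
    else:                                                # case 4: split/join pair
        ops = nels.pop((x, y), [])
        for u in ops[:-1]:
            nelo.append((u, []))
        if ops:
            nelo.append((ops[-1], [(x, y)]))             # chain last entry to (x, y)
        removed = set(ops)
    for key in nels:                                     # purge processed operations
        nels[key] = [u for u in nels[key] if u not in removed]
    return nelo

# Hypothetical encoding of Figure "noSolution": one split/join (g1, g2)
# nesting E and D; A and B are operations.
kind = {"A": "op", "B": "op", "g1": "split", "g2": "join"}
nels = {("g1", "g2"): ["E", "D"]}
nelo = []
for x, y in [("g1", "g2"), ("g1", "B"), ("A", "g1")]:
    process_pair(x, y, kind, nels, nelo)
```

On the reduction order $(g_1, g_2); (g_1, B); (A, g_1)$, this sketch yields the ordering $E, D, B, A$, with the entries for $D$, $B$ and $A$ pointing towards the corresponding reductions.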
However, the representation of this order might not include some operations. For instance, in Figure~\ref{dataStructureExample}, $C$ and $D$ do not appear in the reduction order. In the NeLO generation, we find the missing operations from the NeLS. In particular, when we have a subgraph reduction, we first explore the NeLS (see case \#4) for including the subgraph's operations in the NeLO. As one can remark in Figure~\ref{dataStructureExample}, $C$ and $D$ are referred to in the NeLS. There are two important observations. The first one is that given an element $(x,y)$ of the reduction order, in Figure~\ref{Cases}, we consider all cases regarding the type of $x$ and $y$~\footnote{The cases are described in Section~\ref{evaluationAlgorithm}}. The second observation is that the described generation process has a polynomial runtime in the number of nodes of the operations graph. More precisely, we have the following result. \begin{lemma} Given an operations graph, a reduction order and a NeLS, the NeLO generation can be done in $O(n^2)$ where $n$ is the number of nodes of the operations graph. \end{lemma} \begin{proof} Firstly, let us observe that the generation process that we described loops on the number of elements of the reduction order. If we have $n$ nodes in the operation graph, then our prior work~\cite{GoldmanNgoko} guarantees that the number of elements of the reduction order is in $O(n)$. In the treatment of an element of the reduction order, we have the cases listed in Figure~\ref{Cases}. These cases are dominated by two instructions: the creation of NeLO entries and the deletion of NeLS elements. Since the numbers of elements of the NeLO and the NeLS are in $O(n)$, these two instructions are in $O(n)$. The proof follows by observing that we loop over the elements of the reduction order. \end{proof} Regarding the PEQF principle, the quality of the process proposed for NeLO computations can be perceived through the following result.
\begin{lemma} Let us consider an operations graph with $n$ abstract operations. If the maximal outgoing degree of a split connector in the graph is $2$, then the generated NeLO $h_o$ is such that $\displaystyle \sigma(h_o) \leq \frac{n}{2}$. If the graph is a sequence, then $\displaystyle \sigma(h_o) = 0$. \end{lemma} \begin{proof} We obtain the result from an analysis of the process of Figure~\ref{Cases}. Firstly, let us assume that the graph is a sequence. The first element $(x,y)$ of the reduction order in this case refers to two operations. According to the process of Figure~\ref{Cases}, this will generate two entries in the NeLO such that the last entry points towards the reduction $(x,y)$. Consequently, $pep(h_o[1..2]) = 0$. According to the reduction order algorithm~\cite{GoldmanNgoko}, the second element $(x',x)$ of the reduction order will refer to two operations $x'$ and $x$ (already in the NeLO). Therefore, we will add to the NeLO an operation entry pointing towards a reduction to operate. This implies that $pep(h_o[1..2])+pep(h_o[1..3]) = 0$. The third element of the reduction order will have the form $(x",x')$. Consequently $pep(h_o[1..2])+pep(h_o[1..3])+ pep(h_o[1..4])= 0$. Generalizing, we will have $\sigma(h_o) = 0$ for the resulting NeLO $h_o$. Let us now assume that we have split connectors in the operation graph. Then, for the first element $(x,y)$ of the reduction order, we have two cases. Either we have two operations or we have a split connector ($x$) and a join ($y$). There are no other possibilities from the reduction algorithm. In the first case, we can easily guarantee from what precedes that $pep(h_o[1..2]) = 0$. In the second case, the process of Figure~\ref{Cases} states that we will add two operation entries to the NeLO such that the last one points towards the reduction $(x,y)$. Consequently, $pep(h_o[1..2]) = 0$. For the processing of the next element $(x', y')$ of the reduction order, we can have multiple cases.
Either $x'$ or $y'$ is an operation, or they both correspond to connectors. In the former case, we can ensure that $pep(h_o[1..3]) = 0$; in the latter case, we can ensure that $pep(h_o[1..3]) \leq 1$ and $pep(h_o[1..4]) \leq 1$. Generalizing, $\displaystyle \sigma(h_o) \leq \frac{n}{2}$. \end{proof} An interesting question is whether better total precisions can be expected. The answer is no. For sequence graphs, the optimality of the result is guaranteed. For arbitrary structures, we have a lower bound. Indeed, let us consider a sequence of subgraphs of two operations each. On such graphs, it is impossible to build an ordering $h_o$ such that $\sigma(h_o) < \frac{n}{2}-1$. This comes from the fact that for reducing an internal subgraph, we must assign a concrete operation to each abstract one. When assigning a concrete operation to the first abstract one, an evaluation is not possible. It is only when making an assignment to the second one that we can evaluate. Consequently, we can consider that the proposed implementation of the PEQF principle is optimal. In the following, we state how to implement the MDF principle. \subsubsection{Implementation of the MDF principle} The objective in the MDF principle is to start the assignments with abstract operations whose sets of concrete ones are small. The applicability of this principle can however conflict with the implementation of the NeLO. It is the case in Figure~\ref{dataStructureExample} if $|Co(F)| > |Co(A)|$. Indeed, the evaluation of $(A, g_1)$ will concern a domain that is smaller than the evaluation of $(g_1, F)$. However, the latter evaluation is done earlier in the NeLO. Therefore, how can we reconcile the two principles? For this, we consider the following result.
\begin{lemma}[Free permutation of operations and subgraphs]\label{freePermutation} Let us consider a decomposable HSG $H$; \\ a) let us assume that in the operations graph, we have a sequence $(x, y)$ where $x$ and $y$ are either operations or decomposable subgraphs. The HSG $H'$ in which we swapped the subgraphs $x$ and $y$ (so we have the sequence $(y, x)$) has the same mean response time and energy consumption as $H$; \\ b) let us assume that we have a split connector $g$ in $H$. The HSG $H'$ in which we swapped two branches of the split connector has the same mean response time and energy consumption as $H$. \end{lemma} \begin{proof} The results come from the commutativity of computations in the QoS aggregation rules (see Table~\ref{tabAggRules}). The response time of the graph $(x, y)$ will be $S(x)+S(y) = S(y) + S(x)$. The energy consumption will be $E(x)+E(y) = E(y) + E(x)$. Since $S(y) + S(x)$ and $E(y) + E(x)$ are the response time and energy consumption of $(y,x)$, we have the proof in the case of sequences. In the case of split connectors, we can establish the proof in the same way. For instance, given an elementary Fork with the operations $x, y$, its response time is $\max\{S(x), S(y)\} = \max\{S(y), S(x)\}$. \end{proof} The interest in this result is that it suggests a solution for applying the MDF principle without violating the PEQF principle. The idea is that the generation of the NeLO is based on a particular exploration of the nodes of the operations graph. Before this generation, one can use the swap instructions of Lemma~\ref{freePermutation} for obtaining a topology of the operations graph adjusted so that small domains are partially evaluated first. As a simple illustrative example, let us reconsider the operations graph of Figure~\ref{dataStructureExample}. In the case where $|Co(F)| > |Co(A)|$, we can switch the nodes $A$ and $F$. As a result, the reduction order will be $(g_3,g_4); (B, g_3); (g_1,g_2); (g_1, A); (F,g_1)$.
In this order, the evaluation $(g_1, A)$ now comes {\it before} $(F,g_1)$. For the implementation of the MDF principle, we propose to make a topological sorting of the initial operations graph, based on an extended NeLS (denoted NeLS+). The extensions concern the following points: a) we distinguish between branches of the elements of each split/join subgraph; b) we include split/join subgraphs in the list towards which a NeLS entry points; c) we assume that the root and leaf nodes of the operations graph are branches of a virtual graph $(g_0, g'_0)$. \begin{algorithm}[H] \begin{algorithmic}[1] \scriptsize \Function{Main}{} \State Generate the reduction order and store it in $ORD$; \State Build a NeLS+ $hs^+$; \State Topological Sorting of $H$ according to $hs^+$. The result is $H'$; \State Build a NeLS $hs$ for $H'$; \State Generation of a NeLO $h_o$ from $H'$ and $hs$; \State $OptPenalty = +\infty$; $index = 0$; \State Create an uninitialized array $f$ of values for abstract operations; \State Call backtrack($f$, $H'$, $h_o$, $OptPenalty$, $index$); \State Return the best assignment and $OptPenalty$; \EndFunction \Function{backtrack}{$f$, $H'$, $h_o$, $OptPenalty$, $index$} \If{$index = |f|$} \State Compute $E(f)$ and $S(f)$ from the evaluation algorithm with $H'$ and $Q$; \If{ $S(f) \leq MaxS$ and $E(f) \leq MaxE$ } \If{$\lambda.S(f) + (1-\lambda).E(f) < OptPenalty$} \State Save $f$ as the best assignment; \State $OptPenalty = \lambda.S(f) + (1-\lambda).E(f)$; \EndIf \EndIf \State Return; \EndIf \For{ all concrete operations $u \in Co(h_o[index])$} \State $f[index] =$ $u$; \If{ $h_o[index]$ points towards some reductions} \State Update $E(f)$ and $S(f)$ by making the reductions; \State Get the reachability probability $pa$ of the last reduction; \If{ $pa.S(f) \leq MaxS$ and $pa.E(f) \leq MaxE$ } \If{ $\lambda.pa.S(f) + (1-\lambda).pa.E(f) < OptPenalty$ } \State Call backtrack($f$, $H'$, $h_o$, $OptPenalty$, $index+1$); \EndIf \EndIf \Else \State Call backtrack($f$, $H'$,
$h_o$, $OptPenalty$, $index+1$); \EndIf \EndFor \EndFunction \normalsize \end{algorithmic} \caption{\scriptsize SS-b-PM (Backtracking search for service selection with the PEQF and MDF principles). \\ {\bf INPUT:} a HSG $H$ and a QoS matrix $Q$ giving the energy consumption and service response time of each concrete operation; \\ {\bf OUTPUT:} an assignment of concrete operations to abstract ones } \label{alg:Backtracking} \end{algorithm} Based on Lemma~\ref{freePermutation}, we will consider two main instructions: the {\it branch sorting} and the {\it sequence sorting}. In branch sorting, the objective is to reorder the branches of a split/join subgraph so as to ensure that when computing the reduction order, we will consider the branch with the smallest domain first. Such an instruction is meaningful because the computation of the reduction order occurs through a depth-first search algorithm in which the branches are explored according to a number assigned to them. In our current implementation, the branch with the greatest number is explored in priority. The sequence sorting considers a sequence of operations and subgraphs and reorders them so as to ensure that the operations or subgraphs with the smallest domains will be considered in priority. In the computation of the reduction ordering, let us observe that the last operations of each sequence are considered in priority. Therefore, the reordering must ensure that these operations have the smallest domain sizes. In Figure~\ref{transClosureExample} for example, the sequence sorting will reverse in the entry $(g_0, g'_0)$ the nodes $A$ and $F$. The topological sorting is done as follows. Initially, we sort the entries of the NeLS+ based on their deepness. The idea is to have the deepest subgraphs at the top and the least deep ones at the end. Then, we compute the total domain size corresponding to each entry.
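Under a hypothetical dictionary encoding of the NeLS+ (each entry maps to the operations and nested entries it gives access to; the sizes below are illustrative, not taken from our instances), this domain-size computation can be sketched recursively:

```python
def dom_size(entry, nels_plus, co):
    """Total domain size a NeLS+ entry gives access to: the |Co(.)| of its
    direct operations plus the domain sizes of its nested subgraphs."""
    total = 0
    for item in nels_plus[entry]:
        # an item is either a nested split/join entry or an operation name
        total += dom_size(item, nels_plus, co) if item in nels_plus else co[item]
    return total

# Illustrative domain sizes and NeLS+ for a graph shaped like
# Figure "transClosureExample": (g3, g4) is nested inside (g1, g2).
co = {"B": 4, "C": 6, "D": 8, "E": 10}
nels_plus = {("g3", "g4"): ["C", "D"],
             ("g1", "g2"): ["B", "E", ("g3", "g4")]}
```

With these sizes, {\it dom-size($g_3,g_4$)} $= |Co(C)| + |Co(D)| = 14$ and {\it dom-size($g_1,g_2$)} $= |Co(B)| + |Co(E)| + 14 = 28$, so sorting the entries by deepness lets us evaluate the sums bottom-up in a single pass.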
The domain size of an entry is the sum of the domain sizes of the lists of elements to which it gives access. Let us remark that because of the initial sorting, we can simply evaluate the domain sizes starting from the top entry down to the last one. In Figure~\ref{transClosureExample} for instance, the domain size of $g_3$ is {\it dom-size($g_3,g_4$)} = $|Co(C)| + |Co(D)|$. The domain size of $(g_1, g_2)$ is $|Co(E)| + |Co(B)| + $ {\it dom-size($g_3,g_4$)}. Once we have the domain size of each entry, we perform branch sorting first and sequence sorting next. \begin{figure}[htbp] \centering \fbox{ \includegraphics[width=0.75\linewidth,height=1.4in]{./Figures/TransitiveClosure.eps}} \caption{Example of NeLS+}\label{transClosureExample} \end{figure} Now that we have stated how we implement the principles, in Algorithm~\ref{alg:Backtracking}, we give our general backtracking scheme. The proposed scheme is based on the following notations: \begin{itemize} \item We assume that $h_o[i]$ refers to the variable of $h_o$ in the $i^{th}$ entry. \item For defining assignments, we use the array $f$. $f[i]$ will contain the concrete service associated with $h_o[i]$. \item The overloaded notation $Co(h_o[index])$ refers to the set of concrete services that can be assigned to $h_o[index]$. \end{itemize} We will refer to this global algorithm as {\it SS-b-PM}. We will also consider its variant {\it SS-b-P} where we do not apply the MDF principle. \section{Experimental evaluation} \label{ExperimentalEvaluation} Throughout the experimental evaluations, our objectives were the following: \begin{enumerate} \item to demonstrate that the backtracking based heuristics outperform the exhaustive search; \item to demonstrate that the backtracking based heuristics outperform integer linear programming; \item to compare the different heuristics.
\end{enumerate} \subsection{Backtracking versus exhaustive search} For these experiments, we used $4$ types of operations graphs, each based on reference BPMN processes~\cite{Omg,Freund}. The structure of the processes is given in Figure~\ref{figProcess}. From each process, we created $300$ service selection problems. Depending on the SLAs, we grouped the instances into three classes of $100$ instances: simple, medium, hard. We chose these names because our experiments globally showed that in increasing the size of the bounds $MaxS$ and $MaxE$, we increase the runtime of all algorithms. Intuitively, the reason is that with big bounds, there are more candidate solutions. Depending on the number of concrete implementations per abstract operation, we grouped our instances into $5$ classes of $60$ instances each. The settings of the instances are summarized in Table~\ref{instanceSetting}. For each instance, we randomly drew the service response time of each operation from $1,\dots, 1500$. Given a service response time $S$, we deduced the energy consumption from the formula $E = P.S$, where $P$ is a power consumption value randomly drawn from $100,\dots,150$.
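The instance generation just described can be sketched as follows; the seed and the number of samples are arbitrary choices for the sketch.

```python
import random

random.seed(0)  # arbitrary seed, for reproducibility of the sketch

def draw_qos():
    """One (S, E) pair per concrete operation: S drawn from 1..1500,
    and E deduced from E = P.S with P drawn from 100..150."""
    s = random.randint(1, 1500)
    p = random.randint(100, 150)
    return s, p * s

samples = [draw_qos() for _ in range(1000)]
```

Drawing $E$ from $P \cdot S$ rather than independently keeps the two QoS dimensions correlated, as in our setting where longer-running services consume more energy.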
\begin{figure}[ht] \centering \fbox{ \subfloat[Shipment]{ \includegraphics[scale=0.30]{./Figures/shipment.eps} } \subfloat[Procurement]{ \includegraphics[scale=0.30]{./Figures/procurement.eps} } } \fbox{ \subfloat[Disbursement]{ \includegraphics[scale=0.30]{./Figures/Disbursement.eps} } \subfloat[Meal Options]{ \includegraphics[scale=0.30]{./Figures/MealOptions.eps} } } \caption{Process examples} \label{figProcess} \end{figure} \begin{table}[htbp] \centering \begin{tabular}{c|c|c} \hline {\bf Type of SLAs } & {\bf (MaxS; MaxE)} & {\bf Domain sizes} \\\hline \textit{Simple} & {$(3500; 7000)$} & \multirow{3}{*}{$4$, $6$, $8$, $10$, $12$} \\ \textit{Medium} & {$(3000; 5500)$} \\ \textit{Hard} & {$(2700; 4000)$} \\\hline \end{tabular} \caption{Instance settings} \label{instanceSetting} \end{table} \begin{figure}[ht] \centering \subfloat[Shipment]{ \includegraphics[width=0.45\linewidth,height=1.6in]{./Figures/ShipmentSimple.eps} } \subfloat[Procurement]{ \includegraphics[width=0.45\linewidth,height=1.6in]{./Figures/ProcurementSimple.eps} } \subfloat[Disbursement]{ \includegraphics[width=0.45\linewidth,height=1.6in]{./Figures/DisbursementSimple.eps} } \subfloat[Meal Options]{ \includegraphics[width=0.45\linewidth,height=1.6in]{./Figures/MealSimple.eps} } \caption{Runtime of the exhaustive search on {\it simple} instances} \label{Time-ss-Exh} \end{figure} In Figure~\ref{Time-ss-Exh}, we depict the mean runtime obtained from the exhaustive search algorithm ({\it ss-Exh}) on the class of {\it simple} instances. As one can notice, these runtimes increase exponentially with the domain size which, we recall, is the number of concrete services that can be assigned to an abstract operation. In the same setting, the mean runtime {\bf in all experiments} was lower than $2$ seconds for the backtracking based algorithms. The same trends were also observed when considering instances of the {\it medium} and {\it hard} classes.
This demonstrates that there is a large class of instances on which the backtracking based algorithms outperform the exhaustive search. For validating our prior intuition regarding the considerable amount of useless work in exhaustive search (see Section~\ref{backtrackingSearch}), we quantified this work. We defined the useless work of an algorithm as a complete assignment that does not improve the current local solution. In Figure~\ref{UselessExh}, we depict the useless work observed on some instances of the disbursement process. As one can notice, this quantity was always greater in exhaustive search. The same trend was observed on the other instances. \begin{figure}[ht] \centering \subfloat[Disbursement Medium]{ \includegraphics[width=0.45\linewidth,height=1.4in]{./Figures/UsefulUseless.eps} } \subfloat[Disbursement Hard]{ \includegraphics[width=0.45\linewidth,height=1.4in]{./Figures/UsefulUseless2.eps} } \caption{Useless searches in exhaustive search and backtracking} \label{UselessExh} \end{figure} \subsection{Backtracking versus Integer Programming} We also compared the backtracking algorithms with integer linear programming. For this purpose, we used the integer modelling that we proposed in prior work~\cite{JISA}. This modelling was proposed for a more general class of service compositions; however, it supports our restricted setting. The integer model was run with the GLPK solver~\cite{GLPK}. In the search for the optimal solution, the solver internally computes a Lagrangian relaxation for avoiding useless searches. As a result, the run of our integer model is very close to that of the BBLP algorithm. In the first experiments, we compared the solver and our approaches on the previously defined instances. We did not, however, see any significant difference in performance. We increased the domain sizes to $140$; but no significant differences appeared. The mean runtime was around $2.5$ seconds for both approaches.
For exhibiting runtime differences, we considered two other processes: the motif network and the genelife2 workflow. Both were taken from the Pegasus database~\cite{Pegasus} and come with different sizes (small, medium, large). For both, we chose the small-size variants. An illustrative representation of the chosen processes is given in Figure~\ref{Workflow}. \begin{figure}[ht] \centering \subfloat[Motif]{ \includegraphics[width=0.35\linewidth,height=1.4in]{./Figures/motif.eps} } \subfloat[Genelife2]{ \includegraphics[width=0.35\linewidth,height=1.4in]{./Figures/gene2life.eps} } \caption{Pegasus workflows} \label{Workflow} \end{figure} On these two processes, we randomly generated $30$ instances using {\it simple} SLA constraints and $30$ with {\it hard} constraints. The domain sizes of the instances were taken from $\{4, 8, 16, 32, 64\}$. \begin{figure}[ht] \centering \subfloat[Genelife2 Hard]{ \includegraphics[width=0.45\linewidth,height=1.6in]{./Figures/Genelife2Hard.eps} } \subfloat[Motif Hard]{ \includegraphics[width=0.45\linewidth,height=1.6in]{./Figures/MotifHard.eps} } \subfloat[Genelife2 Simple]{ \includegraphics[width=0.45\linewidth,height=1.6in]{./Figures/Genelife2Simple.eps} } \subfloat[Motif Simple]{ \includegraphics[width=0.45\linewidth,height=1.6in]{./Figures/MotifSimple.eps} } \caption{Runtime of backtracking and integer programming on the Pegasus workflows} \label{Time-Pegasus} \end{figure} The experimental results are depicted in Figure~\ref{Time-Pegasus}. We noticed that integer programming (MILP in the figure) was dominated by the backtracking algorithms as the domain size increases. In particular, for the motif network with a domain size equal to $32$, we were not able to get a solution from integer programming after one week. We believe that the differences between integer programming and backtracking were due to the larger number of operations of genelife2 and the motif network.
Indeed, we finally had $15$ operation nodes for the motif network and $11$ operation nodes for the genelife2 workflow. In addition to the superiority of backtracking, these experiments also revealed that while we can expect real-time results with backtracking when there are fewer than $9$ abstract operations, with more than $10$ abstract operations the algorithms become time consuming. It is important to remark, however, that given a small number of abstract operations, we can obtain quick solutions even if the number of concrete services is large (greater than $1400$, for instance). \subsection{What is the best backtracking algorithm?} The objective here was to determine which backtracking algorithm is the fastest. On this point, our experiments did not reveal a clear trend. In the experiments on the Pegasus workflows, for instance, one can notice that the algorithms are quite similar even if there are some runtime differences. We did additional experiments where, instead of a fixed number of concrete services per abstract operation, we used a random number chosen each time between $3$ and a maximal domain size. In Figure~\ref{Time-ss-b}, we depict the results obtained on {\it hard} instances. As one can notice, they do not exhibit a particular trend. We believe that the small variations we observed suggest that, depending on the distribution of energy consumption and service response time, one backtracking algorithm can detect earlier than another that some partial assignments need not be completed.
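The random-instance generation used in these additional experiments can be sketched as follows (an illustrative Python fragment, not our experimental generator; the QoS value ranges and the name `random_instance` are invented):

```python
import random

def random_instance(n_operations, max_domain, seed=None):
    """Build one test instance: for each abstract operation, a domain of
    concrete services, each described by an (energy, response time) pair.
    The domain size is drawn uniformly between 3 and max_domain."""
    rng = random.Random(seed)
    instance = []
    for _ in range(n_operations):
        size = rng.randint(3, max_domain)
        instance.append([(rng.randint(1, 50), rng.randint(1, 50))
                         for _ in range(size)])
    return instance
```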
\begin{figure}[ht] \centering \subfloat[Shipment Hard]{ \includegraphics[width=0.45\linewidth,height=1.6in]{./Figures/ShipmentHard.eps} } \subfloat[Disbursement Hard]{ \includegraphics[width=0.45\linewidth,height=1.6in]{./Figures/DisbursementHard.eps} } \subfloat[Procurement Hard]{ \includegraphics[width=0.45\linewidth,height=1.6in]{./Figures/ProcurementHard.eps} } \subfloat[Meal Hard]{ \includegraphics[width=0.45\linewidth,height=1.6in]{./Figures/MealHard.eps} } \caption{Runtime of the backtracking algorithms on {\it hard} instances} \label{Time-ss-b} \end{figure} Our experiments globally showed that the backtracking algorithms can outperform classical exact solutions for the service selection problem. However, we did not see any clear difference between the backtracking algorithms themselves. \section{Discussion} \label{Discussion} In this paper, we proposed a novel approach for solving the service selection problem. The main idea is to view this problem as a CSP. We now offer a short discussion of the potential of this viewpoint. As already mentioned, our proposal can also be adapted for the resolution of the feasibility problem. From an algorithmic viewpoint, it suffices to add to {\it SS-b-PM} a control that stops the execution when a first solution is found. The experiments demonstrated that backtracking can be envisioned for real-time service composition on small processes. In the case of large processes, we propose to modify these algorithms to obtain quick solutions that are not necessarily optimal. A classical and simple idea for this is to include a cutoff time: the algorithm then includes a control that returns the best solution computed so far when the cutoff time is reached. The CSP mapping that we propose can also inspire other algorithms for the service selection problem.
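The cutoff-time control just described can be sketched as follows (a minimal anytime variant of a backtracking search; the function and parameter names are ours, and the simplified additive QoS model is assumed for illustration):

```python
import time

def backtrack_with_cutoff(domains, max_time, cutoff_s=1.0):
    """Anytime backtracking: return the best solution computed so far
    once the cutoff time (in seconds) is reached."""
    best = [None]
    deadline = time.monotonic() + cutoff_s
    def extend(i, energy, time_used):
        if time.monotonic() > deadline:
            return True  # cutoff reached: unwind, keep the current best
        if time_used > max_time or (best[0] is not None and energy >= best[0]):
            return False  # prune this partial assignment
        if i == len(domains):
            best[0] = energy
            return False
        for e, t in domains[i]:
            if extend(i + 1, energy + e, time_used + t):
                return True
        return False
    extend(0, 0, 0)
    return best[0]
```

With a generous cutoff the search runs to completion and returns the optimum; with a tight cutoff it degrades gracefully to the best solution found so far.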
While our proposal only focuses on the backtracking technique, it might be interesting to consider other CSP resolution techniques such as forward checking or backjumping~\cite{Baker95intelligentbacktracking}. Moreover, we can also revisit our backtracking proposal to adapt it to other ordering techniques used in constraint satisfaction, such as max-domain, min-degree, max-domain/degree, and the various dynamic orderings. Finally, there also exist many parallel algorithms for constraint satisfaction. The techniques employed for achieving parallelization can certainly be reused in the service selection problem. \section{Conclusion} \label{Conclusion} This paper proposes a novel approach for the resolution of the service selection problem. The main idea is to consider this problem as a particular case of the constraint satisfaction problem. We gave the theoretical mapping that supports this idea. We then derived from it two backtracking algorithms for service selection. We showed in the experiments that the proposed algorithms can outperform classical exact solutions for the service selection problem. However, to use them in a real-time context, one must consider small service compositions. This result was expected due to the NP-hardness of the problem; nevertheless, the techniques that we propose can drastically reduce the search space explored for finding the optimal service composition. For continuing this work, we envision two main directions. In the first one, we are interested in developing a parallel algorithm for the service selection problem based on what is done in CSP parallelization. Our second direction consists of applying our algorithms for service selection on real service compositions. \section*{Acknowledgments} The experiments in this work were conducted on the B500 nodes of the University of Paris 13 Magi cluster, available at http://www.univ-paris13.fr/calcul/wiki/ \bibliographystyle{hplain}
\section{Introduction} In four-dimensional general relativity, there is a maximum charge and angular momentum that can be added to a black hole of given mass. In Einstein-Maxwell theory, these extremal black holes are characterized by having a degenerate horizon with zero Hawking temperature. In theories that also have (real) scalar fields exponentially coupled to the Maxwell field, such as supergravity or string theory, the extremal limit is either singular \cite{Garfinkle:1990qj}, or similar to Einstein-Maxwell due to an attractor mechanism \cite{Andrianopoli:2006ub}. In this paper, we show that there are black holes with a different type of extremal limit. In the theory we consider, black holes again have a maximum charge for given mass\footnote{For simplicity, we will restrict our attention to static black holes with no rotation.}, but the extremal black hole can have a nondegenerate horizon with nonzero Hawking temperature. Our theory will include a scalar field, but unlike some of the theories mentioned above, the usual Reissner-Nordstr\"om (RN) solution (describing static charged black holes in Einstein-Maxwell theory) remains a solution in the theory with the scalar added. It has recently been shown that if a massless scalar field is appropriately coupled to $F^2\equiv F_{ab}F^{ab}$, RN can become unstable to forming scalar hair, \emph{i.e.}, static scalar fields outside the horizon \cite{Herdeiro:2018wub,Fernandes:2019rez}. This is because $F^2 < 0$ for an electrically charged black hole, and acts like a negative potential for the scalar near the horizon. When the charge is large enough, this destabilizes the scalar field and causes it to become nonzero. We add a massive, charged scalar field $\psi$ to Einstein-Maxwell with a simple $|\psi|^2 F^2$ coupling. As before, when the electric charge is large enough, RN becomes unstable and develops charged scalar hair.
Our original motivation for exploring this model was that its solutions are asymptotically flat analogs of the asymptotically anti-de Sitter (AdS) solutions known as holographic superconductors \cite{Gubser:2008px,Hartnoll:2008vx,Hartnoll:2008kx}, which have been extensively studied. In AdS, the charged scalar condenses at low temperature without any explicit coupling between the scalar and Maxwell field. Without the cosmological constant, however, this does not happen and one needs to add a coupling like $|\psi|^2 F^2$. (There are also hairy black holes without this coupling if the charged scalar has an appropriate potential \cite{Herdeiro:2020xmb}, but they do not branch off from RN.) It was shown in \cite{Hartnoll:2020fhc} that the dynamics inside the horizon of a holographic superconductor is quite intricate. We study the dynamics inside the horizon of this asymptotically flat analog in a companion paper \cite{dhs}. Here we focus on the solutions outside the horizon, and look at their extremal limit. As seen before \cite{Garfinkle:1990qj,Herdeiro:2018wub}, the hairy black holes can exceed the usual extremal limit and have $Q^2 > M^2$. However, unlike previous examples, we find that for some range of parameters, the maximum charge solution for fixed mass is a nonsingular hairy black hole with nonzero Hawking temperature. So although it is ``extremal'' in the sense of having maximum charge, it is not a familiar ``extremal black hole'' with either zero temperature or a singular horizon. We will call this new type of extremal black hole a ``maximal warm hole''. The existence of maximal warm holes raises puzzling questions about the endpoint of Hawking radiation. If a black hole continues to radiate neutral gravitons when it reaches its extremal limit, it would appear to create a naked singularity. Unlike the standard Planck mass naked singularity expected at the endpoint of the evaporation of a neutral black hole, this could create a naked singularity with large mass.
We will argue that this does not occur. Our theory also contains charged solitons, and for completeness, we include a discussion of them. We find that they have a minimum mass, so they do \emph{not} exist arbitrarily close to Minkowski space. They also cannot be viewed as the limit of the hairy black holes as the black hole radius goes to zero. \section{Equations of motion} We start with the action \begin{equation}\label{eq:action} S= \int \mathrm{d}^{4}x \sqrt{-g}\left[R- F^2-4(\mathcal{D}_a\psi)(\mathcal{D}^a \psi)^\dagger-4 m^2 |\psi|^2-4 \alpha F^2 |\psi|^2\right]\,, \end{equation} where $\mathcal{D}=\nabla-i\,q\,A$ and $F=\mathrm{d}A$\,. This theory satisfies all the usual energy conditions if the coupling constant $\alpha$ is positive, which we will assume is the case. The equations of motion for this general action read \begin{subequations}\label{EOM:S} \begin{multline} R_{ab}-\frac{R}{2}g_{ab}=2\left(1+4 \alpha |\psi|^2\right)\left(F_{ac}F_b^{\phantom{b}c}-\frac{g_{ab}}{4}F^{cd}F_{cd}\right) \\ +2\left[(\mathcal{D}_a \psi) (\mathcal{D}_b \psi)^\dagger+(\mathcal{D}_a \psi)^\dagger(\mathcal{D}_b \psi) -g_{ab} (\mathcal{D}_c \psi)(\mathcal{D}^c \psi)^\dagger-g_{ab} m^2 |\psi|^2 \right]\,, \end{multline} \begin{equation} \nabla_a\left[\left(1+4 \alpha |\psi|^2\right)F^{ab}\right]=i\,q\, \left[(\mathcal{D}^b \psi)\psi^\dagger-(\mathcal{D}^b \psi)^\dagger \psi \right]\,, \end{equation} and \begin{equation} \mathcal{D}_a \mathcal{D}^a \psi-\alpha F^{cd}F_{cd} \psi-m^2 \psi=0\,. 
\label{eq:linearscalar} \end{equation} \end{subequations} In order to understand the static, spherical solutions to the above equations of motion, we use the following standard ansatz \begin{subequations} \label{eq:ansatzout} \begin{equation} \mathrm{d}s^2=-p(r)\,g(r)^2\,\mathrm{d}t^2+\frac{\mathrm{d}r^2}{p(r)}+r^2 (\mathrm{d}\theta^2 +\sin^2\theta\ \mathrm{d}\phi^2 ) \end{equation} For the scalar and Maxwell potential we take \begin{equation} A=\Phi(r)\,\mathrm{d}t\,,\qquad \psi=\psi^\dagger=\psi(r)\,. \end{equation} \end{subequations} The equations of motion restricted to our ansatz become \begin{subequations}\label{EOM} \begin{align} &\frac{g}{r^2}\left[\frac{r^2}{g}(1+4\,\alpha\, \psi^2)\Phi^\prime\right]^\prime-\frac{2\,q^2\,\psi^2}{p}\Phi=0\,, \\ &\frac{1}{r^2g}\left(r^2\,g\,p\,\psi^\prime\right)^\prime+\frac{2\,\alpha\,{\Phi^\prime}^2}{g^2}\psi+\left(\frac{q^2\,\Phi^2}{p\,g^2}-m^2\right)\psi=0\,, \\ &\frac{g^\prime}{g}-2\,r\,\left(\frac{q^2\Phi^2\psi^2}{p^2g^2}+{\psi^\prime}^2\right)=0\,, \\ &\frac{1}{r^2g}\left(r\,g\,p\,\right)^\prime-\frac{1}{r^2}+2\,m^2\psi^2+\frac{1+4\,\alpha\,\psi^2}{g^2}{\Phi^\prime}^2=0\,, \end{align} \end{subequations}% where ${}^\prime$ denotes a derivative with respect to $r$. Note that there are second order differential equations for $\Phi$ and $\psi$, but only first order equations for $g$ and $p$. The event horizon $r=r_+$ is the largest root of $p(r)$, and we will focus on the region outside the horizon, $r\geq r_+$. (The behavior inside the horizon is studied in \cite{dhs}.) For numerical convenience we work with a compact radial coordinate \begin{equation} z=\frac{r_+}{r}\in(0,1)\,, \end{equation} and change variables as \begin{equation} p(r)=\left(1-z\right)q_1(z)\,,\quad \Phi(r) = \left(1-z\right)q_2(z)\,,\quad \psi(r)=q_3(z)\quad \text{and}\quad g(r)^2=q_4(z)\,. \end{equation} (This imposes the gauge condition that $A_t = 0$ on the horizon.) 
We then solve for $q_1$, $q_2$, $q_3$ and $q_4$ subject to appropriate boundary conditions. At asymptotic infinity, located at $z=0$, we demand \begin{equation} q_1(0)=q_4(0)=1\,,\quad q_3(0)=0\,,\quad\text{and}\quad q_2(0)=\mu \end{equation} with $\mu$ being the electrostatic potential. The hairy black hole solutions depend on several parameters. In addition to the parameters in the action $\{m, q, \alpha\}$, black holes are characterized by their mass $M$ and charge $Q$. These turn out to be given by \begin{equation} M = \frac{r_+}{2}[1-\dot{q}_1(0)]\,, \qquad Q = r_+\,[\mu-\dot{q}_2(0)]\,, \end{equation} where $\dot{}$ denotes a derivative with respect to $z$. There is a scaling symmetry, so we will present our results using the four dimensionless quantities $\{q/m, \alpha, M\,m, Q\,m\}$. However, to find the solutions numerically, it is more convenient to use a slightly different set of dimensionless quantities: $\{q/m,\alpha, y_+, \mu\}$, where $y_+ \equiv m \, r_+$ directly controls the area of the black hole event horizon (located at $z=1$ in our compact coordinates). At the horizon, smoothness determines the behavior of all functions, giving a Dirichlet boundary condition, and three Robin boundary conditions for $q_2$, $q_3$ and $q_4$. For concreteness we present the Dirichlet condition, which takes the form \begin{equation} q_1(1)=1-2 y_+^2 q_3(1)^2-\frac{q_2(1)^2}{q_4(1)} \left[1+4\,\alpha\, q_3(1)^2\right]\,. \end{equation} The strategy is now clear: for each value of $\{q/m, \alpha, y_+, \mu\}$ we solve the resulting equations of motion as a boundary value problem with the above boundary conditions. We solve these via a standard relaxation method on a Gauss-Lobatto collocation grid (see \cite{Dias:2015nua} for a review of such numerical methods). At several points in the main text, we will refer to the entropy and temperature of the black holes.
These are given by \begin{equation} m^2\,S=\pi\,y_+^2\quad\text{and}\quad \frac{T}{m}=\frac{q_1(1)\sqrt{q_4(1)}}{4\pi y_+}\,. \end{equation} It is a simple exercise to show that the mass $M$, charge $Q$, chemical potential $\mu$, entropy $S$ and Hawking temperature $T$ obey the first law of black hole mechanics \begin{equation} \mathrm{d}M= T\,\mathrm{d}S+\mu\,\mathrm{d}Q\,, \end{equation} which we check numerically throughout. All solutions in this manuscript satisfy this relation to better than $10^{-4}\%$. Finally, we note that when the scalar field vanishes, \emph{i.e.} $\psi=0$, the only black hole is given by the familiar Reissner-Nordstr\"om (RN) solution for which \begin{equation} \label{RNsol} p(r)=p_{\mathrm{RN}}(r)\equiv\frac{(r-r_+)(r-r_-)}{r^2}\,,\quad g(r)=1\,,\quad\text{and}\quad \Phi(r)=\Phi_{\mathrm{RN}}(r)\equiv\left(1-\frac{r_+}{r}\right)\mu \end{equation} with $Q=\mu\,r_+$ and $r_{\pm}\equiv M\pm\sqrt{M^2-Q^2}$. The RN temperature is $T_{\mathrm{RN}}=\frac{r_+-r_-}{4\pi r_+^2}$ and, at extremality, one thus has $r_-=r_+=M=Q$ and $\mu=1$. Note that $r_-/r_+ = \mu^2$. \subsection{Asymptotic condition} There is another condition that must be satisfied in order to obtain hairy black holes. The scalar field will be bound to the black hole only if it falls off appropriately at infinity. In our gauge with $A_t (r_+) = 0$, and $A_t(r=\infty) =\mu $, this is only possible if \begin{equation}\label{condition} q^2 \mu^2 \le m^2\,. \end{equation} The necessity of this condition can be seen by considering the asymptotic behavior of the scalar field. If $ q^2 \mu^2< m^2$, the scalar field behaves at large radius like \begin{equation}\label{outpsiinf1} \psi =\frac{e^{- r \sqrt{m^2-q^2 \mu^2}}}{r^{1+\eta}}\left [b + \mathcal{O}(r^{-1}) \right ], \end{equation} for a constant $b$, where \begin{equation}\label{outpsiinf2} \eta \equiv \sqrt{m^2-q^2 \mu^2}\,M-\frac{\mu\,q^2\,(\mu M-Q)}{\sqrt{m^2-q^2 \mu^2}}\,.
\end{equation} The exponential decay at large distance is characteristic of a bound state. If $ m^2=q^2 \mu^2$, the scalar field still decays exponentially like \begin{equation}\label{outpsiinf3} \psi = \frac{e^{-2 \sqrt{2} \,q \sqrt{\mu }\sqrt{Q-\mu M}\; \sqrt{r}}}{r^{3/4}}\left[b+\mathcal{O}(r^{-1/2})\right]. \end{equation} However, if $ q^2 \mu^2 > m^2$, the scalar field oscillates asymptotically indicating that the scalar field is not bound to the black hole. More importantly, such solutions would have infinite energy. \section{Linear instability}\label{sec:linear} The familiar RN metric with $\psi = 0$ is clearly always a solution to our equations of motion \eqref{EOM:S}. However, this solution can become unstable to forming scalar hair. This is because $F^2 < 0$ for an electrically charged black hole, so the last term in the action acts like a negative contribution to the scalar mass. This can become large enough near the horizon to dominate the $m^2$ term in the action. In this section we determine when this instability sets in using a linearized analysis. In particular, we will take Eq.~(\ref{eq:linearscalar}) and set the metric and gauge field to be those of the RN black hole \eqref{RNsol}. Furthermore, we will take the scalar field $\psi$ to be radially symmetric and Fourier expand in time as \begin{equation} \psi(t,r) = \tilde{\psi}(r)\,e^{-i\,\omega\,t}\,, \end{equation} which introduces the frequency $\omega$ of the perturbation and brings the scalar equation (\ref{eq:linearscalar}) to the following form \begin{equation} \frac{1}{r^2}\left[r^2 p_{\mathrm{RN}}(r)\tilde{\psi}^\prime(r)\right]^\prime+\left\{\frac{\left[\omega+q\,\Phi_{\mathrm{RN}}(r)\right]^2}{p_{\mathrm{RN}}(r)}-m^2+2\,\alpha\,{\Phi_{\mathrm{RN}}^\prime(r)}^2\right\}\tilde{\psi}(r)=0\,. 
\label{eq:linear} \end{equation} We would like to understand whether finite energy excitations, regular on the future event horizon of the RN black hole, exist for which $\mathrm{Im}\, \omega>0$, in which case we have a mode whose amplitude grows in time and the system develops an instability. Searching for such excitations amounts to studying a generalised eigenvalue problem in $\omega$, which we present in Appendix \ref{sec:Appendix}. Here we present a simple criterion for when RN is unstable, and compute the onset of the instability by looking for $\omega = 0$ modes. \subsection{The near horizon analysis}\label{sec:linearNH} Since the RN black hole has a maximum electric field at extremality, we expect that the minimum charge ratio $q/m$ and minimum $\alpha$ needed to herald an instability can be determined by analysing the extremal solution. The near horizon geometry of the extremal RN black hole takes the direct product form $\mathrm{AdS}_{2}\times S^2$ where $\mathrm{AdS}_{2}$ stands for 2-dimensional anti-de Sitter spacetime. This is best seen by first setting $r_-=r_+$, introducing new coordinates $(\tau,\rho)$ as \begin{equation}\label{NHcoord} t = \frac{r_+\,\tau}{\lambda}\,,\quad\text{and}\quad r=r_+(1+\lambda\,\rho) \end{equation} and taking the limit $\lambda\to0$. Once we do this, one obtains \begin{subequations}\label{NHsolution} \begin{equation} \mathrm{d}s^2_{\mathrm{AdS}_{2}\times S^2}= L^2_{\mathrm{AdS}_{2}}\left(-\rho^2\mathrm{d}\tau^2+\frac{\mathrm{d}\rho^2}{\rho^2}\right)+r_+^2\,\left(\mathrm{d}\theta^2+\sin^2\theta\,\mathrm{d}\phi^2\right) \end{equation} and \begin{equation} A_{\mathrm{AdS}_{2}\times S^2}=\mu_{\mathrm{AdS}_2}\,\rho\,\mathrm{d}\tau\,, \end{equation} \end{subequations} where the first factor in the line element corresponds to the two-dimensional AdS$_2$ with $L_{\mathrm{AdS}_{2}}=r_+$ and $\mu_{\mathrm{AdS}_2}=r_+$. The near-horizon solution \eqref{NHsolution} solves \eqref{EOM} with $\psi=0$.
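Explicitly, the limit can be checked as follows (a quick sketch): with $r_-=r_+$ one has $p_{\mathrm{RN}}(r)=(r-r_+)^2/r^2$, and inserting the coordinates \eqref{NHcoord},
\begin{equation*}
-p_{\mathrm{RN}}\,\mathrm{d}t^2+\frac{\mathrm{d}r^2}{p_{\mathrm{RN}}}\;\xrightarrow[\;\lambda\to0\;]{}\;r_+^2\left(-\rho^2\,\mathrm{d}\tau^2+\frac{\mathrm{d}\rho^2}{\rho^2}\right)\quad\text{and}\quad \Phi_{\mathrm{RN}}\,\mathrm{d}t=\mu\,\frac{r-r_+}{r}\,\mathrm{d}t\;\xrightarrow[\;\lambda\to0\;]{}\;\mu\,r_+\,\rho\,\mathrm{d}\tau\,,
\end{equation*}
which identifies $L_{\mathrm{AdS}_{2}}=r_+$ and $\mu_{\mathrm{AdS}_2}=\mu\,r_+=r_+$, since $\mu=1$ at extremality.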
It is a well-known fact that \emph{neutral} massive scalar waves propagating on asymptotically AdS spacetimes possess a value for the mass squared below which AdS is unstable and negative energy solutions to the wave equation can be constructed. This is the so-called Breitenl\"ohner-Freedman (BF) bound \cite{Breitenlohner:1982bm,Breitenlohner:1982jf}. In particular, for a neutral massive scalar field in $\mathrm{AdS}_2$ this bound reads \begin{equation} m^2_{\mathrm{AdS}_2}L_{\mathrm{AdS}_{2}}^2\geq -\frac{1}{4}\,. \end{equation} However, a \emph{charged scalar} field not only gets contributions from bare mass terms in its equation of motion, but also from the gauge fields, since these can act as effective two-dimensional masses. It was first conjectured in \cite{Denef:2009tp}, and proved in certain cases in \cite{Dias:2010ma}, that the \emph{full} extreme black hole is unstable with respect to charged perturbations if \begin{equation} m^2_{\mathrm{eff}}L_{\mathrm{AdS}_{2}}^2\equiv m^2_{\mathrm{AdS}_2}L_{\mathrm{AdS}_{2}}^2-q^2\mu_{\mathrm{AdS}_2}^2<-\frac{1}{4}\,. \end{equation} This is a sufficient, but not necessary condition in general. In the Appendix~\ref{sec:Appendix} we argue that for our case, this condition is also necessary (see, in particular, Sec.~\ref{sec:A3} and the discussion associated to Fig.~\ref{fig:extremalfrequency}). Note that an instability will only be physically acceptable if it is possible to keep $m^2$ positive from the perspective of the asymptotic flat ends, and yet have $m^2_{\mathrm{eff}}L_{\mathrm{AdS}_{2}}^2<-1/4$ in the near horizon $\mathrm{AdS}_{2}\times S^2$ region. It remains to compute $m^2_{\mathrm{eff}}L_{\mathrm{AdS}_{2}}^2$ in our particular theory.
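A quick way to anticipate the answer (a sketch only): at extremality, \eqref{RNsol} gives $\Phi_{\mathrm{RN}}'(r_+)=\mu/r_+=1/L_{\mathrm{AdS}_2}$, so the $2\,\alpha\,{\Phi_{\mathrm{RN}}'}^2$ term in \eqref{eq:linear} simply shifts the bare mass seen by the near-horizon modes,
\begin{equation*}
m^2_{\mathrm{AdS}_2}=m^2-\frac{2\alpha}{L^2_{\mathrm{AdS}_2}}\quad\Longrightarrow\quad m^2_{\mathrm{eff}}L^2_{\mathrm{AdS}_2}=m^2_{\mathrm{AdS}_2}L^2_{\mathrm{AdS}_2}-q^2\mu^2_{\mathrm{AdS}_2}=(m^2-q^2)L^2_{\mathrm{AdS}_2}-2\alpha\,,
\end{equation*}
where we used $\mu_{\mathrm{AdS}_2}=L_{\mathrm{AdS}_2}=r_+$.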
This is a rather standard procedure and we refer the reader to \cite{Dias:2010ma} for details\footnote{In short, we apply the coordinate transformation \eqref{NHcoord} to the linearized scalar equation \eqref{eq:linear}, set $\omega = \lambda \tilde{\omega}$ and keep only the leading terms in the $\lambda\to 0$ expansion while keeping $\tilde{\omega}$ fixed. Then, one compares the resulting equation to that of a charged, massive scalar living on a rigid AdS$_2$ with mass $m^2_{\mathrm{AdS}_2}$, charge $q$ and frequency $\tilde{\omega}$. From this, we can reconstruct $m^2_{\mathrm{eff}}L_{\mathrm{AdS}_{2}}^2$.}. In our case we find that the $\mathrm{AdS}_2$ BF bound is violated when \begin{eqnarray} && m^2_{\mathrm{eff}}L_{\mathrm{AdS}_{2}}^2+\frac{1}{4}=\frac{1}{4}+(m^2-q^2)L^2_{\mathrm{AdS}_2}-2\alpha<0 \nonumber\\ && \qquad\qquad\qquad\qquad\qquad \Rightarrow \alpha>\frac{1}{2}\left[\frac{1}{4}+(m^2-q^2)L^2_{\mathrm{AdS}_2}\right]\,. \label{BFviolation}\label{eq:bound} \end{eqnarray} When the background RN black hole is extremal, \emph{i.e.} when $\mu=1$, the bound state condition given in \eqref{condition} simplifies to $m>|q|$, so that the term on the right hand side of the above inequality is always positive. This is essentially the reason why we need the new coupling $\alpha$ if we want to make the RN black hole unstable. \subsection{The onset of hairy black holes}\label{sec:linearOnset} When \eqref{BFviolation} is satisfied, the extremal RN black hole is unstable, so the onset of the instability starts at some $Q<M$. This onset can be found by searching for static, finite energy perturbations, so we set $\omega = 0$ in \eqref{eq:linear}. Typically, the onset occurs when $q^2\mu^2 < m^2$. In this case we require that $\psi$ fall off as in \eqref{outpsiinf1} and \eqref{outpsiinf2}.
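For the reader's convenience, here is a sketch of where the exponent $\eta$ in \eqref{outpsiinf2} comes from. At large $r$, keep $g\simeq 1$, $p\simeq 1-2M/r$ and $\Phi\simeq \mu-Q/r$, drop $O(r^{-2})$ corrections (the $\alpha\,{\Phi'}^2$ term is $O(r^{-4})$), and substitute $\psi=e^{-k r}\,r^{s}$ with $k\equiv\sqrt{m^2-q^2\mu^2}$ into the static scalar equation. The $O(1)$ terms cancel, while the $O(1/r)$ terms give
\begin{equation*}
-2\,k\,s-2\,M\,k^2-2\,k+2\,q^2\mu\,(\mu M-Q)=0\quad\Longrightarrow\quad s=-1-\left[k\,M-\frac{q^2\mu\,(\mu M-Q)}{k}\right]=-(1+\eta)\,,
\end{equation*}
reproducing \eqref{outpsiinf2}.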
It is convenient not to work directly with $\psi$, but instead define a new function $\hat{\psi}$ through the relation \begin{equation} \psi \equiv e^{-\sqrt{m^2-q^2\mu^2}\,r}\left(\frac{r_+}{r}\right)^{1+\eta}\hat{\psi}\,. \label{eq:off} \end{equation} Numerically, it is hard to work with infinite domains so we introduce a compact coordinate $y$ given by \begin{equation} r=\frac{r_+}{1-y}\,, \label{eq:ycoord} \end{equation} with the horizon located at $y=0$ and asymptotic infinity at $y=1$. The boundary conditions for $\hat{\psi}$ are then found by demanding $\hat{\psi}$ to have a regular Taylor expansion at $y=0$ and $y=1$. This procedure yields rather cumbersome Robin boundary conditions at $y=0$ and $y=1$ which we do not present here. If we now fix $\alpha$, $q/m$, and $m \, r_+$, the equation for $\hat{\psi}$ is a generalized eigenvalue equation in $\mu$. By computing these eigenvalues, we determine a curve in the space of RN black holes that marks the onset of the scalar hair. This is how the blue curve was generated in Fig.~\ref{fig:phasediag}. For $q^2 > m^2$, modes with $q^2\mu^2 = m^2$ can also branch off from RN. These are the beginning of the solutions that we discuss in the next section. To find them, we require that $\psi$ satisfy \eqref{outpsiinf3} asymptotically, and set \begin{equation} \psi = e^{-2 \sqrt{2}\,q \sqrt{\mu } \sqrt{Q-\mu M}\; \sqrt{r}}\left(\frac{r_+}{r}\right)^{3/4}\hat{\psi}\,. \end{equation} It is again convenient to introduce a compact coordinate \begin{equation} r = \frac{r_+}{y^4}\,, \end{equation} so that the higher order terms in $r^{-1/4}$ appearing in the expansion \eqref{outpsiinf3} now become integer powers of $y$. The boundary conditions for $\hat{\psi}$ can then be found by assuming that $\hat{\psi}$ has a regular Taylor series at $y=0$ (asymptotic infinity) and $y=1$ (black hole event horizon). They again turn out to be Robin boundary conditions. 
For fixed $\alpha$ and $q/m$, we regard the equation for $\hat{\psi}$ as an eigenvalue equation for $m \, r_+$, and solve for these eigenvalues. This is how the onset line was generated in Fig.~\ref{fig:qm}. In the Appendix~\ref{sec:Appendix} we show that these $\omega=0$ modes indeed mark a transition between stable and unstable perturbations (see, in particular, Sec.~\ref{sec:A2} and the discussions associated to Figs.~\ref{fig:example}-\ref{fig:3D}). \section{Maximal warm holes} We now discuss the full nonlinear solutions, and start with the case $q/m =1$.\footnote{From now on we assume charges are positive, but our results remain valid if $q$ and $Q$ are replaced by their absolute value.} So the condition \eqref{condition} is satisfied for $\mu \le 1$. A phase diagram of these solutions is shown in Fig.~\ref{fig:phasediag}, for coupling $\alpha =1$. The green region below the horizontal dashed line with $Q-M=0$ describes the standard RN solutions. The blue line denotes the onset of the scalar instability in RN and thus the merger between the RN and the hairy black holes. The latter exist in the brown shaded region, and the red line denotes the curve $\mu = 1$, which represents the largest charge on a black hole of mass $Mm \gtrsim 0.8$. Notice that the vertical axis is proportional to $Q-M$, so when this is positive, the hairy black holes exceed the usual extremal limit $Q=M$. It is not surprising that one can create black holes with $Q>M$ by adding matter with $q=m$, since one can also do this with neutral matter. The point is simply that the Maxwell equation in \eqref{EOM:S} with $q=0$ implies that the conserved charge is $\oint (1+4\alpha|\psi|^2)\star F$. So the electric charge $Q_{\mathcal{H}}$ on the black hole, defined as \begin{equation} Q_{\mathcal{H}} \equiv \frac{1}{4\pi} \oint_{\mathcal{B}}\star F\,, \end{equation} where $\mathcal{B}$ is the bifurcating Killing surface, will be less than the total charge $Q$ measured at infinity. \begin{figure}[h!]
\centering \vspace{1cm} \includegraphics[width=0.85\textwidth]{phase_diagram.pdf} \caption{The phase diagram of solutions with $q/m =1$ and $\alpha=1$. Hairy black holes exist in the brown shaded region. The blue line denotes the onset of the scalar instability, and the red line denotes the curve with $\mu =1$. Note that these black holes can slightly exceed the usual extremal bound $Q=M$. } \label{fig:phasediag} \end{figure} As mentioned in the introduction, the extremal limit of a black hole with scalar hair is often singular, with vanishing horizon area. This is true for the black holes along the left boundary of the phase diagram. However, despite having the largest charge for given mass, the black holes along the red line with $\mu = 1$ are nonsingular ($S\neq 0$), and remarkably have nonzero Hawking temperature. This is illustrated in Fig.~\ref{fig:constmu}, which shows various physical properties of the $\mu = 1$ black holes, including their entropy $S = A/4$, temperature $T$, $F^2$ on the horizon, and charge on the black hole $Q_{\mathcal{H}}$. \begin{figure}[h!] \centering \vspace{1cm} \includegraphics[width=1\textwidth]{constant_mu_1.pdf} \caption{Physical properties of the maximal warm holes along the red line in Fig.~\ref{fig:phasediag}. The plots show the entropy $S$, temperature $T$, $F^2$ on the horizon, and charge on the black hole, $Q_{\mathcal{H}}$, as a function of black hole mass. Note that despite having the maximum charge for a given mass, these black holes have nonzero Hawking temperature! The implications for Hawking evaporation are discussed in section \ref{sec:hawkingeva}.} \label{fig:constmu} \end{figure} The reason these black holes exist can be understood as follows. As one increases their charge (for fixed mass), the region near the horizon behaves as a typical black hole with scalar hair and wants to become singular.
However, if the mass is large enough, before one reaches a singular horizon, the asymptotic condition \eqref{condition} is saturated. Since one cannot support scalar hair if this bound is violated, and there are no black holes without hair having $Q>M$, the extremal black hole has $T>0$. This is a new kind of extremal black hole that we are calling a maximal warm hole. Increasing the coupling $\alpha$ increases the charge that these maximal warm holes can carry. But it also increases the minimum mass required for the extremal black hole to be nonsingular. Both of these effects can be seen in Fig.~\ref{fig:alpha}, which shows the maximal warm holes in theories with $q=m$ and different couplings $\alpha$. These curves all have $\mu = 1$ and generalize the red curve in Fig.~\ref{fig:phasediag} to larger $\alpha$. The physical properties of these black holes are qualitatively similar to Fig.~\ref{fig:constmu}. In particular, they are all nonsingular with nonzero Hawking temperature. For example, the properties of the black holes when $\alpha = 100$ are shown in Fig.~\ref{fig:tenalpha}. Notice that increasing $\alpha$ increases the extremal temperature only slightly (top-right panel), but greatly decreases the fraction of the charge $Q_{\mathcal{H}}$ that is carried by the black hole (bottom-right panel). Most of the charge is now in the scalar hair, which is not surprising since we have increased the scalar instability. \begin{figure}[h!] \centering \vspace{1cm} \includegraphics[width=0.85\textwidth]{changing_alpha} \caption{Maximal warm holes in theories with $q=m$ and different couplings $\alpha$. These are all nonsingular ($S> 0$) black holes with maximum charge and nonzero $T$. As they approach the solution with minimum mass, $S\to 0$ and $T\to 0$.} \label{fig:alpha} \end{figure} \begin{figure}[h!] \centering \vspace{1cm} \includegraphics[width=\textwidth]{alpha_100} \caption{Physical properties of the maximal warm holes with $q=m$ and $\alpha = 100$.
Comparing with the $\alpha=1$ case of Fig.~\ref{fig:constmu}, we see that increasing $\alpha$ increases the extremal temperature only slightly (top-right panel), but substantially decreases the fraction of the charge carried by the black hole (bottom-right panel).} \label{fig:tenalpha} \end{figure} Next we return to $\alpha = 1$, and consider the effects of changing $q/m$. The existence of maximal warm holes turns out to be very sensitive to this parameter. The smooth black holes with maximum charge for given mass again have the maximum possible potential difference $\mu$ allowed by \eqref{condition}. They are shown in Fig.~\ref{fig:qm}, and all have $T > 0$ (except the leftmost point that approaches $S\to 0$ and $T\to 0$). \begin{figure}[h!] \centering \vspace{1cm} \includegraphics[width=1\textwidth]{changing_q_m} \caption{Black holes with $q\mu = m$ as a function of $q/m$, with $\alpha=1$. When $Q>M$, these are maximal warm holes. The green shaded region denotes RN black holes, and the bottom blue curve denotes the onset of their instability when $q\mu = m$. For masses outside the range of the maximal warm holes, the extremal black hole is singular.} \label{fig:qm} \end{figure} This figure has several interesting features. First, for $q/m > 1$, black holes with scalar hair only exist when the black hole is small enough. This can be understood as follows. If we increase $q/m$ above $1$, the maximum value of $\mu$ is reduced to $\mu \le m/q < 1$. Since the maximum allowed $\mu$ is reduced, the maximum electric field on the horizon is also reduced. But for the RN black hole to become unstable, we need a large enough electric field. Since the electric field increases as one decreases the size of the black hole, only small black holes can have this kind of hair. Second, the mass where maximal warm holes become singular rapidly decreases to zero as $q/m$ increases, and for $q/m \gtrsim 1.1$, maximal warm holes can have arbitrarily small mass.
This is also easy to understand: increasing $q/m$ decreases the maximum allowed $\mu$, so this maximum is reached sooner, before the horizon becomes singular. Third, the maximum charge the hairy black hole can carry also decreases as $q/m$ increases, and for $q/m \gtrsim 1.3$, it falls below $Q = M$. At this point, the maximum charge black hole is the usual RN solution with no scalar hair. However, when they exist, the hairy black holes always have larger entropy than a RN solution with the same $M$ and $Q$. As one increases $Q$ for fixed $M$, the RN solution becomes unstable as before, but if one continues to increase $Q$, one reaches a point where the hair no longer exists and the solution returns to RN. Next we consider decreasing $q / m < 1$. This increases the maximum allowed $\mu$, making it easier for the horizon to become singular before reaching this limit. So the minimum mass required for a maximal warm hole increases, as shown in Fig.~\ref{fig:qm} for the case $q/m=0.994$. There is also a maximum mass, but unlike the case $q/m > 1$, it is not because they no longer satisfy $Q > M$. Instead, it is because a solution with $m = q\mu$ requires $Q > \mu M$; see \eqref{outpsiinf3}. This constraint was not an issue when $\mu \le 1$, but since we have increased $\mu$, this constraint is violated for large $M$ and the extremal limit again becomes singular. The finite range of masses for which the black hole has a smooth extremal limit rapidly shrinks as we decrease $q/m$ and vanishes completely for $q/m \lesssim 0.99$. When the maximal warm holes only exist for large enough masses (as in the top three curves of Fig.~\ref{fig:qm}), the singular extremal black holes lie along curves that extend from the maximal warm hole with smallest mass to $Q=M=0$. For $q/m<1$, they also extend from the maximal warm hole with largest mass to arbitrarily large $M$. For smaller scalar field charges, \emph{i.e.} $q/m \lesssim 0.99$, there are no nonsingular extremal black holes. 
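As a schematic summary, the two necessary conditions discussed above for a nonsingular maximal-charge solution at the saturated potential $\mu = m/q$ — exceeding the usual extremal bound $Q > M$, and the constraint $Q > \mu M$ from \eqref{outpsiinf3} — can be collected into a few lines. This is only a sketch (the function name and inputs are ours); it encodes the stated necessary conditions, not the full field equations or the singular-horizon analysis:

```python
def maximal_warm_hole_allowed(M, Q, q, m):
    """Necessary (not sufficient) conditions, as stated in the text,
    for a smooth maximal-charge black hole at the saturated potential
    mu = m/q: the charge must exceed the usual extremal bound, Q > M,
    and the constraint from eq. (outpsiinf3) requires Q > mu*M.
    Schematic sketch only."""
    mu = m / q
    return Q > M and Q > mu * M

# For q = m (mu = 1), Q > M alone suffices, so no upper mass limit arises:
print(maximal_warm_hole_allowed(100.0, 100.5, 1.0, 1.0))   # True
# For q/m = 0.994 (mu slightly above 1), Q > mu*M fails at large mass even
# though Q > M, matching the maximum-mass behaviour described above:
print(maximal_warm_hole_allowed(100.0, 100.5, 0.994, 1.0))  # False
```

This makes explicit why decreasing $q/m$ below one introduces an upper mass limit: the second condition tightens as $\mu = m/q$ grows past one.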
In a phase diagram like Fig.~\ref{fig:qm}, hairy black holes with this scalar charge are bounded from above by a single curve of singular extremal black holes that extends from $Q=M=0$ to arbitrarily large $Mm$. We might then ask what happens, \emph{e.g.}, to a nonextremal hairy black hole family with fixed $Mm$ as it approaches the singular extremal curve. The evolution of the physical properties of such a black hole with $q/m = 1/2$ and $Mm =1$ as it approaches extremality is shown in Fig.~\ref{fig:halfq2}. (Other choices of mass $Mm$ and small $q/m$ are similar.) One sees that both the black hole entropy and temperature go to zero in the extremal limit (largest $(Q-M)m$ solution). The charge on the black hole also vanishes in this limit, since any residual charge would produce a diverging Maxwell field, increasing the scalar instability. Note that even though the condition \eqref{condition} allows $\mu \le 2$ in this case, the solution becomes singular when $\mu$ is only slightly larger than one. \begin{figure}[h!] \centering \vspace{1cm} \includegraphics[width=1\textwidth]{halfq2.pdf} \caption{Physical properties of black holes approaching extremality, with $q = m/2$, $\alpha = 1$, and $Mm = 1$. This is a representative example of $q/m \lesssim 0.99$ solutions, where the maximum charge hairy black holes always approach a singular extremal solution.} \label{fig:halfq2} \end{figure} As illustrated in Fig.~\ref{fig:qm}, the only case which allows maximal warm holes to have arbitrarily large mass is the original one we studied with $q=m$ (see Fig.~\ref{fig:phasediag} for $\alpha=1$). The reason for this is that, from \eqref{condition}, the maximum allowed $\mu$ is then $\mu = 1$, which is just the potential for an extremal RN black hole of any mass. This has two consequences.
First, since $q=m$ is on the threshold of charged superradiance for extremal RN, any extra source of instability (such as the scalar-Maxwell coupling we added) will cause the scalar field to condense (see also Eq.~(\ref{eq:bound})). One does not need the electric field to be ``large enough" in this case. Second, once the black hole has $Q> M$, the second constraint ($Q > \mu M$) that follows from \eqref{outpsiinf3} is satisfied for all $M$. So there is no upper limit on the mass. \subsection{Hawking evaporation\label{sec:hawkingeva}} Typically, if a theory does not have particles with $q > m$, a near extremal black hole will Hawking radiate neutral massless particles such as gravitons and become extremal. Since an extremal RN black hole has zero Hawking temperature, it is a stable endpoint for this process. For some dilatonic black holes with singular extremal limits, the Hawking temperature does not go to zero at extremality \cite{Garfinkle:1990qj}. But in those cases, it has been shown that evaporation stops because the effective potential in the scalar wave equation does not vanish on the horizon as usual in the extremal limit \cite{Holzhey:1991bx}. Since the horizon is at $r_\star = -\infty$ in the usual ``tortoise" coordinate in which the wave equation is simple, this produces an infinite potential barrier allowing no particles to escape. Since maximal warm holes are smooth black holes with maximal charge and nonzero temperature, we need to find another scenario for the endpoint of their Hawking evaporation. We will not perform a complete analysis including the potentials outside the horizon. Instead, we give a simple plausible explanation for why these black holes will \emph{not} form naked singularities, despite the fact that they have maximal charge and nonzero temperature. Consider first the case $\alpha =1$ and $q=m$. 
Since the temperature of the hairy black holes is low, charged particles are only created by the Schwinger mechanism with a rate proportional to $e^{-\pi m^2/qE}$, while neutral photons and gravitons are produced thermally. Since the photons acquire a mass inside the charged condensate, they will be suppressed compared to gravitons. Nevertheless, since charged particle emission appears exponentially suppressed, one might expect that in the late stages of Hawking evaporation, the black hole will lose mass but not charge. Comparing the scales on the horizontal and vertical axes in Fig.~\ref{fig:phasediag}, this would correspond to an essentially vertical line in the figure. So if $Mm$ is large enough, Hawking evaporation would appear to end on the red line. But since these black holes have nonzero temperature, they would appear to keep radiating. This is the puzzle we want to resolve. The resolution is that the rate of charged particle production is not actually exponentially suppressed, since all the factors in the Schwinger exponent are order one: we have assumed $q = m$, and Fig.~\ref{fig:constmu} shows that $E/m \sim O(1)$. In contrast, the temperature is $T \sim 10^{-3}$, so the rate of thermal radiation would be proportional to $T^4 \sim 10^{-12}$ and is highly suppressed. Thus the late stages of Hawking radiation will be dominated by the production of $q = m$ particles, which should keep $Q-M$ approximately constant. As a result, Hawking radiation causes the black hole to evolve along a horizontal line in Fig.~\ref{fig:phasediag}, rather than a vertical line. This ends in a singular solution as expected. The physical quantities evolve as shown in Fig.~\ref{fig:evap}. Note that the charge on the black hole goes to zero linearly with the mass, as expected from the production of $q=m$ particles. \begin{figure}[h!]
\centering \vspace{1cm} \includegraphics[width=1\textwidth]{evaporation} \caption{Physical properties of the hairy black hole with $q=m$ and $\alpha = 1$ are shown along a line of constant $(Q-M)m = 5\times 10^{-3}$. In the late stages of Hawking evaporation, the black hole is expected to approximately follow such a line with decreasing $M$.} \label{fig:evap} \end{figure} Now suppose $q \ne m$. If we increase $q/m$ above 1.1, we have seen (see discussion of Fig.~\ref{fig:qm}) that there are no singular extremal black holes. But Hawking radiation of these hairy black holes is likely to again be dominated by charged particle emission, which will decrease the black hole charge more than its mass. So the black hole will evolve away from extremality. On the other hand, if we decrease $q/m$ below $0.99$, even charged particle emission will increase $Q-M$, so evaporating black holes will always follow an essentially vertical line in a phase diagram like Fig.~\ref{fig:phasediag}. But we have seen (Fig.~\ref{fig:qm}) that in this case the maximal warm holes disappear and all extremal limits are singular. Thus when $\alpha \approx 1$, the natural endpoint of Hawking evaporation is either a singular extremal solution or possibly a neutral black hole that evaporates completely. The physics of the singular endpoint will of course require a complete quantum theory of gravity. However, the story changes when we increase $\alpha$, since this decreases the electric field on the horizon and increases the black hole temperature. Eventually (certainly before $\alpha = 100$) the electric field becomes too small to create charged particles, and Hawking radiation is dominated by thermal gravitons. Thus we are again faced with the question of what happens when these black holes evaporate past extremality. Since the black hole is evaporating but not losing charge, the horizon area will shrink and the potential difference $\mu$ between the horizon and infinity should increase.
But $\mu$ was already at the maximum value that allows static scalar hair. So the evaporation past extremality will cause the scalar hair to become unbound and start radiating to infinity. At this point there are a couple of possible outcomes, depending on how much scalar field is radiated away. If the scalar field only radiates enough to recover $\mu = 1$, the evolution will essentially follow the $\mu = 1$ curve as $M$ decreases. At the other extreme, all the hair could classically radiate away, leaving a RN black hole. (This option is only possible if the resulting black hole is classically stable.) Finally, it is possible that a fraction of the hair is radiated, leaving a hairy black hole with $\mu < 1$. We will leave it to future investigations to determine which of these possibilities the black hole actually follows. But notice that in no case does the black hole immediately turn into a naked singularity. \section{Solitons} Unlike analogous theories with neutral scalars \cite{Herdeiro:2019oqp}, the theory we are considering also admits soliton solutions, \emph{i.e.} regular horizonless solutions. For completeness we describe them in this section. We will see that their mass and charge satisfy $Q^2 < M^2$, so they coexist with RN black holes. But unlike other systems with scalar condensation, these solitons are not the zero horizon radius limit of the hairy black holes studied in the previous section. Since solitons have no horizon, we have to change our ansatz (\ref{eq:ansatzout}), which was tailored to enforce a zero of $p(r)$. In this section, we thus consider the gravitational ansatz \begin{equation} \mathrm{d}s^2=-f(r)\,\mathrm{d}t^2+\frac{\mathrm{d}r^2}{g(r)}+r^2(\mathrm{d}\theta^2 +\sin^2\theta\ \mathrm{d}\phi^2 )\,, \end{equation} for the soliton with $r\in(0,+\infty)$. As before, for the Maxwell and scalar fields we take \begin{equation} A=\Phi(r)\,\mathrm{d}t \quad \text{and}\quad \psi = \psi^\dagger = \psi(r)\,.
\end{equation} The equations of motion read \begin{subequations} \begin{align} & \frac{1}{r^2}\sqrt{\frac{g}{f}}\left(\sqrt{f\,g}\,r^2\psi^\prime\right)^\prime+\left(\frac{q^2\,\Phi^2+2\,\alpha\,g\,{\Phi^\prime}^2}{f}-m^2\right)\psi=0\,,\label{eq:firstsoliton} \\ & \frac{1}{r^2}\sqrt{\frac{g}{f}}\left[\sqrt{\frac{g}{f}}\left(1+4\,\alpha\,\psi^2\right)r^2\Phi^\prime\right]^\prime-\frac{2\,q^2\,\psi^2}{f}\Phi=0\,,\label{eq:secondsoliton} \\ & \frac{1}{r^2}\left(r\,g\right)^\prime-\frac{1}{r^2}+\frac{g}{f}(1+4\,\alpha\,\psi^2){\Phi^\prime}^2+2\,q^2\,\psi^2\frac{\Phi^2}{f}+2m^2\psi^2+2\,g\,{\psi^\prime}^2=0\,,\label{eq:thirdsoliton} \\ &\frac{g}{r^2\,f}\left(r\,f\right)^\prime-\frac{1}{r^2}+\frac{g}{f}(1+4\,\alpha\,\psi^2){\Phi^\prime}^2-2\,q^2\,\psi^2\frac{\Phi^2}{f}+2m^2\psi^2-2\,g\,{\psi^\prime}^2=0\,.\label{eq:lastsoliton} \end{align} \end{subequations} We can now use \eqref{eq:lastsoliton} to express $g$ as a function of $f$, $\psi$, $\Phi$ and their first derivatives: \begin{equation} g = \frac{2 r^2 \psi^2\left(q^2\Phi^2-m^2f\right)+f}{\left(r\,f\right)^\prime+(1+4\,\alpha\,\psi^2)r^2{\Phi^\prime}^2-2 r^2\,f\,{\psi^\prime}^2}\,. \end{equation} This expression for $g$ can now be plugged in (\ref{eq:firstsoliton})-(\ref{eq:lastsoliton}) to reduce the problem to studying a system of three second order coupled ordinary differential equations for $f$, $\Phi$ and $\psi$. At the spacetime origin, located at $r=0$, we impose regularity, which amounts to requiring \begin{equation} f^\prime(0)=\psi^\prime(0)=\Phi^\prime(0)=0\,. \end{equation} (Note in particular that these conditions imply $g(0)-1=g^\prime(0)=0$, as required.) At the asymptotic boundary we demand \begin{equation} \lim_{r\to+\infty}\psi = 0\,,\quad\lim_{r\to+\infty}f=1\quad \text{and}\quad \lim_{r\to+\infty}\Phi=\mu\,. 
\end{equation} We now introduce a compact radial coordinate $y$ defined as \begin{equation} y=\frac{m\,r}{1+m\,r} \end{equation} so that $y\in(0,1)$ with $y=0$ being the regular center and $y=1$ the spatial infinity. The moduli space of solutions is then three-dimensional depending on $\{\mu,\alpha,q/m\}$ or alternatively $\{m M,\alpha,q/m\}$. However, as we shall shortly see, these parameters do not uniquely parametrize a soliton. Therefore, we will use instead the value of the scalar field at the origin, $\psi_0\equiv\psi(0)$, to move along the moduli space and determine $\{m M,m\,Q\}$ at fixed $\{\alpha,q/m\}$. It turns out that $\psi_0$ is one-to-one with the soliton solutions, at fixed $\{\alpha,q/m\}$. In Fig.~\ref{fig:solitons} we plot the chemical potential $\mu$ as a function of $\psi_0$ for fixed $\alpha=1$ and for several values of $q/m$. The behaviour at large $\psi_0$ is consistent with the following functional form \begin{equation} \mu = \mu_{\infty}+\hat{\mu}_{\infty}e^{-\alpha\,\psi_0^2}\sin\left(\Omega_{\infty}\,\psi_0^2+\gamma_{\infty}\right)\,. \end{equation} For instance, for $q/m=1$ we find $\mu_{\infty}\approx0.9736$, $\hat{\mu}_{\infty}\approx0.0274$, $\Omega_{\infty}\approx 4.061$ and $\gamma_{\infty}\approx-2.70745$. The above asymptotic expression was inspired by the work developed in \cite{Bhattacharyya:2010yg}, where a class of supersymmetric solitonic solutions was studied in great detail. \begin{figure}[h!] \centering \vspace{1cm} \includegraphics[width=1.0\textwidth]{wiggles_soliton} \caption{Chemical potential $\mu$ of the solitons as a function of $\psi_0$ at fixed $\alpha=1$. The legend shows curves with different values of $q/m$.} \label{fig:solitons} \end{figure} Each of the oscillations in Fig.~\ref{fig:solitons} is mapped into characteristic swallowtail curves in the corresponding phase diagram of Fig.~\ref{fig:moduli_solitons}. 
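As an illustration, the fitted asymptotic form above can be evaluated directly. The constants below are the quoted $q/m=1$ fit values; the snippet simply confirms that the Gaussian envelope $e^{-\alpha\psi_0^2}$ damps the oscillations, so $\mu\to\mu_\infty$ at large $\psi_0$ (a numerical sketch of the fit, not part of the construction of the solutions themselves):

```python
import math

# Fitted constants quoted in the text for alpha = 1, q/m = 1.
MU_INF, MU_HAT = 0.9736, 0.0274
OMEGA_INF, GAMMA_INF = 4.061, -2.70745
ALPHA = 1.0

def mu_fit(psi0):
    """Damped-oscillation ansatz for the soliton chemical potential:
    mu_inf + mu_hat * exp(-alpha psi0^2) * sin(Omega psi0^2 + gamma)."""
    return MU_INF + MU_HAT * math.exp(-ALPHA * psi0**2) \
        * math.sin(OMEGA_INF * psi0**2 + GAMMA_INF)

# The oscillation amplitude decays like exp(-alpha psi0^2), so mu
# rapidly approaches mu_inf:
for psi0 in (1.0, 2.0, 4.0):
    print(psi0, abs(mu_fit(psi0) - MU_INF))
```

Each oscillation of $\sin(\Omega_\infty\psi_0^2+\gamma_\infty)$ about this decaying envelope corresponds to one of the wiggles visible in Fig.~\ref{fig:solitons}.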
Fig.~\ref{fig:moduli_solitons} has $\alpha=1$ and serves to illustrate that the properties of solitonic solutions in this theory are somewhat intricate. For any value of $0<|q|/m<1$ we find that solitons only exist in a window of masses $M\in(M_{\min},M_{\max})$. For each $q/m$ curve, $M_{\min}$ in Fig.~\ref{fig:moduli_solitons} corresponds to approaching $\psi_0\to 0$ in Fig.~\ref{fig:solitons}. As we decrease $|q|/m$ towards zero, $M_{\min}$ appears to approach $0$ and the curve becomes increasingly steep (see for instance the curve with $|q|/m=5\times 10^{-3}$ in Fig.~\ref{fig:moduli_solitons}). On the other hand, for each $q/m$, $M_{\max}$ in Fig.~\ref{fig:moduli_solitons} corresponds to the first minimum in the corresponding curve of Fig.~\ref{fig:solitons}. The solution with $q=m$ is special. In this case, when we let $\psi_0\to0$ we approach $M\to+\infty$ and the line $Q-M\to 0$ from below. But there is still a minimum value of the mass $M_{\min}$, which is given by the corresponding minimum in Fig.~\ref{fig:solitons}. Finally, we also found solitons with $|q|/m>1$. In this case, solitons again exist in a window $M\in(M_{\min}, M_{\max})$, with the window shrinking as we increase $|q|/m$ and disappearing altogether at a critical value of $|q|=q_c$. For $\alpha=1$, we find that $q_c\simeq 1.05\,m$. For $|q|>m$, we also find that $\psi_0$ never approaches zero, and is instead cut off by a value $\psi_0^c$, at which point the solution becomes singular, since the Kretschmann curvature scalar at the origin grows unbounded. \begin{figure}[h!] \centering \vspace{1cm} \includegraphics[width=1.0\textwidth]{solitons_moduli_space} \caption{Moduli space of solitonic solutions for fixed $\alpha=1$, and for several values of $q/m$ labelled on the left. The green region indicates where RN black holes exist.
} \label{fig:moduli_solitons} \end{figure} By comparing the mass and charge of the solitons in Fig.~\ref{fig:moduli_solitons} with the mass and charge of the hairy black holes, one finds that they do not overlap. So the solitons cannot be viewed as the limit of the hairy black holes as $r_+\rightarrow 0$. \section{Discussion} We have shown that just by adding a simple coupling between a charged scalar field and a Maxwell field, one can change some basic properties of four-dimensional, asymptotically flat, extremal black holes. In particular, for a range of parameters the black hole with maximum charge (for given mass) has a smooth horizon with nonzero Hawking temperature. We have called these objects maximal warm holes. The existence of maximal warm holes raises a number of questions. We have (partially) addressed perhaps the most obvious one, concerning the endpoint of Hawking evaporation. But in addition to gaining a more complete understanding of this process, there are a number of other questions which we leave for future investigation. These include the following: \begin{enumerate} \item What characterizes the class of theories in which maximal warm holes occur? \item Do maximal warm holes have implications for black hole physics besides Hawking radiation? \item Can maximal warm holes exist in more than four spacetime dimensions? \item Are there asymptotically anti-de Sitter examples of maximal warm holes? If so, what are the implications for the AdS/CFT correspondence? They do not exist in the simplest models of holographic superconductors \cite{Horowitz:2009ij}, but they might exist in theories with additional interactions. \item Are there asymptotically de Sitter examples of maximal warm holes? This seems unlikely, since one would need to ensure that there is no flux across both the cosmological and event horizons. \item How does the addition of rotation affect maximal warm holes?
It is known that Kerr black holes can develop massive scalar hair near extremality even without additional interactions \cite{Herdeiro:2014goa,Herdeiro:2015gia}. Can extremal neutral black holes have nonzero temperature? \end{enumerate} \noindent We hope to report on some of these questions in the future. \subsection*{Acknowledgments} O.J.C.D. acknowledges financial support from the STFC Grants ST/P000711/1 and ST/T000775/1. The work of G.~H. was supported in part by NSF Grant PHY-2107939. J.~E.~S has been partially supported by STFC consolidated grants ST/P000681/1, ST/T000694/1. The numerical component of this study was partially carried out using the computational facilities of the Fawcett High Performance Computing system at the Faculty of Mathematics, University of Cambridge, funded by STFC consolidated grants ST/P000681/1, ST/T000694/1 and ST/P000673/1. The authors further acknowledge the use of the IRIDIS High Performance Computing Facility, and associated support services at the University of Southampton, in the completion of this work.