\section{Introduction} Einstein's gravity has produced a profound understanding of our universe through both theoretical and experimental evidence. The observation of gravitational waves from a binary black hole merger, as reported in Ref. \cite{Abbott:2016blz}, provided one of the crucial tests of general relativity (GR). Despite this huge success, the last three decades have brought some questions that could not be answered by GR. These questions are related both to theoretical aspects and to observational results. The very interesting review \cite{Capozziello:2011et} points out basically two classes of ``shortcomings in GR'' on the UV and IR scales. In the UV region one has the quantum gravity problem, and in the IR regime, the dark energy and dark matter issues. In order to address these questions, new approaches known as extended theories of gravity (ETG) were proposed. Such theories start with the inclusion of higher-order terms in curvature invariants in the effective Lagrangian as, for instance, $R^2$ and $R^{\alpha \beta \gamma \delta} R_{\alpha \beta \gamma \delta}$ \cite{Gottlober:1989ww, Adams:1990pn, Amendola:1993bg}, or through minimal or non-minimal couplings of scalar fields with the geometry as, for example, $\phi^2 R$ \cite{Maeda:1988ab, Wands:1993uu, Capozziello:1998dq}. The approach which takes into account a single scalar field in general relativity is known as Horndeski’s gravity \cite{Horndeski, Charmousis:2011bf, Charmousis:2011ea, Starobinsky:2016kua, Bruneton:2012zk, Cisterna:2014nua, Maselli:2016gxk, Heisenberg:2018vsk, Hajian:2020dcq}. This model is quite interesting because it is the most general scalar-tensor theory with second-order field equations in four dimensions. Besides Horndeski’s gravity, in this work we will consider two other essential components. The first one is related to the AdS/CFT correspondence or duality, whose fundamental concepts can be seen in Refs. \cite{Maldacena:1997re, Gubser:1998bc, Witten:1998qj, Aharony:1999ti}. Over the more than two decades since its emergence, investigations around AdS/CFT have kept bringing us great insights into the study of strongly coupled systems. Among the many interesting features of this correspondence one should notice the possibility of building models on the gravity side which are dual to phases of a nonconformal plasma at finite temperature or density. It is worthwhile to mention that in the recent Refs. \cite{Jiang:2017imk, Baggioli:2017ojd, Liu:2018hzo, Li:2018kqp, Li:2018rgn}, the authors present some applications of AdS/CFT in the Horndeski scenario. The inclusion of another boundary in the original AdS/CFT duality leads to the AdS/BCFT correspondence, which has attracted a lot of attention in recent years. This proposal was presented by Takayanagi \cite{Takayanagi:2011zk} and soon after by Fujita, Takayanagi and Tonni \cite{Fujita:2011fp} as an extension of the standard AdS/CFT correspondence. The main point of AdS/CFT is the fact that the AdS$_{d+1}$ space is dual to a conformal field theory in $d$ dimensions. In this case the AdS$_{d+1}$ symmetry, which is $SO(2,d)$, is the same as the conformal symmetry of the CFT$_d$. However, when one adds a new $(d-1)$-dimensional boundary to the CFT$_d$, one notices the breaking of $SO(2,d)$ into an $SO(2,d-1)$ group.
In this sense, due to the insertion of this new boundary, the theory is known as a boundary conformal field theory (BCFT), and one can then construct a correspondence called AdS/BCFT \cite{Takayanagi:2011zk, Fujita:2011fp, Nozaki:2012qd, Fujita:2012fp, Melnikov:2012tb, Magan:2014dwa}.\footnote{As pointed out in Ref. \cite{Fujita:2011fp}, the relation between holography and BCFT was presented in the early 2000s, as shown in Refs. \cite{Karch:2000ct, Karch:2000gx}. } In particular, in this work we are going to deal with an AdS$_3$/BCFT$_2$ correspondence. As we know, in the standard AdS/CFT correspondence we have an asymptotically AdS spacetime N, which has a boundary M with a Dirichlet boundary condition on it. On the other hand, following the AdS/BCFT prescription, we introduce an additional boundary\footnote{Note that the boundary Q in general is not asymptotically AdS.} Q wrapping N, whose intersection with M is the manifold P, as shown in Fig. \ref{BCFT}. On the hypersurface Q, the bulk metric of N should satisfy a Neumann boundary condition. Also, by looking at Fig. \ref{BCFT}, one should notice that the $d$-dimensional spacetime M is bounded by P, which also bounds Q. Within this construction, the $(d+1)$-dimensional spacetime N is bounded by the region defined by $M \cup Q$. \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.30]{BCFT.pdf} \caption{The figure depicts the holographic description of BCFT, where we have the asymptotically AdS bulk spacetime N with conformal boundary M and additional boundary Q. Here P is the intersection of M and Q.}\label{BCFT} \end{center} \end{figure} The second essential component of this work is to deal with a finite temperature theory within the AdS/CFT correspondence. Following the standard procedure, we include a black hole in the bulk geometry and interpret the Hawking temperature as the temperature of the CFT side. In the past, $(2+1)$-dimensional gravity was considered a toy model since, as pointed out in Ref. \cite{Carlip:1995qv}, it has neither a Newtonian limit nor any propagating degrees of freedom. However, after the work of Bañados, Teitelboim, and Zanelli \cite{Banados:1992wn, Banados:1992gq}, it was realized that such a $(2+1)$-dimensional theory has a solution, known as the BTZ black hole, with some interesting features: an event horizon (and, in some cases, an additional inner horizon, if one includes rotation) presenting thermodynamic properties somewhat similar to those of black holes in $(3+1)$ dimensions, and being asymptotically anti-de Sitter.\footnote{Usually black holes are asymptotically flat.} For our purposes in this paper we will choose to work with a planar BTZ black hole with a non-trivial axis profile. \section{Methodological route and achievements} Motivated by the recent applications of the AdS/CFT duality in Horndeski gravity, together with the emergence of AdS/BCFT, and taking into account the importance of $(2+1)$-dimensional black holes, in this work we establish the AdS/BCFT correspondence in Horndeski gravity and study the thermodynamics of the corresponding AdS-BTZ black hole. Here we present a summary of the main results achieved in this work: \begin{itemize} \item First, we studied the influence of the Horndeski parameters on the BCFT.
Apart from a complete numerical solution, we derived an approximate analytical solution, useful to determine the role of the Q profile and to perform the analysis of all quantities in this work; \item We constructed a holographic renormalization for this setup and computed the free energy for both the AdS-BTZ black hole and thermal AdS; \item From the free energy, we computed the total and boundary entropies. In the case of the boundary entropy, one can see it as an extension of the results found in Refs. \cite{Takayanagi:2011zk, Fujita:2011fp}; \item Assuming that the total entropy and the total area of the AdS-BTZ black hole are related by the Bekenstein-Hawking formula, we could see that the influence of the Horndeski gravity enables an increase of the black hole area as we increase the absolute value of the Horndeski parameter. This feature of our model is not present in the usual BCFT, as discussed, for instance, in Refs. \cite{Takayanagi:2011zk, Fujita:2011fp, Magan:2014dwa}. \item At zero temperature our setup exhibits a non-zero or residual boundary entropy, at least under certain conditions which depend on the tension of the Q profile. Besides, zero entropy seems to imply a minimum non-zero temperature. \item From the free energy we also computed thermodynamic observables such as the heat capacity, sound speed, and trace anomaly, and plotted their behavior against the temperature. In particular, the trace anomaly goes to zero at high temperatures, indicating a restoration of the conformal symmetry or a non-trivial BCFT. \item We studied the Hawking-Page phase transition (HPPT) in this setup. The presence of the Horndeski term allows us to analyse this transition of the free energy as a function of the temperature, as in other higher-dimensional theories. This differs from the results presented in Refs. \cite{Takayanagi:2011zk, Fujita:2011fp}, where the authors plot the free energy as a function of the tension of the Q profile. \end{itemize} This work is organized as follows. In Section \ref{v1}, we present our gravitational setup and how to combine it with BCFT. In Section \ref{v2}, we consider a BTZ black hole in Horndeski gravity and study the influence of the Horndeski parameter on the Q profile. In Section \ref{v3}, by performing a holographic renormalization, we compute the euclidean on-shell actions associated with the BTZ and thermal AdS spaces. In Section \ref{BTZentro}, from the euclidean on-shell action, we derive the BTZ black hole entropy, and in Section \ref{v4}, we present a systematic study of its thermodynamic quantities. In Section \ref{v5}, we present the Hawking-Page phase transition between the BTZ black hole and thermal AdS space. Finally, in Section \ref{v6} we present our conclusions and final comments. \section{The Setup}\label{v1} \subsection{Horndeski’s Lagrangian} Here, in this section, we present an outline of Horndeski's gravity. The complete Horndeski Lagrangian density can be written in a general form as: \begin{eqnarray}\label{LH} {\cal L}_H = {\cal L}_{EH} + {\cal L}_2 + {\cal L}_3 + {\cal L}_4 + {\cal L}_5\,, \end{eqnarray} \noindent where ${\cal L}_{EH}=\kappa(R-2\Lambda)$ is the Einstein-Hilbert Lagrangian density with $\kappa=(16\pi G_N)^{-1}$, $G_N$ Newton's gravitational constant, $R$ the Ricci scalar, $\Lambda$ the cosmological constant, and\footnote{Since the publication of Ref. \cite{Charmousis:2011bf} one usually refers to ${\cal L}_2, {\cal L}_3, {\cal L}_4$ and ${\cal L}_5$ in Eq.
\eqref{4L} as the {\it Fab Four} Lagrangians.} \begin{eqnarray} {\cal L}_2 &=& G_2(X, \phi)\,, \nonumber \\ {\cal L}_3 &=& -G_3(X, \phi) \Box \phi\,, \nonumber \\ {\cal L}_4 &=& -G_4(X, \phi) R + \partial_X G_4(X, \phi) \delta^{\mu \nu}_{\alpha \beta} \nabla^{\alpha}_{\mu} \phi \nabla^{\beta}_{\nu} \phi\,, \nonumber \\ {\cal L}_5 &=& -G_5(X, \phi) G_{\mu \nu} \nabla^{\mu} \nabla^{\nu}\phi - \frac{1}{6} \partial_X G_5(X, \phi) \delta^{\mu \nu \rho}_{\alpha \beta \gamma} \nabla^{\alpha}_{\mu} \phi \nabla^{\beta}_{\nu} \phi \nabla^{\gamma}_{\rho} \phi \,, \label{4L} \end{eqnarray} \noindent with $G_2$, $G_3$, $G_4$, and $G_5$ being arbitrary functions of the scalar field $\phi$ and of $X$, defined by $X \equiv - \frac{1}{2} \nabla_{\mu} \phi \nabla^{\mu} \phi$, while $G_{\mu \nu}=R_{\mu\nu} -\frac 12 g_{\mu\nu}R$ is the Einstein tensor, and $g_{\mu\nu}$ is the spacetime metric. For a detailed review on Horndeski's gravity, one can see Ref. \cite{Kobayashi:2019hrl}. In particular, we are interested in a special subclass of Horndeski’s gravity which has a non-minimal coupling between the standard scalar term and the Einstein tensor \cite{Charmousis:2011bf,Charmousis:2011ea,Starobinsky:2016kua,Bruneton:2012zk,Brito:2019ose,Santos:2020xox}. In this sense, Eq. \eqref{LH} becomes: \begin{equation}\label{HFE} {\cal L}_H \equiv {\cal L}_{EH} + {\cal L}_2 =(R-2\Lambda)-\frac{1}{2}(\alpha g_{\mu\nu}-\gamma G_{\mu\nu})\nabla^{\mu}\phi\nabla^{\nu}\phi\,, \end{equation} \noindent where the parameters $\alpha$ and $\gamma$, which control the strength of the kinetic couplings, have mass dimensions zero and $-2$, respectively. Note that the Lagrangian density in Eq. \eqref{HFE} is invariant under the displacement symmetry $\phi\to\phi\, +$ constant and under the parity transformation $\phi\to-\phi$. \subsection{AdS$_3$/BCFT$_2$ correspondence with Horndeski $\gamma$-dependence} Here, in this section, we discuss the AdS/BCFT correspondence within the Horndeski gravity. As discussed in Refs. \cite{Takayanagi:2011zk,Fujita:2011fp}, for the construction of boundary systems we need to take into account a Gibbons-Hawking surface term. In addition, such a surface term for the Horndeski $\gamma$-dependent gravity was proposed in Ref. \cite{Li:2018rgn}. Motivated by these works, we propose the total action including the contributions coming from the surfaces N, Q and P, besides matter terms from N and Q and the counterterms from P:\footnote{One can recall the AdS/BCFT geometry from Fig. \ref{BCFT}.} % \begin{eqnarray} S&=&S^{N}+S^{N}_{mat}+S^{Q}+S^{Q}_{mat}+S^{P}_{ct}\,, \label{S} \end{eqnarray} where $S^{N}_{mat}$ describes ordinary matter, which is assumed to be a perfect fluid, and \begin{eqnarray} &&S^{N}=\kappa\int_{N}{d^{3}x\sqrt{-g}\mathcal{L}_{H}}\\ &&S^{Q}=2\kappa\int_{bdry}{d^{2}x\sqrt{-h}\mathcal{L}_{bdry}}\\ &&S^{Q}_{mat}=2\int_{Q}{d^{2}x\sqrt{-h}\mathcal{L}_{mat}}\\ &&S^{P}_{ct}=2\kappa\int_{ct}{d^{2}x\sqrt{-h}\mathcal{L}_{ct}}\,, \end{eqnarray} where $\mathcal{L}_{H}$ was defined in Eq.
\eqref{HFE} and \begin{eqnarray} \mathcal{L}_{bdry}&=&(K-\Sigma)+\frac{\gamma}{4}(\nabla_{\mu}\phi\nabla_{\nu}\phi n^{\mu}n^{\nu}-(\nabla\phi)^{2})K+\frac{\gamma}{4}\nabla_{\mu}\phi\nabla_{\nu}\phi K^{\mu\nu}\,, \label{3}\\ \mathcal{L}_{ct}&=&c_{0}+c_{1}R+c_{2}R^{ij}R_{ij}+c_{3}R^{2}+b_{1}(\partial_{i}\phi\partial^{i}\phi)^{2}+...\label{4} \end{eqnarray} Note that $\mathcal{L}_{mat}$ is the Lagrangian of possible matter fields on Q, and $\mathcal{L}_{bdry}$ corresponds to the Gibbons-Hawking $\gamma$-dependent terms associated with the Horndeski gravity. In the boundary Lagrangian, Eq. \eqref{3}, $K_{\mu\nu}=h^{\beta}_{\mu}\nabla_{\beta}n_{\nu}$ is the extrinsic curvature, $h_{\mu\nu}$ is the induced metric, and $n^\mu$ is the normal vector, both defined on the hypersurface Q. The trace of $K_{\mu\nu}$ is $K=h^{\mu\nu}K_{\mu\nu}$, and $\Sigma$ is the boundary tension on Q. Furthermore, ${\cal L}_{ct}$ contains the boundary counterterms localized on P, which is required to be an asymptotically AdS spacetime. By imposing a Neumann boundary condition in Eq. \eqref{3}, we obtain\footnote{For more details on the geometry one can see \cite{Takayanagi:2011zk,Fujita:2011fp,Melnikov:2012tb,Magan:2014dwa}. Regarding the choice of boundary condition, one can see Ref. \cite{Compere:2008us}, where the authors discuss the Neumann boundary condition, among others.} \begin{eqnarray} K_{\alpha\beta}-h_{\alpha\beta}(K-\Sigma)+\frac{\gamma}{4}H_{\alpha\beta}=\kappa {\cal S}^{Q}_{\alpha\beta}\,,\label{5} \end{eqnarray} where we defined \begin{eqnarray} &&H_{\alpha\beta}\equiv(\nabla_{\alpha}\phi\nabla_{\beta}\phi n^{\alpha}n^{\beta}-(\nabla\phi)^{2})(K_{\alpha\beta}-h_{\alpha\beta}K)-(\nabla_{\alpha}\phi\nabla_{\beta}\phi)h_{\alpha\beta}K\,,\label{6}\\ &&{\cal S}^{Q}_{\alpha\beta}=-\frac{2}{\sqrt{-h}}\frac{\delta S^{Q}_{mat}}{\delta h^{\alpha\beta}}\,.\label{7} \end{eqnarray} Considering $S^{Q}_{mat}$ as a constant, one has ${\cal S}^{Q}_{\alpha\beta}=0$.
Then, we can write \begin{eqnarray} K_{\alpha\beta}-h_{\alpha\beta}(K-\Sigma)+\frac{\gamma}{4}H_{\alpha\beta}=0\,.\label{8} \end{eqnarray} On the gravitational side, for Einstein-Horndeski gravity, assuming $S^{N}_{mat}$ constant, varying $S^N$ with respect to $g_{\alpha\beta}$ and $\phi$, and $S^Q$ with respect to $\phi$, respectively, we have: \begin{eqnarray} {\cal E}_{\alpha\beta}[g_{\mu\nu},\phi]=-\frac{2}{\sqrt{-g}}\frac{\delta S^{N}}{\delta g^{\alpha\beta}}\,,\quad {\cal E}_{\phi}[g_{\mu\nu},\phi]=-\frac{2}{\sqrt{-g}}\frac{\delta S^{N}}{\delta\phi} \,,\quad {\cal F}_{\phi}[g_{\mu\nu},\phi]=-\frac{2}{\sqrt{-h}}\frac{\delta S^{Q}}{\delta\phi} \,.\nonumber\\ \end{eqnarray} % Then, one finds: \begin{eqnarray} {\cal E}_{\mu\nu}[g_{\mu\nu},\phi]&=&G_{\mu\nu}+\Lambda g_{\mu\nu}-\frac{\alpha}{2}\left(\nabla_{\mu}\phi\nabla_{\nu}\phi-\frac{1}{2}g_{\mu\nu}\nabla_{\lambda}\phi\nabla^{\lambda}\phi\right)\label{11}\nonumber\\ &-&\frac{\gamma}{2}\left(\frac{1}{2}\nabla_{\mu}\phi\nabla_{\nu}\phi R-2\nabla_{\lambda}\phi\nabla_{(\mu}\phi R^{\lambda}_{\nu)}-\nabla^{\lambda}\phi\nabla^{\rho}\phi R_{\mu\lambda\nu\rho}\right)\nonumber\\ &-&\frac{\gamma}{2}\left(-(\nabla_{\mu}\nabla^{\lambda}\phi)(\nabla_{\nu}\nabla_{\lambda}\phi)+(\nabla_{\mu}\nabla_{\nu}\phi)\Box\phi+\frac{1}{2}G_{\mu\nu}(\nabla\phi)^{2}\right)\nonumber\\ &+&\frac{\gamma g_{\mu\nu}}{2}\left(-\frac{1}{2}(\nabla^{\lambda}\nabla^{\rho}\phi)(\nabla_{\lambda}\nabla_{\rho}\phi)+\frac{1}{2}(\Box\phi)^{2}-(\nabla_{\lambda}\phi\nabla_{\rho}\phi)R^{\lambda\rho}\right),\\ {\cal E}_{\phi}[g_{\mu\nu},\phi]&=&\nabla_{\mu}[(\alpha g^{\mu\nu}-\gamma G^{\mu\nu})\nabla_{\nu}\phi]\,,\label{12}\\ {\cal F}_{\phi}[g_{\mu\nu},\phi]&=&\frac{\gamma}{4}(\nabla_{\mu}\nabla_{\nu}\phi n^{\mu}n^{\nu}-(\nabla^{2}\phi))K+\frac{\gamma}{4}(\nabla_{\mu}\nabla_{\nu}\phi)K^{\mu\nu}\,.\label{12.1} \end{eqnarray} Note that, from the Euler-Lagrange equation, ${\cal E}_{\phi}[g_{\mu\nu},\phi]={\cal F}_{\phi}[g_{\mu\nu},\phi]$. \section{Q-profile within BTZ black hole in Horndeski gravity}\label{v2} In this section, we will describe our BTZ black hole and construct the profile of the hypersurface Q, taking into account the influence of the Horndeski gravity. The BTZ black hole is defined in three dimensions as \cite{Banados:1992wn,Banados:1992gq}: \begin{eqnarray} ds^{2}=\frac{L^{2}}{r^{2}}\left(-f(r)dt^{2}+dy^{2}+\frac{dr^{2}}{f(r)}\right)\,.\label{13} \end{eqnarray} A condition that deals with static configurations of black holes, which can be spherically symmetric for certain Galileons, was presented in Ref. \cite{Bravo-Gaete:2013dca} to discuss the no-hair theorem. However, to evade this no-hair theorem, we have to keep the radial component of the conserved current vanishing identically, without restricting the radial dependence of the scalar field: \begin{equation} \alpha g_{rr}-\gamma G_{rr}=0\label{14}. \end{equation} From this condition we have ${\cal E}_{\phi}[g_{rr},\phi]=0$. Thus, we consider just $\phi=\phi(r)$ and define $\phi^{'}(r)\equiv\psi(r)$. It can be shown that the equations ${\cal E}_{\phi}[g_{rr},\phi]={\cal E}_{rr}[g_{rr},\phi]=0$ are satisfied, and they will be used to calculate the horizon function $f(r)$ and $\psi(r)$, so that: \begin{eqnarray} f(r)&=&\frac{\alpha L^{2}}{3\gamma}-\left(\frac{r}{r_{h}}\right)^{2},\label{15}\\ \psi^{2}(r)&=&-\frac{2L^{2}(\alpha+\gamma\Lambda)}{\alpha\gamma r^{2}f(r)}.\label{16} \end{eqnarray} In addition, in Eq.
\eqref{15}, we choose the effective AdS radius as $L^{-2}=\alpha/(3\gamma)$ \cite{Anabalon:2013oea,Santos:2020xox}. One can note that these solutions can be asymptotically dS or AdS under the conditions $\alpha/\gamma>0$ and $\alpha/\gamma<0$, respectively. The scalar field given by Eq.~(\ref{16}) should be real, and then we impose the constraints $\alpha>0$ and $\gamma<0$. The Hawking temperature is given by % \begin{equation}\label{hawk} T_{H}= \dfrac{1}{4\pi} |f'(r_h)| =\frac{1}{2\pi r_{h}}\,, \end{equation} which is equal to the temperature of the dual BCFT, $T_{BCFT}=T_{H}$. Now, in order to construct the Q boundary profile, one has the induced metric on the hypersurface Q given by \begin{eqnarray} ds^{2}_{\rm ind}=\frac{L^{2}}{r^{2}}\left(-f(r)dt^{2}+\frac{g^{2}(r)dr^{2}}{f(r)}\right)\,, \end{eqnarray} where $g^{2}(r)=1+{y'}^{2}(r)f(r)$ and $y{'}(r)=dy/dr$. Then, the normal vector on Q can be written as \begin{eqnarray} n^{\mu}=\frac{r}{Lg(r)}\, \left(0,\, 1, \, -{f(r)y{'}(r)}\right)\,.\label{17} \end{eqnarray} Fulfilling the no-hair theorem, meaning ${\cal F}_{\phi}[h_{rr},\phi]=0$, one can solve Eq. \eqref{8}, so that \begin{eqnarray} y{'}(r)&=&\frac{(\Sigma L)}{\sqrt{1+\dfrac{\gamma\psi^{2}(r)}{4}-(\Sigma L)^{2}\left(1-\left(\dfrac{r}{r_{h}}\right)^{2}\right)}}\,, \label{19} \end{eqnarray} \noindent with $\psi(r)$ given by Eq. \eqref{16}, and \begin{eqnarray}y{'}(r)&=&\frac{(\Sigma L)}{\sqrt{1-\dfrac{\xi}{r^{2}\left(1-\left(\dfrac{r}{r_{h}}\right)^{2}\right)}-(\Sigma L)^{2}\left(1-\left(\dfrac{r}{r_{h}}\right)^{2}\right)}}\,, \end{eqnarray} where \begin{eqnarray}\label{xi} \xi&=&\frac{6\gamma}{\alpha}\left(1+\frac{\gamma\Lambda}{\alpha}\right)\,. \end{eqnarray} Note that $\xi$ is negative since $\alpha>0$ and $\gamma<0$. Besides, we can introduce $\Sigma L=\cos(\theta{'}) $ with $\theta{'}$ the angle between the positive direction of the $y$ axis and the hypersurface Q. \begin{figure}[!ht] \vskip 1cm \begin{center} \includegraphics[scale=0.48]{f01.pdf} \includegraphics[scale=0.48]{f02.pdf} \caption{The figures show the Q boundary profile for the BTZ black hole within Horndeski gravity, considering the values $\theta=\theta'=2\pi/3$, $\kappa=1/4$, $\Lambda=-1$, $\alpha=8/3$ with $\gamma=0$ ({\sl solid}), $\gamma=-0.1$ ({\sl dashed}), $\gamma=-0.2$ ({\sl dot dashed}), and $\gamma=-0.3$ ({\sl thick}). The dashed parallel vertical lines represent the UV solution, Eq. \eqref{19.2}. The region between the curves Q represents the bulk N. {\sl Left panel:} we show the complete numerical solution of Eq. (\ref{19}). {\sl Right panel:} we show the approximate solution for small values of $\xi$, from Eq. (\ref{19.3}). }\label{p0} \label{ylinhaz} \end{center} \end{figure} The equation for $ y{'}(r)$ can be solved numerically, and we can obtain the Q-profile for the $\gamma$-dependent Horndeski terms as shown in the left panel of Fig. \ref{p0}. Beyond the numerical solutions, we can analyze some particular cases regarding the study of the UV and IR regimes. Thus, for the UV case, performing an expansion as $r\to 0$, Eq. (\ref{19}) becomes \begin{eqnarray} y_{_{UV}}(r)=y_{0}+\frac{r\cos(\theta{'})}{\sqrt{-\xi}}.\label{19.1} \end{eqnarray} In the above equation, considering $\xi\to-\infty$, we have \begin{eqnarray} y_{_{UV}}(r)=y_{0}={\rm constant}.\label{19.2} \end{eqnarray} This is equivalent to keeping $\xi$ finite and taking the zero-tension limit $\Sigma\to 0$. Now, for the IR case, we take $r\to\infty$, so that Eq.
\eqref{16} implies $\psi(r\to\infty)=0$, and then $\phi=$ constant, which ensures a genuine vacuum solution. Plugging this result into Eq. (\ref{19}), in the limit $r\to\infty$, we have \begin{eqnarray} y_{_{IR}}(r)=y_{0}+r_{h}\ln(r).\label{19.22} \end{eqnarray} Another approximate analytical solution for $y(r)$ can be obtained by performing an expansion for $\xi$ very small in Eq. (\ref{19}). Considering this expansion up to first order, we obtain \begin{eqnarray} y_{_Q}\equiv y(r)&=&y_{0}+r_{h}\sinh^{-1}\left[\frac{r}{r_{h}}\cot(\theta{'})\right] +\frac{\xi\cos(\theta{'})}{2r_{h}}\tan^{-1}\left[\frac{r}{r_{h}\sqrt{1-\cos^{2}(\theta{'})f(r)}}\right] \cr &+&\frac{\xi\cos(\theta{'})}{2} \frac{\sqrt{1-\cos^{2}(\theta{'})f(r)}}{{r(-1+\cos^{2}(\theta{'}))^{2}}} \left[{1+\frac{r^{2}\cos^{4}(\theta{'})}{r^{2}_{h}-r^{2}_{h}\cos^{2}(\theta{'})f(r)}}\right] +\mathcal{O}(\xi^{2})\,. \label{19.3} \end{eqnarray} In the right panel of Fig. \ref{p0}, we plot the $y_{_Q}=y(r)$ profile from Eq. \eqref{19.3}, which represents our holographic description of BCFT within Horndeski's theory. Note that the bulk spacetime N is asymptotically AdS with two boundaries M and Q. The intersection of M and Q is represented by P in Fig. \ref{BCFT}. It is worthwhile to mention that the Q profile is obtained from the solution $y_{_Q}=y(r)$. Note that the UV solution $y_{_{UV}}(r)=$ constant, Eq. \eqref{19.2}, is similar to a lower-dimensional Randall-Sundrum (RS) brane, which is perpendicular to the boundary M.\footnote{A gravity theory containing solutions with non-zero tension RS branes was presented in Ref. \cite{Nozaki:2012qd}.} These RS-like branes are represented in Fig. \ref{p0} by the dashed parallel vertical lines. Further, as one increases the absolute value of the Horndeski parameter $\gamma$, one can see that the surface Q gets closer to the RS-like branes. \section{Holographic renormalization}\label{v3} In this section we will present the holographic renormalization scheme in order to compute the euclidean on-shell action, which is related to the free energy of the corresponding thermodynamic system.\footnote{One should notice that the free energy can also be calculated {\it via} the canonical thermodynamic potential, by using the black hole entropy and the first law of thermodynamics. This approach can be seen, for instance, in Ref. \cite{Gursoy:2017wzz}.} Holographic renormalization, as it is called within the AdS/CFT program, is a systematic approach to remove divergences from otherwise infinite quantities on the gravitational side of the correspondence \cite{Henningson:1998gx,deBoer:1999tgo}. Such a renormalization on the gravity side works similarly to the usual renormalization of the gauge field theory on the boundary. Our holographic scheme will take into account the contributions of the AdS/BCFT correspondence within Horndeski gravity. Let us start with the euclidean action given by $I_{E}=I_{bulk}+2I_{bdry}$, i.e., \begin{eqnarray} &&I_{bulk}= -\frac{1}{16\pi G_{N}}\int_{N}{d^{3}x\sqrt{g}\left[(R-2\Lambda)-\frac{\gamma}{2}G_{\mu\nu}\nabla^{\mu}\phi\nabla^{\nu}\phi\right]}\cr &&-\frac{1}{8\pi G_{N}}\int_{M}{d^{2}x\sqrt{\bar{\gamma}}\left[(K^{(\bar{\gamma})}-\Sigma^{(\bar{\gamma})})+\frac{\gamma}{4}(\nabla_{\mu}\phi\nabla_{\nu}\phi n^{\mu}n^{\nu}-(\nabla\phi)^{2})K^{(\bar{\gamma})}\right.
\left.+\frac{\gamma}{4}\nabla^{\mu}\phi\nabla^{\nu}\phi K^{(\bar{\gamma})}_{\mu\nu}\right]},\cr &&\label{BT} \end{eqnarray} where $g$ is the determinant of the metric $g_{\mu\nu}$ on the bulk N, the induced metric and the surface tension on M are $\bar{\gamma}$ and $\Sigma^{(\bar{\gamma})}$, respectively, and the trace of the extrinsic curvature on the surface M is $K^{(\bar{\gamma})}$. On the other hand, for the boundary, one has \begin{eqnarray} I_{bdry}&=&-\frac{1}{8\pi G_{N}}\int_{Q}{d^{2}x\sqrt{h}\left[(K-\Sigma)+\frac{\gamma}{4}(\nabla_{\mu}\phi\nabla_{\nu}\phi n^{\mu}n^{\nu}-(\nabla\phi)^{2})K+\frac{\gamma}{4}\nabla^{\mu}\phi\nabla^{\nu}\phi K_{\mu\nu}\right]}\nonumber\\ &&-\frac{1}{16\pi G_{N}}\int_{N}{d^{3}x\sqrt{g}\left[(R-2\Lambda)-\frac{\gamma}{2}G_{\mu\nu}\nabla^{\mu}\phi\nabla^{\nu}\phi\right]}.\label{BT1} \end{eqnarray} Through the AdS/CFT correspondence, we know that IR divergences in AdS correspond to UV divergences in the CFT. This relation is known as the IR-UV connection. Thus, for the AdS-BTZ black hole, we can remove this IR divergence by introducing a cutoff $\epsilon$: \begin{eqnarray} I_{bulk}&=&\frac{1}{8\pi G_{N}}\int^{2\pi r_{h}}_{0}\int^{y}_{y_{0}}\int^{r_{h}}_{\epsilon}{\frac{L}{r^{3}}d\tau dydr}+\frac{1}{32\pi G_{N}}\int^{2\pi r_{h}}_{0}\int^{y}_{y_{0}}\int^{r_{h}}_{\epsilon}{\frac{L^{3}}{r^{3}}\gamma G^{rr}\psi^{2}d\tau dydr}\nonumber\\ &&-\frac{1}{8\pi G_{N}}\int^{2\pi r_{h}}_{0}\int^{y}_{y_{0}}{\frac{L\sqrt{f(\epsilon)}}{\epsilon^{2}}d\tau dy}\,.\label{BT2} \end{eqnarray} Note that the coordinate $y$ in this equation, associated with the AdS-BTZ black hole, is not the same as $y_{_Q}=y(r)$ related to the Q-profile discussed in Section \ref{v2}. Then, we have for the bulk term: \begin{eqnarray} &&I_{bulk}=-\frac{L\Delta y}{8r_{h}G_{N}}\left(1-\frac{\xi}{4L^{2}}\right)+\mathcal{O}(\epsilon)\,, \label{BT3} \end{eqnarray} where $\Delta y\equiv y-y_0$. Analogously, for the boundary term we have \begin{eqnarray} I_{bdry}&=&\frac{1}{4\pi G_{N}}\int^{2\pi r_{h}}_{0}\int^{y_{_Q}}_{y_{0}}\int^{r_{h}}_{\epsilon}{\frac{L}{r^{3}}d\tau dy\, dr}\cr &+&\frac{1}{32\pi G_{N}}\int^{2\pi r_{h}}_{0}\int^{y_{_Q}}_{y_{0}}\int^{r_{h}}_{\epsilon}{\frac{L^{3}}{r^{3}}\gamma G^{rr}\psi^{2}d\tau dy\, dr}\label{BT4}\nonumber \\ &-&\frac{1}{8\pi G_{N}}\int^{2\pi r_{h}}_{0}\int^{r_{h}}_{\epsilon}{\frac{\Sigma L^{2}d\tau dr}{r^{2}\sqrt{1-(\Sigma L)^{2}f(r)}}}\cr &+&\kappa\Sigma^{3}L^{2}\left(1+\frac{\gamma\Lambda}{\alpha}\right)\frac{1}{8\pi G_{N}}\int^{2\pi r_{h}}_{0}\int^{r_{h}}_{\epsilon}{\frac{\Sigma L^{2}d\tau dr}{r^{2}\sqrt{1-(\Sigma L)^{2}f(r)}}}\,.
\end{eqnarray} This boundary action can be written as \begin{eqnarray} I_{bdry}&=&\frac{r_{h}L}{2G_{N}}\left(1-\frac{\xi}{8L^{2}}\right)\int^{r_{h}}_{\epsilon}{\frac{\Delta y_{_Q}(r)}{r^{3}}dr}\cr &+&\left(1-\frac{\xi\cos^{3}(\theta{'})}{2}\right)\frac{L\cot(\theta{'})\csc(\theta{'})}{4G_{N}}+\mathcal{O}(\epsilon),\label{BT5} \end{eqnarray} where $\Delta y_{_Q}(r)\equiv y(r)-y_0$, with $y(r)$ given by Eq. (\ref{19.3}), and the euclidean action $I_{E}=I_{bulk}+2I_{bdry}$ is given by: \begin{eqnarray} I_{E}&=&-\frac{L\Delta y}{8r_{h}G_{N}}\left(1-\frac{\xi}{4L^{2}}\right)+\frac{L}{G_{N}}\left(1-\frac{\xi}{8L^{2}}\right)w(\xi,r_{h})\nonumber\\ &+&\left(1-\frac{\xi\cos^{3}(\theta{'})}{2}\right)\frac{L\cot(\theta{'})\csc(\theta{'})}{2G_{N}}\,,\label{BT6} \end{eqnarray} \noindent with \begin{eqnarray} &&w(\xi,r_{h})=\int^{r_{h}}_{\epsilon}{\frac{r_{h}\Delta y_{_Q}(r)}{r^{3}}dr}\,.\nonumber \end{eqnarray} Using the Q-profile for $\Delta y_{_Q}(r)$ from Eq. \eqref{19.3} in $w(\xi,r_{h})$, we can extract an approximate analytical expression for the euclidean action $I_{E}$ as \begin{eqnarray}\label{freeEBH} I_{E}&=&-\frac{L\Delta y}{8r_{h}G_{N}}\left(1-\frac{\xi}{4L^{2}}\right)-\frac{L}{2G_{N}}\left(1-\frac{\xi}{8L^{2}}\right)\sinh^{-1}(\cot(\theta{'}))\nonumber\\ &+&\frac{\xi q(\theta{'})L}{2G_{N}}+\frac{\xi h(\theta{'})\cot(\theta{'})}{2G_{N}r^{2}_{h}}\label{BT6.1}\,, \end{eqnarray} \noindent where \begin{eqnarray} h(\theta{'})&=&-\frac{(1+\pi/2)}{2\sin(\theta{'})}+\frac{\cot^{3}(\theta{'})\cos^{2}(\theta{'})}{(1+\cos^{2}(\theta{'}))}\tanh^{-1}\left(\frac{\sqrt{2}\cos(\theta{'})}{\sqrt{1+\cos^{2}(\theta{'})}}\right)\nonumber\\ &-&\frac{(1+\cos^{2}(\theta{'})+3\cos^{4}(\theta{'})-3\cos^{6}(\theta{'}))}{3\sin^{5}(\theta{'})(1+\cos^{2}(\theta^{'}))}\,, \nonumber\\ q(\theta{'})&=&\left(\frac{1}{4}-\cos^{3}(\theta{'})\right)\cot(\theta{'})\csc(\theta{'})\,. \nonumber \end{eqnarray} Beyond the AdS-BTZ black hole, we can compute the euclidean action for the thermal AdS solution considering $f(r)\to 1$. From Eqs. (\ref{BT}) and (\ref{BT1}), it is straightforward to get in this limit \begin{eqnarray} I_{E}(0)=-\frac{L\Delta y}{8r_{h}G_{N}}\left(1-\frac{\xi}{4L^{2}}\right)\,.\label{BT6.2} \end{eqnarray} \section{BTZ black hole entropy in Horndeski gravity}\label{BTZentro} In this section we will compute the entropy related to the BTZ black hole, considering the contributions of the AdS/BCFT correspondence within Horndeski gravity. From the free energy defined as \begin{equation}\label{FE} \Omega=T_H\, I_E \,, \end{equation} one can obtain the corresponding entropy as: \begin{eqnarray} S=-\frac{\partial\Omega}{\partial T_{H}}\,.\label{BT7} \end{eqnarray} By plugging the euclidean on-shell action $I_E$, Eq. \eqref{freeEBH}, into the above equation, one gets \begin{eqnarray} S_{\rm total}&=&\frac{L\Delta y}{4r_{h}G_{N}}\left(1-\frac{\xi}{4L^{2}}\right)+\frac{L}{2G_{N}}\left(1-\frac{\xi}{8L^{2}}\right)\sinh^{-1}(\cot(\theta{'}))\cr&-&\frac{3\xi}{2r^{2}_{h}G_{N}} \cot(\theta{'}) h(\theta{'}) +\frac{\xi L}{2G_{N}}q(\theta{'}).\label{BT8} \end{eqnarray} Recalling that the Hawking temperature, Eq. \eqref{hawk}, is a function of $r_h$, we should evaluate the profile from Eq. \eqref{19.3} at the horizon $r=r_h$.
Then, one gets \begin{eqnarray}\label{seno} \frac{1}{2}\sinh^{-1}(\cot(\theta{'}))=\frac{\Delta y_{_Q}}{r_{h}}-\frac{\xi }{2r^{2}_{h}}\, b(\theta{'}) \end{eqnarray} where \begin{eqnarray}\label{btheta} b(\theta{'})=\cos(\theta{'})\tan^{-1}\left(\frac{1}{\sin(\theta{'})}\right)+\cot(\theta{'})\left(\frac{1+\cos^{2}(\theta{'})\cot^{2}(\theta{'})}{\sin^{2}(\theta{'})}\right)\,. \end{eqnarray} Substituting Eq. \eqref{seno} into Eq. \eqref{BT8}, one gets the total entropy with the bulk and boundary contributions, both with Horndeski terms: \begin{equation} S_{\rm total}= S_{\rm bulk + Horndeski} + S_{\rm boundary + Horndeski}\,, \label{St} \end{equation} where \begin{eqnarray} S_{\rm bulk + Horndeski}&=&\frac{L\Delta y}{4r_{h}G_{N}}\left(1-\frac{\xi}{4L^{2}}\right) \\ S_{\rm boundary + Horndeski}&=&\frac{L\Delta y_{_Q}}{r_{h}G_{N}}\left(1-\frac{\xi}{8L^{2}}\right)-\frac{\xi b(\theta{'})L}{2r^{2}_{h}G_{N}}\left(1-\frac{\xi}{8L^{2}}\right)\nonumber \\&-&\frac{3\xi h(\theta{'})\cot(\theta^{'})}{2r^{2}_{h}G_{N}}+\frac{\xi q(\theta{'})L}{2G_{N}}\,. \end{eqnarray} One interpretation for this total entropy is to identify it with the Bekenstein-Hawking formula for the black hole: \begin{eqnarray} S_{BH}=\frac{A}{4G_{N}}\label{BT9}\,. \end{eqnarray} Thus, in this case, from Eq. \eqref{St}, one has \begin{eqnarray} A&=&\frac{L\Delta y}{r_{h}}\left(1-\frac{\xi}{4L^{2}}\right)+\frac{4L\Delta y_{_Q}}{r_{h}}\left(1-\frac{\xi}{8L^{2}}\right)-\frac{2\xi b(\theta{'})L}{r^{2}_{h}}\left(1-\frac{\xi}{8L^{2}}\right)\nonumber \\&-&\frac{6\xi h(\theta{'})\cot(\theta^{'})}{r^{2}_{h}}+2\xi q(\theta{'})L\,, \label{BT10} \end{eqnarray} where $A$ would be the total area of the AdS-BTZ black hole with Horndeski contribution terms for the bulk and the boundary Q. Since the information is bounded by the black hole area, Eq. \eqref{BT10} suggests that the information storage increases with increasing $|\xi|$, as long as $\xi<0$. Note that the Bekenstein-Hawking equation \eqref{BT9} is a semi-classical result \cite{Das:2010su, Almheiri:2020cfm}. In this sense our total entropy ($S_{\rm total}$), Eq. \eqref{St}, can be interpreted as a correction to the original Bekenstein-Hawking formula: \begin{equation} S_{total} = S_{\rm Bekenstein-Hawking} + S_{\rm Horndeski\,\, contributions} \,. \end{equation} It is worthwhile to mention that corrections to the entropy were studied, for instance, in Refs. \cite{Hendi:2010xr, Solodukhin:2011gn, Bamba:2012rv, Feng:2015oea}. In particular, we found results compatible with the ones in Ref. \cite{Feng:2015oea}, where Horndeski gravity was considered in $n$-dimensional spacetime $(n\ge 4)$ within the Wald formalism or the regularized euclidean action. Considering the boundary entropy for the AdS-BTZ black hole with Horndeski gravity, from Eq. \eqref{St}, one has: \begin{eqnarray} S_{bdry}=\frac{L\Delta y_{_Q}}{r_{h}G_{N}}\left(1-\frac{\xi}{8L^{2}}\right)-\frac{\xi b(\theta{'})L}{2r^{2}_{h}G_{N}}\left(1-\frac{\xi}{8L^{2}}\right)-\frac{3\xi h(\theta{'})\cot(\theta^{'})}{2r^{2}_{h}G_{N}}+\frac{\xi q(\theta{'})L}{2G_{N}}, \;\;\quad \label{BT11} \end{eqnarray} which is identified with the entropy of the BCFT corrected by the Horndeski terms parametrized by $\xi$. If we take $\xi\to 0$, we recover the results presented in Refs. \cite{Takayanagi:2011zk, Fujita:2011fp}. In addition, still analyzing Eq. \eqref{BT11}, due to the effects of the Horndeski gravity there is a non-zero boundary entropy even if we consider the zero-temperature scenario, similar to an extremal black hole.
This can be seen by taking the limit $T\to 0$ ($r_h \to \infty$) in Eq. \eqref{BT11}; one then gets what we call the residual boundary entropy \begin{equation} S_{bdry}^{res}=\frac{\xi q(\theta{'})L}{2G_{N}}\,. \label{BT11ext} \end{equation} Note that, since the entropy should be non-negative, this zero-temperature limit is only meaningful if $q(\theta')<0$, since $\xi<0$. In particular, considering our approximate analytical solution Eq. \eqref{19.3}, this will be fulfilled for small or large $\theta'$, $0< \theta'< \sqrt{6/13} $ or $ \pi / 2 < \theta' < \pi$, respectively. On the other hand, in the region $ \sqrt{6/13} < \theta' < \pi/2 $, one has $q(\theta')>0$, and then the limit $T\to 0$ cannot be reached. In this case there should be a minimum non-zero temperature corresponding to zero entropy. \section{Thermodynamic quantities and results}\label{v4} The thermodynamics of black holes was established in Refs. \cite{Hawking:1971tu, Bardeen:1973gs, Bekenstein:1973ur}, and in this section we will present our numerical results for the thermodynamic observables of the BTZ black hole. We take into account the contribution of the AdS/BCFT correspondence within Horndeski gravity. All of these thermodynamic observables will be derived from the renormalized free energy. Motivated by the thermodynamics of black holes, the AdS/CFT and AdS/QCD programs have benefited from the possibility of constructing effective gauge theories at finite temperature, which opened up a myriad of applications. In particular, the holographic study of charged black holes was presented in Refs. \cite{Chamblin:1999tk, Chamblin:1999hg}. These ideas were then applied to some high-energy phenomenology at finite temperature. For an incomplete list, one can see Refs. \cite{Kubiznak:2016qmn, Bravo-Gaete:2014haa, Zeng:2016aly, Gubser:2008ny, Gubser:2008yx, Li:2011hp, Cai:2012xh, He:2013qq, Zhao:2013oza, Li:2014hja, Li:2017ple, Rodrigues:2018pep, Chen:2018vty, Chen:2019rez, Rodrigues:2018chh, Arefeva:2020vae, Ballon-Bayona:2020xls, Arefeva:2020bjk, Caldeira:2020sot, Rodrigues:2020ndy, Caldeira:2020rir}. After this brief outlook, let us start our calculation from the differential form of the first law of thermodynamics, within the canonical ensemble. It can be written as: \begin{equation}\label{1lei} d \Omega = -p dV - S dT\,, \end{equation} \noindent leading to \begin{equation}\label{omega} \Omega = \epsilon - TS \,, \end{equation} \noindent where $p$ is the pressure and $\Omega$ is the canonical potential or free energy, with $\Omega = T_{H}I_E $. The energy density is represented by $\epsilon$, $S$ is the entropy, and $T$ is the temperature. Besides, for a fixed volume ($V \equiv 1$), one has: \begin{equation}\label{1leientro} d \Omega = - S dT\,. \end{equation} Here, we will present the behavior of the canonical potential or free energy, from Eq. \eqref{FE}. By analyzing Fig. \ref{freeenergy}, one can see that the canonical potential $\Omega$ has a minimum for each value of the Horndeski parameter $\gamma$, which assures a global condition of thermodynamic stability \cite{DeWolfe:2010he}. This picture also shows that there are critical temperatures where $\Omega =0$, depending on $\gamma$. For $\Omega >0$ these solutions become unstable. The increase of the absolute value of $\gamma$ induces a decrease of these critical temperatures.
\begin{figure}[!ht] \begin{center} \includegraphics[scale=0.55]{f05.pdf} \caption{Canonical potential or free energy as a function of the temperature, considering the influence of Horndeski gravity, for the following values: $\theta{'}=2\pi/3$, $\kappa=1/4$, $\Lambda=-1$, $\alpha=8/3$, with $\gamma=-0.1$ (solid line), $\gamma=-0.2$ (dashed line), $\gamma=-0.3$ (dot dashed line), and $\gamma=-0.4$ (thick line).} \label{freeenergy} \end{center} \end{figure} The next thermodynamic quantity that we analyze is the heat capacity $C_V$, defined as: \begin{equation} C_V = T \left( \frac{\partial S}{\partial T}\right)_V = - T \left( \frac{\partial^2 \Omega}{\partial T^2} \right)\,. \end{equation} In Refs. \cite{Ganai:2019lgc, Myung:2015pua, Ma:2013eaa, Hendi:2015wxa, Hendi:2016pvx} the authors discussed the positivity of the heat capacity and related it to the local black hole thermodynamic stability condition. This means that the black holes will be thermodynamically stable if $C_V >0$. From Fig. \ref{heatcapacity} one can see that the black hole can switch between stable ($C_V>0$) and unstable ($C_V<0$) phases depending on the sign of the heat capacity. Also, in Fig. \ref{heatcapacity} one can see the influence of Horndeski gravity on the temperature where the phase transition occurs. \begin{figure}[!ht] \vskip 1cm \begin{center} \includegraphics[scale=0.55]{f07.pdf} \caption{Heat capacity as a function of the temperature, considering the influence of Horndeski gravity, for the following values: $\theta{'}=2\pi/3$, $\kappa=1/4$, $\Lambda=-1$, $\alpha=8/3$, with $\gamma=-0.1$ (solid line), $\gamma=-0.2$ (dashed line), $\gamma=-0.3$ (dot dashed line), and $\gamma=-0.4$ (thick line).} \label{heatcapacity} \end{center} \end{figure} The sound speed is defined as: \begin{eqnarray} c_s^2 \equiv \frac{\partial p}{\partial \epsilon} = \frac{\partial T}{\partial \epsilon} \frac{\partial p}{\partial T} \,. \end{eqnarray} Identifying \begin{eqnarray} \frac{\partial T}{\partial \epsilon} = \left(\frac{\partial \epsilon}{\partial T}\right)^{-1} = C_V^{-1} \,;\qquad \frac{\partial p}{\partial T} = S\,, \end{eqnarray} one gets:\footnote{It is also very common to describe the sound speed as $c^2_s = \frac{\partial \ln{T}}{\partial \ln{S}}$.} \begin{eqnarray} c_s^2 &=& \frac{S}{C_V}\,. \end{eqnarray} In Fig. \ref{entrovs}, we present the behavior of the entropy $S$ and the sound speed $c^2_s$ against the temperature, obtained from our model. The entropy comes directly from Eq. \eqref{BT8}. In the left panel one can see the behavior of the entropy $S$ and the influence of the Horndeski gravity. On the other hand, in the right panel, we show the sound speed and the effects of Horndeski gravity, which are more intense for $\gamma = -0.4$. In this case it deviates from the value 1/3 associated with the conformal system.
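For concreteness, the chain of relations $S=-\partial\Omega/\partial T$, $C_V=-T\,\partial^{2}\Omega/\partial T^{2}$ and $c^{2}_{s}=S/C_V$ used above can be evaluated numerically once the free energy is known as a function of the temperature. The snippet below is only a minimal sketch of this procedure (it is not the code used to produce the figures): it assumes a user-supplied callable {\tt Omega}, for instance built from Eq. \eqref{freeEBH} together with $T_{H}=1/(2\pi r_{h})$, and extracts the observables by central finite differences.
\begin{verbatim}
# Minimal sketch: thermodynamic observables from a free energy Omega(T).
# Omega is any callable accepting a scalar or NumPy array of temperatures.
import numpy as np

def thermo_from_free_energy(Omega, T, dT=1.0e-4):
    S = -(Omega(T + dT) - Omega(T - dT)) / (2.0 * dT)        # S = -dOmega/dT
    CV = -T * (Omega(T + dT) - 2.0 * Omega(T)
               + Omega(T - dT)) / dT**2                      # C_V = -T d2Omega/dT2
    return S, CV, S / CV                                     # c_s^2 = S / C_V

# Toy test with a hypothetical free energy Omega(T) = -a T^2 (gives c_s^2 = 1):
a = 1.0
S, CV, cs2 = thermo_from_free_energy(lambda T: -a * T**2,
                                     np.linspace(0.1, 1.0, 10))
\end{verbatim}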
\begin{figure}[!ht] \vskip 1cm \begin{center} \includegraphics[scale=0.45]{f06.pdf} \includegraphics[scale=0.45]{f08.pdf} \caption{ Entropy ({\sl left panel}) and sound speed ({\sl right panel}) as functions of the temperature, considering the influence of Horndeski gravity, for the following values: $\theta{'}=2\pi/3$, $\kappa=1/4$, $\Lambda=-1$, $\alpha=8/3$, with $\gamma=-0.1$ (solid line), $\gamma=-0.2$ (dashed line), $\gamma=-0.3$ (dot dashed line), and $\gamma=-0.4$ (thick line).} \label{entrovs} \end{center} \end{figure} The last thermodynamic quantity that we will present in this section is the trace of the energy momentum tensor, defined as: \begin{equation} \langle T^a_{\ \ a}\rangle = \epsilon - 3p = 4 \Omega + TS\,. \end{equation} In Fig. \ref{trace}, one can see the behavior of the scaled trace of the energy momentum tensor ($\langle T^a_{\ \ a}\rangle/T^4$) as a function of the temperature. It has a quite interesting behavior: in the low-temperature regime it presents $\langle T^a_{\ \ a}\rangle \neq 0$; in the high-temperature regime, despite the influence of the Horndeski gravity, $\langle T^a_{\ \ a}\rangle \to 0$, which is an indication of a restoration of the conformal symmetry and therefore of the emergence of a non-trivial BCFT. \begin{figure}[!ht] \begin{center} \includegraphics[scale=0.55]{f09.pdf} \caption{Scaled trace of the energy momentum tensor as a function of the temperature, considering the values $\theta{'}=2\pi/3$, $\kappa=1/4$, $\Lambda=-1$, $\alpha=8/3$, with $\gamma=-0.1$ (solid line), $\gamma=-0.2$ (dashed line), $\gamma=-0.3$ (dot dashed line), and $\gamma=-0.4$ (thick line).} \label{trace} \end{center} \end{figure} \section{Hawking-Page phase transition}\label{v5} In this section, we will analyze the Hawking-Page phase transition (HPPT) for a BTZ black hole, considering the contributions of the AdS/BCFT correspondence within Horndeski gravity. The HPPT was originally proposed in Ref. \cite{hawpage}, in the context of general relativity, which discusses the stability and instability of black holes in AdS space. The transition between the stable and unstable configurations characterizes a first-order phase transition with an associated critical temperature. In the context of the AdS/CFT program, the pioneering work in Ref. \cite{Witten:1998zw} presented how to relate the temperature in the gravitational theory to the one associated with the gauge theory on the boundary.\footnote{Note that in Ref. \cite{Witten:1998zw} the Hawking temperature as well as the Hawking-Page phase transition were associated with the deconfinement temperature in QCD and the confinement/deconfinement phase transition. In this work we do not use such an interpretation.} For an incomplete list of works dealing with the HPPT within the AdS/QCD context, see for instance Refs. \cite{Cho:2002hq,Herzog:2006ra, Kajantie:2006hv, BallonBayona:2007vp, Rodrigues:2017cha, Rodrigues:2017iqi, Chen:2020ath, Li:2020khm, Wang:2020pmb}. In particular, the HPPT within the BTZ black hole scenario can be seen, also in an incomplete list, in Refs. \cite{Myung:2006sq, Eune:2013qs, Detournay:2015ysa, Myung:2015pua, Tang:2016vmu, Ganai:2019lgc}.\footnote{It is worthwhile to mention that only in Ref. \cite{Eune:2013qs} have the authors used the holographic renormalization in order to compute the free energy.
In all other listed references, the authors derived the free energy from the Bekenstein-Hawking entropy.} The partition function for the AdS black hole ($V_{E}$) is identified with minus the renormalized euclidean action, Eq. \eqref{freeEBH}, $V_{E}=-I_E$, so that: \begin{eqnarray} V_{E}&=&\frac{L\Delta y}{8r_{h}G_{N}}\left(1-\frac{\xi}{4L^{2}}\right)+\frac{L \Delta y_{Q}}{2r_{h}G_{N}}\left(1-\frac{\xi}{8L^{2}}\right)\nonumber\\ &-&\frac{\xi b(\theta^{'})L}{2r^{2}_{h}G_{N}}\left(1-\frac{\xi}{8L^{2}}\right)-\frac{\xi q(\theta^{'})L}{2G_{N}}-\frac{\xi h(\theta^{'})\cot(\theta^{'})}{2r^{2}_{h}}\,.\label{HP} \end{eqnarray} \noindent Analogously, the partition function for the thermal AdS is defined as $V_{E}(0)=-I_E(0)$, where $I_E(0)$ is given by Eq. \eqref{BT6.2}: \begin{eqnarray} &&V_{E}(0)=\frac{L\Delta y}{8r_{h}G_{N}}\left(1-\frac{\xi}{4L^{2}}\right)\,.\label{HP1} \end{eqnarray} Now, we can compute $\Delta V_{E}$, so that: \begin{eqnarray} \Delta V_{E}=\frac{L\Delta y_{Q}}{r_{h}G_{N}}\left(1-\frac{\xi}{8L^{2}}\right)-\frac{\xi b(\theta^{'})L}{2r^{2}_{h}G_{N}}\left(1-\frac{\xi}{8L^{2}}\right)-\frac{\xi h(\theta^{'})\cot(\theta^{'})}{2r^{2}_{h}}-\frac{\xi q(\theta^{'})L}{2G_{N}}\,.\label{HP2} \end{eqnarray} According to the HPPT prescription, the difference $\Delta V_E$ vanishes at the phase transition, and $\Delta V_E < 0$ indicates the stability of the black hole. On the other hand, $\Delta V_E > 0$ points to the stability of the thermal AdS space. \begin{figure}[!ht] \vskip 1cm \begin{center} \includegraphics[scale=0.55]{f03.pdf} \caption{This figure shows the Hawking-Page phase transition from Eq. \eqref{HP2}, considering the values $\theta{'}=2\pi/3$, $\kappa=1/4$, $\Lambda=-1$, $\alpha=8/3$, with $\gamma=-0.1$ (solid line), $\gamma=-0.2$ (dashed line), $\gamma=-0.3$ (dot dashed line), and $\gamma=-0.4$ (thick line). See the text for discussions.}\label{p01} \label{planohwkhz} \end{center} \end{figure} In Fig. \ref{planohwkhz}, we show the difference between the partition functions as a function of the temperature of the BTZ black hole in the AdS/BCFT correspondence, taking into account the contributions coming from the Horndeski gravity. We see that the Horndeski effect decreases the HPPT critical temperature $T_c$, where $\Delta V_E=0$. Besides, the thermal AdS space is stable for low temperatures ($T<T_c$), while the AdS black hole is stable in the high-temperature regime ($T>T_c$). \section{Conclusion}\label{v6} Here, in this section, we present our conclusions on the AdS/BCFT correspondence and BTZ black hole thermodynamics within Horndeski gravity. Considering the non-minimal coupling between the standard scalar term and the Einstein tensor, we established our setup. Besides the three-dimensional bulk, we introduced a Gibbons-Hawking surface term and obtained the corresponding field equations. Then, using the no-hair theorem, we found a consistent solution for the BTZ black hole. From this solution we constructed the Q profile on the two-dimensional boundary, which characterizes the AdS$_{3}$/BCFT$_{2}$ correspondence. In particular, we found a complete numerical solution and an approximate analytical one.\footnote{Note that these solutions for the boundary Q seem to describe a Randall-Sundrum brane in the limit of large Horndeski parameter.} These two solutions are shown in Fig. \ref{p0}, where one can see that the approximate solution describes qualitatively well the influence of the Horndeski term. So, starting in Sec.
\ref{v3} and in all subsequent sections, we considered only the approximate analytical solution. Using this solution, we performed a holographic renormalization procedure in order to obtain the euclidean on-shell actions for the thermal AdS and the AdS-BTZ black hole. The identification of the euclidean on-shell action with the free energy allowed us to compute the total entropy, which is the sum of the contributions coming from the bulk and the boundary, both with Horndeski terms. From this total entropy, and assuming the Bekenstein-Hawking formula, we derived the corresponding total area for the AdS-BTZ black hole with Horndeski terms. We found that the total area grows with the absolute value of $\xi$. This suggests that the information encoded on the black hole horizon also grows with $|\xi|$. Another interpretation for the total entropy found in this work is that it represents a correction to the Bekenstein-Hawking formula. For the boundary entropy, it is remarkable that the influence of the Horndeski gravity implies a non-zero or residual entropy in the zero-temperature limit $(r_h\to\infty)$, for a certain range of the angle $\theta'$. For another range of $\theta'$ the limit $T\to 0$ cannot be reached; in this case, it seems that there should be a minimum non-zero temperature corresponding to zero entropy. The free energy of the AdS-BTZ black hole with Horndeski gravity is depicted in Fig. \ref{freeenergy}. This picture shows the stability of these solutions for $\Omega <0$, up to a certain critical temperature depending on the Horndeski parameter $\xi$. From this free energy we extracted the other relevant thermodynamic quantities, such as the heat capacity, sound speed, and trace anomaly. These results seem to be compatible with the ones expected from usual black hole thermodynamics. In Sec. \ref{v5}, we have studied the Hawking-Page phase transition in the AdS/BCFT correspondence with Horndeski gravity. The modification coming from the Horndeski contribution allows us to obtain this phase transition as a function of the temperature, as is usual in higher-dimensional contexts. This contrasts with the description of the HPPT given in Refs. \cite{Takayanagi:2011zk, Fujita:2011fp}, where the authors plotted the free energy as a function of the Q profile tension. Finally, we would like to comment that extended theories of gravity, such as Horndeski's, which go beyond Einstein's original proposal by taking into account scalar fields coupled to gravity or by accommodating higher-order curvature invariants, may provide new insights that contribute to deepening our knowledge of the gravity duals of conformal field theories, as in the AdS/BCFT correspondence. \begin{acknowledgments} We would like to thank Konstantinos Pallikaris, Vasilis Oikonomou, Adolfo Cisterna, and Diego M. Rodrigues for discussions. H.B.-F. is partially supported by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES), and Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq) under Grant No. 311079/2019-9. \end{acknowledgments}
\section{Introduction} The word aquaculture refers to farming, including breeding, raising, and harvesting fish, aquatic plants, crustaceans, mollusks, and other aquatic organisms. It involves the cultivation of both freshwater and saltwater creatures under controlled conditions and is used to produce food and commercial products, as shown in Figure \ref{fig:Aquaculture}. There are mainly two types of aquaculture. The first one is \textbf{Mariculture}, which is the farming of marine organisms for food and other products such as pharmaceuticals, food additives, jewelry (e.g., cultured pearls), nutraceuticals, and cosmetics. Marine organisms are farmed either in the natural marine environment or in land- or sea-based enclosures, such as cages, ponds, or raceways. Seaweeds, mollusks, shrimps, marine fish, and a wide range of other minor species such as sea cucumbers and sea horses are among the organisms presently farmed around the world's coastlines. Mariculture contributes to sustainable food production and the economic development of local communities. However, large-scale marine farming can sometimes become a threat to marine and coastal environments, causing degradation of natural habitats, nutrient and waste discharge, accidental release of alien organisms, transmission of diseases to wild stocks, and displacement of local and indigenous communities \cite{MariCulture}. The second one is \textbf{Fish farming}, which is the cultivation of fish for commercial purposes in human-made tanks and other enclosures. Usually, common types of fish like catfish, tilapia, salmon, carp, cod, and trout are farmed in these enclosures. Nowadays, the fish-farming industry has grown to meet the demand for fish products \cite{FishFarm}. This form of aquaculture has been widespread for a long time, as it is said to produce a cheap source of protein. Global aquaculture is one of the fastest-growing food production sectors, accounting for almost 53\% of all fish and invertebrate production and 97\% of total seaweed production as of 2020. Estimated global production of farmed salmon increased by 7 percent in 2019, to just over 2.6 million tonnes \cite{AquacultureIntroduction}. Global salmon aquaculture faces the threat of various diseases that can devastate conventional salmon production. Diseases have a dangerous impact on fish both in the natural environment and in aquaculture. Diseases are globally acknowledged as one of the most severe threats to the economic success of aquaculture. Fish diseases are provoked by a wide range of infectious organisms such as bacteria, viruses, and protozoan and metazoan parasites. Bacteria are responsible for the majority of infectious diseases in farmed fish \cite{FishDiseaseIntroduction}. Infectious diseases constitute one of the foremost threats to successful aquaculture. The massive number of fish gathered in a small region provides an ecosystem favorable for the development and rapid spread of infectious diseases. In this crowded and comparatively artificial environment, fish are stressed and more susceptible to disease. Furthermore, the water ecosystem and insufficient water flow make it easier for pathogens to spread in such dense populations \cite{FishDiseaseIntroduction2}. Disease detection supported by image processing can help to extract good features.
\begin{figure} \centering \includegraphics[width=1.0\columnwidth]{Images/Visuals/aquaculture.jpg} \caption{Aquaculture~\protect\cite{FigAquaculture}} \label{fig:Aquaculture} \end{figure} Image segmentation has become indispensable in various research fields like computer vision, artificial intelligence, etc. The \textit{k}-means segmentation is a popular image processing technique that partitions an image into different regions without loss of information. In \cite{kailasanathan2001image}, the authors applied \textit{k}-means segmentation for the authentication of images. Another application of \textit{k}-means segmentation is shown in \cite{gaur2015handwritten}, where the technique is used to recognize handwritten Hindi characters. One of the most popular supervised machine learning techniques, the support vector machine (SVM), has brought convenient solutions for many classification problems in various fields. It is a powerful classification tool that produces quality predictions for unlabeled data. In \cite{khan2016analysis}, the authors built an SVM model based on three kernel functions to differentiate dengue-infected human blood sera from healthy sera. For image classification, another SVM architecture has been proposed in \cite{agarap2017architecture}, where the authors combine a convolutional neural network (CNN) with an SVM. SVM provides remarkable accuracy in many contexts. In this paper, we conduct our research on salmon fish disease classification, i.e., whether a fish is infected or not, with a machine-vision-based technique. The choice of feature set is a trade-off for the classification of the disease. Image processing techniques are used to extract the features from the images, and then a support vector machine (SVM) is employed for the successful classification of infectious disease. Here, we summarize the contributions of this work as follows: \begin{itemize} \item Propose a groundbreaking framework for fish disease detection based on a machine learning model (SVM). \item Evaluate and analyze the performance of our proposed model both with and without image augmentation. \item Compare our proposed model with a well-performing model using some evaluation metrics. \end{itemize} \section{Related Work} Some works focused only on basic image processing techniques for the identification of fish disease. Shaveta et al. \cite{LRShaveta} proposed an image-based detection technique that first applies image segmentation in the form of edge detection with the Canny, Prewitt, and Sobel operators. However, they did not specify the exact technique employed for feature extraction. For feature extraction, they applied Histogram of Oriented Gradients (HOG) and Features from Accelerated Segment Test (FAST), and classified with a combination of both techniques. They tried to achieve better classification with a combination instead of applying a single method with lower accuracy. In another approach, Lyubchenko et al. \cite{LRLyubchenko} proposed a structure based on the clustering of objects in the image, which required diverse image segmentation operations depending on the number of clusters. Here, they chose markers for individual objects, and each object was identified with a specific marker. Finally, they calculated the proportion of an object in the image and the proportion of the infected area relative to the fish body to identify fish disease. However, individual marking of objects is time-consuming and not effective.
There are also approaches that combine image processing and machine learning. Malik et al. \cite{LRMalik} proposed a detection approach for a specific fish disease, Epizootic Ulcerative Syndrome (EUS), which is caused by the fungal pathogen Aphanomyces invadans. They combined Principal Component Analysis (PCA) and the Histogram of Oriented Gradients (HOG) with the Features from Accelerated Segment Test (FAST) feature detector, and then classified with a machine learning algorithm (a neural network). The FAST-PCA-NN pipeline gives 86 percent accuracy, while HOG-PCA-NN gives 65.8 percent accuracy, which is lower than the previous combination. Verma et al. \cite{verma2017analysis} addressed the sensitive topic of kidney stone detection; the authors apply morphological operations and segmentation to determine the region of interest (ROI) for SVM classification, despite difficulties such as the visual similarity of kidney stones and the low image resolution. Zhou et al. \cite{zhou2017device} introduced device-free presence detection and localization with the aid of SVM, where the detection algorithm identifies human presence through an SVM classifier using channel state information (CSI) fingerprints. Hardware trojan detection \cite{inoue2017designing} has also relied on an SVM-based approach: the authors evaluated a trojan detection method on their designed hardware, whose netlists contain three types of hardware trojan with normal and abnormal behavior. We conclude that no in-depth research has been performed on salmon fish disease classification with respect to the requirements described above. Furthermore, most of the existing research addresses generic fish disease classification rather than aquaculture, and the described techniques rely either solely on image processing or on combinations of image processing and machine learning that are not up to the mark. \section{Preliminary and Proposed Framework} This section covers the stages presented in Figure \ref{img:proposedFramework}: the relevant technologies and the proposed framework for salmon fish disease classification. \begin{figure*} \begin{center} \centering \includegraphics[scale=.90]{Images/ProposedFrameworkV2.pdf} \caption{Proposed Framework (The overall anatomy of our proposed work gradually from input to result).} \label{img:proposedFramework} \end{center} \end{figure*} \subsection{Cubic Splines Interpolation} The raw images in the dataset come in various sizes. Since we collected these images from different sources, we resize them before feeding them to the classifier; otherwise, the classifier's performance may decrease. For image magnification and fixed-size conversion, we use an improved interpolation method called extended \textbf{\textit{cubic splines interpolation}} \cite{BSPline}. For a finite interval $[a,b]$, let $\left \{ x_{i} \right \}_{i = 0}^{n}$ be a partition of the interval with constant step size $h$. We extend the partition using Equation \ref{eqn:cubicB1}. \begin{equation}\label{eqn:cubicB1} h = \frac{b - a}{n}, \quad x_{0} = a, \quad x_{i} = x_0 + ih, \quad i = \pm 1, \, \pm 2, \, \pm 3, ...
\end{equation} Given $\left \{ x_{i} \right \}$, the extended cubic B-spline function $S\left ( x \right )$ is a linear combination of the extended cubic B-spline basis functions, as in Equation \ref{eqn:cubicB2}, \begin{equation}\label{eqn:cubicB2} S\left ( x \right ) = \sum_{i = -3}^{n-1} C_{i}EB_{3,i}\left ( x \right ), \quad x \, \epsilon \, \left [ x_{0}, x_{n} \right ] \end{equation} where the $C_{i}$ are unknown real coefficients. Since $EB_{3,i}\left ( x \right )$ has support on $\left [ x_{i}, x_{i+4} \right ]$, there are three nonzero basis functions evaluated at each $x_{i}$, namely $EB_{3,i-3}\left ( x_i \right )$, $EB_{3,i-2}\left ( x_i \right )$, and $EB_{3,i-1}\left ( x_i \right )$. \subsection{Adaptive Histogram Equalization} Contrast enhancement is an essential technique for improving image quality, and it helps recover information that may be lost during magnification and resizing. To prevent this problem, we enhance the contrast of each image with adaptive histogram equalization. Adaptive histogram equalization (AHE) is an image processing approach used to enhance contrast in images. Here, we use an extension of AHE called contrast limited adaptive histogram equalization (CLAHE) \cite{liu2019adaptive}. CLAHE differs from conventional AHE in its contrast limiting: it limits the amplification by clipping the histogram at a user-defined value called the clip limit. The amount of noise in the histogram depends on this clipping level, as do the smoothness and the degree of contrast enhancement. A modification of the contrast limiting technique called adaptive histogram clip (AHC) can also be applied; AHC dynamically calibrates the clipping level and balances over-enhancement of the background area of images \cite{hitam2013mixture}. Here, we use an AHC based on the Rayleigh distribution, which produces an approximately normal histogram. Equation \ref{eqn:clahe} gives the corresponding transfer function \begin{equation}\label{eqn:clahe} p = p_{\min} + \left[\, 2\alpha^{2} \ln\!\left(\frac{1}{1-Q(f)}\right) \right]^{1/2} \end{equation} where $p_{\min}$ and $Q(f)$ represent the minimum pixel value and the cumulative probability distribution, respectively, and $\alpha$ is a non-negative real scalar acting as a distribution parameter. In this experiment, we set the clip limit to 0.01 and the $\alpha$ value of the Rayleigh distribution function to 0.04. \subsection{RGB Color Space to L*a*b Color Space} We next convert the contrast-enhanced image from RGB to L*a*b. We segment the image with \textit{k}-means clustering, and the \textit{k}-means technique segments images more effectively in L*a*b color space than in RGB color space \cite{burney2014k}. In L*a*b color space, L expresses the lightness of the image while the a and b channels encode the color information \cite{rahman2016non}. The transformation first converts RGB to XYZ color space \cite{ColorCoversion}, \cite{bianco2007new} according to Equation \ref{eqn:rgbToXyz}.
\begin{equation}\label{eqn:rgbToXyz} \begin{bmatrix} X\\ Y\\ Z \end{bmatrix} = \begin{bmatrix} 0.412453 & 0.357580 & 0.180423\\ 0.212671 & 0.715160 & 0.072169 \\ 0.019334 & 0.119193 & 0.950227 \end{bmatrix} * \begin{bmatrix} R\\ G\\ B \end{bmatrix} \end{equation} The XYZ color space is then transformed to L*a*b color space \cite{acharya2002median} according to Equations \ref{eqn:xyzToLab}--\ref{eqn:lab_f}, where the tristimulus values of the reference white are $X_n$, $Y_n$, $Z_n$. \begin{equation}\label{eqn:xyzToLab} L^* = \left\{\begin{matrix} 116(\frac{Y}{Y_n})^\frac{1}{3} - 16 & if \frac{Y}{Y_n} > 0.008856 \\ \\ 903.3 \frac{Y}{Y_n} & if \frac{Y}{Y_n} \leq 0.008856 \end{matrix}\right. \end{equation} \begin{equation}\label{eqn:lab_a} a^* = 500 (f(\frac{X}{X_n})- f(\frac{Y}{Y_n})) \end{equation} \begin{equation}\label{eqn:lab_b} b^* = 200 (f(\frac{Y}{Y_n})- f(\frac{Z}{Z_n})) \end{equation} where \begin{equation}\label{eqn:lab_f} f(t) = \left\{\begin{matrix} t^\frac{1}{3} & if \; t > 0.008856 \\ \\ 7.787t + \frac{16}{116} & if \; t \leq 0.008856 \end{matrix}\right. \end{equation} \subsection{\textit{k}-means Clustering Segmentation} Segmenting the infected part of a fish image helps the classifier learn to identify infected fish accurately. In this step, the converted image is segmented into several regions using the \textit{k}-means clustering technique, which separates the infected areas from the rest of the fish image. The techniques in \cite{gaur2015handwritten} and \cite{hartigan1979algorithm} follow the conventional steps to achieve the primary goal of clustering the image objects into $k$ distinct groups. The steps of the \textit{k}-means clustering technique are as follows. \begin{enumerate} \item Determine the total number of clusters $k$. \item Choose $k$ points as the initial centroids. \item Assign each data point to the nearest centroid, which forms $k$ clusters. \item Calculate and assign the new centroid of each cluster. \item Repeat steps 3 and 4, reassigning each data point to its nearest centroid, until no reassignment takes place; the model is then ready. \end{enumerate} The objective of the \textit{k}-means clustering technique \cite{de2009detection} is to minimize the sum of squared distances, measured by \begin{equation}\label{eqn:KMeans} J = \sum_{j = 1}^{k}\sum_{i=1}^{n}\left \| {x_i}^{(j)} - c_j \right \|^2 \end{equation} where $J$ is the objective function that measures how well the $n$ objects fit their assigned groups, $k$ and $n$ are the number of clusters and the number of cases, respectively, and $ \left \| {x_i}^{(j)} - c_j \right \| $ is the distance from a point ${x_i}^{(j)}$ to the centroid $c_j$ of its group. Two types of feature vectors are then acquired from the infected area of a fish, namely co-occurrence and statistical features; these features are explained in detail in the experimental evaluation section. \subsection{Support Vector Machine} We feed the feature vectors discussed in the previous subsection to an SVM. The support vector machine (SVM) is a supervised machine learning algorithm used in many classification problems for its high accuracy. It constructs a hyperplane with a margin between different classes to classify objects; the hyperplane can be constructed in a multidimensional space to partition the data points \cite{meyer2003support, noble2006support}. Figure \ref{img:svm} shows the basic diagram of the support vector machine.
Some common terms related to SVM are described below. \textbf{Optimal Hyperplane:} The boundary that distinguishes two classes with the maximum margin is the optimal hyperplane. It is an $(N-1)$-dimensional subspace of an $N$-dimensional space that separates the classes in that space; in two dimensions, the hyperplane is a line, and its dimension grows with the number of dimensions. The optimal hyperplane is determined by $wx_i + b =0$, where $w$ is the weight vector, $x$ is the input feature vector, and $b$ is the bias. For all points of the training set, $w$ and $b$ satisfy the following inequalities \cite{suthaharan2016support}: \begin{center} $wx_i + b \geq +1 \; \;if \; y_i = 1$\\ $wx_i + b \leq -1 \; \;if \; y_i = -1$ \end{center} \textbf{Support Vectors:} Data points that lie closest to the hyperplane and influence its position are known as support vectors. The points of the two classes that are most similar to each other become the support vectors, and these points determine the SVM. Suppose a labeled training dataset is represented as ${\{(x_i,y_i) \;|\; i = 1,2,...,k\}}$, where $x_i$ is a feature vector representation or input, and $y_i$ is the class label or output. To maximize the margin between the two kinds of points, the Lagrangian technique recasts the original problem as finding the maximum of the function in Equation \ref{eqn:SVM_1}. \begin{equation}\label{eqn:SVM_1} Q(\alpha ) = \sum_{i=1}^{k}\alpha _i - \frac{1}{2}\sum_{i,j=1}^{k}\alpha _i \alpha_j y_i y_j(x_i \cdot x_j) \end{equation} where $\alpha _i$ is the Lagrange multiplier of each sample. The problem is then mapped to a higher-dimensional space through a kernel function $K(x_i,x_j)$, as shown in Equation \ref{eqn:SVM_2}. \begin{equation}\label{eqn:SVM_2} Q(\alpha ) = \sum_{i=1}^{k}\alpha _i - \frac{1}{2}\sum_{i,j=1}^{k}\alpha _i \alpha_j y_i y_jK(x_i, x_j) \end{equation} \textbf{Margin:} The margin is the gap between two non-overlapping classes separated by the hyperplane; it indicates the distance between the data points and the dividing line. For the optimal hyperplane, we require the maximum margin. \textbf{Kernel:} The functions used by the SVM algorithm to compare objects are known as kernel functions. They transform the inputs into a form in which the hyperplane can be constructed more easily. Many kernels \cite{zhang2015complete} are used in SVMs, such as linear, polynomial (homogeneous and heterogeneous), Gaussian, Fisher, graph, string, and tree kernels. The \textbf{linear kernel} is one of the simplest and most widely used kernel functions for linearly separable data points \cite{ben2010user}. There are many application areas where the SVM outperforms other classifiers with high accuracy \cite{chandra2018survey}. It is mainly designed for binary classification problems such as the one addressed here. The SVM is trained on the feature training dataset for robust performance on the test dataset; for performance analysis, we evaluate the metrics presented in the experimental evaluation section. \begin{figure*} \begin{center} \centering \includegraphics[scale=.85]{Images/SVM.pdf} \caption{Support Vector Machine (Discovering the optimal hyperplane and the separation of classes for optimal hyperplane).} \label{img:svm} \end{center} \end{figure*} \subsection{System Architecture} We design a system architecture, shown in Figure \ref{Fig:SystemDiagram}, which contains two phases: the building phase and the deployment phase.
Inside the building phase, we process the labeled images as training data. \begin{itemize} \item Each image is refined through the sequence of image processing techniques mentioned above: cubic splines interpolation, adaptive histogram equalization, and conversion from RGB color space to L*a*b color space. \item The \textit{k}-means clustering technique is applied for image segmentation, and two types of feature vectors are identified, namely co-occurrence matrix features and statistical features. \item These feature vectors are fed to the SVM for further processing. \end{itemize} \begin{figure*} \centering \includegraphics[scale=.65]{Images/Visuals/SystemDiagram.pdf} \caption{System Architecture (A well-regulated diagram demonstrating the entire process from data acquisition to model training and prediction of classes).} \label{Fig:SystemDiagram} \end{figure*} In the building phase, the model is trained on the feature vectors and their corresponding labels; the outcome of this phase is a trained SVM model. This trained model is then applied to classify any incoming fish in the deployment phase, in which the following steps are performed. \begin{itemize} \item An input fish image is supplied to the system and refined through the same sequence of image processing techniques. \item Two types of feature vectors are obtained through \textit{k}-means clustering based image segmentation. \item The extracted feature vectors are fed to the trained SVM model. \item Finally, the output is a label that classifies the input image as fresh or infected fish. \end{itemize} This system architecture covers the entire process from data acquisition to model training and class prediction. \section{Evaluation} This section describes the experimental setup used to evaluate our proposed approach. We extract statistical and grey-level co-occurrence matrix (GLCM) features from our fish image dataset, and we use several performance evaluation metrics to assess how well the classifier predicts new data. \subsection{Environment Specifications} We use a combination of MATLAB \footnote{https://www.mathworks.com/solutions/image-video-processing.html}, a multi-paradigm programming language, and Python. For image processing tasks such as cubic splines interpolation, adaptive histogram equalization, and image conversion from RGB to L*a*b, we use MATLAB. Feature extraction and training of our SVM model are carried out in Python on the Google Colab \footnote{https://colab.research.google.com/} platform. Image interpretation and classification require substantial computing power, and installing powerful computing tools with additional hardware support is expensive. We therefore use the Google Colab platform, which provides high-end CPUs and GPUs in the cloud and allows us to train our model efficiently in less time. There is no extra burden of installing the necessary packages, because the platform ships with all the packages required in the training process \cite{bisong2019google}. Google Colab provides an NVIDIA K80 with 12 GB of GPU memory and 358 GB of disk space, which gives ample computational power to train machine learning models.
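For illustration, the following minimal Python sketch reproduces the preprocessing and segmentation pipeline described above (cubic spline resizing, CLAHE, RGB to L*a*b conversion, and \textit{k}-means clustering). It is only an assumed equivalent of our MATLAB implementation: it relies on the scikit-image and scikit-learn libraries, and the standard clip-limited CLAHE stands in for the Rayleigh-based variant used in our experiments.
\begin{verbatim}
import numpy as np
from skimage import io, transform, exposure, color
from sklearn.cluster import KMeans

def preprocess_and_segment(path, shape=(250, 600), n_clusters=3):
    """Resize with cubic splines, enhance contrast, convert to L*a*b,
    and segment the image with k-means clustering."""
    img = io.imread(path)                                  # RGB image
    # Cubic spline interpolation (order=3) to the fixed 600 x 250 size
    img = transform.resize(img, shape, order=3, anti_aliasing=True)
    # Contrast-limited adaptive histogram equalization (clip limit 0.01)
    img = exposure.equalize_adapthist(img, clip_limit=0.01)
    # RGB -> L*a*b; cluster only the chromaticity channels a and b
    lab = color.rgb2lab(img)
    ab = lab[:, :, 1:].reshape(-1, 2)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(ab)
    return labels.reshape(shape)          # per-pixel cluster index
\end{verbatim}
Clustering only the a and b channels keeps the segmentation driven by color rather than lightness, which is what makes the L*a*b representation convenient for isolating discolored (infected) regions.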
\subsection{Experimental Dataset} Since there is no publicly accessible dataset of fresh and infected salmon fish, we prepared a novel dataset, with some images collected from the internet and most of them from aquaculture farms. The dataset contains images of fresh and infected salmon fish, examples of which are displayed in Figure \ref{FIG:salmonFishExample}. We collected a total of 266 images, which are used to train and validate our model. The split into training and testing data is shown in Table \ref{tab:datasetSplitting}: 231 training images and 35 testing images. Since data acquisition is a complicated process, we apply image augmentation to expand the dataset. Here we use the \textit{image\textunderscore augmentor \begin{NoHyper}\footnote{https://github.com/codebox/image\textunderscore augmentor}\end{NoHyper}} tool with image augmentation operations such as horizontal flip (fliph), vertical flip (flipv), rotation (rot), pixel shifting (trans), and zoom (zoom). After augmentation, we obtain 1,105 training images and 221 testing images, as shown in Table \ref{tab:datasetSplittingWithAugmentaion}. \begin{figure*} \centering \begin{subfigure}{.45\linewidth} \centering\includegraphics[scale=.17]{Images/SalmonFish/SalmonF3.png} \end{subfigure} \begin{subfigure}{.45\linewidth} \centering\includegraphics[scale=.17]{Images/SalmonFish/salmonD1.png} \end{subfigure} \medskip \begin{subfigure}{.45\linewidth} \centering\includegraphics[scale=.17]{Images/SalmonFish/SalmonF2.png} \caption{Fresh Fish} \end{subfigure} \begin{subfigure}{.45\linewidth} \centering\includegraphics[scale=.17]{Images/SalmonFish/salmonD2.png} \caption{Infected Fish} \end{subfigure} \medskip \caption{Salmon Fish (Two samples of fresh fish and infected fish from our dataset).} \label{FIG:salmonFishExample} \end{figure*} \begin{table}[] \centering \caption{Overall dataset splitting (The total number of fresh and infected fish images without augmentation).} \label{tab:datasetSplitting} \begin{tabular}{ccc} \hline \textbf{Fish} & \textbf{Training images} & \textbf{Testing images} \\ \hline Fresh fish & 68 & 15 \\ \hline Infected fish & 163 & 20 \\ \hline \textbf{Total} & \textbf{231} & \textbf{35} \\ \hline \bottomrule \end{tabular} \end{table} \begin{table}[] \centering \caption{Overall dataset splitting (The total number of fresh and infected fish images with augmentation). }\label{tab:datasetSplittingWithAugmentaion} \begin{tabular}{ccc} \hline \textbf{Fish} & \textbf{Training images} & \textbf{Testing images} \\ \hline Fresh fish & 320 & 64 \\ \hline Infected fish & 785 & 157 \\ \hline \textbf{Total} & \textbf{1,105} & \textbf{221} \\ \hline \bottomrule \end{tabular} \end{table} \subsection{Features Extraction} We consider two types of feature extraction techniques: statistical features and grey-level co-occurrence matrix (GLCM) features, both chosen for their relevance to interpreting fish diseases. The statistical features are described as follows. \begin{itemize} \item Mean ($\mu$): Suppose there are $P$ pixels in the infected region and the gray-scale color intensity of pixel $i$ is $\psi_i$; the mean $\mu$ is then given by Equation \ref{eqn:Fea_1}.
\begin{equation}\label{eqn:Fea_1} \mu = \frac{\sum_{i=1}^{P}\psi_i}{P} \end{equation} \item Standard deviation ($\sigma$): Suppose there are $P$ pixels in the infected region, where the gray-scale color intensity of pixel $i$ is $\psi_i$ and the mean gray-scale intensity of all pixels is $\mu$. The standard deviation $\sigma$ is defined in Equation \ref{eqn:Fea_2}. \begin{equation}\label{eqn:Fea_2} \sigma = \sqrt{\frac{\sum_{i=1}^{P}(\psi_i-\mu)^2}{P}} \end{equation} \item Variance (${\sigma}^2$): If there are $P$ pixels in the infected region, where the gray-scale intensity of pixel $i$ is $\psi_i$ and the mean intensity of all pixels is $\mu$, then the variance ${\sigma}^2$ is defined in Equation \ref{eqn:Fea_3}. \begin{equation}\label{eqn:Fea_3} {\sigma}^2 = {\frac{\sum_{i=1}^{P}(\psi_i-\mu)^2}{P}} \end{equation} \item Kurtosis ($\kappa$): With $P$ pixels in the infected region, pixel intensities $\psi_i$, and mean intensity $\mu$, the kurtosis $\kappa$ is defined in Equation \ref{eqn:Fea_4}. \begin{equation}\label{eqn:Fea_4} \kappa = \frac{\frac{1}{P}\sum_{i=1}^{P}(\psi_i-\mu)^4}{(\frac{1}{P}\sum_{i=1}^{P}(\psi_i-\mu)^2)^2}-3 \end{equation} \item Skewness ($\gamma$): Here, $\mu$ is the mean, $\sigma$ is the standard deviation, and $\rho_m$ is the mode of the gray-scale intensities of all pixels in the infected area. The skewness $\gamma$ is then defined in Equation \ref{eqn:Fea_5}. \begin{equation}\label{eqn:Fea_5} \gamma = \frac{\mu - \rho_m}{\sigma} \end{equation} \end{itemize} Along with these statistical features, a number of GLCM features are used. They are convenient for extracting textural features from images: by examining the relationship between two pixels at a time, the intensity variation around a pixel can be assessed. Let $f(a,b)$ be a two-dimensional digital image with $X \times Y$ pixels and $G_L$ gray levels, and let $(a_1,b_1)$ and $(a_2,b_2)$ be two pixels in $f(a,b)$ separated by a distance $D$ at an angle $\theta$ with respect to the ordinate. The GLCM $M(i, j, D, \theta)$ is then defined by Equation \ref{eqn:glcm_1}, which counts the pixel pairs at offset $(D,\theta)$ whose gray levels are $i$ and $j$. \begin{equation}\label{eqn:glcm_1} M(i, j, D, \theta) = \left | \left\{ \big((a_1,b_1),(a_2,b_2)\big) \, \epsilon \, X \times Y: f(a_1,b_1) = i, \, f(a_2,b_2) = j \right\} \right | \end{equation} In this experiment, we use five GLCM features, namely contrast ($C$), correlation ($\chi$), energy ($\zeta$), entropy ($\Delta$), and homogeneity ($\xi$), given in Equations \ref{eqn:glcm_2} to \ref{eqn:glcm_6}. \begin{equation}\label{eqn:glcm_2} Contrast \; \; C = \sum_{i=0}^{G_L-1}\sum_{j=0}^{G_L-1}(i-j)^2M(i,j) \end{equation} \begin{equation}\label{eqn:glcm_3} Correlation \; \; \chi = \frac{\sum_{i=0}^{G_L-1}\sum_{j=0}^{G_L-1}i.j.M(i,j)-\mu_a.\mu_b}{\sigma_a.\sigma_b} \end{equation} \begin{equation}\label{eqn:glcm_4} Energy \; \; \zeta = \sum_{i=0}^{G_L-1}\sum_{j=0}^{G_L-1}M(i,j)^2 \end{equation} \begin{equation}\label{eqn:glcm_5} Entropy \; \; \Delta = -\sum_{i=0}^{G_L-1}\sum_{j=0}^{G_L-1}M(i,j)\log M(i,j) \end{equation} \begin{equation}\label{eqn:glcm_6} Homogeneity \; \; \xi = \sum_{i=0}^{G_L-1}\sum_{j=0}^{G_L-1}\frac{M(i,j)}{1+(i-j)^2} \end{equation} Here, $\mu_a$, $\mu_b$, $\sigma_a$, and $\sigma_b$ are the means and standard deviations computed along the rows and columns of $M(i,j)$, respectively.
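As a concrete illustration, the following Python sketch assembles the ten-dimensional feature vector from a segmented region. It is an assumed implementation using scikit-image's \texttt{graycomatrix}/\texttt{graycoprops} and SciPy: SciPy's moment-based skewness stands in for the mode-based skewness of Equation \ref{eqn:Fea_5}, and the entropy is computed directly from the normalized co-occurrence matrix.
\begin{verbatim}
import numpy as np
from scipy import stats
from skimage.feature import graycomatrix, graycoprops

def extract_features(gray, mask, levels=256):
    """Ten-dimensional feature vector: five statistical features from the
    segmented (infected) region and five GLCM texture features.
    gray: uint8 gray-scale image; mask: boolean region-of-interest mask."""
    psi = gray[mask].astype(np.float64)          # intensities in the region
    statistical = [psi.mean(),                   # mean
                   psi.std(),                    # standard deviation
                   psi.var(),                    # variance
                   stats.kurtosis(psi),          # excess kurtosis (the -3 form)
                   stats.skew(psi)]              # moment-based skewness

    # Gray-level co-occurrence matrix at distance D=1 and angle theta=0
    glcm = graycomatrix(gray, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    M = glcm[:, :, 0, 0]
    entropy = -np.sum(M[M > 0] * np.log(M[M > 0]))
    texture = [graycoprops(glcm, 'contrast')[0, 0],
               graycoprops(glcm, 'correlation')[0, 0],
               graycoprops(glcm, 'energy')[0, 0],
               entropy,
               graycoprops(glcm, 'homogeneity')[0, 0]]
    return np.array(statistical + texture)
\end{verbatim}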
\subsection{Proposed Classifier} We use a linear SVM for the non-separable case as our classifier. Since the total numbers of training and testing images (without augmentation) are 231 and 35, respectively, the training dataset is $\{(x_1,y_1), (x_2, y_2), ...,(x_{231}, y_{231})\}$, where $x_i = (\mu, \sigma, {\sigma}^2,\kappa, \gamma, C, \chi, \zeta, \Delta, \xi)$ is the input vector and $y_i = \pm 1$. The Lagrange multipliers $\{\alpha_1, \alpha_2,...,\alpha_{231}\}$ are obtained by maximizing Equation \ref{eqn:SVM_Evaluation}, \begin{equation}\label{eqn:SVM_Evaluation} Q(\alpha ) = \sum_{i=1}^{231}\alpha _i - \frac{1}{2} \sum_{i,j=1}^{231}\alpha _i \alpha_j y_i y_j x_i x_j \end{equation} subject to the constraints\\ 1. $\sum_{i=1}^{231}\alpha_iy_i = 0$\\ 2. $0 \leq \alpha_i \leq C \; \; for \; i = 1,2,...,231$ \vspace{.5cm} where $C$ is a non-negative parameter acting as an upper bound on $\alpha_i$, known as the penalty parameter. The $C$ parameter assigns a penalty to misclassified data points. A low value of $C$ tolerates misclassifications, so a large-margin decision boundary is obtained at the expense of a higher number of misclassifications, whereas for a large value of $C$ the SVM seeks to reduce the number of misclassifications, resulting in a smaller-margin decision boundary. We set all SVM parameters through the training process and apply a large numeric value for $C$. We tested four kernels, namely linear, sigmoid, polynomial, and Gaussian, and observed that the accuracy varies by a negligible amount among them; the linear kernel performs satisfactorily with a short processing time, so we use it in this work. \subsection{Performance Evaluation Metrics} We assess the performance of our trained SVM model based on several metrics. To judge predictions on new data or images, we use a confusion matrix to visualize our model's performance. A confusion matrix comprises four building blocks, namely True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN). TP and TN refer to the cases where positive and negative predictions, respectively, are correct; FP refers to incorrect positive predictions, and FN to incorrect negative predictions \cite{marom2010using}. From the confusion matrix we compute further metrics to evaluate our model: Accuracy, Precision, Recall or Sensitivity, Specificity, and F1 score, calculated with the formulas below. \textbf{Accuracy:} Accuracy, given in Equation \ref{equation:accuracy}, is the proportion of correctly classified fishes out of the total number of fishes in the test set. \begin{equation} Accuracy = \frac{\sum_{i}^{N} P_i}{\sum_{i}^{N} \left | Q_i \right |} \times 100\% \label{equation:accuracy} \end{equation} where ${\sum_{i}^{N} P_i}$ is the number of correct predictions, and ${\sum_{i}^{N} \left | Q_i \right |}$ is the total number of predictions. For binary classification, accuracy can also be calculated with Equation \ref{equation:accuracy2}. \begin{equation} Accuracy = \frac{TP + TN}{TP + TN + FP + FN} \times 100\% \label{equation:accuracy2} \end{equation} where $TP$ = True Positives, $TN$ = True Negatives, $FP$ = False Positives, and $FN$ = False Negatives. \textbf{Precision:} The ratio of correctly classified fishes (TP) to all positive predictions (the sum of TP and FP) defines the precision.
It calculates the percentage of accurately classified fishes, as in Equation \ref{equation:precision}. \begin{equation} Precision = \frac{TP}{TP + FP} \times 100\% \label{equation:precision} \end{equation} \textbf{Recall or Sensitivity:} The ratio of correctly classified fishes (TP) to the ground-truth positives (the total number of TP and FN) is defined in Equation \ref{equation:recall}. \begin{equation} Recall \; or \; Sensitivity = \frac{TP}{TP + FN} \times 100\% \label{equation:recall} \end{equation} \textbf{Specificity:} The ratio of TN to the sum of FP and TN determines the specificity, as in Equation \ref{equation:specificity}. \begin{equation} Specificity = \frac{TN}{FP + TN} \times 100\% \label{equation:specificity} \end{equation} \textbf{F1 score (F-measure):} This metric is calculated as the harmonic mean of precision and recall \cite{minh2017deep}, as in Equation \ref{equation:F1}. \begin{equation} F1 \, score = \frac{2 * Precision * Recall}{Precision + Recall} \label{equation:F1} \end{equation} We cannot rely only on the F1 score and accuracy, since a very high cutoff can exaggerate the accuracy of a model. Therefore, we also measure the FPR (False Positive Rate), FNR (False Negative Rate), and TPR (True Positive Rate) with Equations \ref{equation:fpr} to \ref{equation:tpr}. \begin{equation} FPR = \frac{FP}{FP + TN} \times 100\% \label{equation:fpr} \end{equation} \begin{equation} FNR = \frac{FN}{FN + TP} \times 100\% \label{equation:fnr} \end{equation} \begin{equation} TPR = \frac{TP}{TP + FN} \times 100\% \label{equation:tpr} \end{equation} We use the Receiver Operating Characteristic (ROC) curve for additional evaluation, from which the area under the ROC curve (AUC) can be estimated \cite{bradley1997use}. The ROC curve is generated from the TPR and FPR of Equations \ref{equation:tpr} and \ref{equation:fpr}, and it conveys how well a model or classifier differentiates between classes: the higher the AUC, the better the classifier's predictions. \section{Experimental Results} This section examines the results of our SVM model to inspect its robustness and to show the outcomes of the applied techniques on both the regular and augmented datasets. We present the results and comparisons with graphical representations and tables. First, an input image of arbitrary dimensions is converted and magnified to a fixed size of 600 $\times$ 250 pixels according to our proposed framework. The image is then segmented into various regions using the \textit{k}-means clustering technique, so that the infected and fresh areas of a fish image become easy to identify; after segmentation, the infected areas are more observable. All these stages are shown in Figure \ref{FIG:salmonStages}.
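Before presenting the class-wise results, the following minimal Python sketch ties the classifier and the metrics of the previous section together. It is an illustrative scikit-learn version only: the arrays \texttt{X\_train}, \texttt{y\_train}, \texttt{X\_test}, and \texttt{y\_test} are assumed to hold the ten-dimensional feature vectors and labels (1 = infected, 0 = fresh) built as described above.
\begin{verbatim}
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import confusion_matrix, roc_auc_score

def train_and_evaluate(X_train, y_train, X_test, y_test, C=1.0):
    """Train the linear SVM on the 10-dimensional feature vectors and
    compute the evaluation metrics of the previous section.
    Label convention assumed: 1 = infected fish, 0 = fresh fish."""
    clf = SVC(kernel='linear', C=C, probability=True).fit(X_train, y_train)
    y_pred = clf.predict(X_test)

    tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)                      # sensitivity / TPR
    metrics = {
        'accuracy': (tp + tn) / (tp + tn + fp + fn),
        'precision': precision,
        'recall': recall,
        'specificity': tn / (tn + fp),
        'f1': 2 * precision * recall / (precision + recall),
        'fpr': fp / (fp + tn),
        'fnr': fn / (fn + tp),
        'auc': roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]),
    }
    return clf, metrics
\end{verbatim}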
\begin{figure*} \centering \begin{subfigure}{.24\linewidth} \centering\includegraphics[scale=.20]{Images/SalmonFish/ImageProcessing/salmon_dis_31.png} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering\includegraphics[scale=.20]{Images/SalmonFish/ImageProcessing/salmon_dis_31.png} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering\includegraphics[scale=.20]{Images/SalmonFish/ImageProcessing/Clache1.jpg} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering\includegraphics[scale=.15]{Images/SalmonFish/ImageProcessing/Kmeans1.png} \end{subfigure} \medskip \begin{subfigure}{.24\linewidth} \centering\includegraphics[scale=.20]{Images/SalmonFish/ImageProcessing/salmon_dis_32.png} \caption{Input image} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering\includegraphics[scale=.20]{Images/SalmonFish/ImageProcessing/salmon_dis_32.png} \caption{Resized image} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering\includegraphics[scale=.20]{Images/SalmonFish/ImageProcessing/Clache2.jpg} \caption{Contrast enhanced image} \end{subfigure} \begin{subfigure}{.24\linewidth} \centering\includegraphics[scale=.15]{Images/SalmonFish/ImageProcessing/Kmeans2.png} \caption{k-means segmented image} \end{subfigure} \medskip \caption{Various stages of image processing (the four stages applied before feature extraction).} \label{FIG:salmonStages} \end{figure*} \subsection{Classification Performance of Proposed SVM} The classification assessment of our proposed SVM classifier is described in Table \ref{tab:ClassificationResult} (without augmentation) and Table \ref{tab:ClassificationResultWithAugmentation} (with augmentation). Both tables report only the SVM classifier, which distinguishes two classes: fresh fish and infected fish. In Table \ref{tab:ClassificationResult}, the fresh fish class shows a high sensitivity of 98.46\% with an accuracy of 92.0\%; the precision, F1 score, and specificity are 92.75\%, 95.52\%, and 50.0\%, respectively. In the infected fish class, the highest percentage, 96.02\%, is the F1 score, with an accuracy of 93.50\% and a recall of 98.13\%. Table \ref{tab:ClassificationResultWithAugmentation} shows an accuracy of 93.75\% for the fresh fish class and 94.90\% for the infected fish class, together with good F1 scores of 96.23\% and 97.08\% for the fresh and infected fish classes. Comparing Table \ref{tab:ClassificationResult} and Table \ref{tab:ClassificationResultWithAugmentation}, the infected fish class accuracy, 93.50\% and 94.90\%, is higher than that of the fresh fish class. Looking at the FPR and FNR of both classes, the infected class shows a slightly higher FNR and a lower FPR than the fresh fish class; the low FPR and FNR indicate that our model is neither underfitting nor overfitting. So, as an individual class prediction, the infected fish class performs satisfactorily.
\begin{table*}[] \caption{Class-wise classification results of SVM (Without augmentation).} \label{tab:ClassificationResult} \begin{tabular}{ccccccccc} \hline \textbf{Classifier} & \textbf{Class} & \textbf{\begin{tabular}[c]{@{}c@{}}Accuracy\\ (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Precision\\ (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Recall/Sensitivity\\ (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Specificity\\ (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}F1-score\\ (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}False positive\\ Rate (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}False negative\\ Rate (\%)\end{tabular}} \\ \hline \multirow{2}{*}{SVM} & Fresh fish & 92.0 & 92.75 & 98.46 & 50.0 & 95.52 & 50.0 & 1.54 \\ \cline{2-9} & Infected fish & 93.50 & 94.01 & 98.13 & 75.0 & 96.02 & 25.0 & 1.875 \\ \hline \bottomrule \end{tabular} \end{table*} \begin{table*}[] \caption{Class-wise classification results of SVM (With augmentation).} \label{tab:ClassificationResultWithAugmentation} \begin{tabular}{ccccccccc} \hline \textbf{Classifier} & \textbf{Class} & \textbf{\begin{tabular}[c]{@{}c@{}}Accuracy\\ (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Precision\\ (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Recall/Sensitivity\\ (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Specificity\\ (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}F1-score\\ (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}False positive\\ Rate (\%)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}False negative\\ Rate (\%)\end{tabular}} \\ \hline \multirow{2}{*}{SVM} & Fresh fish & 93.75 & 96.23 & 96.23 & 81.82 & 96.23 & 18.19 & 3.77 \\ \cline{2-9} & Infected fish & 94.90 & 98.52 & 95.68 & 88.89 & 97.08 & 11.11 & 4.31 \\ \hline \bottomrule \end{tabular} \end{table*} We represent the two confusion matrices as heat maps in Figure \ref{FIG:heatmapSVM}, both with and without augmentation, for a clearer graphical representation. These heat maps conveniently display the correct classifications and misclassifications of our binary classes. From the confusion matrix in Figure \ref{FIG:heatmapSVM} (a), we see that fresh fish are misclassified as infected only twice, and infected fish are misclassified as fresh once. Figure \ref{FIG:heatmapSVM} (b) shows seven fresh fish misclassified as infected and six infected fish misclassified as fresh. \begin{figure*}[!h] \centering \begin{subfigure}{.48\linewidth} \centering\includegraphics[width=\linewidth]{Images/Results/heatMapSVM.pdf} \caption{Without augmentation} \end{subfigure} \begin{subfigure}{.48\linewidth} \centering\includegraphics[width=\linewidth]{Images/Results/heatMapSVMWithAugmentation.pdf} \caption{With augmentation} \end{subfigure} \caption{Confusion matrix for SVM classifier.} \label{FIG:heatmapSVM} \end{figure*} Table \ref{tab:matricEvaluation} summarizes the overall, metric-wise performance of our proposed SVM classifier with and without augmentation. The accuracy is 91.42\% without augmentation and 94.12\% with augmentation, which is reliable for detecting infected fish.
\begin{table*}[] \caption{Metric evaluation of SVM classifier.} \label{tab:matricEvaluation} \begin{tabular}{|c|c|c|} \hline \multirow{2}{*}{\textbf{Evaluation metric}} & \multicolumn{2}{c|}{\textbf{Value (\%)}} \\ \cline{2-3} & \multicolumn{1}{l|}{\textbf{Without augmentation}} & \multicolumn{1}{l|}{\textbf{With augmentation}} \\ \hline Accuracy & 91.42 & 94.12 \\ \hline Precision & 86.67 & 89.06 \\ \hline Recall or Sensitivity & 92.86 & 90.48 \\ \hline Specificity & 90.48 & 95.57 \\ \hline F1-score & 89.66 & 89.76 \\ \hline False positive rate & 4.43 & 9.52 \\ \hline False negative rate & 9.52 & 7.14 \\ \hline \end{tabular} \end{table*} Figure \ref{FIG:rovSVM} shows ROC curves reflecting the overall classification performance of the SVM with and without augmentation, plotting the true positive rate (TPR) against the false positive rate (FPR). Figure \ref{FIG:rovSVM} (a) shows a micro-average AUC of 96.20\% and a macro-average AUC of 95.93\% without augmentation, while Figure \ref{FIG:rovSVM} (b) shows a micro-average AUC of 98.12\% and a macro-average AUC of 96.71\% with augmentation. \begin{figure*}[!h] \centering \begin{subfigure}{.48\linewidth} \centering\includegraphics[width=\linewidth]{Images/Results/ROCSVM.pdf} \caption{Without augmentation} \end{subfigure} \begin{subfigure}{.48\linewidth} \centering\includegraphics[width=\linewidth]{Images/Results/ROCSVMWithAugmentaion.pdf} \caption{With augmentation} \end{subfigure} \caption{ROC curve for SVM classifier.} \label{FIG:rovSVM} \end{figure*} So far we have analyzed the SVM, but we also investigate three other classifiers for comparison: decision tree, logistic regression, and naïve Bayes. Figure \ref{FIG:comparisonClassifier} presents a bar diagram of the evaluation metrics of all four classifiers. The SVM with augmentation outperforms the other three classifiers on every metric. The decision tree performs more reliably than logistic regression, with an accuracy of 81.54\% versus 80.0\%, which in turn is better than naïve Bayes. The decision tree's remaining metrics, precision, sensitivity, specificity, and F1-score, are 84.84\%, 80.0\%, 83.33\%, and 82.35\%, respectively, lower than the SVM but higher than logistic regression and naïve Bayes. \begin{figure*} \centering \centering\includegraphics[scale=.46]{Images/Results/ClassificationComparision.pdf} \caption{Comparison of classifiers evaluation metrics with image augmentation (Value of Accuracy, Precision, Sensitivity, Specificity and F1-score for SVM, Decision Tree, Logistic Regression and Naive Bayes).} \label{FIG:comparisonClassifier} \end{figure*} Finally, Figure \ref{FIG:predictedFish} shows predictions from our SVM classifier for the two classes, fresh and infected fish; the classifier correctly predicts the inserted original images.
\begin{figure*} \centering \begin{subfigure}{.45\linewidth} \centering\includegraphics[scale=.23]{Images/Results/PredictFresh.png} \caption{Original: Fresh Fish} \end{subfigure} \begin{subfigure}{.45\linewidth} \centering\includegraphics[scale=.23]{Images/Results/PredictInfected.png} \caption{Original: Infected Fish} \end{subfigure} \medskip \caption{Fish prediction according to SVM.} \label{FIG:predictedFish} \end{figure*} \subsection{Comparative analysis} Research on machine learning based fish disease detection remains limited; related works are comparatively fewer than for other detection problems such as fruit disease and crop disease. To put the evaluation metrics of our proposed SVM for identifying infected fish in context, we study some relevant published research works. In Table \ref{tab:comprisonTable}, we list research related to identifying fish disease. Some works concentrate only on image processing to identify fish disease, while others use machine learning based classification models. Shaveta et al. \cite{LRShaveta} used the \textit{k}-means segmentation algorithm with a feature set of size two and applied a neural network classifier to achieve an accuracy of 86\%. Lyubchenko et al. \cite{LRLyubchenko} applied segmentation combining \textit{k}-means clustering and mathematical morphology; this work used three features and did not apply any classifier, so accuracy is not applicable. Malik et al. \cite{LRMalik} used edge detection and morphological operations for segmentation, with three features, and applied multiple classification models for comparison: a neural network and K-NN (K-nearest neighbour), with 86.0\% and 63.32\% accuracy, respectively. \begin{table*}[\linewidth,cols=8,pos=h] \caption{Comparison analysis between this work and related works.} \label{tab:comprisonTable} \begin{tabular}{llllll} \hline \textbf{Work} & \textbf{\begin{tabular}[c]{@{}l@{}}Segmentation \\ Algorithm\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Feature \\ Set\\ Size\end{tabular}} & \textbf{\begin{tabular}[c]{@{}l@{}}Classification\\ Performed\end{tabular}} & \textbf{Classifier} & \textbf{\begin{tabular}[c]{@{}l@{}}Accuracy\\ (\%)\end{tabular}} \\ \hline This work (with augmentation) & k-means clustering & 10 & Yes & SVM & 94.12 \\ This work (without augmentation) & --- & --- & --- & --- & 91.42 \\ \hline Shaveta et al. \cite{LRShaveta} & k-means clustering & 2 & Yes & Neural network & 86.0 \\ \hline Lyubchenko et al. \cite{LRLyubchenko} & \begin{tabular}[c]{@{}l@{}}Combination of k-means \\ clustering and \\ mathematical morphology\end{tabular} & 3 & No & Not applicable & Not applicable \\ \hline Malik et al. \cite{LRMalik} & \begin{tabular}[c]{@{}l@{}}Edge detection and \\ morphological operation\end{tabular} & 3 & Yes & \begin{tabular}[c]{@{}l@{}}Neural network \\ and K-NN (Nearest \\ Neighbour)\end{tabular} & \begin{tabular}[c]{@{}l@{}}86.0 (NN) and \\ 63.32 (K-NN)\end{tabular} \\ \hline \end{tabular} \end{table*} \section{Discussion} Salmon fish disease detection is an important research area that deserves more attention in automated research. However, few intelligent solutions for this area have appeared in modern times, and no existing dataset was available for this research purpose.
In this work, we introduce a novel dataset for salmon fish disease detection and conduct our research on it. Table \ref{tab:datasetSplitting} and Table \ref{tab:datasetSplittingWithAugmentaion} describe our dataset and how we divide it for the experiments in this research. Figure \ref{FIG:salmonFishExample} shows a small portion of our dataset, with images of fresh and infected fish; these are the input images that we process and feed to our classifier. The main goal of this research is to classify infected and fresh salmon fish. We conduct the experiments on a real-world image dataset to obtain a reliable system. To ensure high accuracy, we choose a very efficient machine learning algorithm, the support vector machine, which is known as one of the leading supervised learning algorithms for classification. In this work, we justify selecting the SVM classifier by comparing our results with other algorithms. The graph in Figure \ref{FIG:comparisonClassifier} justifies our decision to choose the SVM classifier over other techniques: it outperforms the alternatives at indicating infected and fresh fish on every performance evaluation metric we considered. Compared with logistic regression, decision tree, and naïve Bayes, the SVM scores higher on accuracy, precision, sensitivity, specificity, and F1 score. The metric values in Table \ref{tab:ClassificationResult} and Table \ref{tab:ClassificationResultWithAugmentation} for our proposed classifier confirm the effectiveness of this work. Before classification, we apply image processing techniques such as cubic spline interpolation, adaptive histogram equalization, and \textit{k}-means segmentation. Figure \ref{FIG:salmonStages} shows how these techniques normalize the raw input image for the classifier. Figure \ref{FIG:salmonStages}(b) shows the resized output of the image in \ref{FIG:salmonStages}(a), obtained with cubic spline interpolation. Figure \ref{FIG:salmonStages}(c) shows the contrast-enhanced image resulting from adaptive histogram equalization, which makes the images clearer for the classifier. We then apply \textit{k}-means clustering segmentation to differentiate the infected and fresh parts of an image; \ref{FIG:salmonStages}(d) displays segmented images from our experiment. We conduct our experiment by feeding the processed images to the proposed SVM classifier and measure its performance with different evaluation metrics, which are good indicators of the efficiency of a trained model. Table \ref{tab:matricEvaluation} reports the values of these metrics. We also present heat maps of the confusion matrices, which visualize the performance of a machine learning algorithm; Figures \ref{FIG:heatmapSVM} (a) and \ref{FIG:heatmapSVM} (b) show our confusion matrices, where the number of misclassifications by our classifier is very low. We present one more result in support of our classifier: the ROC (Receiver Operating Characteristic) curve in Figure \ref{FIG:rovSVM}, which conveys the classifier's performance at every possible classification threshold by plotting the true positive rate against the false positive rate.
We mentioned earlier that little research has been conducted on fish diseases and that the existing work is not up to the mark. Nevertheless, we found several related works and compare them with ours to position our research. In Table \ref{tab:comprisonTable}, we differentiate our work from other works. The most noticeable difference is that none of these works focuses explicitly on salmon fish. One of them uses only image processing techniques and is therefore not an intelligent system; the other two use neural networks but achieve lower accuracy than our classifier, and the number of features they consider is smaller than ours. \vspace{-5pt} \section{Conclusion and Future Work} In this research work, we introduce a machine learning based classification model (SVM) to identify infected fish. The novel real-world dataset used to train our model comprises 163 infected and 68 fresh images without augmentation, and 785 infected and 320 fresh images with augmentation. We classify fish into two individual classes: fresh fish and infected fish. We evaluate our model with various metrics and visualize the classification results. Besides developing our classifier, we applied image processing techniques, namely \textit{k}-means segmentation, cubic spline interpolation, and adaptive histogram equalization, to make the input images more suitable for the classifier. We also compare our results with three other classification models and observe that the proposed classifier is the best solution in this case. This work contributes an automated fish disease detection system superior to existing systems that rely only on image processing or achieve lower accuracy: we combine modern image processing with a well-established supervised learning technique, and the resulting classifier predicts infected fish on our novel real-world dataset with higher accuracy than other systems. In the future, we plan to utilize various convolutional neural network (CNN) architectures to identify fish disease more precisely. Moreover, we will focus on implementing a real-life IoT device based on the proposed system, which could help aquaculture farmers identify infected salmon and take proper steps before facing unexpected losses. We will work with different fish datasets to make our system usable in other sectors of aquaculture, and we will continue to expand our existing dataset, as salmon is in high demand worldwide. \vspace{-3pt} \vspace{-5pt} \bibliographystyle{abbrv}
{ "timestamp": "2021-05-11T02:20:27", "yymm": "2105", "arxiv_id": "2105.03934", "language": "en", "url": "https://arxiv.org/abs/2105.03934" }
\section{Introduction}\label{sec:introduction}} \IEEEPARstart{H}{umans} have the amazing ability to learn new concepts from only a few examples, and then effortlessly generalize this knowledge to new samples. In contrast, despite considerable progress, existing image classification models based on deep neural networks, e.g.\@,~\cite{krizhevsky2012imagenet,resnet}, are still highly dependent on large amounts of annotated training data~\cite{imagenet_cvpr09} to achieve satisfactory performance. This learnability gap between human intelligence and existing neural networks has motivated many to study learning from a few samples, e.g.\@,~\cite{fei2006one,lake2015human,ravi2017optimization,finn2017model}. Meta-learning, \textit{a.k.a.} learning to learn~\cite{Schmidhuber1992,thrun2012learning}, emerged as a promising direction for few-shot learning~\cite{andrychowicz2016learning,ravi2017optimization,finn2017model,zhen2020learning}. The working mechanism of meta-learning involves a meta-learner that exploits the common knowledge from various tasks to improve the performance of each individual task. Remarkable success has been achieved in learning good parameter initializations~\cite{finn2017model,rusu2018meta}, efficient optimization update rules~\cite{andrychowicz2016learning, ravi2017optimization}, and powerful common metrics~\cite{vinyals2016matching, snell2017prototypical} from related tasks, which enables fast adaptation to new tasks with few training samples. Meta-learning has also proven to be effective in learning amortized networks shared by related tasks, which generate specific parameters \cite{gordon2018meta} or normalization statistics \cite{du2020metanorm} for individual few-shot learning tasks. However, how to properly define and exploit the prior knowledge from experienced tasks remains an open problem for few-shot learning, and is the one we address in this paper. An effective base-learner should be powerful enough to solve individual tasks, while being able to absorb the information provided by the meta-learner for overall benefit. Kernels \cite{smola1998learning,scholkopf2018learning,hofmann2008kernel} have proven to be a powerful technique in the machine learning toolbox, e.g.\@,~\cite{cristianini2000introduction,smola2004tutorial,rahimi2007random,sinha2016learning,bach2004multiple}, as they are able to produce strong performance without relying on a large amount of labelled data. Moreover, task-adaptive kernels with random features, leveraging data-driven sampling strategies~\cite{sinha2016learning}, achieve improved performance over universal ones, at low sampling rates~\cite{hensman2017variational,carratino2018learning,bullins2018not,li2019implicit}. This makes kernels with data-driven random features well-suited tools for learning tasks with limited data. Hence, we introduce kernels as base-learners into the meta-learning framework for few-shot learning. However, due to the limited availability of samples, it is challenging to learn informative random features for few-shot tasks by solely relying on a task's own data. Therefore, exploring the shared prior knowledge from different but related tasks is essential for obtaining richer random features for few-shot learning. \begin{figure*}[t] \centering \includegraphics[width=.9\linewidth]{framework.pdf} \vspace{-2mm} \caption{MetaKernel learning framework.
The meta-learner employs an LSTM-based context inference network $\phi(\cdot)$ to infer the spectral distribution over $\bm{\omega}_0^{t}$, the kernel from the support set $\mathcal{S}^t$ of the current task $t$, and the outputs $\mathbf{h}^{t-1}$ and $\mathbf{c}^{t-1}$ of the previous task. The enriched random bases $\bm{\omega}_k^{t}$ are obtained via conditional normalizing flows with a flow of length $k$. During the learning process, the cell state in the LSTM is deployed to accumulate shared knowledge by experiencing a set of prior tasks. The \textit{remember} and \textit{forget} gates in the LSTM episodically refine the cell state by absorbing information from each experienced task. For each individual task, the task-specific information extracted from the support set is combined with distilled information from the previous tasks to infer the adaptive spectral distribution of the kernels.} \label{fig:MeteKernel} \end{figure*} We propose learning task-specific kernels in a data-driven way with variational random features by leveraging the shared knowledge provided by related tasks. To do so, we develop a latent variable model that treats the random Fourier basis of translation-invariant kernels as the latent variable. The posterior over the random feature basis corresponds to the spectral distribution associated with the kernel. The optimization of the model is formulated as a variational inference problem. Kernel learning with random Fourier features for few-shot learning allows us to leverage the universal approximation property of kernels to capture shared knowledge from related tasks. This probabilistic modelling framework provides a principled way of learning data-driven kernels with random Fourier features and, more importantly, fits well into the meta-learning framework for few-shot learning, providing us with the flexibility to customize the variational posterior and leverage meta-knowledge to enhance individual tasks. To incorporate the prior knowledge from experienced tasks, we further propose a context inference scheme to integrate the inference of random feature bases of the current task into the context of previous related tasks. The context inference provides a generalized way to integrate shared knowledge from the related tasks with task-specific information for the inference of random feature bases. To do so, we adopt a long short-term memory (LSTM) based inference network~\cite{hochreiter1997long}, leveraging its capability of learning long-term dependencies to collect and refine the shared meta-knowledge from a set of previously experienced tasks. A preliminary conference version of this work, which also covers variational random features and task context inference, was published previously~\cite{zhen2020learning}. In this extended work, we further propose conditional normalizing flows to infer richer posteriors over the random bases, which allows us to obtain more informative random features. Normalizing flows (NFs)~\cite{dinh2014nice, dinh2016density, rezende2015variational, kingma2018glow, winkler2019learning} model complicated high dimensional marginal distributions by transforming a simple base distribution (e.g.\@, a standard normal) or prior through a learnable, invertible mapping and then applying the change of variables formula. Normalizing flows, which have not yet been explored in few-shot learning, provide a well-suited technique for learning more expressive random features by transforming a random basis into a richer distribution.
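As a toy illustration of the change-of-variables rule that normalizing flows rely on (and not part of our model), the following Python snippet evaluates the exact log-density of a variable obtained by pushing a standard normal base distribution through a single invertible affine map; a normalizing flow chains many such learnable invertible maps and accumulates the corresponding Jacobian terms.
\begin{verbatim}
import numpy as np

def affine_flow_logpdf(y, a=2.0, b=1.0):
    """Change of variables: density of y = a*x + b with x ~ N(0, 1),
    log p_Y(y) = log p_X(f^{-1}(y)) - log |det df/dx|."""
    x = (y - b) / a                                   # invert the transform
    log_px = -0.5 * (x ** 2 + np.log(2.0 * np.pi))    # standard normal log-density
    return log_px - np.log(np.abs(a))                 # Jacobian correction

print(affine_flow_logpdf(np.array([0.0, 1.0, 3.0])))
\end{verbatim}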
The overall learning framework of our MetaKernel is illustrated in Figure~\ref{fig:MeteKernel}. To validate our method, we conduct extensive experiments on fourteen benchmark datasets for a variety of few-shot learning tasks including image classification and regression. Unlike our prior work~\cite{zhen2020learning}, we also experiment on the large-scale Meta-Dataset by Triantafillou et al.\@~\cite{triantafillou2019meta} and the challenging few-shot domain generalization setting suggested by Du et al.\@~\cite{du2020metanorm}. MetaKernel consistently delivers at least comparable and often better performance than state-of-the-art alternatives on all datasets, and the ablative analysis demonstrates the effectiveness of each MetaKernel component for few-shot learning. The rest of this paper is organized as follows: Section~\ref{sec:related} summarizes related work. Section~\ref{sec:method} presents the proposed MetaKernel framework. Section~\ref{sec:experiments} summarizes experimental details, state-of-the-art comparisons and detailed ablation studies. Section~\ref{sec:conclusion} closes with concluding remarks. \section{Related Work} \label{sec:related} \subsection{Meta-Learning} Meta-learning, or learning to learn, endows machine learning models with the ability to improve their performance by leveraging knowledge extracted from a number of prior tasks. It has received increasing research interest with breakthroughs in many directions, e.g.\@,~\cite{finn2017model,rusu2018meta,gordon2018meta,rajeswaran2019meta, hospedales2020meta}. Existing methods can be roughly categorized into four groups. Models in the first group are based on distance metrics and generally learn a shared or adaptive embedding space in which query images are accurately matched to support images for classification. They rely on the assumption that a common metric space is shared across related tasks and usually do not employ an explicit base-learner for each task. By extending the matching network~\cite{vinyals2016matching} to few-shot scenarios, Snell et al.\@~\cite{snell2017prototypical} constructed a prototype for each class by averaging the feature representations of samples from the class in the metric space. The classification is conducted by matching the query samples to prototypes by computing their distances. To enhance the prototype representation, Allen et al.\@~\cite{allen2019infinite} proposed an infinite mixture of prototypes (IMP) to adaptively represent data distributions for each class, using multiple clusters instead of a single vector. Oreshkin et al.\@~\cite{oreshkin2018tadam} proposed a task-dependent adaptive metric for few-shot learning and established prototypes of classes conditioned on a task representation encoded by a task embedding network. Yoon et al.\@~\cite{yoon2019tapnet} proposed a few-shot learning algorithm aided by a linear transformer that performs task-specific null-space projection of the network output. Graph neural network based models generalize the matching methods by learning the message propagation from the support set and transferring it to the query set~\cite{garcia2018few}. Prototype based methods have recently been improved in a variety of ways \cite{cao2019theoretical,triantafillou2019meta,zhen2020memory}. In this work, we design an explicit base-learner based on kernels for each individual task. Algorithms in the second group learn an optimization that is shared across tasks, while being adaptable to new tasks.
Finn et al.\@~\cite{finn2017model} proposed model-agnostic meta-learning (MAML) to learn an appropriate initialization of model parameters and adapt it to new tasks with only a few gradient steps. To make MAML less prone to meta-overfitting, easier to parallelize and more interpretable, Zintgraf et al.\@~\cite{zintgraf2019fast} proposed fast context adaptation via meta-learning (CAVIA), a single model that adapts to a new task via gradient descent by updating only a set of input parameters at test time, instead of the entire network. Ravi and Larochelle~\cite{ravi2017optimization} proposed an LSTM-based meta-learner that is trained to optimize a neural network classifier. It captures both the short-term knowledge in individual tasks and the long-term knowledge common to all tasks. Learning a shared optimization algorithm has also been explored to quickly learn new tasks~\cite{andrychowicz2016learning,chen2017learning}. Bayesian meta-learning methods~\cite{edwards2016towards,finn2018probabilistic, gordon2018meta,saemundsson2018meta} usually rely on hierarchical Bayesian models to learn the shared statistical information from different tasks and to infer the uncertainty of the models. Rusu et al.\@~\cite{rusu2018meta} proposed to learn a low-dimensional latent embedding of model parameters and perform optimization-based meta-learning in this space, which allows for a task-specific parameter initialization and achieves adaptation more effectively. Our method is orthogonal to optimization based methods and learns a specific base-learner for each task. The third group explicitly learns base-learners that incorporate what the meta-learner has learned and effectively address individual tasks~\cite{gordon2018meta, bertinetto2018meta, zhen2020learning}. Gordon et al.\@~\cite{gordon2018meta} avoided the need for gradient based optimization at test time by amortizing the posterior inference of task-specific parameters in their VERSA. It amortizes the cost of inference and alleviates the need for second derivatives during training by replacing test-time optimization with a forward pass through the inference network. To enable efficient adaptation to unseen learning problems, Bertinetto et al.\@~\cite{bertinetto2018meta} incorporated fast solvers with closed-form solutions as the base learning component of their meta-learning framework. These solvers teach the deep network to use ridge regression as part of its own internal model, enabling it to quickly adapt to novel data. In our method, we also deploy an explicit base-learner but, differently, we leverage a memory mechanism based on an LSTM to collect shared knowledge from related tasks and enhance the base-learners for individual tasks. In the fourth group, a memory mechanism is part of the solution, where an external memory module is deployed to store and leverage key knowledge for quick adaptation~\cite{santoro2016meta,munkhdalai2017meta, munkhdalai2017rapid}. Santoro et al.\@~\cite{santoro2016meta} introduced neural Turing machines into meta-learning by augmenting their neural network with an external memory module, which is used to rapidly assimilate new data to help make accurate predictions with only a few samples. Munkhdalai et al.\@~\cite{munkhdalai2017meta} proposed a Meta Network (MetaNet) to learn meta-level knowledge across tasks and shift the inductive biases via fast parameterization for rapid generalization.
Munkhdalai et al.\@~\cite{munkhdalai2017rapid} designed conditionally shifted neurons within the framework of meta-learning, which modify their activation values with task-specific shifts retrieved from a memory module. In this work, we also leverage a memory mechanism, but, differently, we deploy an LSTM module to collect shared knowledge from related tasks experienced previously to help solve individual tasks. \subsection{Kernel Learning} Kernel methods are a versatile and powerful tool in machine learning~\cite{bishop2006pattern, hofmann2008kernel, shervashidze2011weisfeiler}. Pioneering works~\cite{bach2004multiple,gonen2011multiple, duvenaud2013structure} learn to combine predefined kernels in a multi-kernel learning manner. Kernel approximation by random Fourier features (RFFs)~\cite{rahimi2008random} is an effective technique for efficient kernel learning~\cite{gartner2002multi}, which has recently become increasingly popular~\cite{sinha2016learning,carratino2018learning}. RFFs~\cite{rahimi2008random} are derived from Bochner's theorem~\cite{rudin1962fourier}. \begin{theorem}[Bochner's theorem~\cite{rudin1962fourier}] A continuous, real valued, symmetric and shift-invariant function $\mathtt{k}(\mathbf{x},\mathbf{x}') = \mathtt{k}(\mathbf{x}-\mathbf{x}')$ on $\mathbb{R}^d$ is a positive definite kernel if and only if it is the Fourier transform of a positive finite measure $p(\bm{\omega})$ such that \begin{align} \mathtt{k}(\mathbf{x},\mathbf{x}') =& \int_{\mathbb{R}^d} e^{i\bm{\omega}^\top(\mathbf{x}-\mathbf{x}')}dp(\bm{\omega}) = \mathbb{E}_{\bm{\omega}}[\zeta_{\bm{\omega}}(\mathbf{x})\zeta_{\bm{\omega}}(\mathbf{x}')^*] \end{align} where $\zeta_{\bm{\omega}}(\mathbf{x}) = e^{i\bm{\omega}^\top \mathbf{x}}$. \end{theorem} It is guaranteed that $\zeta_{\bm{\omega}}(\mathbf{x})\zeta_{\bm{\omega}}(\mathbf{x}')^*$ is an unbiased estimate of $\mathtt{k}(\mathbf{x}, \mathbf{x}')$ with sufficient RFF bases $\{\bm{\omega}\}$ drawn from $p(\bm{\omega})$~\cite{rahimi2008random}. For a predefined kernel, e.g.\@, the radial basis function (RBF) kernel, we sample from its spectral distribution using the Monte Carlo method, and obtain the explicit feature map: \begin{equation} \mathbf{z}(\mathbf{x}) = \frac{1}{\sqrt{D}} [\cos(\bm{\omega}_1^{\top} \mathbf{x} + b_1), \cdots, \cos(\bm{\omega}_D^{\top} \mathbf{x} + b_D)], \label{rfs} \end{equation} where $\{\bm{\omega}_1, \cdots, \bm{\omega}_D\}$ are the random bases sampled from $p(\bm{\omega})$, and $[b_1, \cdots, b_D]$ are $D$ biases sampled from a uniform distribution with a range of $[0, 2\pi]$. Finally, the kernel value $\mathtt{k}(\mathbf{x}, \mathbf{x}')=\mathbf{z}(\mathbf{x})\mathbf{z}(\mathbf{x}')^{\top}$ in $K$ is computed as the dot product of their random feature maps with the same bases. Wilson and Adams~\cite{wilson2013gaussian} learn kernels in the frequency domain by modelling the spectral distribution as a mixture of Gaussians and computing its optimal linear combination. Instead of modelling the spectral distribution with explicit density functions, other works focus on optimizing the random base sampling strategy~\cite{yang2015carte, sinha2016learning}. Nonetheless, it has been shown that accurate approximation of kernels does not necessarily result in high classification performance \cite{avron2016quasi,chang2017data}.
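For reference, the plain construction in (\ref{rfs}) with a fixed Gaussian spectral distribution can be sketched as follows; this is an illustrative snippet of our own rather than code from the cited works, and the input dimension, number of bases, bandwidth and the common $\sqrt{2/D}$ scaling are chosen purely for demonstration:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, D, sigma = 16, 1024, 1.0      # input dim, number of bases, RBF bandwidth (illustrative)

# Spectral distribution of the RBF kernel exp(-||x - x'||^2 / (2 sigma^2)):
# a Gaussian with standard deviation 1 / sigma per dimension.
omega = rng.normal(0.0, 1.0 / sigma, size=(d, D))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def rff_map(X):
    # Feature map of the equation above; the sqrt(2/D) scaling makes
    # z(x) . z(x') an unbiased estimate of the kernel value.
    return np.sqrt(2.0 / D) * np.cos(X @ omega + b)

X = rng.normal(size=(5, d))
K_rff = rff_map(X) @ rff_map(X).T                      # approximate Gram matrix
sq_dist = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K_rbf = np.exp(-0.5 * sq_dist / sigma ** 2)            # exact RBF Gram matrix
print(np.abs(K_rff - K_rbf).max())                     # small for large D
\end{verbatim}
With the bases fixed in this way, the kernel is identical for every task; the data-driven alternatives discussed below instead adapt the spectral distribution to the data.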
This suggests that learning adaptive kernels with random features by data-driven sampling strategies \cite{sinha2016learning} can improve the performance, even with a low sampling rate, compared to using universal random features \cite{avron2016quasi,chang2017data}. Our work introduces kernels into few-shot meta-learning. We propose to learn kernels with random features in a data-driven way by formulating it as a variational inference problem. This allows us to generate task-specific kernels as well as to leverage shared knowledge from related tasks. \subsection{Normalizing Flows} Normalizing flows (NFs)~\cite{papamakarios2021normalizing,dinh2014nice,rezende2015variational} are promising methods for expressive probability density estimation with tractable distributions. Unlike variational methods, sampling and density evaluation can be efficient and exact for NFs with suitably designed architectures. Generally, NFs are categorized into five types based on how they construct a flow: 1) Autoregressive flows were one of the first classes of flows with invertible autoregressive functions. Examples of such flows include inverse autoregressive flow~\cite{kingma2016improved} and masked autoregressive flow~\cite{papamakarios2017masked}. 2) Linear flows generalize the idea of permuting the input variables via an invertible linear transformation~\cite{kingma2018glow}. 3) Residual flows~\cite{chen2019residual} are designed as residual networks, whose invertibility can be preserved under appropriate constraints. 4) Volume-preserving flows with effective invertible architectures, such as coupling layers~\cite{dinh2016density}, are typically used in generative tasks. 5) Infinitesimal flows provide an alternative strategy for constructing flows in continuous time by parameterizing their infinitesimal dynamics~\cite{rezende2015variational}. Normalizing flows are known to be effective in applications with probabilistic models, including probabilistic modelling~\cite{kingma2018glow, ho2019flow++,esling2019universal, prenger2019waveglow}, inference \cite{rezende2015variational,kingma2016improved} and representation learning~\cite{jacobsen2018revnet}. In this work, we introduce conditional normalizing flows into our kernel learning framework to infer richer posteriors over the random bases, which yields more informative random features. To our knowledge, this is the first work that introduces conditional normalizing flows into the meta-learning framework for few-shot learning. \section{Methodology} \label{sec:method} In this section, we present our methodology for learning kernels with random Fourier features under the meta-learning framework with limited labels. In Section~\ref{MLK}, we describe the base-learner based on kernel ridge regression. We introduce kernel learning with random features by formulating it as a variational inference problem in Section~\ref{metavrf}. We describe the context inference to leverage the shared knowledge provided by related tasks in Section~\ref{contextinference}. We further enrich the variational random features by conditional normalizing flows in Section~\ref{MetaVRF-CNF}. \subsection{Meta-Learning with Kernels} \label{MLK} We adopt the episodic training strategy~\cite{ravi2017optimization} commonly used for few-shot meta-learning, which involves \textit{meta-training} and \textit{meta-test} stages.
In the \textit{meta-training} stage, a meta-learner is trained to enhance the performance of a base-learner on a \textit{meta-training} set with a batch of few-shot learning tasks, where a task is usually referred to as an episode \cite{ravi2017optimization}. In the \textit{meta-test} stage, the base-learner is evaluated on a \textit{meta-test} set whose classes are disjoint from those in the \textit{meta-training} set. For the few-shot classification problem, we sample $N$-way $k$-shot classification tasks from the \textit{meta-training} set, where $k$ is the number of labelled examples for each of the $N$ classes. Given the $t$-th task with a support set $\mathcal{S}^{t}=\{(\mathbf{x}_i, \mathbf{y}_i)\}_{i=1}^{N\mathord\times k}$ and query set $\mathcal{Q}^{t}=\{(\tilde{\mathbf{x}}_i, \tilde{\mathbf{y}}_i)\}_{i=1}^m$ ($\mathcal{S}^{t}, \mathcal{Q}^{t} \subseteq \mathcal{X}$), we learn the parameters $\alpha^{t}$ of the predictor $f_{\alpha^{t}}$ using a standard learning algorithm with a kernel trick $\alpha^{t} = \Lambda(\Phi(X), Y)$, where $\mathcal{S}^{t} = \{X, Y\}$.\ Here, $\Lambda$ is the base-learner and $\Phi: \mathcal{X} \rightarrow \mathcal{H}$ is a mapping function from $\mathcal{X}$ to a dot product space $\mathcal{H}$. The similarity measure $\mathtt{k}(\mathbf{x}, \mathbf{x}')=\langle\Phi(\mathbf{x}),\Phi(\mathbf{x}')\rangle$ is called a kernel~\cite{hofmann2008kernel}. In traditional supervised learning, the base-learner for the $t$-th single task usually relies on a universal kernel to map the input into a dot product space for efficient learning. Once the base-learner is trained on the support set, its performance is evaluated on the query set using the following loss function: \begin{equation} \sum_{(\tilde{\mathbf{x}}, \tilde{\mathbf{y}}) \in \mathcal{Q}^{t}} L \left(f_{\alpha^t} \big(\Phi(\tilde{\mathbf{x}} )\big), \tilde{\mathbf{y}}\right), \end{equation} where $L(\cdot)$ can be any differentiable function, e.g.\@,~cross-entropy loss. In the meta-learning setting for few-shot learning, we usually consider a batch of tasks.\ Thus, the meta-learner is trained by optimizing the following objective function \textsl{w.r.t.} the empirical loss on $T$ tasks: \begin{equation} \begin{aligned} \vspace{-3mm} \sum^T_{t} \sum_{(\tilde{\mathbf{x}}, \tilde{\mathbf{y}} ) \in \mathcal{Q}^{t}} L\left(f_{\alpha^{t}}\big(\Phi^{t}(\tilde{\mathbf{x}})\big), \tilde{\mathbf{y}}\right), \text{s.t.} \,\ \alpha^{t} = \Lambda\left(\Phi^{t}(X), Y\right), \label{obj} \vspace{-2mm} \end{aligned} \end{equation} where $\Phi^t$ is the feature mapping function which can be obtained by learning a task-specific kernel $\mathtt{k}^t$ for each task $t$ with data-driven random Fourier features. In this work, we employ kernel ridge regression, which has an efficient closed-form solution, as the base-learner $\Lambda$ for few-shot learning.\ The kernel value in the Gram matrix $K \in \mathbb{R}^{Nk\times Nk}$ is computed as $\mathtt{k}(\mathbf{x}, \mathbf{x}') = \Phi(\mathbf{x}) \Phi(\mathbf{x}')^{\top}$, where ``${\top}$'' is the transpose operation. The base-learner $\Lambda$ for a single task is obtained by solving the following objective \textsl{w.r.t.} the support set of this task, \begin{equation} \Lambda = \argmin_{\alpha} \Tr[(Y-\alpha K) (Y-\alpha K)^{\top}] + \lambda \Tr[\alpha K \alpha^{\top}], \label{krg} \end{equation} which admits a closed-form solution \begin{equation} \alpha = Y(\lambda \mathrm{I} + K)^{-1}.
\label{closed} \end{equation} The learned predictor is then applied to samples in the query set $\tilde{X}$: \begin{equation} \hat{Y}=f_{\alpha}(\tilde{X})=\alpha \tilde{K}, \end{equation} where $\tilde{K} = \Phi(X)\Phi(\tilde{X})^\top\in \mathbb{R}^{Nk\times m}$, with each element being $\mathtt{k}(\mathbf{x}, \tilde{\mathbf{x}})$ computed between samples from the support and query sets. Note that we also treat $\lambda$ in (\ref{krg}) as a trainable parameter by leveraging the meta-learning setting, and all these parameters are learned by the meta-learner. In order to obtain task-specific kernels, we consider learning adaptive kernels with random Fourier features in a data-driven way. This also enables shared knowledge of different tasks to be captured by exploring their dependencies in the meta-learning framework. \subsection{Variational Random Features} \label{metavrf} From a probabilistic perspective, under the meta-learning setting for few-shot learning, the random feature basis is obtained by maximizing the conditional predictive log-likelihood of samples from the query set $\mathcal{Q}$: \begin{align} &\max_{p} \sum_{(\mathbf{x},\mathbf{y})\in \mathcal{Q}} \log p(\mathbf{y} | \mathbf{x}, \mathcal{S}) \\ &= \max_{p} \sum_{(\mathbf{x},\mathbf{y})\in \mathcal{Q}} \log \int p(\mathbf{y} |\mathbf{x}, \mathcal{S}, \bm{\omega}) p(\bm{\omega} | \mathbf{x}, \mathcal{S}) d\bm{\omega}. \label{likeli} \end{align} We adopt a conditional prior distribution $p(\bm{\omega} | \mathbf{x}, \mathcal{S})$ over the base $\bm{\omega}$, as in the conditional variational autoencoder~\cite{sohn2015learning}, rather than an uninformative prior \cite{kingma2013auto,rezende2014stochastic}. By depending on the input $\mathbf{x}$, we infer the bases that can specifically represent the data, while leveraging the context of the current task by conditioning on the support set $\mathcal{S}$. In order to infer the posterior $p(\bm{\omega} | \mathbf{y},\mathbf{x}, \mathcal{S})$ over $\bm{\omega}$, which is generally intractable, we use a variational distribution $q_{\phi}(\bm{\omega}| \mathcal{S})$ to approximate it, where the base is conditioned on the support set $\mathcal{S}$ by leveraging meta-learning. We obtain the variational distribution by minimizing the Kullback-Leibler (KL) divergence: \begin{equation} D_{\mathrm{KL}}[q_{\phi}(\bm{\omega}| \mathcal{S}) || p(\bm{\omega} | \mathbf{y}, \mathbf{x}, \mathcal{S})]. \label{kl} \end{equation} By applying Bayes' rule to the posterior $p(\bm{\omega}|\mathbf{y},\mathbf{x}, \mathcal{S})$, we derive the evidence lower bound (ELBO) as \begin{align} \log p(\mathbf{y} | \mathbf{x}, \mathcal{S}) \geq \,\,\, &\mathbb{E}_{q_{\phi}(\bm{\omega}| \mathcal{S})} \log \, p(\mathbf{y} | \mathbf{x}, \mathcal{S}, \bm{\omega} ) \nonumber\\ &- D_{\mathrm{KL}}[q_{\phi}(\bm{\omega}|\mathcal{S}) || p(\bm{\omega} | \mathbf{x}, \mathcal{S})]. \label{eq:elbo} \end{align} The first term of the ELBO is the predictive log-likelihood conditioned on the observation $\mathbf{x}$, $ \mathcal{S}$ and the inferred RFF bases $\bm{\omega}$. Maximizing it enables us to make an accurate prediction for the query set by utilizing the inferred bases from the support set. The second term in the ELBO minimizes the discrepancy between the meta variational distribution $q_{\phi}(\bm{\omega}|\mathcal{S})$ and the meta prior $p(\bm{\omega} | \mathbf{x}, \mathcal{S})$, which encourages samples from the support and query sets to share the same random Fourier bases.
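To make the likelihood term concrete, the following sketch shows how a single Monte Carlo sample of the bases turns into a prediction for the query set through the kernel ridge regression base-learner of (\ref{closed}). It is an illustrative simplification of our own: it assumes a diagonal-Gaussian variational posterior whose mean and standard deviation come from the inference network (as specified in Section~\ref{contextinference}), one-hot support labels, a $\sqrt{2/D}$ feature scaling, and a fixed $\lambda$, whereas $\lambda$ is learned in MetaKernel.
\begin{verbatim}
import numpy as np

def sample_bases(mu, log_sigma, rng):
    # Reparametrized draw from the diagonal-Gaussian variational posterior
    # q_phi(omega | S): omega = mu + sigma * eps with eps ~ N(0, I).
    return mu + np.exp(log_sigma) * rng.standard_normal(mu.shape)

def random_features(X, omega, b):
    D = omega.shape[1]
    return np.sqrt(2.0 / D) * np.cos(X @ omega + b)

def krr_predict(X_sup, Y_sup, X_qry, omega, b, lam=0.1):
    # X_sup: (Nk, d_feat), Y_sup: one-hot labels (Nk, N), X_qry: (m, d_feat).
    # Closed-form kernel ridge regression base-learner:
    # alpha = Y (lambda I + K)^{-1} and Y_hat = alpha K_tilde.
    Z_s = random_features(X_sup, omega, b)
    Z_q = random_features(X_qry, omega, b)
    K = Z_s @ Z_s.T                         # support Gram matrix, (Nk, Nk)
    K_tilde = Z_s @ Z_q.T                   # support-query kernel, (Nk, m)
    alpha = np.linalg.solve(lam * np.eye(K.shape[0]) + K, Y_sup).T
    return (alpha @ K_tilde).T              # per-class scores for the query set, (m, N)
\end{verbatim}
During training, these query scores would enter the first ELBO term (e.g.\@, via a cross-entropy loss), averaged over a few such samples of $\bm{\omega}$.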
The full derivation of the ELBO is provided in the supplementary material. We now obtain the objective by maximizing the ELBO with respect to a batch of $T$ tasks: \begin{align} \vspace{-4mm} \mathcal{L} = &\frac{1}{T} \sum_{t=1}^{T} \Big[ \sum_{(\mathbf{x},\mathbf{y})\in \mathcal{Q}^{t}} \!\!\!\! \mathbb{E}_{q_{\phi}(\bm{\omega}^t| \mathcal{S}^t)} \log \, p(\mathbf{y} | \mathbf{x},\mathcal{S}^t, \bm{\omega}^t ) \nonumber\\ &- D_{\mathrm{KL}}[q_{\phi}(\bm{\omega}^t|\mathcal{S}^t) || p(\bm{\omega}^t | \mathbf{x}, \mathcal{S}^t)] \Big], \label{vi-obj-base} \end{align} where $\mathcal{S}^t$ is the support set of the $t$-th task associated with its specific bases $\{\bm{\omega}^t_{d}\}_{d=1}^{D}$ and $(\mathbf{x}, \mathbf{y}) \in \mathcal{Q}^t$ is the sample from the query set of the $t$-th task. \subsection{Task Context Inference} \label{contextinference} We propose a context inference scheme which places the inference of random feature bases for the current task in the context of related tasks. We replace the variational distribution in (\ref{kl}) with a conditional distribution $q_{\phi}(\bm{\omega}^t| \mathcal{S}^t,\mathcal{C})$, where we use $\mathcal{C}$ to contain the shared knowledge provided by related tasks. This makes the bases $\{\bm{\omega}^t_{d}\}_{d=1}^{D}$ of the current $t$-th task also conditioned on the context $\mathcal{C}$ of related tasks, which gives rise to a new ELBO, as follows: \begin{equation} \begin{aligned} \log p(\mathbf{y} | \mathbf{x}, \mathcal{S}^t) &\geq \,\,\, \mathbb{E}_{q_{\phi}(\bm{\omega}| \mathcal{S}^t,\mathcal{C})} \log \, p(\mathbf{y} | \mathbf{x}, \mathcal{S}^t, \bm{\omega} ) \\ &- D_{\mathrm{KL}}[q_{\phi}(\bm{\omega}|\mathcal{S}^t,\mathcal{C}) || p(\bm{\omega} | \mathbf{x}, \mathcal{S}^t)]. \label{metaelbo} \end{aligned} \end{equation} This can be represented in a directed graphical model, as shown in Figure~\ref{graph}. In a practical sense, the KL term in (\ref{metaelbo}) encourages the model to extract useful information from previous tasks for inferring the spectral distribution associated with each individual sample $\mathbf{x}$ of the query set in the current task. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{tci.pdf} \caption{Graphical illustration of variational inference of the random Fourier basis under the meta-learning framework for few-shot learning, where $(\mathbf{x}, \mathbf{y})$ is a sample in the query set $\mathcal{Q}^t$. The base $\bm{\omega}^t$ of the $t$-th task is dependent on the support set $\mathcal{S}^t$ of the current task and the context $\mathcal{C}$ of related tasks. The dashed lines indicate variational inference.} \label{graph} \end{figure} The context inference integrates the knowledge shared across tasks with the task-specific knowledge to build up adaptive kernels for individual tasks. The inferred random features are highly informative due to the information absorbed from experienced tasks. The base-learner built on the inferred kernel with the informative random features effectively solves the current task. However, since there is usually a large number of related tasks, it is non-trivial to model them all simultaneously. We consider using recurrent neural networks to gradually accumulate information episodically along with the learning process by organizing tasks in a sequence. We propose an LSTM-based inference network, leveraging its innate capability of remembering long-term information~\cite{gers2000recurrent}. The LSTM offers a well-suited structure to implement the context inference.
The cell state $\mathbf{c}$ stores and accrues the meta-knowledge shared among related tasks. It is updated when experiencing a new task in each episode over the course of learning, while the output $\mathbf{h}$ is used to adapt the model to each specific task. To be more specific, we model the variational posterior $q_{\phi}(\bm{\omega}^t| \mathcal{S}^t,\mathcal{C})$ through $q_{\phi}(\bm{\omega}|\mathbf{h}^t)$, which is parameterized as a multi-layer perceptron (MLP) $\phi(\mathbf{h}^t)$. Note that $\mathbf{h}^t$ is the output from an LSTM that takes $\mathcal{S}^t$ and $\mathcal{C}$ as inputs. For the LSTM, we have \begin{equation} [\mathbf{h}^t, \mathbf{c}^t] = g_{\mathrm{LSTM}}(\mathcal{\bar{S}}^t,\mathbf{h}^{t-1},\mathbf{c}^{t-1}), \label{vlstm} \end{equation} where $g_{\mathrm{LSTM}}(\cdot)$ is an LSTM network that takes the current support set, the output $\mathbf{h}^{t-1}$ and the cell state $\mathbf{c}^{t-1}$ as input. $\mathcal{\bar{S}}^t$ is the average over the feature representation vectors of samples in the support set~\cite{zaheer2017deep}. The feature representation is obtained by a shared convolutional network $\psi(\cdot)$. To incorporate more context information, we also implement the inference with a bidirectional LSTM~\cite{schuster1997bidirectional,graves2005framewise}. We thus have $\mathbf{h}^t = [\stackrel{\rightarrow}{\mathbf{h}^t}, \stackrel{\leftarrow}{\mathbf{h}^t}]$, where $\stackrel{\rightarrow}{\mathbf{h}^t}$ and $\stackrel{\leftarrow}{\mathbf{h}^t}$ are the outputs from the forward and backward LSTMs, respectively, and $[\cdot,\cdot]$ indicates a concatenation operation. Therefore, the optimization objective with the context inference is: \begin{equation} \begin{aligned} \mathcal{L} = &\frac{1}{T} \sum_{t=1}^{T} \Big[\sum_{(\mathbf{x},\mathbf{y})\in \mathcal{Q}^{t}} \!\!\!\! \mathbb{E}_{q_{\phi}(\bm{\omega}^t| \mathbf{h}^t)} \log \, p(\mathbf{y} | \mathbf{x},\mathcal{S}^t, \bm{\omega}^t) \\ -& D_{\mathrm{KL}}[q_{\phi}(\bm{\omega}^t|\mathbf{h}^t) || p(\bm{\omega}^t | \mathbf{x},\mathcal{S}^t)] \Big], \label{vi-obj} \end{aligned} \end{equation} where the variational approximate posterior $q_{\phi}(\bm{\omega}^t| \mathbf{h}^t)$ is taken as a multivariate Gaussian with a diagonal covariance. Given the support set as input, the mean $\bm{\omega}_{\mu}$ and standard deviation $\bm{\omega}_{\sigma}$ are output from the inference network $\phi(\cdot)$. The conditional prior $p(\bm{\omega}^t | \mathbf{x},\mathcal{S}^t)$ is implemented with a prior network which takes as input an aggregated representation obtained by cross attention \cite{kim2019attentive} between $\mathbf{x}$ and $\mathcal{S}^t$. The details of the prior network are provided in the supplementary material. To enable backpropagation through the sampling operation during training, we adopt the reparametrization trick \cite{rezende2014stochastic,kingma2013auto} as $\bm{\omega}= \bm{\omega}_{\mu} + \bm{\omega}_{\sigma} \odot \boldsymbol\epsilon$, where $\bm\epsilon \sim \mathcal{N}(0, \mathrm{I} ).$ During the course of learning, the LSTMs accumulate knowledge in the cell state by updating their cells using information extracted from each task. For the current task $t$, the knowledge stored in the cell is combined with the task-specific information from the support set to infer the spectral distribution for this task.
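A minimal, single-direction sketch of this inference step is given below; it is our own PyTorch illustration rather than the released implementation, omits the backward LSTM, the prior network and the normalizing flow, produces the $D$ bases as one flattened vector for brevity, and uses illustrative layer sizes.
\begin{verbatim}
import torch
import torch.nn as nn

class ContextInference(nn.Module):
    # Sketch of the inference network phi(.): the LSTM cell carries the
    # shared knowledge across tasks in (h, c); linear heads map h^t to the
    # mean and log standard deviation of q_phi(omega | h^t).
    def __init__(self, feat_dim=256, hidden=256, omega_dim=256):
        super().__init__()
        self.lstm = nn.LSTMCell(feat_dim, hidden)
        self.to_mu = nn.Linear(hidden, omega_dim)
        self.to_log_sigma = nn.Linear(hidden, omega_dim)

    def forward(self, support_feats, h_prev, c_prev):
        # support_feats: (N*k, feat_dim) features psi(x) of the support set.
        s_bar = support_feats.mean(dim=0, keepdim=True)   # average support representation
        h_t, c_t = self.lstm(s_bar, (h_prev, c_prev))     # episodic update of (h, c)
        mu, log_sigma = self.to_mu(h_t), self.to_log_sigma(h_t)
        eps = torch.randn_like(mu)
        omega = mu + log_sigma.exp() * eps                # reparametrization trick
        return omega, (h_t, c_t)                          # (h_t, c_t) are passed on to the next task
\end{verbatim}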
To accrue information across all the tasks in the meta-training set, the output and the cell state of the LSTMs are passed down across batches. As a result, the final cell state contains the distilled prior knowledge from all the tasks experienced in the meta-training set. \subsection{Enriching Random Features by Normalizing Flows} \label{MetaVRF-CNF} The posterior distribution $q_{\phi}(\bm{\omega}|\mathbf{h}^t)$ is assumed to be a fully factorized Gaussian, resulting in limited expressive ability to approximate the true posterior over random Fourier bases. Motivated by the empirical success of normalizing flows~\cite{rezende2015variational} and conditional normalizing flows~\cite{winkler2019learning}, we propose to use conditional normalizing flows, which provide a principled way to learn richer posteriors. Normalizing flows map a complex distribution $p_{X}(\mathbf{x})$ to a simpler distribution $p_{Z}(\mathbf{z})$ through a chain of transformations. Let $\mathbf{x} \in X$ denote data sampled from an unknown distribution $\mathbf{x} \sim p_{X}(\mathbf{x})$. The key idea in normalizing flows is to represent $p_{X}(\mathbf{x})$ as a transformation $\mathbf{x}=g(\mathbf{z})$ of a single Gaussian distribution $\mathbf{z} \sim p_{Z} = \mathcal{N}(0, I)$. Moreover, we assume that the mapping is bijective: $\mathbf{x} = g(\mathbf{z}) = f^{-1}(\mathbf{z})$. Therefore, the log-likelihood of the data is given by the change of variable formula: \begin{equation} \begin{aligned} \label{eq:likelihood} \log\left(p_X(\mathbf{x})\right) =& \log\left(p_Z\left(f(\mathbf{x})\right)\right)+\log\left( \left|\det\left(\frac{\partial f(\mathbf{x})}{\partial \mathbf{x}^T}\right)\right|\right), \end{aligned} \end{equation} where $\frac{\partial f(\mathbf{x})}{\partial \mathbf{x}^T}$ is the Jacobian of the map $f(\mathbf{x})$ at $\mathbf{x}$. The function $f$ can be learned by maximum likelihood~(\ref{eq:likelihood}), where the bijectivity assumption allows expressive mappings to be trained by gradient backpropagation. \begin{figure}[t] \centering \includegraphics[width=1\linewidth]{nf.pdf} \label{fig:frame} \vspace{-5mm} \caption{Effect of conditional normalizing flows on the random bases. They transform the single Gaussian distribution of the random bases into a more complex distribution, which yields more informative random features.} \vspace{-3mm} \label{fig: nf_distribution} \end{figure} To make the Jacobian of the map $f(\mathbf{x})$ tractable, NICE~\cite{dinh2014nice} and RealNVP~\cite{dinh2016density} proposed to stack a sequence of simple bijective transformations, such that the overall Jacobian is a triangular matrix. In this way, the log-determinant reduces to the sum of the logarithms of the diagonal elements. Dinh et al.\@~\cite{dinh2014nice, dinh2016density} proposed additive and affine coupling layers for each transformation. In each affine coupling transformation, the input vector $\mathbf{x}\in \mathbb{R}^d$ is split into upper and lower halves, $\mathbf{x}_{I_1},\mathbf{x}_{I_2} \in \mathbb{R}^{d/2}$. These are plugged into the following transformation, referred to as a single flow-block $f_i$: \begin{equation} \begin{aligned}\label{eq:3} \mathbf{z}_1 = \mathbf{x}_{I_1},~~~ \mathbf{z}_2 = \mathbf{x}_{I_2} \circ \exp(s_i(\mathbf{x}_{I_1})) + t_i(\mathbf{x}_{I_1}), \end{aligned} \end{equation} where $\circ$ denotes element-wise multiplication. It is important to note that the mappings $s_i$ and $t_i$ can be arbitrarily complicated functions of $\mathbf{x}_{I_1}$ and need not be invertible themselves.
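A compact sketch of such a flow block is shown below; it is our own illustration, and the layer sizes and the $\tanh$-bounded scale network are arbitrary choices made for numerical stability rather than prescriptions from the cited works. In the conditional variant used later for the random bases, the context $\mathbf{h}^t$ would simply be concatenated to the inputs of $s_i$ and $t_i$.
\begin{verbatim}
import torch
import torch.nn as nn

class AffineCoupling(nn.Module):
    # One RealNVP-style flow block: the first half of the input passes
    # through unchanged and parameterizes an affine map of the second half,
    # so the Jacobian is triangular and its log-determinant is the sum of
    # the scale outputs s.
    def __init__(self, dim, hidden=64):
        super().__init__()
        half = dim // 2
        self.s = nn.Sequential(nn.Linear(half, hidden), nn.ReLU(),
                               nn.Linear(hidden, half), nn.Tanh())
        self.t = nn.Sequential(nn.Linear(half, hidden), nn.ReLU(),
                               nn.Linear(hidden, half))

    def forward(self, x):
        x1, x2 = x.chunk(2, dim=-1)
        s, t = self.s(x1), self.t(x1)
        z = torch.cat([x1, x2 * torch.exp(s) + t], dim=-1)
        log_det = s.sum(dim=-1)            # log |det J| of this block
        return z, log_det

    def inverse(self, z):
        z1, z2 = z.chunk(2, dim=-1)
        x2 = (z2 - self.t(z1)) * torch.exp(-self.s(z1))
        return torch.cat([z1, x2], dim=-1)
\end{verbatim}
Stacking several such blocks, with the two halves swapped in between, yields the full RealNVP flow described next.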
In practice, $s_i$ and $t_i$ are implemented as neural networks. Given the outputs $\mathbf{z}_1$ and $\mathbf{z}_2$, this affine transformation is invertible by: \begin{equation} \begin{aligned}\label{eq:4} \mathbf{x}_{I_1} = \mathbf{z}_1,~~~ \mathbf{x}_{I_2} = (\mathbf{z}_2 - t_i(\mathbf{z}_1)) \circ \exp(-s_i(\mathbf{z}_1)). \end{aligned} \end{equation} The RealNVP~\cite{dinh2016density} flow comprises $k$ reversible flow-blocks interleaved with switch-permutations, \begin{equation} f_{\textit{RealNVP}} = f_k\cdot r \dots f_2 \cdot r \cdot f_1, \end{equation} where $r$ denotes a switch-permutation, which permutes the order of $\mathbf{x}_1$ and $\mathbf{x}_2$. According to the chain rule, the log-determinant of the Jacobian of the whole transformation $f$ is computed by summing the log-determinants of the Jacobians of each $f_i$, making the likelihood calculation tractable. Conditional normalizing flows (CNFs)~\cite{winkler2019learning} learn conditional likelihoods for complicated target distributions in multivariate prediction tasks. Consider an input $\mathbf{x} \in \mathcal{X}$ and a regression target $\mathbf{y} \in \mathcal{Y}$. CNFs learn a complicated distribution $p_{Y|X}(\mathbf{y} | \mathbf{x})$ using a conditional prior $p_{Z|X}(\mathbf{z} | \mathbf{x})$ and a mapping $f_\phi: {\mathcal{Y}} \times {\mathcal{X}} \to {\mathcal{Z}}$, which is bijective in ${\mathcal{Y}}$ and ${\mathcal{Z}}$. The log-likelihood of CNFs is: \begin{equation} \begin{aligned} \log (p_{Y|X}(\mathbf{y} | \mathbf{x})) = & \log(p_{Z|X}(\mathbf{z} | \mathbf{x})) + \log(\left\lvert \frac{\partial \mathbf{z}}{\partial \mathbf{y}} \right\rvert) \\ = & \log(p_{Z|X}(f_{\phi}(\mathbf{y} , \mathbf{x}) | \mathbf{x})) + \log(\left\lvert \frac{\partial f_{\phi}(\mathbf{y} , \mathbf{x})}{\partial \mathbf{y}} \right\rvert). \label{eq:cnf} \end{aligned} \end{equation} Different from NFs, in the log-likelihood of CNFs all distributions are conditional and the flow takes $\mathbf{x}$ as a conditioning argument. We parameterize the approximate posterior distribution $q_{\phi}(\bm{\omega}|\mathbf{h}^t)$ with a flow of length $K$, $q_{\phi}(\bm{\omega}|\mathbf{h}^t) := q_{K}(\bm{\omega}_K)$. The ELBO~(\ref{eq:elbo}) is thus written as an expectation over the initial distribution $q_0(\bm{\omega})$: \begin{equation} \begin{aligned} \log p(\mathbf{y} | \mathbf{x}, \mathcal{S}) \geq & -\mathbb{E}_{q_{\phi} ( \bm{\omega} |\mathbf{h}^t )} [ \log q_{\phi}(\bm{\omega}|\mathbf{h}^t) - \log p ( \mathbf{y}, \bm{\omega} | \mathcal{S}, \mathbf{x} ) ] \\ = &-\mathbb{E}_{q_{0} ( \bm{\omega}_0 )} \left [ \ln q_{K} ( \bm{\omega}_K ) - \log p ( \mathbf{y} ,\bm{\omega}_K|\mathcal{S}, \mathbf{x}) \right ] \\ = &-\mathbb{E}_{q_{0} (\bm{\omega}_0)} [ \ln q_{0} ( \bm{\omega}_0 ) - \sum_{k=1}^{K} \ln |\det \frac{\partial f}{\partial \bm{\omega}_k}| ] \\ & + \mathbb{E}_{q_{0} ( \bm{\omega}_0 )} [\log p ( \mathbf{y} ,\bm{\omega}_K|\mathcal{S}, \mathbf{x}) ], \label{eq:elbo-nf} \end{aligned} \end{equation} where $q_0(\bm{\omega}_0)$ is obtained from the approximate posterior distribution $q_{\phi}(\bm{\omega}|\mathbf{h}^t)$ without transformation. We then obtain the objective by maximizing the log-likelihood $ \log p(\mathbf{y} | \mathbf{x}, \mathcal{S})$ with respect to a batch of $T$ tasks: \begin{equation} \begin{aligned} \mathcal{L} = &\frac{1}{T} \sum_{t=1}^{T} \Big[ \sum_{(\mathbf{x},\mathbf{y})\in \mathcal{Q}^{t}} \!\!\!\!
\mathbb{E}_{q_{0} (\bm{\omega}_0^t)} [ - \ln q_{0} ( \bm{\omega}_0^t ) + \sum_{k=1}^{K} \ln |\det \frac{\partial f}{\partial \bm{\omega}_k^t}| ] \\ +& \mathbb{E}_{q_{0} ( \bm{\omega}_0^t )} \left[\log p ( \mathbf{y} ,\bm{\omega}^t_K|\mathcal{S}^t, \mathbf{x}) \right] \Big], \label{eq:obj-cnf} \end{aligned} \end{equation} where $\bm{\omega}_k^t$ is the random base after $k$ transformations. We rely on the conditional coupling layer from~\cite{winkler2019learning} to transform the random base distribution. This layer extends the affine coupling layer from RealNVP~\cite{dinh2016density} with a conditioning input, while keeping the computation of the Jacobian of the map $f$ tractable. The input $\bm{\omega}_{k-1} = [\bm{\omega}_{k-1}^{I_0}, \bm{\omega}_{k-1}^{I_1}]$ of an affine coupling layer is split into two parts, which are transformed individually: \begin{equation} \begin{aligned} \bm{\omega}_{k}^{I_i} = \bm{\omega}_{k-1}^{I_i} \odot \exp\big(s_{i+1}(\bm{\omega}_{k-(1-i)}^{I_{(1-i)}}, \mathbf{h}^t)\big)& \\ + t_{(i+1)}(\bm{\omega}_{k-(1-i)}^{I_{(1-i)}}, \mathbf{h}^t),& \end{aligned} \end{equation} where $i \in \{0, 1\}$. Note that the transformations $s_{i+1}, t_{i+1}$ do not need to be invertible and are modelled as convolutional neural networks. The inverse of an affine coupling layer is: \begin{equation} \begin{aligned} \bm{\omega}_{k-1}^{I_i} = (\bm{\omega}_{k}^{I_i} - t_{(i+1)}(\bm{\omega}_{k-(1-i)}^{I_{(1-i)}}, \mathbf{h}^t))&\\ \odot \exp\big(-s_{(i+1)}(\bm{\omega}_{k-(1-i)}^{I_{(1-i)}}, \mathbf{h}^t)\big)&. \end{aligned} \end{equation} The log-determinant of the Jacobian for one affine coupling layer is calculated as the sum over $s_i$, i.e.\@, $\sum_j s_1(\bm{\omega}_{k-1}^{I_1}, \mathbf{h}^t)_j + \sum_j s_2(\bm{\omega}_{k}^{I_0}, \mathbf{h}^t)_j$. A deep invertible network is built as a sequence of multiple such layers, with a permutation of the dimensions after each layer. The conditional input $\mathbf{h}^t$ is added as an extra input to each transformation in the coupling layer. We refer to the kernel constructed from the random bases produced by conditional normalizing flows as MetaKernel. We visualize the distribution of the random bases produced by the CNFs in Figure~\ref{fig: nf_distribution}. $\bm{\omega}_k$ indicates the distribution of the random bases after $k$ transformations. This visualization shows that we can transform a single Gaussian distribution of random bases into a more complex distribution, which yields more informative random features, resulting in improved performance, as we will demonstrate in our experiments. \section{Experiments} \label{sec:experiments} In this section, we report our experiments to demonstrate the effectiveness of the proposed MetaKernel for both regression and classification with limited labels. We also provide thorough ablation studies to gain insight into our method by showing the efficacy of each introduced component. \subsection{Few-Shot Classification} The few-shot classification experiments are conducted on four commonly used benchmarks, i.e.\@, Omniglot \cite{lake2015human}, \textit{mini}ImageNet{} \cite{vinyals2016matching}, \textsc{cifar-fs}{} \cite{bertinetto2018meta} and Meta-Dataset~\cite{triantafillou2019meta}. We also perform experiments on DomainNet~\cite{peng2019moment} for few-shot domain generalization. Sample images from each dataset are provided in Figure~\ref{fig:Dataset}. \begin{figure*}[t] \centering \includegraphics[width=1.\linewidth]{Dataset_4.pdf} \caption{Examples from each dataset.
Orange and green boxes indicate the meta-training and meta-test tasks for each dataset. $\mathcal{S}$ and $\mathcal{Q}$ indicate the support and query sets for each task. For Meta-Dataset, we only show examples from \textit{ImageNet}~\cite{russakovsky2015imagenet}, \textit{Aircraft}~\cite{maji13finegrained}, \textit{Quick Draw}~\cite{Quick}, \textit{Fungi}~\cite{Fungi}, \textit{Traffic Signs}~\cite{Houben-IJCNN-2013} and \textit{MS-COCO}~\cite{lin2014microsoft}. For the few-shot domain generalization, we only show the examples from DomainNet using \textit{Quick Draw} as the target domain during the meta-test stage.} \label{fig:Dataset} \end{figure*} \subsubsection{Datasets} \textbf{Omniglot}~\cite{lake2015human} is a few-shot classification benchmark that contains $1623$ handwritten characters (each with $20$ examples). All characters are grouped into one of $50$ alphabets. For fair comparison against the state of the art, we follow the same data split and pre-processing used by Vinyals et al.\@~\cite{vinyals2016matching}. Specifically, the training, validation, and test sets are composed of a random split of $[1100, 200, 423]$. The dataset is augmented with rotations in multiples of $90$ degrees, which results in $4000$ classes for training, $400$ for validation, and $1292$ for testing. The number of examples per class is fixed to $20$. All images are resized to $28\mathord\times 28$. For an $N$-way, $k$-shot task at training time, we randomly sample $N$ classes from the $4000$ classes, each with $(k+15)$ examples. Thus, there are $N\mathord\times k$ examples in the support set and $N \mathord\times 15$ examples in the query set. The same sampling strategy is followed for validation and testing. \textbf{\textit{mini}ImageNet{}}~\cite{vinyals2016matching} is a challenging dataset constructed from ImageNet~\cite{russakovsky2015imagenet}, which comprises a total of $100$ different classes (each with $600$ instances). All images are downsampled to $84\mathord\times 84$. We use the same splits as Ravi and Larochelle~\cite{ravi2017optimization}, with $[64, 16, 20]$ classes for training, validation and testing. We use the same episodic sampling strategy as for Omniglot. \textbf{\textsc{cifar-fs}{}}~\cite{bertinetto2018meta} is adapted from CIFAR-100~\cite{krizhevsky2009learning} for few-shot learning. In the many-shot image classification benchmark CIFAR-100, there are $100$ classes grouped into $20$ superclasses (each with $600$ instances). \textsc{cifar-fs}{} uses the same split criteria ($64, 16, 20$) with which \textit{mini}ImageNet{} has been generated. The resolution of all images is $32\mathord\times 32$. \textbf{Meta-Dataset} \cite{triantafillou2019meta} is composed of ten existing image classification datasets (eight contributing to training, two reserved for testing). These are: \textit{ILSVRC-2012} (ImageNet, \cite{russakovsky2015imagenet}), \textit{Omniglot}~\cite{lake2015human}, \textit{Aircraft}~\cite{maji13finegrained}, \textit{CUB-200-2011} (Birds, \cite{WahCUB_200_2011}), \textit{Describable Textures}~\cite{cimpoi14describing}, \textit{Quick Draw}~\cite{Quick}, \textit{Fungi}~\cite{Fungi}, \textit{VGG Flower}~\cite{Nilsback08}, \textit{Traffic Signs}~\cite{Houben-IJCNN-2013} and \textit{MS-COCO}~\cite{lin2014microsoft}. Each episode generated in Meta-Dataset uses classes from a single dataset. Two of these datasets, \textit{Traffic Signs} and \textit{MS-COCO}, are fully reserved for evaluation, which means that no classes from these sets appear in the training set.
Apart from \textit{Traffic Signs} and \textit{MS-COCO}, the remaining datasets contribute some classes to the training, validation and test splits. There are about 14 million images in total in Meta-Dataset. \textbf{DomainNet}~\cite{peng2019moment}. Du et al.\@~\cite{du2020metanorm} introduced the setting of few-shot domain generalization, which combines the challenges of both few-shot classification and domain generalization. It is based on the DomainNet dataset by Peng et al.\@~\cite{peng2019moment}, which contains six distinct domains, i.e.\@, \textit{clipart}, \textit{infograph}, \textit{painting}, \textit{quickdraw}, \textit{real}, and \textit{sketch}, for 345 categories. The categories are from 24 divisions. \input{table_min_omn} \subsubsection{Implementation Details} We extract image features using a shallow convolutional neural network with the same architecture as~\cite{gordon2018meta} for \textit{mini}ImageNet{} and \textsc{cifar-fs}{}. We do not use any fully connected layers in this CNN. For the Meta-Dataset experiments, we use a ResNet-18~\cite{resnet} backbone to be consistent with~\cite{triantafillou2019meta}. The dimension of all feature vectors is $256$. We also evaluate the random Fourier features (RFFs) and the radial basis function (RBF) kernel, where we take the bandwidth $\sigma$ as the mean of the pair-wise distances between samples in the support set of each task. The inference network $\phi(\cdot)$ is a three-layer MLP with $256$ units in the hidden layers and rectifier non-linearity, where the input size is $512$ for the bidirectional LSTMs. We use an SGD optimizer with a momentum of $0.9$ in all experiments. \input{table_meta-dataset} The key hyperparameter for the number of bases $D$ in (\ref{rfs}) is set to $D{=}780$ for MetaKernel in all experiments, while we use RFFs with $D{=}2048$ as this produces the best performance. The sampling rate in MetaKernel is much lower than in previous works using RFFs, in which $D$ is usually set to be $5$ to $10$ times the dimension of the input features~\cite{yu2016orthogonal, rahimi2008random}. We adopt a similar meta-testing protocol as~\cite{gordon2018meta, finn2017model}, but we test on $3000$ episodes rather than $600$ and present the results with $95\%$ confidence intervals. All reported results are produced by models trained from scratch. We compare with previous methods that use the same training procedures and similar shallow convolutional architectures as ours. Our code will be publicly released. \input{table_few-shotdg} \begin{figure*}[t] \centerline{\includegraphics[width=1\linewidth]{new_regression.pdf}} \sbox1{\raisebox{2pt}{\tikz{\draw[blue,dashed,line width = 1.5pt](0,0) -- (6mm,0);}}} \sbox2{\raisebox{2pt}{\tikz{\draw[-,green,dashed,line width = 1.5pt](0,0) -- (6mm,0);}}} \sbox3{\raisebox{2pt}{\tikz{\draw[-,red,dashed,line width = 1.5pt](0,0) -- (6mm,0);}}} \sbox4{\raisebox{2pt}{\tikz{\draw[-, black, dashed,line width = 1.5pt](0,0) -- (6mm,0);}}} \sbox5{\raisebox{2pt}{\tikz{\draw[-,black!40!gray,solid,line width = 0.9pt](0,0) -- (6mm,0);}}} \sbox6{\raisebox{2pt}{\tikz{\fill (0,0) circle (2pt);}}} \caption{Few-shot regression performance comparison (MSE). MetaKernel fits the target function well, even with variational random features only and using just three shots, and consistently outperforms MAML for all settings. Legend: \usebox1 MAML; \usebox2 MetaKernel (variational RFFs only); \usebox3 MetaKernel~(variational RFFs \& task context); \usebox4 MetaKernel (full model); \usebox5 Ground Truth; \usebox6 Support Samples.
} \label{fig:reg} \end{figure*} \input{table_kernel_mini} \subsubsection{Comparison to the State of the Art} \textbf{Few-shot image classification.} We first evaluate MetaKernel on the \textit{mini}ImageNet{}, \textsc{cifar-fs}{} and Omniglot datasets under various way (the number of classes used in each task) and shot (the number of support set examples used per class) configurations. The results are reported in Table~\ref{tab:miniandcifar}. We report the results of two experiments using MAML~\cite{finn2017model}. To keep MAML~\cite{finn2017model} consistent with our backbone for \textit{mini}ImageNet{} and \textsc{cifar-fs}{}, in addition to its original results, we also implement MAML ($64C$) with $64$ channels in each convolutional layer for fair comparison. It obtains only modest performance; we believe the increased model size leads to overfitting. As the original SNAIL uses a very deep ResNet-12 network for embedding, we cite the results of SNAIL reported in \cite{bertinetto2018meta} using a similar shallow network as ours. For fair comparison, we also cite the original results of R2-D2~\cite{bertinetto2018meta} using $64$ channels. On all benchmark datasets, MetaKernel delivers the best performance. It is worth noting that MetaKernel achieves an accuracy of $55.5\%$ under the $5$-way $1$-shot setting on the \textit{mini}ImageNet~dataset, surpassing the second-best model by $1.3\%$. This is a good improvement considering the challenge of this setting. On \textsc{cifar-fs}{}, our model surpasses the second-best method, i.e.\@, VERSA~\cite{gordon2018meta}, and has a smaller error bar under the $5$-way $1$-shot setting using the same backbone. On Omniglot, the performance of all methods saturates. Nonetheless, MetaKernel achieves the best performance under most settings, including $5$-way $1$-shot, $5$-way $5$-shot, and $20$-way $1$-shot. It is also competitive under the $20$-way $5$-shot setting, falling within the error bars of the state of the art. \textbf{Few-shot meta-dataset classification.} Next, we evaluate MetaKernel on the most challenging few-shot classification benchmark, i.e.\@,~Meta-Dataset~\cite{triantafillou2019meta}, which is composed of 10 image classification datasets. For Meta-Dataset, we train our model on the ILSVRC~\cite{russakovsky2015imagenet} training split and test on the 10 diverse datasets. As shown in Table~\ref{tab:meta_dataset}, MetaKernel outperforms fo-Proto-MAML~\cite{triantafillou2019meta} across all 10 datasets. MetaKernel also surpasses the second-best method, RFS~\cite{tian2020rethinking}, on 7 out of 10 datasets. Overall, we perform well against previous methods, achieving new state-of-the-art results on the challenging Meta-Dataset. \textbf{Few-shot domain generalization.} We also evaluate our method on few-shot domain generalization~\cite{du2020metanorm}, which combines the challenges of both few-shot classification and domain generalization. For few-shot domain generalization, each task has only a few samples in the support set for training, and the model is tested on a query set that comes from a different domain than the support set. The results are reported in Table~\ref{tab:fewdg}. MetaKernel obtains the best performance, surpassing MetaNorm~\cite{du2020metanorm} by a margin of up to $2.0\%$ on the $5$-way $1$-shot and $1.8\%$ on the $5$-way $5$-shot setting.
Its performance on the few-shot domain generalization task demonstrates that MetaKernel is not only able to handle the problem of few-shot learning, but also thrives under domain shifts. \subsection{Few-Shot Regression} We also consider regression tasks with a varying number of shots $k$, and compare MetaKernel with MAML~\cite{finn2017model}, a representative meta-learning algorithm. We follow MAML \cite{finn2017model} and fit a target sine function $y{=}A \sin{(wx + b)}$, with only a few annotated samples. $A \in [0.1, 5]$, $w \in [0.8, 1.2]$, and $ b\in [0, \pi ]$ denote the amplitude, frequency, and phase, which follow a uniform distribution within the corresponding interval. The goal is to estimate the target sine function given only $k$ randomly sampled data points. Here, we consider inputs within the range of $x\in [-5, 5]$, and conduct three tests under the conditions of $k {=} 3, 5, 10$. For fair comparison, we compute the feature embedding using a small MLP with two hidden layers of size $40$, following the same settings used in MAML. The results in Figure~\ref{fig:reg} show that MetaKernel fits the function well with only three shots, even when we do not use the full model. It performs better with an increasing number of shots, almost entirely fitting the target function with ten shots. We observe that all MetaKernel variants perform better than MAML~\cite{finn2017model} for all three settings with varying numbers of shots, both visually and in terms of MSE. Best results are obtained with our full model. \begin{figure*}[t] \centering \begin{minipage}{.48\textwidth} \begin{subfigure}{0.48\columnwidth} \centering \includegraphics[width=0.482\columnwidth]{dim_5w5s_mini.png} \label{fig:eff1} \end{subfigure}% \begin{subfigure}{0.48\columnwidth} \centering \includegraphics[width=0.482\columnwidth]{dim_5w5s_cifar.png} \label{fig:eff2} \end{subfigure} \vspace{-4mm} \caption{Efficiency with varying numbers $D$ of bases. MetaKernel consistently achieves better performance than regular RFFs, especially with relatively low sampling rates.} \label{fig:eff} \end{minipage} \hspace{4mm} \begin{minipage}{.48\textwidth} \centering \vspace{-6mm} \begin{subfigure}{0.48\columnwidth} \centering \includegraphics[width=0.482\columnwidth]{flex_way.pdf} \end{subfigure}% \begin{subfigure}{0.48\columnwidth} \centering \includegraphics[width=0.482\columnwidth]{flex_shot.pdf} \end{subfigure} \caption{Versatility of MetaKernel with varied ways and shots on Omniglot.} \label{fig:flex} \end{minipage} \end{figure*} \subsection{Ablation Studies} To study how our proposed components bring performance gains to MetaKernel on few-shot learning, our ablations consider: (1) the benefit of random Fourier features; (2) the benefit of task context inference; (3) the benefit of enriching random features by normalizing flows; (4) the effect of deeper embeddings; (5) the efficiency of the model; (6) the versatility of the model. \textbf{Benefit of random Fourier features.} We first show the benefit of random Fourier features (RFFs) by comparing them with the regular RBF kernel. As can be seen from the first two rows in Table~\ref{tab:mini_kernel}, RFFs perform 10.7\% better than an RBF kernel on the $5$-way $1$-shot setting of \textit{mini}ImageNet{}, and 14.9\% better on the $5$-way $5$-shot setting of \textsc{cifar-fs}{}. The considerable performance gain over RBF kernels on both datasets indicates the benefit of adaptive kernels based on random Fourier features for few-shot image classification.
The modest performance obtained by RBF kernels is due to the mean of pair-wise distances of support samples being unable to provide a proper estimate of the kernel bandwidth. Note that the performance of RFFs is better than that of the variational RFFs on the $5$-way $1$-shot setting of \textit{mini}ImageNet{}. This may be due to the fact that the support samples are too few, resulting in the random bases generated from the samples not accurately representing the current task, while the parameters in the random bases of RFFs are sampled from a standard Gaussian distribution. Therefore, the context information among previous related tasks should be integrated into the variational RFFs. In addition, RFFs cannot use the context information directly since their random bases are sampled from a fixed, data-independent distribution. \textbf{Benefit of task context inference.} We investigate the benefit of adding task context inference to the MetaKernel. Specifically, we leverage a bi-\textsc{lstm}~cell state $\textbf{c}$ to store and accrue the meta-knowledge shared among related tasks. The experimental results are reported in Table~\ref{tab:mini_kernel}. Adding task context inference on top of the MetaKernel with variational random features leads to a consistent gain under all settings, for both datasets. This demonstrates the effectiveness of using an \textsc{lstm}~to explore task dependency. \textbf{Benefit of enriching features by normalizing flows.} We show the benefit of enriching the variational random features by conditional normalizing flows in the last row of Table~\ref{tab:mini_kernel}. We find that MetaKernel performs better than MetaVRF ($55.5\%$, up $1.3\%$) under the $5$-way $1$-shot setting on \textit{mini}ImageNet{} and ($64.3\%$, up $1.2\%$) under the $5$-way $1$-shot setting on \textsc{cifar-fs}{}. These results indicate that the CNFs provide more informative kernels for the new task, which allows the learned distribution of random bases to more closely approximate the real distribution of random bases and therefore improves few-shot classification performance. \input{table_tiered} \textbf{Deep embeddings.} MetaKernel is independent of the convolutional architecture for feature extraction and works with deeper embeddings, either pre-trained or trained from scratch. In general, the performance improves with more powerful feature extraction architectures. We evaluate our method using pre-trained embeddings in order to compare with existing methods using deep embedding architectures. Specifically, we adopt the pre-trained embeddings from a 28-layer wide residual network (WRN-28-10) \cite{zagoruyko2016wide}, in a similar fashion to \cite{rusu2018meta, bauer2017discriminative, qiao2018few}. We choose activations in the 21st layer, with average pooling over spatial dimensions, as feature embeddings. The dimension of the pre-trained embeddings is $640$. We show the comparison results on the \textit{mini}ImageNet~dataset for 5-way 1-shot and 5-shot settings in Table~\ref{tab:mini}. MetaKernel achieves the best performance under both settings and surpasses LEO~\cite{rusu2018meta}, a recently proposed meta-learning method, especially on the challenging 5-way 1-shot setting. Compared with our conference paper, MetaVRF~\cite{zhen2020learning}, MetaKernel performs 1.23\% better on the $5$-way $1$-shot setting of \textit{mini}ImageNet{}, which also validates the effectiveness of the CNFs.
The consistent state-of-the-art results on all benchmarks using both shallow and deep feature extraction networks validate the effectiveness of MetaKernel for few-shot learning. \textbf{Efficiency.} Regular RFFs usually require high sampling rates to achieve satisfactory performance. However, our MetaKernel achieves high performance with a relatively low sampling rate, which guarantees its high efficiency. In Figure~\ref{fig:eff}, we compare fully trained models using regular RFFs and MetaKernel under a varying number of bases $D$, for the $5$-way $5$-shot setting on \textit{mini}ImageNet{} and \textsc{cifar-fs}{}. MetaKernel consistently yields higher performance than regular RFFs with the same number of sampled bases. The results verify the efficiency of MetaKernel in learning adaptive kernels and its effectiveness in improving performance by exploring the dependencies of related tasks. \textbf{Versatility.} In contrast to most existing meta-learning methods, MetaKernel is applicable to versatile settings. We evaluate the performance of MetaKernel in more challenging scenarios where the number of ways $N$ and shots $k$ differ between training and testing. Specifically, we test the performance of MetaKernel on Omniglot tasks with varied $N$ and $k$, when it is trained on one particular $N$-way $k$-shot task. As shown in Figure~\ref{fig:flex}, the results demonstrate that the trained model still produces good performance, even under the challenging conditions with a far higher number of ways. In particular, the model trained on the $20$-way $5$-shot task retains a high accuracy of $94\%$ on the $100$-way setting, as shown in Figure~\ref{fig:flex}(a). The results also indicate that our model exhibits considerable robustness and flexibility to a variety of testing conditions. \section{Conclusion} \label{sec:conclusion} In this paper, we introduce kernel approximation based on random Fourier features into the meta-learning framework for few-shot learning. We propose to learn random features for each few-shot task in a data-driven way by formulating it as a variational inference problem, where the random Fourier basis is defined as the latent variable. We introduce an inference network based on an LSTM module, which enables the shared knowledge from related tasks to be incorporated into each individual task. To further enhance the kernels, we introduce conditional normalizing flows to generate richer posteriors over random bases, resulting in more informative random features. Experimental results on both regression and classification tasks demonstrate the effectiveness of MetaKernel for few-shot learning. The extensive ablation study validates the efficacy of each component in our MetaKernel. \bibliographystyle{IEEEtran}
{ "timestamp": "2021-05-11T02:15:24", "yymm": "2105", "arxiv_id": "2105.03781", "language": "en", "url": "https://arxiv.org/abs/2105.03781" }
"\\section{Introduction}\n\n\nAll groups considered in this paper are finite, and all graphs conside(...TRUNCATED)
{"timestamp":"2021-05-11T02:19:42","yymm":"2105","arxiv_id":"2105.03913","language":"en","url":"http(...TRUNCATED)
"\\section{Subvarieties of ${\\mathcal A}_g(n)$ }\\label{sec:Ag}\n \n \n\n\\subsection{Siegel upper (...TRUNCATED)
{"timestamp":"2021-05-17T02:09:03","yymm":"2105","arxiv_id":"2105.03861","language":"en","url":"http(...TRUNCATED)
"\\section*{Introduction}\n\\label{intr}\n\nRecent experimental discovery of the superconductivity i(...TRUNCATED)
{"timestamp":"2021-06-29T02:37:16","yymm":"2105","arxiv_id":"2105.03770","language":"en","url":"http(...TRUNCATED)
"\\section{Experimental Results}\na sample array design was sent out for manufacturing Fig.~\\ref{fi(...TRUNCATED)
{"timestamp":"2021-05-11T02:17:26","yymm":"2105","arxiv_id":"2105.03838","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\n\nThe end state of a star resulting from continued gravitational\ncollapse(...TRUNCATED)
{"timestamp":"2021-05-11T02:21:50","yymm":"2105","arxiv_id":"2105.03970","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\n\n\\subsection{Overview}\n\n\n\\label{Section_Overview}\n\nThis text comes(...TRUNCATED)
{"timestamp":"2021-06-29T02:23:21","yymm":"2105","arxiv_id":"2105.03795","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\n\tThe electronic structure problem of quantum chemistry if one of the main(...TRUNCATED)
{"timestamp":"2021-05-11T02:17:22","yymm":"2105","arxiv_id":"2105.03836","language":"en","url":"http(...TRUNCATED)