\section{Measuring work} \label{Measuring work} Consider a closed system, described by the Hilbert space $\mathcal{H}_S$, that does not interact with its surroundings, so that no heat can be added to the system. In accordance with the first law, the work done on the system by a time-dependent Hamiltonian $H_S(t)$ between an initial time $t=t_i$ and final time $t=t_f$ is equal to the change in its internal energy \begin{align} W = E_n(t_f) - E_m(t_i), \label{work} \end{align} where $E_n(t)$ is an eigenvalue of the system Hamiltonian associated with the eigenvector $\ket{E_n(t)}$ at time $t$, that is, $H_S(t) \ket{E_n(t)} = E_n(t) \ket{E_n(t)}$. For simplicity, we have assumed that the spectrum of $H_S(t)$ is non-degenerate and discrete (labeled by the index $n$); however, the results that follow are expected to generalize straightforwardly. \subsection{Two-point measurement scheme} \label{twopoint} One of the most common operational definitions of work is the so-called two-point measurement scheme~\cite{campisiColloquiumQuantumFluctuation2011,talknerFluctuationTheoremsWork2007}. Suppose the system is prepared in the state $\rho_S(t_i) \in \mathcal{S}\left(\mathcal{H}_S\right)$. A projective measurement of the system's energy is made at $t=t_i$, yielding the outcome $E_m(t_i)$. The system then evolves from $t_i$ to $t_f$ as described by the unitary $U_S(t_f)$ generated by $H_S(t)$. The system energy is then measured again, yielding the outcome $E_n(t_f)$. From the outcomes of these two measurements, the work performed in this particular realization of the protocol is given by Eq.~\eqref{work}. The outcomes of these energy measurements are probabilistic, and thus so too is the amount of work $W$ done on the system. The probability associated with an amount of work $W$ is \begin{align} \mathcal{P}(W) = \sum_{m,n} \mathcal{P}_m \mathcal{P}_{m \to n} \delta \big(W - [E_n(t_f)- E_m(t_i)] \big), \label{TMP} \end{align} where $\delta$ is the Dirac delta function, $\mathcal{P}_m \ce \braket{E_m(t_i) | \rho_S(t_i) | E_m(t_i)}$ is the probability of outcome $m$ in the first measurement, and $\mathcal{P}_{m \to n} \ce \abs{\braket{E_n(t_f) | U_S(t_f) | E_m(t_i)}}^2$ is the probability of outcome $n$ in the second measurement conditioned on outcome $m$ in the first measurement; see Ref.~\cite{dechiaraAncillaAssistedMeasurementQuantum2018} for a recent discussion. \subsection{Ancilla-assisted protocol} As an alternative to the two-point measurement scheme, one can consider an explicit measurement model that describes an apparatus which couples to the system at the times $t_i$ and $t_f$ in such a manner that a subsequent projective measurement of the apparatus yields the amount of work performed on the system between $t_i$ and $t_f$. Let the measuring apparatus be modeled as a free particle on the real line, whose associated Hilbert space is $\mathcal{H}_A \simeq L^2(\mathbb{R})$ and whose free evolution is governed by the Hamiltonian $H_A = P^2/2m$, where $m$ is a mass parameter that governs the dispersion of the measuring apparatus in position space. Suppose that the system and apparatus are prepared at the time $t_p < t_i$ in the separable state $\rho_S(t_p) \otimes \rho_A(t_p)$, where $\rho_S(t_p) \in \mathcal{S}\left(\mathcal{H}_S\right)$ and $\rho_A(t_p) \in \mathcal{S}\left(\mathcal{H}_A\right)$. For simplicity, we will suppose that the apparatus is initially in a pure state $\rho_A(t_p) = \ket{\psi_A(t_p)}\!
\bra{\psi_A(t_p)}$ localized in position space around $x = 0$, \begin{align} \ket{\psi_A(t_p)} = \frac{1}{\pi^{1/4}\sqrt{\sigma_{x}}} \int dx \, e^{-\frac{x^2}{2\sigma_{x}^2}} \ket{x}, \label{Astatepgaus} \end{align} where $\ket{x}$ is the generalized eigenvector of the position operator $X$, that is, $X \ket{x} = x \ket{x}$ for all $x \in \mathbb{R}$. The apparatus must interact with the system such that it keeps a coherent record of the energy of the system at the times $t_i$ and $t_f$. An interaction Hamiltonian that accomplishes this is \begin{align} H_{SA}(t) = \lambda f(t) H_S(t) \otimes P, \nn \end{align} where $\lambda \in \mathbb{R}$ is the strength of the interaction, $P$ is the momentum operator acting on $\mathcal{H}_A$, $f(t) \ce g(t-t_f) - g(t-t_i)$, and $g(t)$ is a function with narrow support around $t=0$. Because the momentum operator $P$ generates a translation of the position operator $X$, the evolution generated by $H_{SA}(t)$ first translates the apparatus to the left by an amount conditioned on the internal energy of the system at time $t_i$ and then translates the apparatus to the right conditioned on the internal energy of the system at time $t_f$. The system and apparatus evolve from the time $t_p$ to $t_m>t_f$ according to the unitary operator \begin{align} U_{t_p \to t_m} = \mathcal{T} e^{-i \int_{t_p}^{t_m} \dif{t} \, H(t)}, \nn \end{align} where $\mathcal{T}$ denotes the time-ordering operator and the total Hamiltonian $H(t)$ describing the system, apparatus, and their interaction is \begin{align} H(t) = H_S(t) + H_A + H_{SA}(t). \label{totalH} \end{align} At the time $t_m$, a position measurement of the apparatus is made, the outcome of which corresponds to the measured work $W$ in this realization of the process governed by $H_S(t)$. Accordingly, the probability density of an amount of work $W$ being done on the system is given by \begin{align} \mathcal{P}(W) = \tr \left[ I_S \otimes \Pi_{x = W} U_{t_p \to t_m} \rho_S(t_p) \otimes \rho_A(t_p) U_{t_p \to t_m}^\dagger \right], \nn \end{align} where $\Pi_{x} \ce \ket{x}\!\bra{x} \in \mathcal{E} \left(\mathcal{H}_A\right)$ is the effect operator associated with outcome $x \in \mathbb{R}$. This protocol constitutes a measurement model and is depicted in Fig.~\ref{protocolFig} as a quantum circuit. \begin{figure}[t] \includegraphics[width= 245pt]{ProtocolFig.pdf} \caption{The measurement model described in the text is depicted as a quantum circuit that implements a single measurement described by a POVM. The different evolution channels are: $U_{S}$ for the evolution of $\rho_{S}(t_{p})$ under $H_{S}$, similarly for the measuring apparatus $A$, and $U_{i},U_{f}$ represent the evolution of the joint state under the interaction Hamiltonian. } \label{protocolFig} \end{figure} The above measurement model induces a positive operator-valued measure (POVM) described by effect operators $ E(W) \in \mathcal{E}(\mathcal{H}_S)$ for all $W \in \mathbb{R}$ such that \begin{align} \mathcal{P}(W) &= \tr \big[E(W) \rho_S(t_p) \big] \nn \\ & = \tr \left[ I_S \otimes \Pi_{x = W} U_{t_p \to t_m} \rho_S(t_p) \otimes \rho_A(t_p) U_{t_p \to t_m}^\dagger \right]\!, \label{ProbabilityReproducability} \end{align} where the last equality defines $E(W)$ and is known as the probability reproducibility condition~\cite{heinosaariMathematicalLanguageQuantum2011}.
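For concreteness, the reference statistics of the two-point measurement scheme in Eq.~\eqref{TMP}, against which the measurement model is later compared, can be evaluated directly. The following minimal numerical sketch does so for a hypothetical driven qubit; the Hamiltonian, the initial state, and all parameter values are purely illustrative and are not taken from the examples considered in this work.
\begin{verbatim}
# Minimal sketch: two-point-measurement work statistics, Eq. (TMP),
# for a hypothetical driven qubit (all quantities are illustrative).
import numpy as np
from scipy.linalg import expm

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

def H_S(t):                      # illustrative protocol H_S(t) = sz + t*sx
    return sz + t * sx

t_i, t_f, steps = 0.0, 1.0, 2000
dt = (t_f - t_i) / steps
U = np.eye(2, dtype=complex)     # time-ordered product approximating U_S(t_f)
for k in range(steps):
    U = expm(-1j * H_S(t_i + (k + 0.5) * dt) * dt) @ U

rho_i = np.array([[0.7, 0.2], [0.2, 0.3]])        # illustrative rho_S(t_i)
E_i, V_i = np.linalg.eigh(H_S(t_i))               # eigenbasis at t_i
E_f, V_f = np.linalg.eigh(H_S(t_f))               # eigenbasis at t_f

for m in range(2):
    P_m = np.real(V_i[:, m].conj() @ rho_i @ V_i[:, m])    # first outcome
    for n in range(2):
        P_mn = abs(V_f[:, n].conj() @ U @ V_i[:, m]) ** 2  # conditional outcome
        print(f"W = {E_f[n] - E_i[m]:+.3f}   P = {P_m * P_mn:.3f}")
\end{verbatim}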
Inverting Eq.~\eqref{ProbabilityReproducability} allows the POVM elements to be solved for explicitly \begin{align} E(W) = \bra{\psi_A(t_p)} U_{t_p \to t_m}^\dagger I_S \otimes \Pi_{x=W} U_{t_p \to t_m} \ket{\psi_A(t_p)}. \label{Effect} \end{align} In the ideal limit where the initial state of the apparatus is completely localized in the position/measurement basis, $\sigma_x \to 0$, the measurement interaction happens infinitely fast, $g(t) \to \delta(t)$, and the initial state of the system is prepared in the state $\ket{E_m(t_p)}$ with probability $\mathcal{P}_m \ce \braket{E_m(t_p) | \rho_S(t_p) | E_m(t_p)}$, the probability distribution in Eq.~\eqref{ProbabilityReproducability} is equivalent to the work distribution sampled in the two-point measurement scheme and given in Eq.~\eqref{TMP}. Henceforth, we will refer to this limit as the \textit{ideal measurement limit}. \label{secideal} \subsection{Thermodynamic considerations} \label{thermocons} Suppose the system of interest is subject to a time-dependent Hamiltonian $H_S(t)$ and evolves as $\rho_S(t)$. The first law of thermodynamics states that \begin{align} \Braket{\Delta U} &= \Braket{W} + \Braket{Q}, \label{firstlaw} \end{align} where $\braket{W} \ce \int_{t_{p}}^{t_{m}} \dif{t} \, \tr{\left[ \dot{H}_{S}(t)\rho_{S}(t)\right]}$ and $\braket{Q} \ce \int_{t_{p}}^{t_{m}} \dif{t} \, \tr{\left[H_{S} (t)\dot{\rho}_{S}(t)\right]}$; see for example~\cite{Vinjanampathy2016QuantumT}. The fact that the measurement apparatus has to interact with the system in order to sample the work distribution leads to the possibility of the apparatus performing work on the system and modifying the work distribution. Although this does not occur when using ideal von Neumann measurements in the two-point measurement scheme \cite{debarba2019work}, we expect a different outcome based on the finite resolution of our measurement model. To examine this further, we must specify additional layers of detail defining the measurement process. The first layer involves completely ignoring the effect of the measurement interaction between the system and apparatus on the evolution of the system, $\rho_S(t)$, and corresponds to the ideal measurement limit. An additional layer of detail takes into account the measurement interaction, which in turn modifies the evolution of the system state to $\tilde{\rho}_S(t) \neq \rho_S(t) $. This results in different amounts of average work being performed on the system \begin{align} \braket{W}_{S} &\ce \int_{t_{i}}^{t_{f}} \dif{t} \tr{\left[ \dot{H}_{S}(t)\rho_{S}(t)\right]}, \\ \braket{\tilde{W}}_{S} &\ce \int_{t_{i}}^{t_{f}} \dif{t} \tr{\left[ \dot{H}_{S}(t)\tilde{\rho}_{S}(t)\right]}. \label{Defworktilde} \end{align} The difference between these quantities, \begin{align} \Delta W_{\rm int} \ce \braket{\tilde{W}}_{S} - \braket{W}_{S}, \label{workint} \end{align} quantifies the additional work done on the system due to its interaction with the measuring apparatus. Using the first law in Eq.~\eqref{firstlaw}, we similarly define \begin{align} \braket{Q}_{S} &\ce \braket{\Delta U} - \braket{W}_{S}, \nn \\ \braket{\tilde{Q}}_{S} &\ce \braket{\tilde{\Delta U}} - \braket{\tilde{W}}_{S}, \nn \end{align} and their difference \begin{align} \Delta Q_{\rm int} \ce \braket{\tilde{Q}}_{S} - \braket{Q}_{S}. \label{heatint} \end{align} Both of the above average work quantities reference observables that are to be measured on the system itself, as opposed to an observable on the measuring apparatus.
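As a simple illustration of the system-referenced quantities above, the following sketch evaluates $\braket{W}_{S}$ for a hypothetical driven qubit evolving under $H_S(t)$ alone; since the evolution is closed, the result coincides with the change in internal energy, in agreement with Eq.~\eqref{firstlaw} with $\braket{Q}=0$. All parameter values are illustrative.
\begin{verbatim}
# Minimal sketch: <W>_S = \int dt Tr[ dH_S/dt rho_S(t) ] for a closed,
# freely evolving qubit (illustrative protocol; not the models of the text).
import numpy as np
from scipy.linalg import expm

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
H_S = lambda t: sz + t * sx      # illustrative driving
dH_S = lambda t: sx              # its time derivative

t_i, t_f, steps = 0.0, 1.0, 4000
dt = (t_f - t_i) / steps
rho0 = np.array([[0.7, 0.2], [0.2, 0.3]], dtype=complex)   # rho_S(t_i)

rho, W_S = rho0.copy(), 0.0
for k in range(steps):
    t = t_i + k * dt
    W_S += np.real(np.trace(dH_S(t) @ rho)) * dt           # Tr[dot-H rho] dt
    U = expm(-1j * H_S(t + 0.5 * dt) * dt)                  # small unitary step
    rho = U @ rho @ U.conj().T

dU = np.real(np.trace(H_S(t_f) @ rho) - np.trace(H_S(t_i) @ rho0))
print("<W>_S =", W_S, "   <Delta U> =", dU)   # equal up to discretization error
\end{verbatim}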
The average work computed from the work distribution is \begin{equation} \braket{W}_{\rm dist} \ce \int \dif{W} \, W \mathcal{P}(W,t_{m}). \nn \end{equation} Similarly, the difference in the average work arising from sampling this work distribution, \begin{align} \Delta W_{\rm POVM} \ce \braket{W}_{\rm dist} - \braket{W}_{S}, \nn \end{align} quantifies the effect of using the measured work distribution $\mathcal{P}(W,t_{m})$, i.e., the additional average work attributed to the system relative to the ideal measurement limit. Note that we do not define similar quantities for the heat since that would require a prescription for calculating $\braket{\Delta U}$ using the measuring apparatus. In Sec.~\ref{Self-commuting system Hamiltonians}, we show that $\Delta W_{\rm POVM}$, which is non-zero in general, vanishes upon taking the ideal measurement limit. More surprisingly, we find that $\Delta W_{\rm int}$ vanishes when the system Hamiltonian commutes with itself at different times, which means that the second layer of detail in describing realistic work measurements does not suffice to capture the average work imparted by the apparatus in the sense defined above. Finally, in Sec.~\ref{not self-commuting}, which treats the example of a non-self-commuting Hamiltonian, we find that in general $\Delta W_{\rm POVM}$ and $\Delta W_{\rm int}$ are non-zero and differ from each other. \begin{figure}[t] \includegraphics[width= 245pt]{setupFig.pdf} \caption{This figure depicts the three measurement model layers outlined in Sec.~\ref{thermocons}, illustrating qualitatively their associated averages, $\braket{W}_S$, $\braket{\tilde{W}}_S$, and $\braket{W}_{\rm dist}$. The latter two definitions reduce to the former upon taking the ideal measurement limit.} \label{protocolFiglayers} \end{figure} \section{Self-commuting system Hamiltonians} \label{Self-commuting system Hamiltonians} In this section, analytic expressions of the work distribution $\mathcal{P}(W,t)$ are derived using the POVM construction above for the case of systems governed by a time-dependent Hamiltonian $H_S(t)$ that commutes with itself at different times, $[H_S(t),H_S(t')] = 0$. For such systems, it is shown that $\Delta W_{\rm int} =0 $ and $\Delta W_{\rm POVM}$ is a function of the measurement interaction that vanishes in the ideal measurement limit discussed above. The measured work distribution is modified on account of the system-apparatus interaction, which in turn leads to corrections to both the Crooks relation and the Jarzynski equality. \subsection{Setup} \label{subsectionselfcommutingtheory} Consider a system described by the Hilbert space $\mathcal{H}_S$ and whose evolution is governed by the time-dependent Hamiltonian $H_S(t) \in \mathcal{B}(\mathcal{H}_S)$. Suppose that $H_S(t)$ commutes with itself at different times \begin{align} \left[ H_S(t) , H_S(t') \right] = 0, \quad \forall \ t,t' \in \mathbb{R}^{+}. \nn \end{align} It follows that for such Hamiltonians, the energy eigenbasis does not change in time, and therefore by the spectral theorem\footnote{For simplicity, we consider here the case when the spectrum $\sigma_S$ is discrete; however, the results presented here naturally generalize to the case of continuous and degenerate spectrum Hamiltonians.} \begin{align} H_S(t) = \sum_{n} E_n(t) \ket{E_n}\! \bra{E_n}, \nn \end{align} where $\ket{E_{n}} \in \mathcal{H}_S$ is the energy eigenstate associated with the eigenvalue $E_{n}(t)\in \spec (H_S(t))$, that is, $H_S(t) \ket{E_{n}} = E_{n}(t) \ket{E_{n}}$.
Thus, it is only the spectrum of the system Hamiltonian that changes in time, not its eigenbasis. An example of such a self-commuting system Hamiltonian is a two-level atom in the presence of a uniform magnetic field of varying strength. As evaluated in Appendix~\ref{Self-Commuting work distribution}, the probability of a measurement of the apparatus at some time $t$ giving the outcome $W$ is \begin{align} \mathcal{P}(W,t) &= \sum_n \dfrac{\rho_S^{(n,n)}(t)}{\sqrt{\pi} \Sigma(t)} e^{-\left( W - \int_{t_{p}}^{t_{m}} \dif{t} \, \lambda f(t) E_n(t) \right)^2/\Sigma^2(t) } , \label{WorkDistribution} \end{align} where \begin{align} \rho^{(n,n)}_S(t) &=\bra{E_n} \rho_S(t) \ket{E_n}= \rho_{S}^{(n,n)}(t_{i}), \nn \\ \Sigma(t) &\ce \left(\frac{1}{\sigma_{p}^{2}} + \dfrac{\sigma_{p}^{2}(t-t_{p})^{2}}{m^{2}}\right)^{\frac{1}{2}}. \nn \end{align} Note that the diagonal elements of the state of the system do not evolve under $H_{S}(t)$. It is seen that the Gaussian factors appearing in Eq.~\eqref{WorkDistribution} disperse as $t$ increases in a nontrivial manner that depends on $m$ and $\sigma_p$. Moreover, $\sigma_{p}^{-1}$ quantifies how localized the initial apparatus state is in position space, so as $\sigma_p^{-1}$ decreases, the measurement model approaches the ideal measurement limit. Since the work distribution is simply a weighted sum of Gaussians, the average work done on the system is \begin{align} \braket{W}_{\rm dist} = \sum_n \rho_S^{(n,n)}(t_{i}) \int_{t_{p}}^{t_{m}} \dif{t} \lambda f(t) E_n(t) . \label{distaverage} \end{align} In the ideal measurement limit, $f(t) \to \delta(t-t_f) - \delta(t-t_i)$, this expression reduces to the average work obtained from the first law using the freely evolving system state, $\rho_{S}$, so that $\Delta W_{\rm POVM} =0$ for arbitrary $m$ and $\sigma_p$. However, away from this limit $\Delta W_{\rm POVM}$ is in general non-zero. Moreover, as evaluated in Appendix~\ref{Self-Commuting work distribution}, we find that the quantity $\Delta W_{\rm int} $ defined in Eq.~\eqref{workint} vanishes independently of the shape of $f(t)$. Combined with Eq.~\eqref{firstlaw} and the fact that the diagonal elements of $\tilde{\rho}_{S}(t)$ are the same as those of $\rho_{S}(t)$, it follows that on average no heat is transferred between the system and apparatus. In general, this need not be the case; in particular, when $[H_{S}(t),H_{S}(t')] \neq 0$, $\Delta W_{\rm int}$ is non-zero and the diagonal elements of $\tilde{\rho}_{S}(t)$ are modified non-trivially. Finally, the work distribution is seen to depend only on the diagonal elements in the energy eigenbasis of the system density matrix, which is a consequence of tracing out the system degrees of freedom in obtaining the reduced state of the apparatus. In the case of self-commuting system Hamiltonians, we find that the diagonal elements of the system density matrix do not evolve under $H_{S}(t)$ since the $\ket{E_{n}}$ remain eigenvectors for all $t \in \mathbb{R}^{+}$. We will see in Sec.~\ref{not self-commuting} that this no longer holds for the non-self-commuting case. \subsection{Fluctuation relations} Fluctuation relations are an important tool in statistical mechanics because they relate equilibrium properties to measurable non-equilibrium quantities.
Generalizing classical fluctuation relations to the quantum regime has been the subject of much attention~\cite{holmesCoherentFluctuationRelations2019,campisiColloquiumQuantumFluctuation2011,perarnau-llobetQuantumSignaturesFluctuation2019,watanabeGeneralizedEnergyMeasurements2014}. Moreover, measurements of work fluctuations in quantum systems have been proposed and recently realized~\cite{wuExperimentallyReducingQuantum2019,perarnau-llobetNoGoTheoremCharacterization2017, cerisolaUsingQuantumWork2017,perarnau-llobetCollectiveOperationsCan2018,Pekola2015CircuitQT}. Consider a system in contact with a heat bath of inverse temperature $\beta$, evolving under the system Hamiltonian $H_{S}(t)$. The Crooks relation connects the work distributions associated with the forward and backward protocols for an initial equilibrium thermal state, where the former corresponds to $\mathcal{P}_{F}(W,t)$ and the latter corresponds to $\mathcal{P}_{B}(-W,t_{m}-t)$ for $t\in [t_{p},t_{m}]$~\cite{campisiColloquiumQuantumFluctuation2011,crooksEntropyProductionFluctuation1999}, and can be stated as \begin{align} \mathcal{P}_{F}(W) = \mathcal{P}_{B}(-W) e^{\beta(W-\Delta F)}, \nn \end{align} where $\Delta F$ is the change in the equilibrium free energy of the system, defined by $\Delta F \ce -\frac{1}{\beta} \ln{\tfrac{Z(t_{f})}{Z(t_{i})}}$, and $Z(t) \ce \tr e^{-\beta H_S(t)}$ is the partition function of the system at time $t$. If we consider an initial thermal state, $\tilde{\rho}_{S}(t_{i})=\frac{e^{-\beta H_{S}(t_{i})}}{Z(t_{i})}$, the work done on the system obeys $W= - \Delta F$ in the ideal measurement limit. Then, the Crooks relation simply states that $ \frac{\mathcal{P}_{F}(W)}{\mathcal{P}_{B}(-W)} = e^{2\beta W}$. Using Eq.~\eqref{WorkDistribution}, we obtain the work distribution associated with this thermal state \begin{align} \mathcal{P}_{F}(W,t)&= \frac{1}{Z(t_{i})\sqrt{\pi} \Sigma(t)} \nn \\ &\times \sum_n e^ {- \beta E_n(t_i) } e^{ -\frac{\left(W - \int_{t_p}^{t_m} \dif{t} \lambda f(t) E_n (t) \right)^2 }{ \Sigma(t)^2}} \label{ThermalStateDistribution} . \end{align} Using this work distribution, which takes into account the act of measuring the system on which work is being performed, we arrive at a modified Crooks relation specific to our measurement model: \begin{widetext} \begin{align} \dfrac{\mathcal{P}_{F}(W,t)}{\mathcal{P}_{B}(-W,t_m -t)}= \dfrac{\Sigma(t_{m}-t)Z(t_f)}{\Sigma(t)Z(t_{i})} \dfrac{ \sum_n e^ {- \beta E_n(t_{i}) } e^{ -\left(W - \int_{t_p}^{t_m} \dif{t} \, \lambda f(t) E_n (t) \right)^2 / \Sigma(t)^2}}{\sum_n e^ {- \beta E_n(t_{f}) } e^{ -\left(W + \int_{t_{m}}^{t_{p}} \dif{t} \, \lambda f(t_{m}-t) E_n (t_{m}-t) \right)^2 / \Sigma(t_{m}-t)^2}}, \label{Crooksmodified} \end{align} \end{widetext} where we have parameterized the forward protocol with $t$ for $t_{p}\leq t\leq t_{m}$ and the backward protocol with $t_{m}-t$ for $t_{p}\leq t\leq t_{m}$. Equation~\eqref{Crooksmodified} constitutes a generalization of the standard Crooks relation when the ancilla-assisted measurement protocol is used to define work and finite measurement interaction times and dispersion effects in the measuring apparatus are taken into account. Note that by taking the ideal measurement limit discussed under Eq.~\eqref{Effect}, we reproduce the Crooks relation for equilibrium states.
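To make Eq.~\eqref{ThermalStateDistribution} concrete, the following minimal sketch evaluates the measured thermal work distribution for a hypothetical self-commuting two-level spectrum. It assumes, as in the ideal-limit discussion above, that $g$ is narrow enough that $\int \dif{t}\, \lambda f(t) E_n(t)$ reduces to $\lambda[E_n(t_f)-E_n(t_i)]$; the pointer width $\Sigma$, the spectrum, and all other values are illustrative.
\begin{verbatim}
# Minimal sketch of the thermal work distribution,
# Eq. (ThermalStateDistribution), for an illustrative
# self-commuting two-level spectrum E_n(t).
import numpy as np

beta, lam = 1.0, 1.0
t_i, t_f = 1.0, 3.0
Sigma = 0.15                         # pointer width; shrink towards the ideal limit

E = lambda n, t: (-1.0, 1.0)[n] * t  # hypothetical spectrum E_0 = -t, E_1 = +t
shift = lambda n: lam * (E(n, t_f) - E(n, t_i))  # narrow-g work shift

Z_i = sum(np.exp(-beta * E(n, t_i)) for n in (0, 1))
W = np.linspace(-6.0, 6.0, 4001)
P_F = sum(np.exp(-beta * E(n, t_i)) / (Z_i * np.sqrt(np.pi) * Sigma)
          * np.exp(-(W - shift(n)) ** 2 / Sigma ** 2) for n in (0, 1))

print("normalization:", (P_F * (W[1] - W[0])).sum())   # ~1
print("peaks near   :", [shift(n) for n in (0, 1)])    # thermally weighted Gaussians
\end{verbatim}
Letting $\Sigma \to 0$ in this sketch collapses the Gaussians onto the two-point-measurement work values, consistent with the recovery of the standard Crooks relation in the ideal measurement limit.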
The Jarzynski equality is another important fluctuation relation governing systems away from equilibrium~\cite{campisiColloquiumQuantumFluctuation2011,JarzynskiEquality1996}, and it can be derived straightforwardly from the Crooks relation \begin{align} \Braket{e^{-\beta W}} = \int \dif{W} \mathcal{P}_{F}(W)e^{-\beta W} = e^{-\beta \Delta F}= \dfrac{Z(t_{f})}{Z(t_{i})}. \label{jarz} \end{align} Moreover, using Jensen's inequality, $\Braket{e^{-\beta W}} \geq e^{-\beta \Braket{W}}$, the statement of the second law of thermodynamics follows \begin{align} \Braket{W} \geq \Delta F. \label{secondlaw} \end{align} By using the work distribution in Eq.~\eqref{ThermalStateDistribution}, we can calculate the exponentiated average work at the time of measurement of the apparatus, $t_{m}$, and use that to arrive at a modified Jarzynski equality \begin{align} \Braket{e^{-\beta W}}_{\rm dist} = \frac{e^{\frac{\beta^{2} \Sigma^{2}(t_{m})}{4}} }{Z(t_i)} \sum_n e^{ - \beta \left( E_n(t_i) + \int_{t_p}^{t_m} \dif{t} \lambda \, f(t) E_n (t) \right) }. \nn \end{align} Upon taking the ideal measurement limit, the modified Jarzynski equality reduces to the standard Jarzynski equality in Eq.~\eqref{jarz}. For an equilibrium state of the system at inverse temperature $\beta$, Eq.~\eqref{distaverage} simplifies to \begin{align} \Braket{W}_{\rm dist} = \frac{1}{Z(t_{i})} \sum_{n} e^{-\beta E_{n}(t_{i})} \int_{t_{p}}^{t_{m}} \dif{t} \lambda f(t) E_{n}(t). \nn \end{align} Using the same reasoning that led to Eq.~\eqref{secondlaw}, we arrive at a statement of the second law of thermodynamics with respect to our measurement model \begin{align} \Braket{W}_{\rm dist} &\geq -\frac{1}{\beta} \ln{\Bigg(\frac{1}{Z(t_i )}\sum_{n}e^{-\beta(E_{n}(t_{i})+ \int_{t_{p}}^{t_{m}} \dif{t} \lambda f(t) E_{n}(t))}\Bigg)} \nn \\ &\quad -\frac{\beta \Sigma^{2}(t_{m})}{4} . \nn \end{align} It is seen that the finite resolution of the measurement modifies the expression of the second law in a way that is dependent on the temperature of the system and the measurement model parameters. In the ideal measurement limit, the first term in the above expression reduces to $\Delta F$ while the second term goes to zero, thus reproducing the expression in Eq.~\eqref{secondlaw}. We find that there is a constant correction proportional to the product of $\beta$ and the square of the width of the work distribution at $t_m$. For sufficiently low temperatures, this constant correction may remain non-zero even in the ideal measurement limit. \section{The work done on a two-level atom by a changing magnetic field} \label{not self-commuting} We now consider the case in which the system Hamiltonian $H_S(t)$ does not commute with itself at different times, $[H_{S}(t),H_{S}(t')] \neq 0$. As an example of such a scenario, we consider a two-level atom, $\mathcal{H}_S \simeq \mathbb{C}^2$, in the presence of a magnetic field that changes in strength and direction between the times $t_p$ and $t_m$, \begin{align} H_S(t) = \mu \vec{B}(t) \cdot \vec{\sigma}, \nn \end{align} where $\mu$ is the magnetic moment of the atom, $\vec{B}(t) = B(t) \hat{n}(t)$ is the magnetic field vector, and $\vec{\sigma} = (\sigma_x, \sigma_y, \sigma_z)$ is the Pauli vector.
For simplicity, we suppose that the magnetic field is rotating around the $z$-axis at a polar angle $\theta$ so that in the basis furnished by the eigenstates of the $\sigma_z$ operator, the system Hamiltonian takes the form \begin{align} H_S(t) = \mu B(t) \left[ \cos \omega t \sin \theta \sigma_x + \sin \omega t \sin \theta \sigma_y + \cos \theta \sigma_z \right]. \nn \end{align} This Hamiltonian does not commute with itself at different times, $[H_{S}(t),H_{S}(t')] \propto \sin{\theta}$, unless $\theta$ is an integer multiple of $\pi$, in which case the results developed in Sec.~\ref{subsectionselfcommutingtheory} apply. Thus, we will use the parameter $\theta$ as a measure of the non-self-commutativity of $H_{S}(t)$. Suppose that the system and measuring apparatus are prepared at time $t_p$ in a product state $\ket{\psi_A(t_p)} \ket{\psi_S(t_p)}$, where $\ket{\psi_A(t_p)}$ is given in Eq.~\eqref{Astatepgaus} and the initial state of the system at the time of the first sampling $t_i$ is \begin{align} \ket{\psi_S(t_i)} = \alpha \ket{0} + \beta \ket{1}, \nn \end{align} where $\alpha$ and $\beta$ are complex numbers such that $\abs{\alpha}^2 + \abs{\beta}^2 = 1$. To properly compare the effects of the parameters defining the measurement model ($\Delta$, $m$, $\sigma$, $\lambda$, $\omega$) with the ideal measurement described in Sec.~\ref{secideal}, $\alpha$ and $\beta$ are chosen such that if the system were to evolve under $H_S(t)$ alone, then at $t_i$ the system would be in the state \begin{align} \mathcal{T} e^{-i \int_{t_p}^{t_i} \dif{t} \, H_S(t)} \ket{\psi_S(t_p)} = \alpha \ket{0} + \beta \ket{1}. \nn \end{align} Generally, we can expand the joint state of both the apparatus and the system at a time $t$ as \begin{align} \ket{\psi(t)}= \sum_{n \in \{0,1\}} \int \dif{p} \, c_{n}(t,p) \ket{n} \ket{p}, \label{jointstate} \end{align} where we have expanded the system's state in the $\sigma_{z}$-basis. The coefficient functions $c_{n}(t,p)$ can be determined by substituting Eq.~\eqref{jointstate} into the Schr\"{o}dinger equation associated with the Hamiltonian in Eq.~\eqref{totalH}. We arrive at two coupled differential equations \begin{align} i\dot{c}_{j}(t,p) = \frac{p^{2}}{2 m} c_{j}(t,p) + \left[1 + \lambda f(t) p \right] \sum_{k} c_{k}(t,p) H_{jk}(t),\label{maindifeq} \end{align} where $j,k \in \{0,1\}$ and we have defined $H_{jk}(t) = \braket{j|H_{S}(t)|k}$. The work distribution is obtained from the diagonal entries in the position basis of the reduced apparatus density matrix. In this case, it is given by \begin{align} \mathcal{P}(W,t) &= \frac{1}{2 \pi } \sum_{n}\int \dif{p}\dif{p'} \, c_n(t,p) c_n^*(t,p') e^{iW(p-p')} \nn \\ &= \frac{1}{2 \pi } \sum_{n}\abs{\int \dif{p} \, c_n(t,p) e^{iWp}}^2. \nn \end{align} \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{thetacomparison.pdf} \includegraphics{thetacomparisonlegend.pdf} \caption{For a system prepared in an equally weighted superposition of the two energy eigenstates, which corresponds to $\alpha=\tfrac{1}{\sqrt{2}}$ at the time of the first measurement $t_{i}=2\pi \sigma_{p}$, we show the work distribution at the time $t_{m}=6\pi \sigma_{p}$ for different values of $\theta$. The mass of the measuring apparatus and the width of the interaction are taken to be $m=1000\sigma_{p}^{3}$ and $\Delta = \sigma_{p}$.
} \label{thetacomparison} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=0.45\textwidth]{AvgWork.pdf}\\ \includegraphics[width=0.5\textwidth]{avgworklegend.pdf} \caption{For a system prepared in the excited state, which corresponds to $\alpha=0$ at the time of the first measurement $t_{i}=2\pi \sigma_{p}$, we show the average work at $t_{m}=6\pi \sigma_{p}$ for $\theta \in [0,\pi]$ using the work distribution, reduced system state, and system state without interaction. In general, the differences $\Delta W_{\rm int}$ and $\Delta W_{\rm POVM}$ are non-zero. The mass of the measuring apparatus and the width of the interaction are taken to be $m=500\sigma_{p}^{3}$ and $\Delta = 0.8\sigma_{p}$. } \label{avgworkplot} \end{figure} The coupled differential equations in Eq.~\eqref{maindifeq} can be solved numerically and their solutions used to arrive at a work distribution specific to the measurement model. To illustrate in a concrete model the measurement interaction effects on the ancilla-assisted measurement protocol, we consider $B(t)= \gamma t$, where we take $\gamma = B(1)/\sigma_p =1$ for convenience, and all parameters not explicitly stated are set to one for simplicity. Moreover, we will suppose $g(t) = \frac{1}{\sqrt{\pi \Delta^2 }}e^{-t^2/ \Delta^2}$ with the interpretation that the duration of the interaction between the apparatus and system is on the order of $\frac{\Delta}{\sigma_{p}}$. We note that the work distribution in Eq.~\eqref{WorkDistribution} is simply a weighted sum of Gaussian functions centered at two different work values. From Fig.~\ref{thetacomparison}, we see that this structure persists in the non-self-commuting case because the work distributions are in general bimodal. Moreover, we find that as the level of non-self-commutativity, quantified by $\theta$, increases, the distribution approaches unimodality. Analytically, this corresponds to the vanishing of the coefficient associated with the suppressed mode. We see in Fig.~\ref{avgworkplot} that the average work computed from the reduced system state and from the work distribution both deviate from the no-interaction average work. Making the sampling wider skews the averages calculated from either the distribution or the reduced state, so that they no longer approximate the no-interaction average work. A similar analysis of the average heat exchanged between the measuring apparatus and system leads to the conclusion that $\Delta Q_{\rm int}$ as defined in Sec.~\ref{thermocons} is generically non-zero when $\theta \neq 0$. This can be seen from the non-trivial modification to the reduced state of the system \begin{align} \tilde{\rho}_{S}(t) = \sum_{n,m} \int \dif{p} c_{n}(t,p) c^{*}_{m}(t,p) \ket{n} \bra{m}, \nn \end{align} constructed from the solutions to Eq.~\eqref{maindifeq}, on account of the system's interaction with the apparatus. \section{Conclusion} \label{Conclusion} The ancilla-assisted protocol for measuring work distributions was generalized to account for dispersion and finite resolution effects of the measuring apparatus used to extract the work distribution. An explicit measurement model was considered that replicates the statistics of the probability distribution associated with the work done on the system. Two regimes were explored: one in which the system Hamiltonian commutes with itself at different times, and another in which it does not.
The former admits an analytic expression for the work distribution, which we obtain, while the latter does not but was explored numerically via the example of a two-level atom in a time-dependent magnetic field. Corrections to the Crooks relation and the Jarzynski equality were shown to arise on account of the finite resolution of the measuring apparatus and are expected to appear in any realistic measurement of work distributions. \begin{acknowledgments} We thank Nicolai Friis and Nicole Yunger Halpern for their expertise and careful revision of a draft of this paper. This work was supported in part by the Paul K. Richter and Evalyn E. Cook Richter Memorial Fund, a Kaminsky Undergraduate Research Award, the Dartmouth College Society of Fellows, and Saint Anselm College. \end{acknowledgments}
{ "timestamp": "2021-12-01T02:26:30", "yymm": "2111", "arxiv_id": "2111.15470", "language": "en", "url": "https://arxiv.org/abs/2111.15470" }
\section{Introduction} Image registration (IR) is required in a large variety of medical imaging studies involving tasks such as inter-subject comparison \cite{Ardekani2005}, patient follow-up \cite{Brock2006}, modality fusion \cite{Heinrich2013}, atlas generation \cite{Lorenzen2006}, or more recently cross-modality image synthesis with deep learning techniques \cite{Florkow2019}. Automated IR is, however, a difficult problem, as medical images can be acquired from modalities based on radically different physical principles and/or biological properties (e.g. anatomical imaging compared to functional imaging), using various image sampling strategies (e.g. slice thickness), or showing inconsistent representations of underlying structures such as organs or tumours due to motion (e.g. respiratory movements \cite{Fayad2017}) or structural alterations (e.g. disease progression \cite{Zacharaki2009}). For these reasons, image registration is an active field of study followed by large communities of researchers and advanced users of dedicated software such as ANTs \cite{Avants2014} or \textit{elastix} \cite{Klein2010}. In recent years, research in this field has mostly focused on intensity-based (``voxel-based'') volumetric deformable image registration (DIR), a general alignment scenario accounting for nonlinear deformations between images and where no information other than voxel data is available \cite{Viergever2016}. Many components in a DIR pipeline can have a strong influence on the accuracy of the results. The similarity metric (SM) is often regarded as the most critical \cite{Sotiras2013}, although a variety of other parameters such as the number of iterations of the optimizer or the optimizer itself \cite{Klein2009}, the number of scales in multiscale methods, or the nature of the regularizer \cite{Vishnevskiy2016} have also been shown to strongly influence performance. Regarding the choice of the SM, normalized mutual information (NMI) \cite{Viola1997,Studholme1999} has remained highly popular for more than two decades \cite{Sotiras2013} due to its high success and its relative insensitivity to modality changes. Despite this, NMI also has some disadvantages. For example, it is known to exhibit many local minima \cite{Haber2006} and to be sensitive to inhomogeneities such as bias fields in magnetic resonance images (MRI) \cite{Heinrich2012}. It is generally accepted that the limitations of NMI are mainly due to the global nature of the intensity histograms, as spatial relationships between voxels are not considered \cite{Loeckx2009,Heinrich2012,Viergever2016}. As pointed out by the authors of \textit{elastix}, ``\textit{it is unlikely that mutual information will be able to maintain its popularity, given the need for local measures of image similarity}'' \cite{Viergever2016}. A number of studies have therefore been devoted to the development of more robust, local and modality-independent alternatives to the NMI criterion \cite{Haber2006,Loeckx2009,Heinrich2012,Simonovsky2016}. Among these efforts towards the development of new SM, Haber and Modersitzki proposed the normalized gradient field (NGF) \cite{Haber2006}. The rationale of NGF is to maximize the alignment between pairs of normalized image gradient vectors (instead of intensity images). The signs of the vectors are ignored to enforce robustness to contrast inversion between different modalities. The normalization step ensures that contrast variations across regions do not influence registration results.
Similar concepts were introduced earlier in \cite{Pluim2000}, where gradient similarity was used in combination with mutual information-based intensity registration. In practice, NGF is implemented as a SM through a variational formulation by either maximizing the square of the dot product between the two fields, or equivalently by minimizing the norm of the cross product. A limitation is that normalized gradients are only meaningful around edges, whereas in homogeneous regions their directions depend on local noise. To alleviate this issue, an empirical gradient threshold is defined based on the estimated noise level to suppress the influence of weak gradients attributable to noise. However, thresholding may not be optimal as it discards weak but real edges and preserves high-amplitude noise. Another recently popular SM is the modality independent neighborhood descriptor (MIND) \cite{Heinrich2012}, which exploits local structural self-similarity. The local structure is itself described by a vector-valued representation of the local similarity of small image patches. Such descriptors can then be compared between two images using conventional monomodal SM like the sum of squared differences (SSD). However, MIND does not encode the spatial orientations of image structures, contrary to NGF. Both NGF and MIND are being increasingly used in the community. For instance, NGF is used in the current winning method of the EMPIRE10 lung computed tomography (CT) DIR challenge \cite{Murphy2011}, followed by a method based on MIND. A more recent research path is to learn DIR in an unsupervised fashion from training data using deep neural networks \cite{Simonovsky2016}. However, deep learning-based strategies for IR are still in their infancy \cite{Blendowski2020}, and it is not clear from recent results that they can outperform more conventional optimization approaches based on handcrafted similarity metrics and user software experience \cite{Devos2019}, as is the case, for example, in supervised image segmentation with deep strategies such as U-Nets \cite{Ronneberger2015}. Another avenue of improvement, concurrent with the development of new SM, focuses on \textit{structural representations} (SR) \cite{Wachinger2012}, whereby images themselves are transformed before being aligned in such a way that conventional monomodal SM, like the SSD, can be employed. The authors of \cite{Wachinger2012} introduce \textit{Entropy} and \textit{Laplacian} images, two SR that offer theoretical guarantees in terms of {locality preservation} and {structural equivalence} at the patch level. They demonstrate that these SR can achieve superior performance compared to the use of intensity images. The motivations for modality independent SR are similar to the ones developed for the MIND approach, and identical concepts are now being adopted in deep learning-based DIR under the name of \textit{image transformer networks} \cite{Lee2019,Arar2020}, whereby a geometry-preserving image translation neural network is trained to learn a SR, which is then fed to a \textit{spatial transformer network} \cite{Jaderberg2015} in charge of the actual image warping. More generally, relaxing theoretical requirements at the patch level, SR can be seen as a form of domain adaptation ensuring that heterogeneous data can be compared while suppressing modality-specific image characteristics. As pointed out by the authors of MIND \cite{Heinrich2012}, the SR in \cite{Wachinger2012} are scalar-valued.
As such, they cannot convey directional information relative to the orientation of image structures. The MIND descriptor captures local self-similarity in a vector-valued SR, but it also does not carry information related to structure orientations. This property is an essential advantage of the NGF metric over these SR-based approaches. In this work, we propose a new method for multimodal image registration based on the evaluation of the similarity between regularized directional \textit{vector-valued} fields derived from structural information, a technique we call Vector Field Similarity (VFS). These new representations are derived from edge-based fields (EBF) that are usually used for deformable model segmentation purposes to provide smooth vector fields oriented towards edges \cite{Xu1998,Li2007,Jaouen2014}. We show that EBF, by encoding regularized edge orientations in a vector-valued SR, can overcome the limitations of intensity-based registration. Similarly to existing SR-based registration methods, our approach can be combined in a straightforward and flexible fashion with any vector-valued registration framework regardless of the SM chosen. As already noted in \cite{Heinrich2012}, this is in contrast to the NGF approach, which is formulated at the metric level. Amongst other advantages, SR enable a direct and fair comparison with existing registration pipelines, by using them as substitutes for intensity images, all else being equal. In our experiments, we show that VFS compares favorably to conventional intensity-based methods on public datasets, using multiple image registration scenarios for a variety of imaging modalities and anatomical locations. \section{Methods} \begin{figure}[htbp] \begin{center} \subfigure[CT image]{ \includegraphics[width=.45\linewidth]{graphics/DIRLAB_I0} } \subfigure[NGF]{ \includegraphics[width=.45\linewidth]{graphics/DIRLAB_NGF.png} } \subfigure[EBF, $\gamma=5.0$]{ \includegraphics[width=.45\linewidth]{graphics/DIRLAB_VFS_kp50.png} } \subfigure[EBF, $\gamma=4.0$]{ \includegraphics[width=.45\linewidth]{graphics/DIRLAB_VFS_kp30.png} } \subfigure[EBF, $\gamma=3.0$]{ \includegraphics[width=.45\linewidth]{graphics/DIRLAB_VFS_kp20.png} } \subfigure[EBF, $\gamma=2.0$]{ \includegraphics[width=.45\linewidth]{graphics/DIRLAB_VFS_kp10.png} } \end{center} \caption{\label{figVFC} Differences between normalized gradient fields (NGF) and edge-based fields (EBF) shown in a patient of the DIR-Lab dataset. The EBF is a vector field convolution field \cite{Li2007}. Results are shown using streamlines to ease visualization. (b) NGF field. (c-f) EBF fields with increasing levels of smoothing; smaller $\gamma$ indicates stronger smoothing. For high values of $\gamma$, NGF and EBF are similar. For stronger smoothing, EBF vectors are directed towards edges, including in homogeneous regions, contrary to NGF. Near edges, the EBF orientation is independent of contrast inversion and points towards the edges.} \end{figure} \subsection{Vector Field Similarity} We consider a fixed image $I$ and a moving image $J$ defined on the image grid $\Omega$. The aim of the proposed image registration method is to find a transformation $\hat T$ such that: \begin{equation} \hat T = \argmax_{T \in \mathcal{T}} ~ \mathcal S\left(\mathbf D^I,\mathbf D^J\left(T\right)\right), \end{equation} where $\mathcal S$ is a similarity metric such as NMI or SSD, $\mathcal{T}$ the space of transformations, and $\mathbf D^I$ and $\mathbf D^J$ are {vector-valued} SR of $I$ and $J$.
This is different from the method proposed in \cite{Wachinger2012}, where {scalar} SR are considered. Several strategies can be considered to define $\mathbf D$, one of which is the MIND descriptor \cite{Heinrich2012}, which encodes local self-similarities. The NGF method is, on the other hand, expressed at the metric level $\mathcal S$ and describes the similarity between the normalized gradient fields of the fixed and the moving images. One of its formulations is: \begin{equation} \mathcal S_\text{NGF} := -\frac{1}{2}\int_{\mathbf x \in \Omega} \langle \mathbf n_\varepsilon\left(I\right), \mathbf n_{\varepsilon}\left(J\left(T\right)\right)\rangle ~\mathrm{d}{} \mathbf x, \end{equation} where $\mathbf n_\varepsilon(I)$ and $\mathbf n_\varepsilon(J)$ are the normalized gradient fields of the fixed and moving images \cite{Haber2006}: \begin{equation} \mathbf n_\varepsilon\left(I\right) :=\frac{\nabla I}{\sqrt{\nabla I^T \nabla I+\varepsilon^2}}, \end{equation} with $\varepsilon$ an estimate of the noise level of $I$. A main advantage of NGF over local descriptors such as MIND is that it encodes the orientations of image structures identified by the image gradient. In principle, normalized gradient vectors can be directly considered as a vector-valued structural representation rather than at the metric level. Here, we propose to use, for this purpose, edge-based vector fields (EBF) normally used for image segmentation, a technique we call Vector Field Similarity (VFS). EBF such as the popular gradient vector flow (GVF) \cite{Xu1998} are smooth vector fields derived from edge information and oriented towards edges. EBF extend the capture range of deformable models and reduce sensitivity to noise through a regularization of the field orientations in homogeneous, edge-free regions (Fig. \ref{figVFC}c-f). They can also show some contour completion abilities in the case of missing edges \cite{Jaouen2019,Jaouen2019a}. These properties are not only desirable for image segmentation with active contours, but also for image registration purposes, and may help overcome some of the limitations of previously proposed gradient-based alignment methods, such as NGF. For example, EBF orientations in homogeneous regions contribute favorably to image alignment, contrary to NGF, where vectors point in random directions due to noise (Fig. \ref{figVFC}b). \subsection{Vector Field Convolution fields} \begin{figure}[h] \centering \subfigure[T1-weighted BrainWeb image.]{ \includegraphics[width=.45\linewidth]{graphics/T1} } \subfigure[T2-weighted BrainWeb image.]{ \includegraphics[width=.45\linewidth]{graphics/T2} } \subfigure[Noise level: $0\%$]{ \includegraphics[width=.9\linewidth]{graphics/T1vsT1_sig00mod} } \subfigure[Noise level: $9\%$]{ \includegraphics[width=.9\linewidth]{graphics/T1vsT1_sig09mod} } \caption{\label{figTranslationMono} Effect of noise on a self-translation study along the coronal axis using the T1-weighted BrainWeb image shown in (a), for both normalized gradient fields (NGF) and vector field convolution similarity (VFS). The value is the average norm of the dot product, negated by convention to show attraction basins.
VFS results are shown for several values of the nonlinear smoothing VFC parameter $\gamma$.} \label{figTranslationMono} \end{figure} \begin{figure}[h] \centering \subfigure[Noise level: $0\%$]{ \includegraphics[width=.90\linewidth]{graphics/T1vsT2_sig00mod} } \subfigure[Noise level: $9\%$]{ \includegraphics[width=.90\linewidth]{graphics/T1vsT2_sig09mod} } \caption{\label{figTranslationMulti} Multimodal (multiparametric) translation study between the T1 and T2 images of the BrainWeb database. Due to contrast inversions between the modalities, the NGF sign in (a) is reversed compared to Fig. \ref{figTranslationMono}c. To ease visual comparison, we negate the values for NGF in (b) and scale all profiles between 0 and 1.} \end{figure} In the present work, without loss of generality, we choose vector field convolution (VFC) \cite{Li2007} to generate robust and smooth EBF. VFC fields share several desirable properties with GVF fields, such as a large capture range, while being more computationally efficient, requiring only $n$ convolutions with a kernel in $n$-dimensional images \cite{Xu2020}. This property is especially useful when considering large volumetric medical images. VFC fields also demonstrate superior robustness to impulse noise compared to GVF \cite{Li2007}. A VFC field $\mathcal F$ is expressed as: \begin{equation} \mathcal F = f * \mathcal K, \end{equation} where $f$ is an edge map typically derived from structural information like the image gradient and $*$ is the convolution operation. In our experiments, we chose ${f=\left\|\nabla I\right\|^2}$ for simplicity, but more elaborate approaches such as edge maps based on local structure tensors \cite{Jaouen2014} or image pre-processing \cite{Bazan2007} can naturally be considered. $\mathcal K$ is a \textit{vector field kernel}, whose vectors point towards its center with a magnitude $m$ that decreases with the distance to the center: \begin{equation} m = \frac{1}{r^\gamma+\epsilon} \end{equation} where $r$ is the distance to the center \cite{Li2007}, $\epsilon$ is a small positive value to avoid division by zero at the center, and $\gamma$ is a fixed parameter that controls the rate of decrease. To illustrate the properties of VFC fields, Fig. \ref{figTranslationMono} shows a self-translation study using the BrainWeb \cite{Cocosco1997} T1-weighted MRI (Fig. \ref{figTranslationMono}a) for two distinct noise levels ($0\%$ and $9\%$, corresponding to the minimum and maximum noise levels of the BrainWeb database). Results with increasing values of the smoothing parameter $\gamma$ are shown for VFS. The degree of co-alignment between the image and its translated version is measured as the average dot product between the fixed and the moving fields using either normalized VFC fields or NGF fields, i.e., perfect alignment has magnitude $1$. To respect registration conventions, the negative of the similarity value is shown. Even in the idealized case of self-registration without noise (Fig. \ref{figTranslationMono}c), the attraction basin of the NGF quickly becomes non-convex. This is due to the inversion of the gradient signs even at small translation shifts. This non-convexity increases with noise, narrowing the capture range of gradient similarity to a few voxels (Fig. \ref{figTranslationMono}d). The dependency of NGF on the sign of the vectors can be lifted by squaring the dot product, as proposed in previous related works \cite{Haber2006,Pluim2000}. However, squaring first-order gradient information leads to noise amplification.
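To make the construction above concrete, a minimal 2-D sketch of a VFC field computed from the edge map $f=\left\|\nabla I\right\|^2$ is given below. It is only an illustration of the principle: the implementation used in this work is 3-D and written in C++/ITK, and the kernel radius, the $\gamma$ value and the test image below are arbitrary choices.
\begin{verbatim}
# Minimal 2-D sketch of a vector field convolution (VFC) field, F = f * K.
# Illustrative only; radius, gamma and the toy image are arbitrary choices.
import numpy as np
from scipy.signal import fftconvolve

def vfc_field(image, gamma=3.0, radius=32, eps=1e-8):
    gy, gx = np.gradient(image.astype(float))
    f = gx ** 2 + gy ** 2                        # edge map f = |grad I|^2
    y, x = np.mgrid[-radius:radius + 1, -radius:radius + 1].astype(float)
    r = np.hypot(x, y)
    m = 1.0 / (r ** gamma + eps)                 # kernel magnitude 1/(r^gamma + eps)
    m[radius, radius] = 0.0                      # no self-contribution at the centre
    Kx, Ky = -x / (r + eps) * m, -y / (r + eps) * m  # unit vectors towards the centre
    Fx = fftconvolve(f, Kx, mode="same")         # one convolution per component
    Fy = fftconvolve(f, Ky, mode="same")
    norm = np.hypot(Fx, Fy) + eps
    return Fx / norm, Fy / norm                  # normalized EBF components

# toy example: a bright disc on a dark background
yy, xx = np.mgrid[0:128, 0:128]
img = ((xx - 64.0) ** 2 + (yy - 64.0) ** 2 < 30 ** 2).astype(float)
Dx, Dy = vfc_field(img, gamma=3.0)
\end{verbatim}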
VFS, on the contrary, alleviates the need for such squaring while allowing tunable nonlinear smoothing to further reduce sensitivity to noise. More fundamentally, VFS also provides some form of feature selection by accumulating the weight of relevant edges while smearing out noise \cite{Li2007}. The effect of VFS on image registration is mainly determined by the degree of smoothness of the EBF field. In the case of VFC, this smoothness is controlled by a single parameter $\gamma$, which specifies the tradeoff between weak edge preservation and noise reduction. The higher $\gamma$, the less the field is smoothed, and the more VFC field similarity resembles NGF field similarity. More elaborate EBF can also be considered depending on the context, such as generalized GVF fields \cite{Xu1998a} or fields dedicated to vector-valued images \cite{Jaouen2013,Jaouen2014} that may use additional parameters to better tune edge preservation (through e.g. gradient thresholding or nonlinear structure tensor smoothing). However, in the present case of VFC-based VFS registration, $\gamma$ is the single additional parameter that needs to be tuned for a specific registration application, acting as a tradeoff between robustness to noise and preservation of the local orientations of image structures. Fig. \ref{figTranslationMulti} shows a translation study similar to Fig. \ref{figTranslationMono} in a multimodal context, where the translated image is replaced by the T2-weighted BrainWeb image (Fig. \ref{figTranslationMono}b). Due to global contrast inversion between T1 and T2 weightings, the sign of the NGF is flipped with respect to Fig. \ref{figTranslationMono} (Fig. \ref{figTranslationMulti}a). Also, the maximum alignment value no longer achieves a magnitude of $1$ as the images are not identical. The lack of convexity of the attraction basin of the NGF becomes more prominent with added noise (Fig. \ref{figTranslationMulti}b), while VFS consistently provides a smoother energy landscape for various values of $\gamma$. Contrary to the original NGF approach, which is formulated at the similarity metric level and where the alignment of gradient vectors is directly optimized based on dedicated alignment metrics, we consider EBF to be vector-valued structural representations of the underlying images, similarly to the philosophy behind (scalar-valued) entropy or Laplacian structural representations \cite{Wachinger2012} or the (vector-valued) MIND approach \cite{Heinrich2012}. This point of view enables the use of any other SM deemed relevant to the problem at hand to evaluate VFS, such as SSD, NMI or normalized cross-correlation (NCC). \section{Evaluation process and datasets} The difficulty of validating image registration methods has been widely discussed by experts in the field over the last decade \cite{Murphy2011, Rohlfing2013,Viergever2016}. A consensus is that the preferred validation strategy should be to use dedicated datasets carefully annotated with dense point landmarks (of the order of several hundred per organ), and to rely on average distance-based metrics such as mean target registration errors (TRE). This type of ground truth is naturally extremely difficult to produce and is, as of today, only available publicly for lung CT studies \cite{Castillo2009,Castillo2013,Murphy2011}. Due to the lack of annotations, using surrogates for landmark-based validation is nevertheless often the only validation strategy available and should be done carefully.
A study by Rohlfing showed that a great number of DIR studies published in top journals and international conferences did not respect validation standards, relying too heavily on validation surrogates such as unsupervised image similarity metrics or volumetric overlaps of anatomical structures that are not sufficiently small \cite{Rohlfing2013}. These observations are still valid for many more recent studies. Another challenge with DIR validation is to demonstrate superiority with respect to the state of the art. DIR pipelines contain a large number of components that can have a critical influence on the results \cite{Klein2010,Modersitzki2009,Tustison2014}, and success is most often achieved through intensive practice and extensive parameter exploration. Due to the size of the parameter space and to nonlinear interdependencies between these parameters, it is however extremely difficult to establish a fair ranking among different methods \cite{Murphy2011}. This task is nevertheless facilitated by active communities of software users such as \textit{elastix} \cite{Klein2010} or ANTs \cite{Tustison2014} and challenge organizers sharing best practice and pitfalls associated with various use cases for different anatomical locations, the most common of which are the brain and the lungs. To partly address the issues raised with the validation of image registration algorithms, we evaluated the proposed VFS-based registration approach using existing DIR pipelines that were previously validated by research groups, when available, on the same public image datasets that we used in our experiments. All else being equal, we evaluated the change in performance brought by VFS by substituting EBF-based SR for the intensity images. That is, instead of the image-based similarity metric $\mathcal S \left(I, J\right)$ used in a given pipeline, we averaged the same metric over the $n$ components of the EBF: \begin{equation} \mathbf S \left(\mathbf D^I, \mathbf D^J\right) = \frac{1}{n} \sum_{i=1}^n \mathcal S \left(D_i^I, D_i^J\right), \end{equation} where $D_i^I$ and $D_i^J$ are the $i^\text{th}$ components of $\mathbf D^I$ and $\mathbf D^J$ ($n=3$ in volumetric images). In this context, rather than absolute performance, the objective is to demonstrate that, regardless of the pipeline considered, substituting vector-valued structural representations for the intensity images leads to improved results in several registration scenarios. In addition, this constrained validation setup can ease comparative evaluation by removing potential bias due to algorithm re-implementation. Such a choice is not optimal for VFS, which could likely benefit from further optimization. In all our experiments, $\mathbf D^{I,J}$ are VFC fields computed using a discrete kernel support size of $100$ voxels. For practical reasons, we implemented our own version of VFC in C++ using ITK \cite{Avants2014}. A Matlab implementation provided by the authors of VFC is available online\footnote{http://viva-lab.ece.virginia.edu/pages/toolbelt.html}. For all datasets, the VFC exponent parameter $\gamma$ controlling the degree of smoothness of the EBF was empirically selected in the range $[2.5,4.5]$ so as to maximize the studied evaluation metric and is specified for each experiment in the next section. We used several publicly available datasets covering diverse anatomical locations including brain MRI and lung CT imaging, which are two common applications for image registration.
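As a concrete illustration of the component-averaged evaluation defined above, the sketch below averages a scalar similarity metric (normalized cross-correlation is used here purely as an example) over the components of two vector-valued SR; how the averaged metric is wired into a specific registration package is not shown.
\begin{verbatim}
# Minimal sketch of the component-averaged similarity S(D^I, D^J);
# NCC is used as the scalar metric purely for illustration.
import numpy as np

def ncc(a, b):
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def averaged_metric(D_fixed, D_moving, metric=ncc):
    """D_fixed, D_moving: sequences of the n EBF components (n = 3 for volumes)."""
    return sum(metric(f, m) for f, m in zip(D_fixed, D_moving)) / len(D_fixed)

# usage with hypothetical 2-D toy fields (n = 2):
# score = averaged_metric((Dx_fixed, Dy_fixed), (Dx_moving, Dy_moving))
\end{verbatim}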
We also propose the use of a publicly available abdominal MRI segmentation dataset, which is well suited as a surrogate for the validation of multimodal image registration since it focuses on small anatomical structures (the kidneys). The rationale behind the choice of the datasets was to cover a large variety of registration scenarios while encouraging reproducibility. More specifically, we used both the Hammers\footnote{https://brain-development.org/brain-atlases/adult-brain-atlases/} \cite{Hammers2003} and IBSR18\footnote{https://www.nitrc.org/projects/ibsr} brain MRI datasets, the DIR-Lab\footnote{https://www.DIR-Lab.com/} thoracic CT datasets \cite{Castillo2009}, as well as the CHAOS\footnote{https://chaos.grand-challenge.org/} abdominal multiparametric MRI segmentation dataset \cite{Chaos2020}, which we propose to use for the first time for image registration purposes. \subsection{\label{sec3a} Landmark-based validation on lung 4DCT images} The only public datasets providing dense landmark correspondences are the three DIR-Lab studies \cite{Castillo2009,Castillo2009a,Castillo2013}, and the EMPIRE10 challenge dataset \cite{Murphy2011}, which all concern lung 4DCT studies. The validation of EMPIRE10 results is indirect as it requires a written submission and participation through the challenge website. In our experiments, we focused on the three DIR-Lab datasets, which provide access to $300$ landmark correspondences between the end-inhale and end-exhale phases for $20$ patient cases. The DIR-Lab datasets are divided into $3$ sub-datasets, hereafter referred to as studies DIR-Lab-4DCT-1 \cite{Castillo2009}, DIR-Lab-4DCT-2 \cite{Castillo2009a} and DIR-Lab-COPD \cite{Castillo2013}, showing variations in displacement amplitudes between the two breathing phases. We evaluated VFS in the context of thoracic imaging using two existing dedicated registration pipelines: \begin{itemize} \item a conventional \textit{elastix} DIR pipeline, hereafter referred to as ELX, consisting of three consecutive stages (one affine and two B-spline stages using the NCC metric) that was primarily designed for the EMPIRE10 lung registration challenge, where it achieved the second-best ranking at the time of submission \cite{Murphy2011}. An \textit{elastix} parameter file\footnote{http://elastix.bigr.nl/wiki/index.php/Par0011} was released by the authors, allowing for an objective comparison with VFS. The second DIR stage (ELX-2) is used to refine results from the first DIR stage (ELX-1) using lung masking. In the following, the corresponding VFS-based results are referred to as VFS-1 and VFS-2 respectively. \item the \textit{pTV}\footnote{https://github.com/visva89/pTVreg} pipeline based on isotropic total variation regularization \cite{Vishnevskiy2016}, which is currently the best-performing method on the DIR-Lab datasets. The rationale behind pTV is to allow for sharp transitions in the deformation fields along sliding interfaces through total variation-based regularization of the control grid displacements of first order B-splines. Such transitions are common in the lung and impossible to render with $\ell_2$-based smoothness penalization. We found the mean TRE values reported in \cite{Vishnevskiy2016} to be slightly sub-optimal when running the algorithm, and we therefore report two sets of measurements, referred to as pTV-1 for the values reported in \cite{Vishnevskiy2016} and pTV-2 for the updated values. \end{itemize} In this experiment, the VFS parameter $\gamma$ was set to $\gamma=3$.
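Since the comparisons on these datasets are reported as mean TRE over the provided landmark pairs, a minimal sketch of this computation is given below; the landmark arrays, their ordering and the voxel spacing are illustrative placeholders rather than actual DIR-Lab values.
\begin{verbatim}
# Minimal sketch of the mean target registration error (TRE) over paired
# landmarks; arrays and spacing below are illustrative placeholders.
import numpy as np

def mean_tre(fixed_pts_vox, warped_pts_vox, spacing_mm):
    """Both arrays have shape (N, 3) in voxel indices; spacing_mm is (sx, sy, sz)."""
    d = (np.asarray(fixed_pts_vox) - np.asarray(warped_pts_vox)) * np.asarray(spacing_mm)
    tre = np.linalg.norm(d, axis=1)      # Euclidean distance per landmark, in mm
    return tre.mean(), tre.std()

# e.g. 300 landmark pairs per DIR-Lab case (illustrative spacing):
# mean_mm, std_mm = mean_tre(fixed_pts, warped_moving_pts, spacing_mm=(1.0, 1.0, 2.5))
\end{verbatim}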
\subsection{\label{sec3b} Subcortical volume overlaps in MRI} A major recommendation of Rohlfing's study \cite{Rohlfing2013} is that, in the absence of landmark-based correspondences, ``\textit{only overlap of sufficiently local labeled ROIs could distinguish reasonable from poor registrations}'', the author arguing that ``\textit{smaller, more localized ROIs approximate point landmarks, and their overlap thus approximates point-based registration error}''. These principles have guided our validation strategy in the remainder of this section. The Hammers dataset \cite{Hammers2003} is a brain T1-weighted MRI dataset consisting of $30$ subjects that were manually segmented into $83$ regions. Similarly to Section \ref{sec3a}, we compared our approach to an existing study by Qiao et al. \cite{Qiao2015}, for which parameter files are also available on the \textit{elastix} online database website\footnote{http://elastix.bigr.nl/wiki/index.php/Par0035}. We reproduced the results described in \cite{Qiao2015} by performing inter-subject registration for all patients, leading to $870$ fixed-moving image pairs. For each subject, we studied the average Dice similarity coefficient (DSC) in the sixteen labeled subcortical regions (left and right hippocampus, amygdala, cerebellum, caudate nucleus, nucleus accumbens, putamen, thalamus and pallidum) after image registration with the $29$ remaining images. We focused on subcortical structures in agreement with the recommendations of Rohlfing \cite{Rohlfing2013}. Although the cerebellum arguably does not meet these recommendations due to its large volume, it was kept in our experiments for completeness. Left and right segmentations were merged into bilateral segmentations for clarity, leading to $8$ labeled regions. A similar study was also conducted on the IBSR18 brain dataset, consisting of $18$ healthy brain MRI subjects. The VFS parameter $\gamma$ was set to $\gamma=4$. \subsection{\label{sec3c}Renal volume overlaps in multiparametric abdominal MRI} We are also interested in the validation of VFS in a multimodal context. As of today, the only public multimodal dataset dedicated to image registration is the RIRE database for rigid brain alignment\footnote{http://www.insight-journal.org/rire}. To diversify anatomical locations, we propose the use of the recent Combined Healthy Abdominal Organ Segmentation (CHAOS) database \cite{Chaos2020}, a multiparametric MRI dataset for abdominal organ segmentation. Abdominal MRI is a less studied, yet challenging, problem for image registration \cite{Carrillo2000}. The CHAOS database consists of a multiparametric MRI study providing the segmentation of four abdominal organs (liver, spleen, right and left kidneys) acquired with two different pulse sequences (T1-DUAL and T2-SPIR) for $20$ patients; the two acquisitions were neither co-registered nor resliced into the same image sampling space. In addition to covering a less explored anatomical location, several characteristics of the CHAOS dataset suggest its potential interest as a surrogate for image registration validation. A major advantage is that the manual ground truth segmentation masks of the different organs were delineated independently in the two modalities. The kidneys are also relatively small organs compared to the field of view (FoV), which follows the recommendations of \cite{Rohlfing2013}. Finally, the relative symmetry of the kidneys enables a bilateral evaluation of the registration performance across the FoV.
For these reasons, we have studied the average Dice overlap measurements for the left and right kidneys in the CHAOS dataset as a new surrogate measurement for multimodal image registration accuracy. As previously, we used an existing registration pipeline based on \textit{elastix}\footnote{http://elastix.bigr.nl/wiki/index.php/Par0057} (referred to as ELX) for multiparametric MRI to evaluate VFS objectively. This pipeline was originally proposed in \cite{Jansen2019} for the registration of diffusion weighted MRI to abdominal dynamic contrast enhanced MRI for liver segmentation and tumor detection. It is a two-stage registration procedure (rigid then B-spline) and uses NMI as a similarity metric. The VFS parameter $\gamma$ was set to $\gamma=2.5$. \section{Results} \subsection{\label{sec4a} Landmark-based validation on lung 4DCT images} Tables \ref{tab1} and \ref{tab2} show results for the ELX comparative study on the three DIR-Lab datasets. Only the results for the final B-spline stage are shown in Table \ref{tab2} for clarity. A consistent improvement of TRE was observed for all patients with the simple substitution of directional structural VFC-based representations for intensity images. After the first stage, an average mean TRE of $3.34\pm1.77$ mm was achieved for ELX-1 against $2.23\pm1.06$ mm for VFS-1. Results were further improved with the second B-spline stage, with an average mean TRE of $2.17\pm1.04$ mm for ELX-2 (performing only slightly better than the first VFS stage) against $1.84\pm0.64$ mm after the second VFS stage. Table \ref{tab3} shows mean TREs for the DIR-Lab-4DCT datasets using isotropic total variation regularization. Results were more mixed depending on the dataset considered, with significant improvement for VFS in $4$ out of $5$ cases of study \cite{Castillo2009}. Substituting VFS sets two new record TRE scores in the DIR-Lab challenge leaderboard for the first two cases (mean TREs of $0.69$ mm and $0.67$ mm for cases \#1 and \#2 respectively). However, slightly lower performance than the original pTV was achieved for the second 4DCT dataset \cite{Castillo2009a}. The study \cite{Castillo2009} shows smaller average displacements prior to image registration than the other two \cite{Castillo2009a,Castillo2013}, which may explain the variations in performance. Nevertheless, we recall that these results were not optimized for VFS and that the only change made was to provide EBF components as substitutes for intensity images. \begin{table}[htbp] \centering \caption{\label{tab1}DIR-Lab-4DCT datasets \cite{Castillo2009,Castillo2009a}. Comparison with two-step \textit{elastix} intensity registration.
Mean TRE (in mm) for 300 landmarks.} \begin{center} \begin{tabular}{c|c c c c c } \hline \textbf{Subject} & {Orig.} & {ELX-1} & {ELX-2} & {VFS-1} & {VFS-2}\\ \hline Study \cite{Castillo2009a}\\ \#1 & $3.89$ & $1.29$ & $1.07$ & $\mathbf{1.06}$ & $\mathbf{1.06}$\\ \#2 & $4.34$ & $1.55$ & $1.08$ & $\mathbf{1.02}$ & ${1.05}$\\ \#3 & $6.94$ & $2.37$ & $1.56$ & ${1.44}$ & $\mathbf{1.40}$\\ \#4 & $9.83$ & $2.80$ & $2.00$ & $\mathbf{1.86}$ & ${1.88}$\\ \#5 & $7.48$ & $3.25$ & $2.26$ & $\mathbf{2.07}$ & $\mathbf{1.98}$\\ \hline Study \cite{Castillo2009}\\ \#6 & $10.89$ & $3.60$ & $2.45$ & $2.87$ & $\mathbf{2.19}$\\ \#7 & $11.03$ & $5.04$ & $2.59$ & $3.49$ & $\mathbf{2.00}$\\ \#8 & $14.99$ & $7.40$ & $4.73$ & $4.33$ & $\mathbf{3.29}$\\ \#9 & $7.92$ & $3.00$ & $1.93$ & $1.98$ & $\mathbf{1.67}$\\ \#10 & $7.30$ & $3.06$ & $2.05$ & $2.20$ & $\mathbf{1.86}$\\ \hline \end{tabular} \end{center} \end{table} \begin{table}[htbp] \centering \caption{\label{tab2}DIR-Lab-COPd dataset \cite{Castillo2013}. Comparison with two-step \textit{elastix} intensity registration. Mean TRE (in mm) for 300 landmarks.} \begin{center} \begin{tabular}{c|c c c c c } \hline \textbf{Subject} & {Orig.} & {ELX-2} & {VFS-2}\\ \hline Study \cite{Castillo2013}\\ \#1 & $26.33$ & $13.59$ & $\mathbf{8.12}$ \\ \#2 & $21.79$ & $17.83$ & $\mathbf{6.74}$ \\ \#3 & $12.64$ & $5.40$ & $\mathbf{2.87}$ \\ \#4 & $29.58$ & $13.71$ & $\mathbf{11.03}$ \\ \#5 & $30.08$ & $14.20$ & $\mathbf{10.74}$ \\ \#6 & $28.46$ & $12.66$ & $\mathbf{7.07}$ \\ \#7 & $21.60$ & $7.26$ & $\mathbf{5.30}$ \\ \#8 & $26.46$ & $10.50$ & $\mathbf{8.02}$ \\ \#9 & $14.86$ & $6.43$ & $\mathbf{4.07}$ \\ \#10 & $21.81$ & $13.38$ & $\mathbf{10.75}$ \\ \hline \end{tabular} \end{center} \end{table} \begin{table}[htbp] \centering \caption{\label{tab3}DIR-Lab-4DCT datasets. Comparison with pTV intensity registration. mean TRE (in mm) for 300 landmarks}\begin{center} \begin{tabular}{c|c c c c c } \hline \textbf{Subject} & orig. & {pTV-1} & {pTV-2} & {VFS}\\ \hline Study \cite{Castillo2009a}\\ \#1 & $3.89$ & $0.76$ & $0.77$ & $\mathbf{0.69}$ \\ \#2 & $4.34$& $0.77$ & $0.75$ & $\mathbf{0.67}$ \\ \#3 & $6.94$ & $0.90$ & $0.93$ & $\mathbf{0.87}$ \\ \#4 & $9.83$ & $1.24$ & $1.26$ & $\mathbf{1.22}$\\ \#5 & $7.48$ & $1.12$ & $\mathbf{1.07}$ & $1.11$\\ \hline Study \cite{Castillo2009}\\ \#6 & $10.89$ & $0.85$& $\mathbf{0.83}$ &$0.95$\\ \#7 & $11.03$ & $\mathbf{0.80}$ & $\mathbf{0.80}$ & $0.87$\\ \#8 & $14.99$ & $1.34$ & $\mathbf{1.01}$ & $1.05$\\ \#9 & $7.92$ & $0.92$ & $\mathbf{0.91}$ & $0.98$\\ \#10 & $7.30$ & $\mathbf{0.82}$ & $0.84$ & $0.88$\\ \hline \end{tabular} \end{center} \end{table} \subsection{\label{sec4b} Subcortical volume overlaps in MRI} \begin{figure}[htbp] \subfigure[Hammers dataset]{\includegraphics[width=.95\linewidth]{graphics/boxPlot_Hammers.pdf}} \subfigure[IBSR18 dataset]{\includegraphics[width=.95\linewidth]{graphics/boxPlot_IBSR.pdf} } \caption{\label{figBox} Box plots for tissue overlap scores as measured by the DSC in the (a) Hammers and (b) IBSR18 datasets in $8$ subcortical structures. The middle line represents the median. Results are shown as colored pairs in each structure for intensity-based (left box) and VFS-based (right box) registration. } \end{figure} \begin{table*} \begin{center} \caption{\label{tab4}Hammers dataset - DSC after registration (in \%) averaged over $8$ labeled subcortical regions for each subject. 
Standard deviation is between brackets.} \begin{tabular}{l|c c c c c c c c c c c c|} \hline \textbf{Method} &\#1&\#2&\#3&\#4&\#5&\#6&\#7&\#8&\#9&\#10 \\ \hline \textbf{Qiao et al. \cite{Qiao2015} }& $76.8$ & $74.9$ & $76.9$ & $75.7$ & $77.2$ & $78.0$ & $74.4$ & $76.3$ & $71.6$ & $73.8$ \\ & ($9.5$) & ($13.8$) & ($11.8$) & ($12.8$) & ($10.3$) & ($10.3$) & ($14.9$) & ($12.6$) & ($15.2$) & ($13.2$) \\ \textbf{VFS} & $\mathbf{77.8}$ & $\mathbf{76.5}$ & $\mathbf{78.4}$ & $\mathbf{78.4}$ & $\mathbf{78.5}$ & $\mathbf{79.3}$ & $\mathbf{74.9}$ & $\mathbf{77.8}$ & $\mathbf{76.0}$ & $\mathbf{75.6}$ \\ & ($8.9$) & ($13.8$) & ($10.6$) & ($10.2$) & ($9.9$) & ($8.5$) & ($14.6$) & ($10.6$) & ($12.1$) & ($13.2$) \\ \hline &\#11&\#12&\#13&\#14&\#15&\#16&\#17&\#18&\#19&\#20 \\ \hline \textbf{Qiao et al. \cite{Qiao2015} }& $74.3$ & $76.5$ & $75.3$ & $77.2$ & $75.3$ & $75.2$ & $72.3$ & $76.1$ & $76.1$ & $76.6$ \\ & ($12.5$) & ($13.0$) & ($12.1$) & ($10.9$) & ($13.3$) & ($12.0$) & ($15.7$) & ($11.0$) & ($11.0$) & ($12.0$) \\ \textbf{VFS} & $\mathbf{78.6}$ & $\mathbf{77.7}$ & $\mathbf{78.7}$ & $\mathbf{77.4}$ & $\mathbf{76.6}$ & $\mathbf{78.1}$ & $\mathbf{76.1}$ & $\mathbf{76.7}$ & $\mathbf{79.2}$ & $\mathbf{77.6}$ \\ & ($9.0$) & ($13.8$) & ($9.1$) & ($11.4$) & ($13.5$) & ($10.3$) & ($11.7$) & ($10.0$) & ($8.8$) & ($12.6$) \\ \hline &\#21&\#22&\#23&\#24&\#25&\#26&\#27&\#28&\#29&\#30 \\ \hline \textbf{Qiao et al. \cite{Qiao2015} } & $81.0$ & $79.6$ & $78.5$ & $79.1$ & $79.2$ & $76.9$ & $78.9$ & $80.9$ & $75.4$ & $76.5$ \\ & ($8.8$) & ($8.9$) & ($8.5$) & ($10.5$) & ($9.6$) & ($13.4$) & ($11.3$) & ($10.0$) & ($11.2$) & ($12.2$) \\ \textbf{VFS} & $\mathbf{81.8}$ & $\mathbf{81.2}$ & $\mathbf{80.5}$ & $\mathbf{81.8}$ & $\mathbf{79.8}$ & $\mathbf{78.4}$ & $\mathbf{80.0}$ & $\mathbf{82.7}$ & $\mathbf{79.9}$ & $\mathbf{78.8}$ \\ & ($6.9$) & ($7.4$) & ($7.2$) & ($7.8$) & ($9.8$) & ($11.7$) & ($10.3$) & ($7.6$) & ($7.2$) & ($8.3$) \\ \hline \end{tabular} \end{center} \end{table*} \begin{table*}[htbp] \begin{center} \caption{\label{tab5}IBSR18 dataset - DSC after registration (in \%) averaged over $8$ labeled subcortical regions for each subject. Standard deviation is between brackets.} \begin{tabular}{l|c c c c c c c c c c c c|} \hline \textbf{Method} & \#1 & \#2 &\#3&\#4&\#5&\#6&\#7&\#8&\#9 \\ \hline \textbf{Qiao et al. \cite{Qiao2015} }& $75.0$ & $75.8$ & $69.9$ & $72.5$ & $74.0$ & $73.4$ & $75.7$ & $71.7$ & $75.6$ \\ & ($7.8$) & ($6.8$) & ($12.0$) & ($8.9$) & ($9.0$) & ($6.9$) & ($7.3$) & ($8.6$) & ($6.9$)\\ \textbf{VFS} &$\mathbf{76.8}$ & $\mathbf{77.6}$ & $\mathbf{73.8}$ & $\mathbf{75.6}$ & $\mathbf{75.2}$ & $\mathbf{76.2}$ & $\mathbf{76.4}$ & $\mathbf{75.7}$ & $\mathbf{77.4}$ \\ & ($9.6$) & ($8.2$) & ($11.6$) & ($10.4$) & ($10.1$) & ($7.7$) & ($9.8$) & ($9.2$) & ($9.3$) \\ \hline &\#10&\#11&\#12&\#13&\#14&\#15&\#16&\#17&\#18\\ \hline \textbf{Qiao et al. \cite{Qiao2015}} & $68.7$ & $71.8$ & $73.0$ & $63.5$ & $75.1$ & $67.7$ & $76.9$ & $75.6$ & $79.0$ \\ & ($11.7$) & ($8.8$) & ($10.1$) & ($18.2$) & ($7.6$) & ($12.5$) & ($9.3$) & ($8.5$) & ($7.7$) \\ \textbf{VFS} & $\mathbf{73.0}$ & $\mathbf{74.8}$ & $\mathbf{73.9}$ & $\mathbf{73.7}$ & $\mathbf{78.2}$ & $\mathbf{76.3}$ & $\mathbf{78.4}$ & $\mathbf{80.1}$ & $\mathbf{80.8}$\\ & ($14.0$) & ($11.1$) & ($10.6$) & ($12.0$) & ($8.6$) & ($9.6$) & ($9.2$) & ($7.1$) & ($5.9$) \\ \hline \end{tabular} \end{center} \end{table*} Table \ref{tab4} summarizes the scores obtained on the $30$ images, where a consistent improvement in volume overlap was achieved for all $30$ subjects. Fig. 
\ref{figBox} shows the average DSC achieved in each region for the Hammers and the IBSR18 datasets. In the Hammers dataset, tissue overlap was improved across all regions using VFS, with the largest relative increases achieved in the amygdala ($+3.97\%$), hippocampus ($+3.46\%$), pallidum ($+1.83\%$) and thalamus ($+1.78\%$) (Fig. \ref{figBox}a). DSC improvements with VFS were more pronounced on the IBSR18 dataset, with $+8.44\%$ in the nucleus accumbens, $+6.07\%$ in the amygdala, $+4.16\%$ in the pallidum, $+3.61\%$ in the putamen, $+3.59\%$ in the hippocampus and $+3.13\%$ in the thalamus (Fig. \ref{figBox}b). A reduction of the variability of the results was also observed for all subcortical regions. A likely explanation for the higher gains of VFS on the IBSR18 dataset is that the compared method by Qiao et al. \cite{Qiao2015} was optimized on the Hammers dataset. After averaging scores over all $870$ registrations and all labels, a global DSC of $78.5\%\pm10.0\%$ was achieved for VFS against $76.5\%\pm11.4\%$ for the baseline. The $p$-value of the corresponding Wilcoxon signed rank test was $p=0.012$, suggesting statistical significance. Table \ref{tab5} shows a similar improvement of volumetric overlap scores on the IBSR18 dataset. \begin{figure*}[htbp] \includegraphics[width=\linewidth]{graphics/figChaos_mod} \caption{\label{figChaos1} Results of renal volume overlaps using the pipeline proposed in \cite{Jansen2019} for the $20$ cases of the CHAOS dataset. Results marked with a red or yellow arrow are failure cases for ELX or VFS respectively, characterized by a decrease of DSC after registration or a DSC below $80\%$.} \end{figure*} \subsection{\label{sec4c}Renal volume overlaps in multiparametric abdominal MRI} Fig. \ref{figChaos1} shows renal volume overlaps for the $20$ images of the CHAOS dataset after registration with ELX or VFS. Despite the similarity in registration scenarios between \cite{Jansen2019} and the present study, ELX led to several failure cases, which we identified as a decrease of renal overlap measurements after registration or a DSC value lower than or equal to $80\%$ ($10$ cases out of $20$). Using the same pipeline, VFS led to only $5$ failure cases. The average overlap DSC was $68.1\%\pm18.8\%$ before image registration, and $76.9\%\pm17.8\%$ and $81.1\%\pm15.13\%$ for ELX and VFS respectively, again suggesting better registration accuracy achieved with the substitution of EBF-based structural representations for intensity images. \section{Discussion and Conclusion} We have proposed the use of new directional representations for image registration based on the similarity between regularized vector fields normally used in active contour segmentation, a technique we called vector field similarity. Results on a variety of registration scenarios (mono-modal inter-patient, multi-modal intra-patient) and anatomical locations show the potential advantage of such representations over the use of intensity images, with consistent improvement achieved through this substitution. As we adopt the point of view of structural representations, similarity can be measured using several distance metrics, both mono- and multimodal, and can be readily implemented and adapted into existing registration pipelines. The main disadvantage associated with vector-valued directional representations compared with scalar $n$-dimensional intensity images is that they are $n$ times more memory consuming, which translates into longer registration times.
However, we observed that, among conventional metrics, VFS guided by the NCC similarity metric achieved on average results superior to, or on par with, NMI, even for pipelines optimized for NMI (not shown in the paper). The less memory-demanding NCC could therefore be substituted for NMI in VFS-based pipelines, partly compensating for this drawback in the various pipelines relying on NMI. VFS was evaluated within existing registration pipelines to provide a fair and objective comparison with the current state of the art. On the other hand, VFS could likely benefit from additional parameter tuning. Regarding parameter optimization, we used a VFC-based similarity that enabled us to extend the capture range while providing a good localization of the global minimum for all values of the smoothing parameter $\gamma$. Due to the simplicity of such a substitution, we hope this study will encourage the use of directional structural representations in cases where intensity-based registration does not seem to provide sufficient accuracy. Future investigations will be oriented towards the combination of directional structural representations with unsupervised deep learning-based registration. \bibliographystyle{IEEEbib}
{ "timestamp": "2021-12-01T02:27:49", "yymm": "2111", "arxiv_id": "2111.15509", "language": "en", "url": "https://arxiv.org/abs/2111.15509" }
\section{Introduction} \IEEEPARstart{I}{n} recent years, convolutional neural networks have achieved great success in image classification \cite{10.1145/3065386}, semantic segmentation \cite{7298965}, face recognition \cite{electronics9081188}, target tracking \cite{7780834} and target detection \cite{ZAFEIRIOU20151}. Algorithms based on convolutional neural networks have become one of the leading technologies in these fields. In order to further improve the performance of convolutional neural networks, researchers have worked on the network structure, from AlexNet \cite{10.1145/3065386} to VGGNet \cite{Simonyan2015VeryDC}, and then to the deeper ResNet \cite{7780459}, DenseNet \cite{8099726}, etc. Many effective solutions have also been put forward in other aspects, such as data augmentation, batch normalization \cite{10.5555/3045118.3045167} and various activation functions. The loss function is an indispensable part of a CNN model: it drives the update of the model parameters during the training phase. The traditional Softmax loss function is composed of a softmax layer followed by the cross-entropy loss; the output of the neural network passes through the softmax layer to obtain the posterior probabilities. Because of its fast learning speed and good performance, it is widely used in image classification. However, the Softmax loss function adopts an inter-class competition mechanism: it only cares about the accuracy of the predicted probability of the correct label, ignores the differences among the incorrect labels, and cannot ensure intra-class compactness and inter-class separability. L-Softmax \cite{10.5555/3045390.3045445} adds an angular constraint on top of Softmax to make the boundaries between samples of different classes more pronounced. A-Softmax \cite{8100196} also improves Softmax, proposing weight normalization and an angular margin so that the maximum intra-class distance becomes smaller than the minimum inter-class distance. In addition, AM-Softmax \cite{8331118} further improves A-Softmax: in order to improve the convergence speed, the Euclidean feature space is converted to a cosine feature space, and $cos(m\theta)$ is changed to $(cos\theta - m)$. Center Loss \cite{10.1007/978-3-319-46478-7_31} is the first to add, on top of Softmax, constraints on the sample features before the classification layer (the mean square error between the sample features and the computed class centers is used to restrict the intra-class and inter-class distances), but the distances between similar classes are still not well separated. The above improvements to Softmax Loss raise the classification accuracy in face recognition (where the inter-class distances are relatively small), but in general image classification (where the inter-class distances are larger), Softmax Loss still performs best. The cross entropy commonly used in the loss function mainly exploits the information from the correct label to maximize the posterior probability of the sample, and largely ignores the information carried by the remaining incorrect labels. Therefore, Complement Objective Training \cite{2019arXiv190301182C} proposed a method of alternating training using correct-label and incorrect-label information, which not only improves the performance of the model but also makes it more robust to single-step adversarial attacks; however, it does not consider non-entropy-based complement objectives. Liang et al.
\cite{9075079} proposed a new loss function, the Near Classifier Hyper-Plane (N-CHP) Loss, within a new CNN training framework, so that the learned sample features have a minimal intra-class distance and lie close to the classifier hyperplane. The learned knowledge is then transferred to Softmax Loss by using Loss Transferring, which greatly improves the classification performance. However, when the classification task has a large number of classes, N-CHP Loss tends to fuse the sample features, resulting in poor performance. From the perspective of maximizing the inter-class distance and minimizing the intra-class distance, PEDCC-Loss \cite{2019arXiv190200220Z}\cite{8933403} predefines evenly-distributed class centroids to replace the continuously updated class centers of Center Loss, so as to maximize the inter-class distance. Meanwhile, it uses AM-Softmax and a mean square error (MSE) loss to enforce the compactness of the intra-class feature distribution and the separability of the inter-class feature distribution. PEDCC-Loss improves the classification accuracy in both face recognition and general image classification, but it still contains a loss constraint on the posterior probability, which leaves the sample feature distribution in a sub-optimal state. In pattern recognition applications, the structure of a convolutional neural network classifier generally includes convolutional layers and fully connected layers, in which the convolutional layers extract features from the input and the fully connected layers perform the classification. In the field of classification, convolutional neural networks mostly use end-to-end learning, that is, the output of the neural network is constrained with loss functions, while there are few or no constraints on the extracted features. This article proposes a Softmax-free loss function (POD Loss) based on the predefined optimal distribution of latent features for the CNN classifier. On the one hand, the predefined evenly-distributed class centroids (PEDCC) are used to replace the weight of the linear classification layer in the convolutional neural network (the weight is fixed in the training phase to maximize the inter-class distance), so as to optimize the distribution of latent features. On the other hand, by introducing a decorrelation mechanism, the improved Cosine Loss restricts the cosine distance between the sample feature vector and the PEDCC. At the same time, the correlations between the sample feature dimensions are limited, so that the extracted latent features are most effective. POD Loss discards the constraints on the posterior probability in the traditional loss function and only restricts the extracted sample features to achieve the optimal distribution of latent features. Finally, the cosine distance is used for classification to obtain high classification accuracy. The location of POD Loss and the whole network structure are shown in Fig. 1. Section \uppercase\expandafter{\romannumeral3} gives the details of the method. \begin{figure*} \centering \includegraphics[width=0.9\textwidth]{Network_Structure.eps} \caption{POD Loss of the CNN classifier. POD Loss is a combination of Cosine Loss and Self-correlation Constraint (SC) Loss, \textbf{x} is the normalized latent feature of the sample, and FC2 (PEDCC) is the linear classification layer with fixed (solidified) parameters.
POD Loss is completely a constraint on the optimal distribution of latent features of the samples.} \end{figure*} Our main contributions are as follows: \begin{itemize} \item The PEDCC we proposed before is adopted in our classification model, and only the output of the feature extraction layer in the convolutional neural network is restricted to achieve the optimal distribution of latent features. The Softmax loss function is discarded, the weight of the classification layer is fixed, and the cosine distance is used for classification. \item The output of the feature extraction layer is constrained. On the one hand, the improved mean square error (MSE) loss function is used to constrain the cosine distance between the sample features and the PEDCC class centroid. On the other hand, the correlation between the latent feature dimensions is minimized to improve the classification accuracy. \item For the classification tasks, experiments on multiple datasets were conducted. Compared with the traditional cross-entropy loss plus softmax, and the typical Softmax related loss functions AM-Softmax Loss, COT-Loss and PEDCC-Loss, the classification accuracy of POD Loss is obviously better, and the training of network is easier to converge. \end{itemize} \section{Related works} \subsection{Loss function of classification} In the field of classification, there are many different loss functions in convolutional neural networks for end-to-end learning. For multi-classification problems, Softmax loss function is generally selected, which has good performance and is easy to converge. The Softmax loss function is as follows: \begin{normalsize} \begin{equation} L_{Softmax}=\frac{1}{N}\sum_{i} -log{\frac{e^{z_{y_i}}}{\sum_{j}e^{z_j}}} \end{equation} \end{normalsize} where $N$ represents the number of samples, $z_{y_i}$ represents the output value of the last fully connected layer of the correct class $y_i$, and $z_j$ is the output value of the last fully connected layer of the $j$-th class. Therefore, $z_{y_i}=W^{T}_{y_i}x_i$, that is , $z_{y_i}=||W^{T}_{y_i}||\cdot||x_i||\cdot cos(\theta_{y_i})$, $W^{T}_{y_i}$ is the corresponding weight in the fully connected layer and $x_i$ represents the input feature of the $i$-th sample. Therefore, the Softmax loss function becomes the following formula: \begin{normalsize} \begin{equation} L_{Softmax}=\frac{1}{N}\sum_{i} -log{\frac{e^{||W^{T}_{y_i}||\cdot||x_i||\cdot cos(\theta_{y_i})}}{\sum_{j}e^{||W^{T}_j||\cdot||x_i||\cdot cos(\theta_j)}}} \end{equation} \end{normalsize} In general image classification tasks, there is no doubt about the performance of Softmax Loss, but for face recognition, due to the small inter-class distance, its classification accuracy is not satisfactory. L-Softmax increases the coefficient $m$ before the angle $\theta$, m is a positive integer, and the cosine function is monotonically decreasing in the range of $0$ to $\pi$. Therefore, $cos(m\theta) \leq cos(\theta)$. In this way, the inter-class distance learned by the model is larger, and the intra-class distance is smaller. The greater the value of $m$, the greater the difficulty of learning. A-Softmax is similar to L-Softmax in that it increases the angular spacing and normalizes the weight $||W||$. Based on this, AM-Softmax proposes to change $cos(m\theta)$ to $(cos\theta - m)$, changes the multiplication in the formula to addition, and also normalizes the input feature $||x||$. Compared with A-Softmax, the formula is simpler in form and calculation. 
The final AM-Softmax is written as: \begin{small} \begin{equation} L_{AM-Softmax}=-\frac{1}{N}\sum_{i} log{\frac{e^{{s\cdot}({\cos\theta_y}_i-m)}}{e^{{s\cdot}({\cos\theta_y}_i-m)}+\begin{matrix} \sum_{j=1,j\ne y_i}^c e^{{s\cdot}{\cos\theta_j}}\end{matrix}}} \end{equation} \end{small} Center Loss is no longer limited to the constraints of neural network output. On the basis of Softmax, an additional mean square error (MSE) loss function is added to calculate the sample feature and the class center feature of the sample to reduce the intra-class distance and increase the inter-class distance. The class feature centers are continuously updated during the process of network training. \begin{normalsize} \begin{equation} L_{Center}=L_{Softmax}+\frac{\lambda}{2}\sum^M_{i=1}||\bm{x_i}-\bm{c_{y_i}}||^2 \end{equation} \end{normalsize} $M$ represents the number of samples, $\bm{x_i}$ is the input feature of the $i$-th sample, $\bm{c_{y_i}}$ represents the central feature of the class to which the $i$-th sample belongs, which is continuously updated during the learning process after initialization. $\lambda$ represents the weighting factor. Cross entropy mainly uses the information from the correct label to maximize the possibility of data, but largely ignores the information from the remaining incorrect labels. COT believes that in addition to correct labels, incorrect labels should also be used in training, which can effectively improve the performance of the model. The training strategy is to alternate training between the correct label and the incorrect labels. The constraint of the correct label is the commonly used cross entropy, and the constraint of the incorrect labels is the complement entropy expression as follows: \begin{normalsize} \begin{equation} C(\hat{y}_{\bar{c}})=-\frac{1}{N}\sum^N_{i=1}\sum^K_{j=1,j \neq g}(\frac{\hat{y}_{ij}}{1-\hat{y}_{ig}})log(\frac{\hat{y}_{ij}}{1-\hat{y}_{ig}}) \end{equation} \end{normalsize} where $N$ is the number of samples, $K$ is the number of classes, $\hat{y}_{ij}$ represents the predicted probability of the incorrect label $j$ of the $i$-th sample, and $\hat{y}_{ig}$ represents the predicted probability of the correct label $g$ of the $i$-th sample. When the predicted probabilities of all labels are equal, the entropy will be maximized, so the entropy makes $\hat{y}_{ij}$ approximate to $\frac{1-\hat{y}_{ig}}{K-1}$, essentially offsetting the predicted probability of incorrect labels as $K$ increases. PEDCC-Loss \cite{2019arXiv190200220Z}\cite{8933403} proposes predefined evenly-distributed class centroids instead of the continuously updated class centers in Center Loss, and uses the fixed PEDCC weight instead of the weight of the classification linear layer in the convolutional neural network to maximize the inter-class distance. At the same time, the constraint of latent features is added (a constraint similar to Center Loss is added to calculate the mean square error (MSE) loss of sample features and PEDCC center), and AM-Softmax is also applied. This method makes the distribution of intra-class features more compact and the distribution of inter-class features more distant. The overall system diagram is shown in Fig. 2. 
The PEDCC-Loss expression is as follows: \begin{small} \begin{equation} L_{PEDCC-AM}=-\frac{1}{N}\sum_{i} log{\frac{e^{{s\cdot}({\cos\theta_y}_i-m)}}{e^{{s\cdot}({\cos\theta_y}_i-m)}+\begin{matrix} \sum_{j=1,j\ne y_i}^c e^{{s\cdot}{\cos\theta_j}}\end{matrix}}} \end{equation} \end{small} \begin{normalsize} \begin{equation} L_{PEDCC-MSE}=\frac{1}{2N}\sum_{i=1}^N{\left \| \bm{x_i}-\bm{pedcc_{y_i}} \right \|}^2 \end{equation} \end{normalsize} \begin{normalsize} \begin{equation} L_{PEDCC-Loss}=L_{PEDCC-AM}+\lambda\sqrt[n]{L_{PEDCC-MSE}} \end{equation} \end{normalsize} where $N$ is the number of samples, $\lambda$ is a weighting coefficient, and $n \geq 1$ is a constraint factor of $L_{PEDCC-MSE}$. $\bm{x_i}$ represents the input feature of the $i$-th sample (normalized), and $\bm{pedcc_{y_i}}$ represents the PEDCC central feature of the class to which the sample belongs (normalized). \begin{figure*} \centering \includegraphics[width=1\textwidth]{PEDCC-Loss_Network_Structure.eps} \caption{The PEDCC-Loss of CNN Classifier \cite{8933403}.} \end{figure*} \subsection{Loss function for latent features in self-supervised learning} In self-supervised learning, Barlow Twins \cite{2021arXiv210303230Z} proposes an innovative loss function, adding a decorrelation mechanism to the loss function to maximize the variability of representation learning. The Barlow Twins Loss is written as: \begin{normalsize} \begin{equation} L_{BT}=\sum_{i}(1-C_{ii})^2+\lambda\sum_{i}\sum_{j \neq i}C_{ij}^2 \end{equation} \end{normalsize} where $\lambda$ is a weighting coefficient used to balance the two terms in the loss function, and $C$ is the cross-correlation matrix between the output features of a batch of samples and those of their augmented versions, computed with two identical networks. The addition of the latter redundancy-reduction term in the loss function reduces the redundancy between the network output features, so that the output features contain the non-redundant information of the samples and achieve a better feature representation. On this basis, VICReg \cite{2021arXiv210504906B} combines a variance term with a decorrelation mechanism based on redundancy reduction and a covariance regularization term. The covariance criterion removes the correlation between the different dimensions of the learned representation; its purpose is to spread information across dimensions to avoid dimensional collapse. The criterion mainly penalizes the off-diagonal terms of the covariance matrix, which further improves the classification performance of the features. In this article, PEDCC is also used to generate the predefined evenly-distributed class centroids, instead of the continuously updated class centers of Center Loss, and the solidified PEDCC weights replace the weights of the classification linear layer to maximize the inter-class distance. On the one hand, Cosine Loss calculates and constrains the distance between the sample features and the central features of the PEDCC. On the other hand, similar to self-supervised learning, a decorrelation mechanism is introduced: the self-correlation matrix of the difference between the sample features and the PEDCC central features in each batch is calculated, and the correlation between any pair of features is restricted to improve the classification accuracy.
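To make this redundancy-reduction idea concrete, the following minimal PyTorch sketch computes the Barlow Twins objective above from the embeddings of two augmented views of the same batch (an illustration only, not the authors' reference implementation; the value of \texttt{lam} is an assumption).
\begin{verbatim}
import torch

def barlow_twins_loss(z1, z2, lam=5e-3, eps=1e-8):
    # z1, z2: (B, d) embeddings of two augmented views of the same batch.
    B = z1.size(0)
    # Standardize each feature dimension over the batch.
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + eps)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + eps)
    C = (z1.T @ z2) / B                      # (d, d) cross-correlation matrix
    on_diag = (1.0 - torch.diagonal(C)).pow(2).sum()
    off_diag = (C - torch.diag(torch.diagonal(C))).pow(2).sum()
    return on_diag + lam * off_diag
\end{verbatim}
The diagonal term drives corresponding features of the two views towards perfect correlation, while the off-diagonal term decorrelates different feature dimensions; the SC Loss introduced in Section \uppercase\expandafter{\romannumeral3} penalizes off-diagonal terms in a similar way, but on a self-correlation matrix computed from a single network.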
In the task of image classification, the method in this article no longer restricts the posterior probability of the neural network output, but restricts the latent features extracted from the samples, and realizes the optimal distribution of latent features for classification. \section{Method} \subsection{Cosine Loss} The mean square error (MSE) loss function can be used to constrain the distance between the sample feature and the PEDCC feature center of its class. In this article, the PEDCC center and the sample feature vector are normalized before calculation, so the MSE Loss expression and its derivation are as follows: \begin{normalsize} \begin{equation} L_{PEDCC-MSE}=\frac{1}{2N}\sum_{i=1}^N{\left \| \bm{x_i}-\bm{pedcc_{y_i}} \right \|}^2 \end{equation} \end{normalsize} \begin{normalsize} \begin{equation} =\frac{1}{2N}\sum_{i=1}^N( ||\bm{x_i}||^2+||\bm{pedcc_{y_i}}||^2-2\bm{x_i} \cdot \bm{pedcc_{y_i}}) \end{equation} \end{normalsize} \begin{normalsize} \begin{equation} =\frac{1}{N}\sum_{i=1}^N(1-cos{\theta_{y_i}}) \end{equation} \end{normalsize} where $N$ represents the number of samples and the last equality uses $||\bm{x_i}||=||\bm{pedcc_{y_i}}||=1$. It can be seen that the loss function is essentially a constraint on the cosine distance between the sample feature and the center of the PEDCC. Taking $\theta_{y_i}$ as the independent variable, the derivative of $(1-cos{\theta_{y_i}})$ is $sin{\theta_{y_i}}$, as shown in Fig. 3. It can be seen from Fig. 3 that in the range of $0^{\circ}$ to $90^{\circ}$ the derivative value increases, in the range of $90^{\circ}$ to $180^{\circ}$ the derivative value decreases, and the derivative value is always less than or equal to $1$ over the whole range of $0^{\circ}$ to $180^{\circ}$. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{sin.eps} \caption{The change of $sin{\theta_{y_i}}$ within the range of $0^{\circ}$ to $180^{\circ}.$} \end{figure} $\theta_{y_i}$ represents the angle between the feature of the $i$-th sample and the predefined center of its class $y_i$. This angle, measured at the origin between the sample feature vector and the predefined class center, necessarily lies in the range of $0^{\circ}$ to $180^{\circ}$. During network learning, we want $\theta_{y_i}$ to approach $0^{\circ}$ as quickly as possible. Therefore, the faster $\theta_{y_i}$ decreases, the better; that is, the larger the derivative value, the better, and $(1-cos{\theta_{y_i}})$ does not fully exhibit the behavior we want. Based on the above, Cosine Loss is proposed, whose expression is as follows: \begin{normalsize} \begin{equation} L_{Cosine}=\frac{1}{N}\sum_{i=1}^N(1-cos{\theta_{y_i}})^2 \end{equation} \end{normalsize} where $N$ represents the number of samples, and $cos{\theta_{y_i}}$ is the cosine of the angle between the sample feature and its own predefined central feature. Using $\theta_{y_i}$ as the independent variable, the derivative of $(1-cos{\theta_{y_i}})^2$ is $2\cdot(1-cos{\theta_{y_i}}) \cdot sin{\theta_{y_i}}$, as shown in Fig. 4. It can be seen from the figure that the derivative value keeps increasing in the range of $0^{\circ}$ to $120^{\circ}$, reaches $1$ around $65^{\circ}$, and attains a maximum of about $2.5$. During network training, Cosine Loss is therefore easy to converge and converges faster.
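As an illustration, a minimal PyTorch sketch of this Cosine Loss is given below (variable names and shapes are illustrative; \texttt{pedcc} denotes the fixed matrix of predefined class centroids).
\begin{verbatim}
import torch
import torch.nn.functional as F

def cosine_loss(x, pedcc, labels):
    # x      : (B, d) latent features of a batch
    # pedcc  : (k, d) predefined evenly-distributed class centroids (fixed)
    # labels : (B,)   ground-truth class indices
    x = F.normalize(x, dim=1)
    centers = F.normalize(pedcc, dim=1)[labels]   # centroid of each sample's class
    cos_theta = (x * centers).sum(dim=1)          # cos(theta_y) per sample
    return ((1.0 - cos_theta) ** 2).mean()
\end{verbatim}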
Comparative experimental results for the improved Softmax losses (AM-Softmax Loss and PEDCC-Loss, which contains an MSE term) and Cosine Loss are given in Section \uppercase\expandafter{\romannumeral4}. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{Two_1-cos_sin.eps} \caption{The change of $2\cdot(1-cos{\theta_{y_i}})\cdot sin{\theta_{y_i}}$ within the range of $0^{\circ}$ to $180^{\circ}.$} \end{figure} \subsection{Selection of latent feature dimension} For a given type of dataset, \cite{9444709} gives the following theorem: for $k$ points $a_i$ $(i=1,2,...,k)$ evenly distributed on the unit hypersphere of an $n$-dimensional Euclidean space, if $k \leq n+1$, then $\langle a_i, a_j \rangle=-\frac{1}{k-1}$ for $i \neq j$. Correspondingly, in the convolutional neural network, $k$ evenly-distributed PEDCC class centroids can be obtained when $k$ (the number of classes) $\leq n$ (the feature dimension) $+1$ is satisfied. Different feature representations can be regarded as different pieces of knowledge about the images. The more comprehensive the knowledge, the better the classification performance, and a higher feature dimension indeed makes it easier to find a hyperplane separating images of different classes. However, too many feature dimensions will cause the classifier to over-emphasize the accuracy on the training set, and even learn some wrong or abnormal data, resulting in over-fitting problems. Therefore, under the premise of $k \leq n+1$, selecting an appropriate feature dimension is also crucial to the classification performance. For the experiments of this article, Section \uppercase\expandafter{\romannumeral4} gives the selected combinations of the number of classes and the feature dimension. \subsection{Decorrelation between latent features} The features of the predefined evenly-distributed class centroids are uncorrelated, but during training the sample features only approximate the centroids of the classes to which they belong and cannot be exactly equal to them. Therefore, there is always a certain correlation between the features. Meanwhile, under the premise that the number of center points $k$ (the number of classes) and the space dimension $n$ satisfy $k \leq n+1$, $n$ is generally chosen greater than $k$. For example, when the number of classes $k$ is $10$, $n$ is $256$. When the network training is over, due to the effect of the loss function, the features of the training samples are basically distributed in a $(k-1)$-dimensional subspace \cite{9444709}. From this point of view, the remaining $n-k+1$ dimensions may seem useless, but in fact, if $n$ is set to $k-1$ from the beginning, the classification accuracy drops significantly, which indicates that these extra dimensions play a role in optimizing the classification features during training. In this case, in order to further improve the utilization of all latent features, we propose to constrain the correlation between the dimensions. For example, for the classification of cat and mouse images, if the correlation between dimensions is not restricted, one learned feature may represent the body size while another represents the size of the facial contour. Obviously, these two features are strongly correlated, since a large body generally implies a large facial contour. For classification, the performance based on these two features is very similar or even identical to that based on only one of them. Therefore, the resources occupied by one of the dimensions are wasted.
When the constraint on the correlation between the dimensions is added, the two features learned in the above example may instead be hair color and body size, which are almost uncorrelated. Compared with classification based on a single feature, the combination of the two is obviously better for classification and achieves the purpose of making full use of the features. Barlow Twins Loss adds a decorrelation mechanism to reduce the redundancy between network output features, so that the output features contain non-redundant information about the samples. Barlow Twins uses a cross-correlation matrix and penalizes the off-diagonal terms of the computed cross-correlation matrix. Our method draws on this decorrelation mechanism, but instead of a cross-correlation matrix, the self-correlation matrix of the difference between the latent features of the samples and the predefined central features is used. The off-diagonal terms are likewise penalized to constrain the correlation between the different dimensions of the latent features of the samples. Therefore, the Self-correlation Constraint Loss is proposed: \begin{normalsize} \begin{equation} L_{SC}=\sum_{i=1}^n{\sum_{j \neq i}^n{R_{ij}^2}} \end{equation} \end{normalsize} \begin{normalsize} \begin{equation} R=\frac{1}{B-1}{(X-X_{pedcc})(X-X_{pedcc})^T} \end{equation} \end{normalsize} where $n$ represents the feature dimension and $R$ represents the self-correlation matrix of the difference matrix formed by subtracting the predefined central features from the sample features. The element in the $i$-th row and $j$-th column of the self-correlation matrix is the correlation coefficient between the $i$-th and $j$-th feature dimensions of the difference matrix. $B$ is the number of samples in each batch. $X$ is the normalized feature matrix of the $B$ samples in Fig. 1, where each column vector represents a sample. $X_{pedcc}$ is the matrix of PEDCC centroids corresponding to the sample features. \subsection{POD Loss} On the one hand, Cosine Loss is used to constrain the distance between the sample feature and the PEDCC feature center of its class and, on the other hand, it accelerates the convergence of network training. At the same time, the Self-correlation Constraint (SC) Loss is defined as the constraint on the correlation between the different dimensions of the sample latent features, that is, the decorrelation term, which penalizes the off-diagonal terms of the self-correlation matrix of the difference between the sample features and the predefined central features. The final expression of POD Loss is: \begin{normalsize} \begin{equation} L_{POD}=L_{Cosine}+\lambda{L_{SC}} \end{equation} \end{normalsize} where $L_{Cosine}$ is the cosine loss function, $L_{SC}$ is the loss function that restricts the correlation between feature dimensions, and $\lambda$ is the weighting coefficient. For some experiments, tuning $\lambda$ may improve the classification performance. \section{Experiments and results} The experiments are implemented using PyTorch 1.0 \cite{pytorch}. The network structure used in all experiments is ResNet50 \cite{7780459}. The datasets used include the CIFAR10 \cite{Krizhevsky2009LearningML}, CIFAR100 \cite{Krizhevsky2009LearningML}, Tiny ImageNet \cite{5206848}, FaceScrub \cite{7025068} and ImageNet \cite{5206848} datasets. In order to make the network structure more suitable for the image sizes of the different datasets, some modifications are made to the original ResNet50 structure.
For CIFAR10 (image size $32 \times 32$), CIFAR100 (image size $32 \times 32$) and Tiny ImageNet (image size $64 \times 64$), the convolution kernel of the first convolution layer is changed from the original $7 \times 7$ to $3 \times 3$, and the stride is changed to $1$. The max pooling layer in the second convolutional layer is also removed (except for Tiny ImageNet). These changes are not made for ImageNet. In addition, the feature dimension fed into the PEDCC layer of the network varies with the number of classes of these datasets. All experimental results are averaged over three runs. \subsection{Experimental datasets} The CIFAR10 dataset contains $10$ classes of RGB color images of size $32 \times 32$, with $50,000$ training images and $10,000$ test images. The CIFAR100 dataset contains $100$ classes of images of size $32 \times 32$, with a total of $50,000$ training images and $10,000$ test images. The FaceScrub dataset contains $100$ classes of images of size $64 \times 64$, with $15,896$ training images and $3,896$ test images. The Tiny ImageNet dataset contains $200$ classes of images of size $64 \times 64$, with a total of $100,000$ training images and $10,000$ test images. For these datasets, standard data augmentation \cite{10.1145/3065386} is performed, that is, the training images are padded with $4$ pixels, randomly cropped to the original size, and horizontally flipped with a probability of $0.5$; the test images are not processed. The ImageNet dataset contains $1000$ classes of images. The image size is not fixed, and the height and width are both greater than $224$. There are $1,282,166$ training images and $51,000$ test images in the dataset. For this dataset, the training images are randomly cropped to different sizes and aspect ratios, scaled to $224 \times 224$, and flipped horizontally with a probability of $0.5$. The test images are scaled so that the shorter side is $256$ and then center-cropped to $224 \times 224$. For the above datasets, in the training phase we use the SGD optimizer with a weight decay of $0.0005$ ($0.0001$ for ImageNet) and a momentum of $0.9$. The initial learning rate is $0.1$, and a total of $100$ epochs are trained. At the $30$th, $60$th and $90$th epochs, the learning rate drops to one-tenth of its previous value, and the batch size is set to $128$ ($96$ for ImageNet). \subsection{Experimental results} \subsubsection{Selection of feature dimensions} According to the theoretical analysis of feature dimensions in Section \uppercase\expandafter{\romannumeral3}, comparison experiments are carried out on the selection of different feature dimensions for datasets with different numbers of classes. Table \uppercase\expandafter{\romannumeral1} shows the classification performance under different feature dimensions on the CIFAR10 dataset. Within a certain range, increasing the feature dimension yields better classification performance, but when the number of features exceeds a certain scale, the performance of the classifier declines. The feature dimensions with the best performance on the different datasets over multiple experiments are shown in Table \uppercase\expandafter{\romannumeral2}. For the CIFAR10 dataset, $10$ evenly-distributed class centroids are predefined in a $256$-dimensional hyperspace.
For CIFAR100, FaceScrub and Tiny ImagNet dataset, several evenly-distributed class centroids of corresponding classes are respectively predefined in a $512$ dimensions hyperspace. For ImageNet dataset, the top $30$ classes, the top $100$ classes, the top $200$ classes, the top $500$ classes, and all the $1000$ classes are selected for experiments. The optimal feature dimensions are $256$, $512$, $512$, $1024$ and $2048$. \begin{table}[h] \caption{Classification accuracy (\%) under different feature dimensions on CIFAR10 dataset} \centering \setlength{\tabcolsep}{13mm}{ \begin{tabular}{cc} \toprule Feature Dimension & Accuracy(\%) \\ \midrule 9 & 92.71 \\ 32 & 93.51 \\ 64 & 93.77 \\ 128 & 93.98 \\ 256 & \textbf{94.31} \\ 512 & 94.25 \\ 1024 & 93.82 \\ \bottomrule \end{tabular}} \end{table} \begin{table}[h] \caption{The number of classes and the optimal feature dimensions of different datasets} \centering \setlength{\tabcolsep}{4.5mm}{ \begin{tabular}{ccc} \toprule Dataset & Number of Classes & Feature Dimension \\ \midrule CIFAR10 & 10 & 256 \\ CIFAR100 & 100 & 512 \\ FaceScrub & 100 & 512 \\ Tiny ImageNet & 200 & 512 \\ ImageNet(30) & 30 & 256 \\ ImageNet(100) & 100 & 512 \\ ImageNet(200) & 200 & 512 \\ ImageNet(500) & 500 & 1024 \\ ImageNet(1000) & 1000 & 2048 \\ \bottomrule \end{tabular}} \end{table} \subsubsection{Role of Cosine Loss} Comparative experiments on Softmax, AM-Softmax Loss$(s=5, m=0.25)$, COT, PEDCC-Loss$(s=10,m=0.5)$ and Cosine Loss are conducted on CIFAR100 dataset and FaceScrub dataset. The experimental results are shown in Table \uppercase\expandafter{\romannumeral3}. Among the five loss functions, Cosine Loss has the highest classification performance. In the network training process, the convergence speed of AM-Softmax Loss, PEDCC-Loss and Cosine Loss is shown in Fig. 5. As can be seen from Fig. 5, only Cosine Loss training has a faster and more stable convergence speed. \begin{table}[h] \caption{Comparison of classification accuracy (\%) of multiple losses on CIFAR100 and FaceScrub dataset} \centering \setlength{\tabcolsep}{5.8mm}{ \begin{tabular}{lcc} \toprule \diagbox{Loss}{Dataset} & CIFAR100 & FaceScrub \\ \midrule Softmax Loss & 73.80 & 89.10 \\ \makecell[l]{AM-Softmax Loss \\ $(s=5, m=0.25)$} & 73.03 & 91.67 \\ COT & 74.03 & 90.40 \\ \makecell[l]{PEDCC-Loss \\ $(s=10,m=0.5)$} & 75.58 & 90.98 \\ Cosine Loss & \textbf{76.23} & \textbf{92.11} \\ \bottomrule \end{tabular}} \end{table} \begin{figure*} \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=6cm,height=5cm]{AM-Softmax_TrainingandValidationloss.eps} \caption{AM-Softmax Loss} \label{sf1} \end{subfigure} \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=6cm,height=5cm]{PEDCC-Loss_TrainingandValidationloss.eps} \caption{PEDCC-Loss} \label{sf2} \end{subfigure} \begin{subfigure}{.32\textwidth} \centering \includegraphics[width=6cm,height=5cm]{Cosine_Loss_TrainingandValidationloss.eps} \caption{Cosine Loss} \label{sf3} \end{subfigure} \caption{Variation of classification accuracy of AM-Softmax Loss, PEDCC-Loss and Cosine Loss with epoch during training.} \label{fig5} \end{figure*} \subsubsection{Role of SC Loss} The variance of the $2048$ dimensions feature eigen-vectors (normalized) before the fully connected layer of the network is respectively calculated after POD Loss and Cosine Loss training, as shown in Table \uppercase\expandafter{\romannumeral4}. 
The variance of eigen-vectors after POD Loss training is smaller than that of Cosine Loss, indicating that the existence of SC Loss can balance the energy of the features and gain some benefits for subsequent classification. \begin{table}[h] \caption{Variance of feature eigen-vectors of Cosine Loss and POD Loss} \centering \setlength{\tabcolsep}{8mm}{ \begin{tabular}{cc} \toprule Loss Function & Variance of Feature Eigen-vectors \\ \midrule Cosine Loss & 1.82e-8 \\ POD Loss & \textbf{1.53e-8} \\ \bottomrule \end{tabular}} \end{table} Comparative experiments of Cosine Loss and POD Loss have also been carried out on some datasets, and the experimental results are shown in Table \uppercase\expandafter{\romannumeral5}. The experimental results show that POD Loss with SC Loss is better than a single Cosine Loss in classification accuracy on multiple datasets. \begin{table}[h] \caption{Comparison of classification accuracy (\%) of Cosine Loss and POD Loss on different datasets} \centering \setlength{\tabcolsep}{5mm}{ \begin{tabular}{lcc} \toprule \diagbox{Dataset}{Loss\\ Function} & Cosine Loss & POD Loss \\ \midrule CIFAR10 & 93.83 & \textbf{94.31} \\ CIFAR100 & 76.23 & \textbf{77.70} \\ Tiny ImageNet & 62.01 & \textbf{62.16} \\ FaceScrub & 92.11 & \textbf{93.01} \\ \bottomrule \end{tabular}} \end{table} \subsubsection{POD Loss} Comparative experiments of Softmax Loss and POD Loss are conducted on the above datasets. The experimental results are shown in Table \uppercase\expandafter{\romannumeral6}. Experimental results show that the classification performance of POD Loss is higher than that of Softmax Loss on multiple datasets, and the classification accuracy is much higher than Softmax Loss on some datasets. Experiments on the ImageNet dataset show that with the increase of the number of classes in the dataset, the classification accuracy of POD Loss is gradually approaching Softmax Loss. The reason lies in the structure of the network itself (the maximum output feature dimension of ResNet50 is $2048$, the ratio of the feature dimension to the number of large classes is much smaller than the ratio of the feature dimension to the number of small classes, and there are fewer or no extra dimensions to benefit classification). So, on large-class datasets, the advantages of POD Loss over Softmax Loss are limited. \begin{table}[h] \caption{Comparison of classification accuracy (\%) of Softmax Loss and POD Loss on different datasets} \centering \setlength{\tabcolsep}{5mm}{ \begin{tabular}{lcc} \toprule \diagbox{Dataset}{Loss\\ Function} & Softmax Loss & POD Loss \\ \midrule CIFAR10 & 93.71 & \textbf{94.31} \\ CIFAR100 & 75.56 & \textbf{77.70} \\ Tiny ImageNet & 60.29 & \textbf{62.16} \\ FaceScrub & 89.10 & \textbf{93.01} \\ ImageNet(30) & 79.86 & \textbf{85.40} \\ ImageNet(100) & 78.25 & \textbf{82.06} \\ ImageNet(200) & 82.06 & \textbf{83.08} \\ ImageNet(500) & 82.14 & \textbf{82.24} \\ ImageNet(1000) & 75.65 & \textbf{75.71} \\ \bottomrule \end{tabular}} \end{table} On the other hand, in the process of network training, the convergence speed of POD Loss and Softmax Loss is shown in Fig. 6. As can be seen from Figure 6, POD Loss training converges faster. 
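To summarize the method in code, the following minimal PyTorch sketch combines the Cosine Loss and the SC Loss defined in Section \uppercase\expandafter{\romannumeral3} into POD Loss. It is an illustrative sketch, not the authors' released implementation, and the default value of $\lambda$ (\texttt{lam}) is an assumption.
\begin{verbatim}
import torch
import torch.nn.functional as F

def pod_loss(x, pedcc, labels, lam=1.0, eps=1e-8):
    # x      : (B, d) latent features from the backbone
    # pedcc  : (k, d) fixed, predefined evenly-distributed class centroids
    # labels : (B,)   ground-truth class indices
    x = F.normalize(x, dim=1)
    centers = F.normalize(pedcc, dim=1)[labels]   # centroid of each sample's class

    # Cosine Loss: mean of (1 - cos(theta_y))^2 over the batch
    cos_theta = (x * centers).sum(dim=1)
    l_cos = ((1.0 - cos_theta) ** 2).mean()

    # SC Loss: squared off-diagonal entries of the correlation matrix
    # of the difference between sample features and their centroids
    diff = x - centers
    diff = diff - diff.mean(dim=0, keepdim=True)
    cov = (diff.T @ diff) / (diff.size(0) - 1)    # (d, d)
    std = torch.sqrt(torch.diagonal(cov).clamp_min(eps))
    corr = cov / (std[:, None] * std[None, :])
    l_sc = (corr - torch.diag(torch.diagonal(corr))).pow(2).sum()

    return l_cos + lam * l_sc
\end{verbatim}
At test time, a sample is assigned to the class whose PEDCC centroid has the largest cosine similarity with its normalized latent feature, e.g.\ \texttt{(F.normalize(x, dim=1) @ F.normalize(pedcc, dim=1).T).argmax(dim=1)}.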
\begin{figure*} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=8cm,height=6cm]{Softmax_Loss_TrainingandValidationloss.eps} \caption{Softmax Loss} \label{sf4} \end{subfigure} \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=8cm,height=6cm]{POD_Loss_TrainingandValidationloss.eps} \caption{POD Loss} \label{sf5} \end{subfigure} \caption{Variation of classification accuracy of Softmax Loss and POD Loss with epoch during training.} \label{fig6} \end{figure*} \subsubsection{Discussion on the final classification method} The constraint on latent features only solves the optimization problem of the latent feature distribution, and a pattern classification method still needs to be applied afterwards. On the one hand, the POD Loss proposed in this article includes a Cosine Loss that constrains the cosine distance between the latent features and the PEDCC centroids. When POD Loss converges, the cosine distance between the latent feature vector and the PEDCC centroid of the correct label reaches its minimum, and the cosine distances between the latent feature vector and the PEDCC centroids of the incorrect labels are uniformly maximized. The classification method is: \begin{normalsize} \begin{equation} I=\mathop{argmax}\limits_{i=1,2,...,k}{(\bm{x}\cdot \bm{pedcc_i})} \end{equation} \end{normalsize} \begin{normalsize} \begin{equation} =\mathop{argmax}\limits_{i=1,2,...,k}{(||\bm{x}|| \cdot ||\bm{pedcc_i}|| \cdot cos\theta_{\bm{x},\bm{pedcc_i}})} \end{equation} \end{normalsize} \begin{normalsize} \begin{equation} =\mathop{argmax}\limits_{i=1,2,...,k}{(cos\theta_{\bm{x},\bm{pedcc_i}})} \end{equation} \end{normalsize} where $k$ represents the number of categories, $\bm{x}$ is the sample feature, $\bm{pedcc_i}$ is the center of the PEDCC of the $i$-th class, and $cos\theta_{\bm{x},\bm{pedcc_i}}$ represents the cosine of the angle between $\bm{x}$ and $\bm{pedcc_i}$. On the other hand, under the assumption that the sample features follow a Gaussian distribution, the mean and covariance associated with each PEDCC centroid are different, so each class has a different class-conditional probability density and the Gaussian Discriminant Analysis (GDA) method can also be used for classification. Therefore, this article conducts a comparative experiment between the two classification methods, cosine distance and GDA, based on POD Loss. The experimental results on multiple datasets are shown in Table \uppercase\expandafter{\romannumeral7}. It can be seen from Table \uppercase\expandafter{\romannumeral7} that classification by cosine distance is better than GDA on multiple datasets. Therefore, the cosine distance is adopted as the final classification method in this article. \begin{table}[h] \caption{Comparison of classification accuracy (\%) of two classification methods} \centering \setlength{\tabcolsep}{2mm}{ \begin{tabular}{lcc} \toprule \diagbox{Dataset}{Method} & Cosine distance & Gaussian discriminant analysis \\ \midrule CIFAR10 & \textbf{94.31} & 94.23 \\ CIFAR100 & \textbf{77.70} & 73.37 \\ Tiny ImageNet & \textbf{62.16} & 50.09 \\ \bottomrule \end{tabular}} \end{table} \section{Conclusion} For the convolutional neural network classifier, this article proposes a Softmax-free loss function (POD Loss) based on the predefined optimal distribution of latent features.
\section{Conclusion} For convolutional neural network classifiers, this article proposes a Softmax-free loss function (POD Loss) based on the predefined optimal distribution of latent features. The loss function discards the constraint on the posterior probability used in traditional loss functions and constrains only the output of the feature-extraction stage so as to realize the optimal distribution of latent features: it combines the cosine distance between the sample feature vectors and the predefined evenly-distributed class centroids with a decorrelation mechanism between sample features, and classification is finally performed through the solidified (fixed) PEDCC layer. The experimental results show that, compared with the commonly used Softmax Loss and its improved variants, POD Loss achieves better performance on image classification tasks and is easier to train and to converge. In the future, new loss functions based on latent-feature distribution optimization will be studied further from the perspective of distinguishing classification features from representation features, so as to further improve the recognition performance of the network. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
{ "timestamp": "2021-12-01T02:25:37", "yymm": "2111", "arxiv_id": "2111.15449", "language": "en", "url": "https://arxiv.org/abs/2111.15449" }
\section{Introduction} \label{Intro} Drought is one of the most widespread and frequent natural disasters in the world, with profound economic, social, and environmental impacts~\citep{keyantash2002quantification}. Unlike other natural hazards, droughts are gradual processes that often have a long duration, cumulative impacts, and a widespread extent~\citep{below2007documenting}. Climate change is expected to increase the area and population affected by soil moisture droughts, as well as the probability of extreme drought events comparable to the one of 2003 across Europe~\citep{IPCC2021, samaniego2018droughteurope}. A better understanding of possible changes in drought frequency and intensity under varying climate scenarios is therefore a critical scientific task~\citep{King2020}. Drought is commonly classified into four categories: meteorological, agricultural, socioeconomic, and hydrological. In this study, we focus on agricultural drought since it has a considerable impact on human population evolution~\citep{LloydHughes2013}. Agricultural droughts can be quantified as a ``deficit of soil moisture relative to its seasonal climatology at a location''~\citep{sheffield2007characteristics}. A low \gls{smi} in the root zone is a direct indicator of agricultural drought and inhibits vegetative growth, directly affecting crop yield and therefore food security~\citep{keyantash2002quantification}. The physical processes involved in drought depend on complicated interactions among multiple variables and are highly variable in space and time. This behavior makes droughts hard to predict, classify, and understand~\citep{below2007documenting}. Recently, however, machine learning (ML)-based methods have demonstrated their ability to capture hydrological phenomena well, e.g., rainfall--runoff modeling~\citep{kratzert2018rainfall} and flood prediction~\citep{le2019application}. ML has also been applied to drought detection, but previous work relied on relative indices as labels due to the lack of ground-truth data~\citep{BELAYNEH201637, Shamshirband2020, FENG2019303}. Using such statistically derived labels can lead to unreliable detection of droughts in climate model projections and, accordingly, to an inaccurate estimation of the impacts of future climate change~\citep{VicenteSerrano2010}. Therefore, we compare several ML algorithms on their ability to classify droughts based on the agriculturally highly relevant soil moisture. A future goal is to provide an ML-based drought classification for climate projections under various scenarios. While we do not yet operate on climate model output from the Coupled Model Intercomparison Project (CMIP6)~\cite{Eyring2016}, in this work we showcase that drought classification is possible with the variables available in the output of CMIP6 climate projections, which makes it promising to pursue this goal further. \section{Data Preparation} \begin{figure} \centering \includegraphics[ height=3.7cm, keepaspectratio ]{img/spearman_correlation} \includegraphics[ height=4.5cm, keepaspectratio ]{img/time_with_folds_v2.png} \caption{ {\it Left:} Time-lagged Spearman correlation between the selected ERA5-Land input variables and the target variable SMI over 24 months. {\it Right:} Time series of \gls{smi} from 1981 to 2018 from the Helmholtz dataset. The shaded area shows the standard deviation across different locations. Red dotted lines show the split points for the time-based split into $k$ folds, and the green dashed line shows the binarization threshold for drought events.
Shown above is the frequency of the positive class (drought events) per fold. } \label{fig:lagged-input-correlation_and_class-imbalance} \end{figure} Low soil moisture levels depend on various meteorological input variables and on the soil type. Retrieving accurate SMI ground-truth data is therefore complicated: spatially continuous soil moisture data at a resolution finer than 0.25 degree is only available from satellite observations or model simulations. Satellite observations are only available for recent years, cover only the top few centimeters of the soil, and have data gaps due to unfavorable retrieval conditions such as snow or dense vegetation~\citep{Dorigo2017}. Therefore, we select modeled SMI data as the ground-truth label. Due to SMI data availability, the selected experiment region is Germany. The data is limited to January 1981 to December 2018 by the overlapping availability of ERA5-Land and the SMI data. All datasets used in this study are freely available. The target \gls{smi} labels are derived from the German drought monitor data for the uppermost 25\,cm of soil~\citep{zink2016german}, which is generated by a hydrological model system driven by data from about 2,500 weather stations~\cite{Samaniego2010, Kumar2013}. Figure~\ref{fig:lagged-input-correlation_and_class-imbalance} shows the SMI distribution over time and the chosen binarization threshold. We use monthly time series of 12 selected variables from the ERA5-Land reanalysis, e.g., pressure, precipitation, and temperature (see Table~\ref{tab:era5-variables}). We selected ERA5-Land due to its higher resolution compared to ERA5 (9 km vs. 31 km) and its consequently better suitability for land applications~\citep{MuozSabater2021}. To isolate the causal effects on SMI and avoid shortcut learning, we do not include potential confounding factors such as evaporation, runoff, and skin temperature. We also deliberately restrict the input variables to those commonly available in the latest generation of climate models to enable the transfer of the trained models to data obtained directly from climate model simulations. Land use and vegetation type data based on the MODIS (MCD12Q1) Land Cover Data are used as an input feature, represented as soil type fractions~\citep{friedl__mark_mcd12q1_2019}. \textbf{Interpolation and Label Derivation} Drought is an extreme weather event, and extreme events occur in the tail of variable distributions. Thus, we chose a classification setting with the tail of the SMI distribution as labels instead of a regression setting. The input data is re-gridded to the ERA5-Land regular latitude-longitude grid ($0.1^\circ \times 0.1^\circ \approx (9\,\mathrm{km})^2$). In this paper, we follow the drought classification of the German and U.S. drought monitors~\citep{svoboda2002drought, zink2016german} using an \gls{smi} threshold of $0.2$. \textbf{Dataset Split} As seen in Figure~\ref{fig:lagged-input-correlation_and_class-imbalance}, the \gls{smi} values for the same location exhibit a noticeable but declining correlation for lags of up to 6 months. A simple random split over data points could therefore lead to data leakage, where memorizing \gls{smi} values from training and simple interpolation can lead to erroneously good results. Thus, we opt for a modified $k$-fold time-series split. First, we evenly determine $k-1$ split times to create $k$ time intervals. For the $i$-th split, we train on folds $\{1, \ldots, i\}$, validate on fold $i+1$, and test on fold $i+2$; a minimal sketch of this fold construction is given below.
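To make the fold construction concrete, the following sketch (plain Python/NumPy; the variable names and the use of equally sized sample chunks instead of exact split times are illustrative simplifications, not the authors' code) generates the train, validation, and test indices for each split.
\begin{verbatim}
import numpy as np

def time_series_splits(timestamps, k=5):
    """Modified k-fold time-series split: the time axis is cut into k
    contiguous folds; split i trains on folds 1..i, validates on fold
    i+1, and tests on fold i+2, so k folds yield k-2 splits."""
    order = np.argsort(timestamps)          # chronological order of the samples
    folds = np.array_split(order, k)        # k contiguous blocks along the time axis
    for i in range(1, k - 1):
        train = np.concatenate(folds[:i])   # folds 1..i
        val = folds[i]                      # fold i+1
        test = folds[i + 1]                 # fold i+2
        yield train, val, test
\end{verbatim}
With $k=5$ this yields three splits, each testing on data that lies strictly after the training and validation periods.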
This split enables us to better assess parameter stability over time, mimicking increasing climate projection length. We use $k=5$ as a good compromise between a sufficient number of folds for a robust performance estimate and folds that are large enough, with multiple years of data, to account for seasonal and interannual effects. Figure~\ref{fig:lagged-input-correlation_and_class-imbalance} shows the resulting folds separated by red dotted lines. Note that the drought sample availability (positive class) varies between folds from 12\% to 28\%. \section{Methodology} \label{Methodology} We frame drought classification as a binary classification problem given climate, land use, and location data. Since the \emph{memory effect}~\citep{Kingtse2011} is suspected of playing an essential role in the development of droughts, we frame the problem as a \emph{sequence classification}: the models use a window of the last $w$ months of climate input data for the current location to predict the drought label at the current time step. In addition to the climate variables, we also provide a positional and a seasonal encoding as input features: for the positional encoding, we directly use the latitude \& longitude grid values. A 2D circular encoding captures the seasonality based on the month of the year (\textit{month}), $ s = \left[ \cos \left(2 \pi \cdot \frac{\textit{month}}{12}\right); \sin \left(2 \pi \cdot \frac{\textit{month}}{12}\right) \right] $, where $[\cdot;\cdot]$ denotes concatenation. Besides using the location as an input feature, we do not explicitly include inductive biases for spatial correlation. Due to missing values and the non-rectangular shape of the available data area, simple grid-based methods such as a 2D-CNN are not directly applicable. The exploration of methods for irregular spatial data, such as those described in~\citep{DBLP:journals/spm/BronsteinBLSV17}, will be a focus of future work. \textbf{Addressing Imbalanced Data} In the entire dataset, examples of the drought class account for 18\% of the total samples. We address this class imbalance by adding class weights proportional to the inverse class frequency during training and by using an appropriate metric, PR-AUC, during evaluation. \textbf{Input Sequence Length} The choice of a suitable sequence length is based on the Spearman correlation between the climate variables and the target \gls{smi} variable and on the lagged correlation of the \gls{smi} variable itself, both shown in Figure~\ref{fig:lagged-input-correlation_and_class-imbalance}. Cyclical and non-cyclical decaying dependencies are considered, and both are indeed observed. Therefore, we select a window size of $w=6$ months for our models, which is in line with the period commonly used on monthly mean data by other drought indices such as the \gls{spi}~\citep{mckee1993relationship} and the \gls{spei}~\citep{VicenteSerrano2010}. \textbf{Models} \label{Models} We investigate support vector machines (SVMs) with linear kernels (\textbf{M1}) as well as an MLP model (\textbf{M2}), denoted \texttt{dense}, which receives the flattened window as a single large input vector. To investigate whether an explicit inductive bias for sequential data is beneficial, we also include two sequence encoders that produce a representation of the input sequence for the prediction. The \texttt{cnn} model (\textbf{M3}) applies multiple 1D convolutional layers before aggregating the input sequence into a single vector representation by average pooling. The \texttt{lstm} model (\textbf{M4}) uses multiple LSTM layers and takes the final hidden state as the sequence representation. For both sequence encoders, the drought classification is obtained by a fully connected layer on top of this representation; a minimal sketch of the \texttt{cnn} variant is given below.
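The following sketch illustrates the sequence-classification setup for the \texttt{cnn} variant in \texttt{tensorflow.keras}; the layer sizes, optimizer, feature count, and class-weight values are illustrative assumptions, not the configuration tuned in the paper.
\begin{verbatim}
import tensorflow as tf

WINDOW = 6        # six monthly time steps per sample
N_FEATURES = 17   # e.g., 12 ERA5-Land variables plus land-use fractions
                  # and the positional/seasonal encodings (illustrative count)

# 1D-CNN sequence encoder with average pooling and a binary classifier head.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
    tf.keras.layers.Conv1D(32, kernel_size=3, padding="same", activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),         # aggregate the sequence to one vector
    tf.keras.layers.Dense(1, activation="sigmoid"),   # drought / no drought
])

model.compile(
    optimizer="adam",
    loss="binary_crossentropy",
    metrics=[tf.keras.metrics.AUC(curve="PR", name="pr_auc")],  # PR-AUC for imbalanced data
)

# Class weights proportional to the inverse class frequency (drought class ~18%).
class_weight = {0: 1.0 / 0.82, 1: 1.0 / 0.18}
# model.fit(x_train, y_train, validation_data=(x_val, y_val), class_weight=class_weight)
\end{verbatim}
The \texttt{lstm} variant would replace the convolution and pooling layers with LSTM layers and use the final hidden state as the sequence representation.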
\textbf{Experimental Setup} We use \texttt{sklearn}~\citep{pedregosa2011scikit} for the SVM and implement the other models in \texttt{tensorflow}~\citep{https://doi.org/10.5281/zenodo.4758419}. To reflect the considerable class imbalance, we choose the area under the precision-recall curve (PR-AUC) as the evaluation measure, which does not neglect the model's performance on the minority/positive class, i.e., droughts. For hyperparameter optimization we use \texttt{ray tune}~\citep{liaw2018tune} with random search instead of grid search due to its higher efficiency~\citep{bergstra12}. The best hyperparameters are selected per fold according to the PR-AUC on the fold's validation data, and we report test results of the corresponding model trained with five different random seeds. The climate variables of the dataset are normalized to $[0, 1]$. \section{Results and Discussion} \begin{figure} \centering \includegraphics[height=3.4cm,keepaspectratio]{img/results_pr-auc.pdf} \includegraphics[height=3.4cm, keepaspectratio]{img/ablation_resolution_pr-auc.pdf} \caption{\textit{Left:} PR-AUC of the different models on the test dataset across five different random seeds for drought classification using a window of six months. \textit{Right:} Ablation study: inference with models trained on high-resolution data, applied to input of decreasing resolution. Evaluation on five different random seeds using a window of six months.} \label{fig:results2} \end{figure} \textbf{Model Comparison} The resulting architectures were selected based on the validation PR-AUC on the second fold to account for a large variety of drought causes in the training data. The resulting hyperparameters are listed in Table~\ref{hyperparams}. The results are shown in Figure~\ref{fig:results2} and Table~\ref{tab:results}. We observe that the PR-AUC is larger than the class frequency of the positive class, indicating that the models indeed learned a non-trivial relation between the input variables and the target. The results for the F1 score can be found in Figure~\ref{fig:results1}, with F1 scores larger than 0.5. Moreover, the performance varies across folds, highlighting the challenging setting of a time-based split, where distributions can differ between folds. There is no clear winner among the architectures: all models except the linear SVM perform comparably across folds. In particular, we do not observe a significant difference for models with an explicit inductive bias for sequential data. Since the utilized SMI data describes only the uppermost 25\,cm of the soil, the suspected memory effect might be more prominent in deeper soil layers. Our initial data analysis supports this, with the correlation of the input variables with the target being strongest close in time, cf. Figure~\ref{fig:lagged-input-correlation_and_class-imbalance}. \textbf{Ablation: Coarsening the Data Resolution} Since an important future application of our models is to simulated data from climate models, we further investigate how the performance is affected by changing the resolution from the original $0.1^\circ$ to a coarser spatial resolution; a minimal sketch of such a coarsening step is given below. The horizontal resolution of CMIP6 models varies from around 0.1$^{\circ}$ to 2$^{\circ}$ in the atmosphere~\cite{Cannon2020}.
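As a minimal sketch of such a coarsening step, block averaging over an integer factor can emulate a coarser grid (NumPy; the actual regridding used in the ablation may differ, and the NaN handling here is an illustrative simplification).
\begin{verbatim}
import numpy as np

def block_coarsen(field, factor):
    """Coarsen a 2D (lat, lon) field by an integer factor using block means,
    ignoring NaNs (e.g., cells outside the study region).
    Going from 0.1 deg to 0.3 deg corresponds to factor=3."""
    nlat, nlon = field.shape
    nlat_c, nlon_c = nlat // factor, nlon // factor
    trimmed = field[:nlat_c * factor, :nlon_c * factor]   # drop edge cells that do not fit
    blocks = trimmed.reshape(nlat_c, factor, nlon_c, factor)
    return np.nanmean(blocks, axis=(1, 3))                # mean over each factor-by-factor block
\end{verbatim}
Applying such a coarsening to every input variable before inference, while keeping the trained model fixed, reproduces the spirit of the ablation described here.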
Given the regional restriction of our input data, we restrict the ablation study to the range 0.1$^{\circ}$--1.0$^{\circ}$ in steps of 0.1$^{\circ}$. The architecture performing best on $0.1^{\circ}$ data is used for inference on the coarser resolutions without re-training. The right-hand side of Figure~\ref{fig:results2} visualizes the results of the resolution ablation. In general, performance decreases as the input resolution becomes coarser. The LSTM architecture is most affected by this, but it also shows the noisiest results overall. Overall, the models trained on $0.1^{\circ}$ input data show satisfactory performance when applied to coarser input data without dedicated training. This promising result indicates that it is possible to predict drought events under varying future climate scenarios with models trained on fine-grained drought labels. \section{Summary and Outlook} We summarize our contributions as follows: (1) We are the first to compare several ML models in their capability to classify agricultural drought in a changing climate based on the soil moisture index (SMI). We use ground-truth data from a hydrological model and intentionally restrict the climate input variables to those available in the newest generation of CMIP6 climate models. We also include land use information. (2) We provide an ablation study on the transfer to coarser input data resolution, demonstrating that the model capabilities transfer to lower resolutions when trained on higher resolution. In future work, we plan to use climate model output as input data for our algorithm to produce drought estimates under varying future scenarios. This will facilitate the transfer from learning on real input data to input data obtained from simulations. Apart from feeding the location information to the model as an additional input feature, we plan to add location-aware models, motivated by the strong regional correlation of the input variables as seen in Figure~\ref{fig:spatial-corr}. Additionally, we plan to investigate other ground-truth labels, e.g., SMAP~\citep{smapL3}, and to expand the study region globally. Overall, we consider our study an important step towards machine learning-based agricultural drought detection. With our intentional restriction to variables available in climate models, we pave the way towards application on simulated data, thus facilitating the investigation of agricultural droughts in a changing climate. \begin{ack} The work for this study was funded by the European Research Council (ERC) Synergy Grant “Understanding and Modelling the Earth System with Machine Learning (USMILE)” under Grant Agreement No 855187. This manuscript contains modified Copernicus Climate Change Service Information (2021), with the following dataset being retrieved from the Climate Data Store: ERA5-Land (neither the European Commission nor ECMWF is responsible for any use that may be made of the Copernicus Information or Data it contains). Ulrich Weber from the Max Planck Institute for Biogeochemistry contributed pre-formatted MCD12Q1 MODIS data. SMI data were provided by the UFZ-Dürremonitor from the Helmholtz-Zentrum für Umweltforschung. The computational resources provided by the Deutsches Klimarechenzentrum (DKRZ, Germany) were essential for performing this analysis and are kindly acknowledged. \end{ack}
{ "timestamp": "2021-12-01T02:25:38", "yymm": "2111", "arxiv_id": "2111.15452", "language": "en", "url": "https://arxiv.org/abs/2111.15452" }
"\n\\section{}\n\\label{}\n\n\n\n\n\n\n\\section{}\n\\label{}\n\n\n\n\n\n\n\n\\section{Introduction}(...TRUNCATED)
{"timestamp":"2021-12-01T02:27:11","yymm":"2111","arxiv_id":"2111.15496","language":"en","url":"http(...TRUNCATED)
"\\section{Background and Considerations}\n\\label{sec:background}\nIn this section, we revise sever(...TRUNCATED)
{"timestamp":"2021-12-01T02:25:38","yymm":"2111","arxiv_id":"2111.15451","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\n\\noindent As a decentralized and distributed public ledger, blockchain te(...TRUNCATED)
{"timestamp":"2021-12-01T02:25:34","yymm":"2111","arxiv_id":"2111.15446","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\nOver the last few decades, control of the discretisation error generated b(...TRUNCATED)
{"timestamp":"2021-12-01T02:27:36","yymm":"2111","arxiv_id":"2111.15504","language":"en","url":"http(...TRUNCATED)
"\\section{Introduction}\n\nModern explorations of possible mission scenarios to the Ice Giants Uran(...TRUNCATED)
{"timestamp":"2021-12-01T02:27:07","yymm":"2111","arxiv_id":"2111.15494","language":"en","url":"http(...TRUNCATED)
"\n\\section{Introduction}\nRational extended thermodynamics (RET) is a theory applicable to nonequi(...TRUNCATED)
{"timestamp":"2021-12-01T02:27:02","yymm":"2111","arxiv_id":"2111.15492","language":"en","url":"http(...TRUNCATED)